Meta says it is cracking down on violent content following Hamas attacks

X CEO Linda Yaccarino responds as the EU presses Mark Zuckerberg, Elon Musk and Shou Zi Chew

X CEO Linda Yaccarino says the social media platform formerly known as Twitter has identified and removed “hundreds” of Hamas-affiliated accounts, and has “taken action to remove or label tens of thousands of pieces of content” in the wake of terrorist attacks carried out by Hamas against Israel. Yaccarino’s letter comes in response to concerns raised by EU commissioner Thierry Breton that X is being used to “disseminate illegal content and disinformation,” in possible violation of the EU’s tough new Digital Services Act (DSA).

In the letters, European Union Commissioner Thierry Breton reminded the bosses of Meta, X and TikTok of their obligations to combat misinformation under an EU law known as the Digital Services Act, or the DSA.

X has used Community Notes in an attempt to fight misinformation on its platform, and over 700 notes relating to the attacks are now being displayed. But a report from NBC News has shed light on the strain the volunteer-powered system is under, with some community notes taking hours or even days to be approved, and some posts never being labeled at all.

While the letter from X’s CEO struck a diplomatic tone, Musk himself has been more forthright in his responses to Breton, pushing for the commissioner to publicly list specific violations on the platform. Musk wrote that he takes his actions in the open and that there are no backroom deals.

A top European Union official has fired off letters to social media executives Mark Zuckerberg, Elon Musk and Shou Zi Chew over the flood of misinformation on their platforms related to the Israel-Hamas war, warning that EU law allows severe financial penalties if the spread of falsehoods goes unchecked.

In a similar note to Zuckerberg, Breton gave Meta’s chief executive 24 hours to lay out the company’s plan to stem the tide of war-related misinformation, including AI-generated posts carrying fabricated content manipulated to look real.

The EU, Breton wrote to Zuckerberg, has “been made aware of reports of a significant number of deep fakes and manipulated content which circulated on your platforms and a few still appear online.”

“First, given that your platform is used extensively by children and teenagers, you have a particular obligation to protect them from violent content depicting hostage taking and other graphic videos, which are reportedly widely circulating on your platform without appropriate safeguards,” Breton wrote.

Since Hamas militants attacked Israel on Saturday morning, fabricated photos, video clips and other bogus content purporting to portray the violence in the region have been wreaking havoc on social media platforms, making sorting fact from fiction a daunting task.

The European Commission invokes the Digital Services Act: Meta, X and TikTok under scrutiny after the October 7th attacks

Musk changed the platform’s verification policies last year. Anyone willing to pay a monthly fee can now receive a blue check mark, a badge previously reserved for credible news organizations and notable people. A paid-for “verification” mark also boosts the reach of posts, an arrangement that misinformation experts say has contributed to considerable chaos on the site.

The swift response reflects the EU’s Digital Services Act, one of the toughest online safety laws in the world. The law carries stiff fines for tech companies that violate its rules.

Under the law, social media platforms like Facebook, Instagram, Twitter and TikTok must quickly remove posts inciting violence, featuring manipulated media seen as propaganda, or any kind of hate speech, or be hit with financial penalties that far exceed what any U.S. authority has ever imposed: Up to 6% of a company’s annual global revenue.

“Our teams are constantly working to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact-checkers in the region. We’ll continue this work as this conflict unfolds,” said Meta spokesman Al Tolan.

“The risk of fake and manipulated images and facts being used to influence elections is very important,” Breton wrote. “I remind you that the DSA requires that such risks be taken very seriously.”

Meta says it “removed or marked as disturbing” over 795,000 pieces of content in Hebrew and Arabic in the three days following October 7th for violating its policies, and says Hamas is banned from its platforms. The company is also taking additional temporary measures, such as blocking certain hashtags and prioritizing reports about Facebook and Instagram Live videos relating to the crisis. Given the higher volume of content being removed, the company says it may take down posts without disabling the accounts that shared them, in case content is removed in error.

Meta has not had a perfect track record in moderation. The company has faced criticism over its shifting moderation policies and slow responses to members of its Trusted Partner program, which allows expert organizations to raise concerns about content on Facebook and Instagram.

X’s outline of its moderation efforts around the conflict does not mention which languages its response team speaks. The European Commission has since sent the company a formal request for information under the Digital Services Act over the alleged spread of illegal content and misinformation.
