Meta says you need to disclose your AI fakes or it might pull them
Identifying Artificial Intelligence and AI-Generated Media in the Run-Up to Elections: A Comment on Meta and the Oversight Board
Nick Clegg, Meta’s president of global affairs, said the company has started internally testing the use of large language models trained on its Community Standards. “It appears to be a highly effective and rather precise way of ensuring that what is escalated to our human reviewers really is the kind of edge cases for which you want human judgment.”
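As a rough illustration of that triage idea, and not Meta’s actual system, a policy-aware model can sort posts into clear violations, clearly allowed content, and edge cases that deserve human review. In the minimal sketch below, `call_llm` is a hypothetical stand-in for whatever hosted model a platform would actually call, and the policy text is invented for the example.

```python
# Hypothetical sketch of LLM-assisted moderation triage. `call_llm` is a
# placeholder for a real chat-completion API; Meta's internal setup is not public.
from typing import Literal

Verdict = Literal["violates", "allowed", "edge_case"]

PROMPT = """You are a content policy classifier. Apply these rules:
{policy}

Classify the post as exactly one of: violates, allowed, edge_case.
Post: {post}
Answer:"""

def call_llm(prompt: str) -> str:
    # Stand-in: a real system would send the prompt to a hosted LLM here.
    # Returning a fixed answer keeps the sketch runnable end to end.
    return "edge_case"

def triage(post: str, policy: str) -> Verdict:
    answer = call_llm(PROMPT.format(policy=policy, post=post)).strip().lower()
    if answer not in ("violates", "allowed", "edge_case"):
        return "edge_case"  # when in doubt, route to a human reviewer
    return answer  # type: ignore[return-value]

# Only "edge_case" verdicts get escalated to human reviewers, which is the
# filtering effect Clegg describes.
print(triage("a borderline post", "no hate speech; no incitement"))
```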
There are already plenty of examples of viral, AI-generated posts of politicians, but Clegg downplayed the chances of the phenomenon overrunning Meta’s platform in an election year. “I think it’s really unlikely that you’re going to get a video or audio which is entirely synthetic of very significant political importance which we don’t get to see pretty quickly,” he said. “I just don’t think that’s the way that it’s going to play out.”
As election season ramps up around the world, Meta will begin labeling AI-generated images uploaded to its social media platforms. The company will also penalize users who fail to disclose that a realistic piece of audio or video was made with artificial intelligence.
Experts also warn that companies should be prepared for bad actors to target whatever method they use to identify content provenance. Multiple forms of identification might need to be used in concert to robustly flag AI-generated images, for example by combining watermarking with the hash-based technology used to create watch lists for child sexual abuse material. Detection signals for AI-generated audio and video are less mature than those for images: while companies have begun to include such signals in their image generators, they haven’t yet included them in audio and video tools at the same scale, so Meta cannot yet detect and label that content from other companies. “While the industry works towards this capability,” the company said, “we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it.”
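To make the hash-based watch-list idea concrete, here is a minimal sketch, not Meta’s or any vendor’s production system, that computes a simple perceptual “average hash” of an image and checks it against a hypothetical watch list of known AI-generated images. Real matching systems use far more robust, often proprietary hashes; the `WATCH_LIST` value below is invented for illustration.

```python
# Minimal sketch of a hash-based watch list using a naive perceptual
# "average hash". Real systems use much more robust hashing; this only
# illustrates the concept of matching against a list of known images.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel
    depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Hypothetical watch list of hashes of known AI-generated images.
WATCH_LIST = {0x8F3C00FFAA55E1D2}

def on_watch_list(path: str, max_distance: int = 5) -> bool:
    # A small Hamming distance tolerates recompression and minor edits.
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in WATCH_LIST)
```

Because perceptual hashes survive resizing and recompression better than cryptographic hashes, this kind of matching can catch re-uploads of a known fake even after it has been lightly edited.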
Clegg said audio and video content designed to deceive the public is something Meta will be watching very closely in the run-up to the election. “Is it possible that something could happen where it is detected and labeled, but we are accused of having dropped the ball? Yeah, I think that is possible, if not likely.”
The company will respond publicly to the Oversight Board’s recommendations within 60 days, as required by its bylaws, McAlister said. The narrow technical focus on watermarked AI-generated images suggests that Meta’s plan for the generative AI era is still incomplete.
Meta requires that political ads include a disclosure if they were created using digitally altered images, video, or audio.
If the company determines that digitally created or altered image, video, or audio content risks deceiving the public, it says it may add a more prominent label so people have more information and context.
What’s more, Meta’s labels apply only to static photos. The company says it can’t yet label AI-generated audio or video in the same way because the industry has not started embedding detectable signals in those formats.
Meta already labels AI-generated images made using its own generative AI tools with the tag “Imagined with AI,” in part by looking for the digital “watermark” its algorithms embed into their output. Meta will also label some AI-generated images made with tools from other companies that embed watermarks into their output.
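To illustrate what detecting such signals can look like, here is a minimal sketch that simply string-matches the IPTC digital-source-type value (“trainedAlgorithmicMedia”) that some image generators have begun embedding in XMP metadata. Meta’s actual pipeline, including any decoding of invisible watermarks, is not public; a real checker would parse XMP or C2PA manifests and verify signatures rather than scanning raw bytes.

```python
# Rough sketch of metadata-based detection. It looks for the IPTC
# digital-source-type marker that generators can embed in XMP metadata.
# A production system would parse and verify the metadata properly.
MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Scan the file's raw bytes for the XMP digital-source-type marker."""
    with open(path, "rb") as f:
        return MARKER in f.read()

# Hypothetical usage: decide whether an upload gets an AI label.
if looks_ai_generated("upload.jpg"):
    print("Apply an 'Imagined with AI' style label")
```

Note the limitation this sketch makes visible: metadata can be stripped by a simple re-save, which is one reason invisible watermarks and hash-based watch lists are discussed as complementary signals.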
When an AI-generated image of the pope in a puffy white coat went viral last year, internet users debated whether the pontiff was really that stylish. AI-generated images of Donald Trump being arrested also sowed confusion, even though their creator said they were made with artificial intelligence.
Meta, which owns Facebook, Instagram, and Threads, said on Tuesday it will start labeling images created with artificial intelligence in the coming months. The move comes as tech companies, both those that build AI software and those that host its outputs, come under growing pressure to address the cutting-edge technology’s potential to mislead people.
Fooled by Deepfakes: The Role of Generative AI in Voter Manipulation and Political Fraud Detection
Millions of people will vote in elections around the world this year, and regulators have warned that deepfakes could be used to amplify efforts to deceive and manipulate voters.
Hany Farid, a professor at the UC Berkeley School of Information who has advised the C2PA initiative, says that anyone intent on using generative AI maliciously will likely turn to tools that don’t watermark their output or betray its nature. For example, the creators of the fake robocall that used President Joe Biden’s voice to target New Hampshire voters last month didn’t add any disclosure of its origins.