How artificial intelligence is being used to influence this year’s elections

OpenAI disrupts covert influence operations: How Russian spammers and China’s Spamouflage network are losing their foothold in the political information space

“These operations may be using new technology, but they’re still struggling with the old problem of how to get people to fall for it,” said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team.

“You can generate the content, but if you don’t have the distribution systems to land it in front of people in a way that seems credible, then you’re going to struggle getting it across,” Nimmo said. That is the dynamic playing out here.

He said that while artificial intelligence offers threat actors some benefits, such as increasing the volume of content they can produce and improving translations, it does not help them overcome their main challenge: distribution.

The operations OpenAI disrupted did not rely solely on AI-generated content. “This wasn’t a case of giving up on human generation and shifting to AI, but of mixing the two,” Nimmo said.

In the past three months, OpenAI banned accounts linked to five covert influence operations, which it defines as “attempt[s] to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”

That includes two operations well known to social media companies and researchers: Russia’s Doppelganger and a sprawling Chinese network dubbed Spamouflage.

Another banned Russian network focused on spamming Telegram. It used OpenAI tools to develop a program that posted automatically on Telegram and to generate the comments its accounts posted on the app. Like Doppelganger, the operation’s efforts were broadly aimed at undermining support for Ukraine, via posts that weighed in on politics in the U.S. and Moldova.

The Spamouflage accounts used AI to debug code for a website targeting Chinese dissidents, to analyze social media posts, and to research news and current events. Some posts from fake Spamouflage accounts only received replies from other fake accounts in the same network.

Both Doppelganger and Spamouflage used OpenAI tools to generate comments in multiple languages that were posted across social media sites. The Russian network also used AI to translate articles from Russian into English and French and to post them on its Facebook pages.

Experts expect generative AI to drastically change the information landscape, but exactly how that will happen is still coming into focus. Problems that have long plagued tech platforms, like mis- and disinformation and scammy or hateful content, are likely to be amplified despite the guardrails that companies say they’ve put in place.

But influence campaigns on social media often innovate over time to avoid detection, learning the platforms and their tools, sometimes better than the platforms’ own employees. The initial campaigns may be small or ineffective, but they appear to still be in an experimental stage, according to Jessica Walton, a researcher with the CyberPeace Institute.

In her research, she found the network would use real-seeming Facebook profiles to post articles, often around divisive political topics. “The actual articles are written by generative AI,” she says. “And mostly what they’re trying to do is see what will fly, what Meta’s algorithms will and won’t be able to catch.”

AI as a Force of Disinformation: OpenAI’s Report on Artificial Intelligence and Its Use in Influence Operations From Russia, China, Iran, and Israel

OpenAI’s report is the first of its kind from the company, which has swiftly become one of the leading players in AI. ChatGPT has gained more than 100 million users since its public launch in November 2022.

Bad actors have used OpenAI’s tools, which include ChatGPT, to generate social media comments in multiple languages, make up names and bios for fake accounts, create cartoons and other images, and debug code.

Artificial intelligence is being used by influence operations based in Russia, China, Iran and Israel to try to sway public opinion, according to the report.

And while it’s a modest relief that these actors haven’t mastered generative AI to become unstoppable forces for disinformation, it’s clear that they’re experimenting, and that alone should be worrying.

In other cases, ChatGPT was used to create code and content for websites and social media. In one case, Spamouflage tried to create a site for members of the Chinese diaspora who were critical of the country’s government.

Taken together, the report paints a picture of several relatively ineffective campaigns pushing crude propaganda, seemingly allaying fears that many experts have had about the potential for this new technology to spread mis- and disinformation, particularly during a crucial election year.
