The Year of the Generative Election Is Here
Artificial Intelligence vs. Politics: Politicians Rising From the Dead in India and Eminem Endorsing Opposition Parties in South Africa
We are still trying to understand how generative AI will change the information landscape. Problems that have long plagued tech platforms—like mis- and disinformation, and scammy or hateful content—are likely to be amplified, despite the guardrails that companies say they’ve put in place.
The global electorate now has to contend with this new technology, which can be used for everything from sabotage to satire to the seemingly mundane. AI has been used to humiliate female politicians, to make world leaders appear to promote the joys of passive-income scams, to deploy bots, and even to tailor automated texts to voters.
Hi! I’m Vittoria Elliott. I’m a reporter on the WIRED Politics desk, and I’m taking over for Makena this week to talk about politicians rising from the dead in India and the rapper Eminem endorsing opposition parties in South Africa.
The OpenAI threat report: Influence campaigns on social media are running up against the limits of generative AI, and why that’s still cause for concern
Americans may have their eyes set on November, but much of the world is voting right now: India, the world’s largest democracy, is wrapping up its vote; South Africa and Mexico are both heading to the polls this week; and the EU is ramping up for its parliamentary elections in June. More people are online during this election cycle than ever before.
Influence networks have used real-seeming Facebook profiles to post articles, often around divisive political topics. Researchers say the articles themselves are written by generative AI, as operators test what will fly and what platform moderation will and won’t be able to catch.
Today, OpenAI released its first threat report, detailing how actors from Russia, Iran, China, and Israel have attempted to use its technology for foreign influence operations across the globe. The report named five networks that were shut down. In the report, OpenAI reveals that established networks like Russia’s Doppelganger and China’s Spamouflage are experimenting with how to use generative AI to automate their operations. They’re also not very good at it.
And while it’s a modest relief that these actors haven’t mastered generative AI to become unstoppable forces for disinformation, it’s clear that they’re experimenting, and that alone should be worrying.
The OpenAI report reveals that influence campaigns are running up against the limits of generative AI, which doesn’t reliably produce good copy or code. It struggles with dialects and even with basic grammar—so much so that OpenAI named one network “Bad Grammar.” At one point, the Bad Grammar network gave itself away by posting a reply that openly identified it as an AI language model there to assist and provide the desired comment.
One network devised a way to automate posts on Telegram, a chat app popular with extremists and influence networks. Sometimes this worked, but other times it resulted in the same account posting as two different characters, giving away the game.
Influence campaigns often learn the platforms and their tools better than the platforms’ own employees do, as they work to avoid detection. And while these initial campaigns may be small or ineffectual, they appear to be in an experimental phase, says Jessica Walton, a researcher with the CyberPeace Institute who has studied Doppelganger’s use of generative AI.
While the report paints a picture of several ineffectual campaigns pushing crude propaganda, it would be premature to conclude that these operations won’t manage to spread misinformation in a key election year.