There Is a Dark Side to Open Source Artificial Intelligence Image Generators
Detecting and Removing Pornographic, Malicious, and NSFW AI Images: A Comment on Hood
Hood argues that generative AI poses a twofold problem: platforms must prevent the creation of misleading images, and they must also detect and remove the ones that slip through. Neither task is solved. Meta's watermarking system, for instance, was found to be easily circumvented.
Open source image models can be fine-tuned to produce gruesome and abusive content. Hood welcomes the experimentation that open source technology has unleashed, but that same freedom allows explicit images of women to be created for harassment.
Meanwhile, smaller add-on models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose, such as a celebrity's likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community site where users share and download models. The creator of one Taylor Swift plug-in asked others not to use it for explicit images, but once downloaded, a model's use is beyond its creator's control. Because of the way open source works, it is difficult to stop someone from hijacking it.
That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to push back against the sea of pornographic and malicious images. Creators also worry about the software gaining a reputation for NSFW output and encourage others to report images depicting minors on Reddit and model-hosting sites.
Are AI Tools Still Generating Misleading Election Images? A Study by the Center for Countering Digital Hate (CCDH), a Nonprofit Tracking Hate Speech on Social Platforms
Despite years of evidence to the contrary, many Republicans still believe that President Joe Biden's win in 2020 was illegitimate. Brandon Gill, the son-in-law of a right-wing pundit who promoted a film called 2000 Mules, was among the election-denying candidates who won their primaries. Going into this year's elections, election fraud claims remain a staple of candidates running on the right.
The problem could become worse with the advent of generative AI. The Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms, tested leading generative AI companies' image-creating tools to see how well they prevent those tools from being used to spread election-related misinformation.
While some of the images featured political figures, namely President Joe Biden and Donald Trump, others were more generic and, worries Callum Hood, head of research at CCDH, could be more misleading. Images the researchers created showed militias outside a polling place, ballots thrown in the trash, and voting machines being tampered with. The researchers were also able to prompt Stability AI's DreamStudio to generate an image of President Biden in a hospital bed.
Hood says there is a particular weakness around images that could be used to support false claims of a stolen election: most of the platforms have neither clear policies nor adequate safety measures covering them.
According to the study, Midjourney produced misleading election-related images most often, about 65 percent of the time, while researchers were able to prompt ChatGPT Plus to do so only 28 percent of the time.
Source: [AI Tools Are Still Generating Misleading Election Images](https://lostobject.org/2024/02/23/after-some-diversity-errors-the-ability-to-build-artificial-intelligence-images-of-people-has-been-paused/)
What Are AI Companies Doing to Protect Election Integrity? Comments on Two Cases
“It shows that there can be significant differences between the safety measures these tools put in place,” says Hood. “If one can so effectively seal these weaknesses, it means that the others haven’t really bothered.”
In January, OpenAI announced it was taking steps to make sure its technology wasn't used in ways that undermine the democratic process, including forbidding images that would discourage people from voting. Midjourney was reported to be considering banning the creation of political images altogether. DreamStudio doesn't have a specific election policy, but it does prohibit generating misleading content. And while Image Creator prohibits creating content that could threaten election integrity, it still allows users to generate images of public figures.