How can you tell the difference between a real picture and a fake one?

AI-Generated Fake Faces Have Become a Hallmark of Online Influence Operations, Says Ben Nimmo

Facebook parent company Meta says more than two-thirds of the influence operations it found and took down this year used profile pictures that were generated by a computer.

Ben Nimmo, who leads global threat intelligence at Meta, said the fake faces have become a signal rather than a disguise.

“There’s this paradoxical situation where the threat actors think that by using these AI generated pictures, they’re being really clever and they’re finding a way to hide. But in fact, to any trained investigator who’s got those eyeballs skills, they’re actually throwing up another signal which says, this account looks fake and you need to look at it,” Nimmo said.

“They probably thought it was a person who didn’t exist and nobody would complain about it, and people wouldn’t be able to find it the same way,” Nimmo said.

The fakes have been used to push Russian and Chinese propaganda and to harass activists on Facebook and Twitter. They have also been used by marketing fraudsters on LinkedIn, the professional networking site.

Some of the tell-tale signs of a computer-generated profile picture: oddly rendered ears, hair, and eyes, along with distorted clothing and blurry or mismatched backgrounds.

It’s a big part of how threat actors have evolved since Facebook began taking down coordinated networks of fake accounts that sought to covertly influence its platform, Nimmo says. The company has removed more than 200 such networks to date.

“We’re seeing online operations spread themselves over more social media platforms as a way to make themselves more visible and to garner more followers,” Nimmo said. That includes upstart and alternative social media sites as well as popular petition websites.

“Threat actors [are] just trying to diversify where they put their content. And I think it’s in the hope that something somewhere won’t get caught,” he said.

Source: https://www.npr.org/2022/12/15/1143114122/ai-generated-fake-faces-have-become-a-hallmark-of-online-influence-operations

Artificial Intelligence and the Trump Family: What can we learn from a Twitter thread by Eliot Higgins showing fake images of Donald Trump’s arrest?

The future of Meta’s work with Twitter, a critical partner in tracking influence operations, is now in question. Under new owner Elon Musk, the service is undergoing major upheaval. He made deep cuts to the company’s trust and safety workforce, including teams focused on non-English languages, and key leaders in security, privacy, and trust have left.

Nathaniel Gleicher, head of security policy at Meta, said that most of the people his team dealt with at Twitter have moved on. “As a result, we have to wait and see what they announce in these threat areas.”

The AI-generated images of Donald Trump’s arrest are not real. Some of the creations are pretty convincing; others look more like stills from a video game or a lucid dream. A Twitter thread by Eliot Higgins, a founder of Bellingcat, showing Trump getting swarmed by synthetic cops, running around on the lam, and picking out a prison jumpsuit was viewed over 3 million times on the social media platform.

What does Higgins think viewers can do to tell fake AI images, like the ones in his thread, apart from real photographs that may come out of the former president’s potential arrest?

“Having created a lot of images for the thread, it’s apparent that it often focuses on the first object described—in this case, the various Trump family members—with everything around it often having more flaws,” Higgins said over email. So the area outside the focal point is a good place to look: does the rest of the image appear to be an afterthought?

Even though the newest versions of software like Midjourney are better than ever, mistakes in the small details remain a telltale sign of fake images. As AI image generators grow in popularity, many artists point out that the software still struggles to replicate the human body in a consistent, natural way.

Over-the-top facial expressions are another giveaway in machine-generated images. If you ask for an expression, Midjourney tends to render it in an exaggerated way, with skin folds from things like smiling made very pronounced. The pained expression on Melania Trump’s face looks more like a re-creation of Edvard Munch’s The Scream, or a still from some unreleased A24 horror movie, than a snapshot from a human photographer.

Keep in mind that anyone with large numbers of photos circulating online may appear more convincing in deepfaked images than less visible people on the internet. “It’s clear that the more famous a person is, the more images the AI has had to learn from,” Higgins said. “So very famous people are rendered extremely well, while less famous people are usually a bit wonky.” You might think twice before posting that photo dump of selfies after a fun night out with friends; it’s possible the generators have already scraped your images from the web.

What is Twitter’s policy on using artificial intelligence to generate images associated with a presidential election? The platform’s current policy reads, in part, “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’).” Multiple exceptions are carved out for memes, commentary, and posts not intended to deceive viewers.

A few years back, the idea that the average person could make realistic deepfakes of world leaders at home seemed almost comical. As AI images become harder to distinguish from the real deal, social media platforms may need to reevaluate their approach to synthetic content and find ways of guiding users through the complex and often unsettling world of generative AI.
