Nonconsensual deepfake porn is in the spotlight

Do women really need to log off? A warning to social media companies ahead of the online safety bill, and the consequences for our online communities

It isn’t just famous or highly visible women who face enough online abuse to consider leaving social media. A YouGov poll commissioned by the dating app Bumble found that almost half of women aged 18 to 24 had received unsolicited sexual images within the past year. During the debate on the online safety bill, the MP Alex Davies-Jones asked a male colleague whether he had ever received a dick pic, noting that for most women it is not a rhetorical question.

In their annual “worldwide threat assessment” reports, top US intelligence officials have warned in recent years of the threat posed by so-called deepfakes – convincing fake videos made using artificial intelligence.

Better safety-by-design measures can help people control their images. Twitter, for example, recently allowed people to control how they are tagged in photos, and Bumble’s Private Detector tool blurs unsolicited nude photos so users can decide whether to view them. The online safety bill being proposed in the UK could push social media companies to address these risks. The bill is far from perfect, but it does ask platform companies to assess risks and come up with solutions, such as human content moderation, handling user complaints, and building better systems to protect users.

This regulatory approach is not guaranteed to keep women from logging off in large numbers in 2023. If they do, not only will they miss out on the benefits of being online; our online communities will suffer too.

Why are people so terrible to women online? How do we stand against fake pornographic images? Atrioc, a male video game streamer, was found to have accessed deepfake videos of his female colleagues

“Adversaries and strategic competitors,” they warned in 2019, might use this technology “to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.”

A faked video showing a politician having sex with another person is not difficult to imagine.

The threat is not too far away. The recent viral success of ChatGPT, an A.I. chatbot that can answer questions and write prose, is a reminder of how powerful this kind of technology can be.

The long-simmering issue exploded into public view last week when it emerged that Atrioc, a high-profile male video game streamer on the hugely popular platform Twitch, had accessed deepfake videos of some of his female Twitch streaming colleagues. He later apologized.

Last week, one of those women realized her face had been used in pornographic videos without her knowledge.

“It’s kind of like if you watched anything shocking happening to yourself,” she said. “Like if you watched a video of yourself being murdered, or of yourself jumping off a cliff.”

Indeed, the very term “deepfake” is derived from the username of an anonymous Reddit contributor who began posting manipulated videos of female celebrities in pornographic scenes in 2017.

Hany Farid, a professor at the University of California, Berkeley, and a digital forensics expert, said he was puzzled by how terrible people can be to one another on the internet.

“I think we have to start thinking about why technology like this allows for such horrible things to happen to humans. And if we’re going to have these technologies ingrained in our lives the way they seem to be, I think we’re going to have to start to think about how we can be better human beings with these types of devices,” he said.

“It’s all rape culture,” Cole said. “I don’t know what the actual solution is other than getting to that fundamental problem of disrespect and non-consent and being okay with violating women’s consent.”

Source: https://www.cnn.com/2023/02/16/tech/nonconsensual-deepfake-porn/index.html

Zuckerberg and the AI Arms Race: Why Are You Developing This Technology? When Did Zuckerberg Change His Motto?

There is reason for skepticism. The development of artificial intelligence is moving even faster than the previous technology revolution, and the problems the technology sector created a decade ago still haven’t been solved.

“Move fast and break things” was Facebook founder Mark Zuckerberg’s motto in the company’s early days. As the dangers of his platform came into focus, he changed the slogan to “move fast with stable infrastructure.”

Whether it was willful negligence or ignorance, Silicon Valley was not prepared for the onslaught of hate and disinformation that has festered on its platforms. The same tools it had built to bring people together have also been weaponized to divide.

And while there has been a good deal of discussion about “ethical AI,” as Google and Microsoft look set for an AI arms race, there’s concern things could be moving too rapidly.

“The people who are developing these technologies – the academics, the people in the research labs at Google and Facebook – you have to start asking yourself, ‘why are you developing this technology?,’” Farid suggested.

If the harms outweigh the benefits, should you still unleash your technology on the internet and wait to see what happens?
