Musk’s plan to monetize the social network threatens the sex workers who rely on it
An intruder breaks into House Speaker Nancy Pelosi’s California home and attacks her husband
Editor’s Note: Kara Alaimo, an associate professor in the Lawrence Herbert School of Communication at Hofstra University, writes about issues affecting women and social media. She was spokeswoman for international affairs in the Treasury Department during the Obama administration. The opinions expressed in this commentary are her own. Read more opinion at CNN.
An intruder broke into the California home of House Speaker Nancy Pelosi early Friday and attacked her husband Paul Pelosi with a hammer while she was in Washington, DC. The speaker’s office says that Paul Pelosi will make a full recovery after undergoing surgery to repair a skull fracture and serious injuries to his arm and hands.
A source briefed on the attack said that the attacker, David DePape, declared he would wait until Nancy got home. DePape was taken into custody on suspicion of attempted murder, assault with a deadly weapon, elder abuse and several additional felonies, according to SFPD Chief William Scott.
We are all at risk of experiencing occasional harassment—but for some, harassment is an everyday part of life online. Women in public life experience chronic abuse: ongoing, persistent, and often coordinated attacks that are threatening and sexually explicit. Scottish First Minister Nicola Sturgeon and former New Zealand Prime Minister Jacinda Ardern, for example, have both suffered widely reported online abuse. Similarly, a recent UNESCO report detailing online violence against women journalists found that Nobel Prize–winning journalist Maria Ressa and UK journalist Carole Cadwalladr faced attacks that were “constant and sustained, with several peaks per month delivering intense abuse.”
The US Capitol Police have reported an increase in threats against members of Congress. Many of the lawmakers on the receiving end of these threats are women and people of color.
After an attacker broke a window in Sen. Susan Collins’ home, she spoke about threats of violence that had begun with abusive phone calls, saying she wouldn’t be surprised if a senator or House member were killed.
Democratic Rep. Pramila Jayapal has been harassed by a man who showed up repeatedly outside her home, armed with a handgun. Her husband said he heard a man yelling obscenities and suggesting that he would stop harassing her neighborhood if she killed herself.
Lawmakers sign up for a lot when they take the job, but it is hard to overstate how frightening it must be when someone shows up at your door with a gun.
Democratic Rep. Alexandria Ocasio-Cortez has received so many threats that she has a round-the-clock security team and sleeps in different locations. Her own colleague, Republican Rep. Paul Gosar of Arizona, tweeted an altered anime video of himself appearing to kill her last year. (Gosar deleted the video and did not apologize. After the House censured him and removed him from two committee assignments last year, he retweeted a post that included the video.)
Earlier this week, the New York Post said it had fired a rogue employee who changed the headline of an online editorial to read, “We must assassinate AOC for America.”
Online sex work has few refuges left: Facebook, Twitter, and YouTube are all under pressure to police sexually explicit content on social media
Pelosi has long been reviled by a section of the right. In 2019, the House Speaker, who has famously clashed with former President Donald Trump, became the subject of manipulated videos that made her appear to stumble over and slur her words. Amplified on social media by both Trump and Rudy Giuliani, the videos were viewed millions of times. During the January 6 attack on the Capitol last year, Trump supporters ransacked her office and yelled, “Where are you, Nancy?” – a chilling echo of the words DePape uttered on Friday: “Where is Nancy?”
Social media companies claim they don’t tolerate this kind of hate, but the reality is that it persists on their platforms. They need to employ human moderators to take down the abuse, and any time users see online hate like Gosar’s video, we should immediately use the available reporting tools so these platforms can remove it.
The move toward monetization also threatens to ruin a refuge: Since the FBI seized Backpage in April 2018, three days before President Trump signed FOSTA/SESTA into law, Twitter has become the sole major social media platform to tolerate sex workers. Even in the absence of direct payouts, Twitter has long been a safe haven for sex workers (adult content creators as well as in-person providers) in an increasingly puritanical digital landscape. But for monetization to work, Twitter would have to overhaul and intensify its content moderation practices, in direct contrast to Musk’s vow to protect “free speech.”
The FBI should also investigate and prosecute the abuse of women both online and off. If the agency needs more funding to do it, Congress should levy a tax against social networks to fund an expansion of resources. I imagine the many lawmakers who have been threatened and harassed would be happy to cast a vote in favor of such a bill.
One of the many ways homophobia, transphobia, and whorephobia—the systemic oppression of sex workers—overlap is that we are all perceived as threats to children. Online sex work is not immune from this bias. Earlier this year, Casey Newton and Zoë Schiffer reported in The Verge that Twitter had been developing an “OnlyFans-style” subscription product, but that the effort was stymied by fears over child sexual exploitation material (CSEM, sometimes referred to as child sexual abuse material, or CSAM). An internal research team dubbed the Red Team determined that the Adult Content Monetization project could not launch until Twitter got a handle on CSEM, which the project would otherwise make worse. The project was tabled in May, a few weeks after Musk made his offer to buy Twitter.
It isn’t just famous or highly visible women who face enough online abuse to consider leaving social media. A survey commissioned by the dating app Bumble found that almost half of women over the age of 18 had received an unsolicited sexual image in the past year. UK Member of Parliament Alex Davies-Jones put the phrase “dick pic” into the parliamentary record during debate on the UK Online Safety Bill when she asked a male fellow MP whether he had ever received one. For most women, she noted, it is not a rhetorical question.
AI-enabled intimate image abuse, in which images are combined or generated to create new, often realistic ones (so-called deepfakes), is another weapon of online abuse that disproportionately affects women. Estimates from Sensity AI suggest that 90 to 95 percent of all online deepfake videos are nonconsensual porn, and around 90 percent of those feature women. The technology for creating realistic deepfakes has outpaced our ability to combat it: the barriers to entry are low, the fakes are increasingly convincing, and the result is a perverse democratization of the ability to cause harm.
Better safety-by-design measures give people more control over their image and their messages. Users can now control how they are tagged in photos, and the dating app Bumble offers a tool called Private Detector that detects and blurs likely nude images so users can choose whether to view them. Legislation, such as the UK’s proposed Online Safety Bill, can push social media companies to address these risks. The bill is far from perfect, but it does require platforms to assess risks and to develop solutions such as better human content moderation, responsiveness to user complaints, and systems that take better care of users.
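To make the Private Detector idea concrete, here is a minimal sketch of how such a screen-and-blur step might work. The classifier stub and the 0.7 threshold are illustrative assumptions, not Bumble’s actual values; the company’s real detector is a trained neural network (a version of which it has released as open source), not the dummy function below.

```python
from PIL import Image, ImageFilter

NSFW_THRESHOLD = 0.7  # assumed policy threshold, not Bumble's actual value


def nsfw_score(image: Image.Image) -> float:
    """Stand-in for a trained nudity classifier.

    A real implementation would run the image through a model and
    return its probability of containing nudity; we return a fixed
    score so the sketch runs end to end.
    """
    return 0.9


def deliver_image(image: Image.Image) -> tuple[Image.Image, bool]:
    """Blur a likely-nude image so the recipient can opt in to viewing it."""
    flagged = nsfw_score(image) >= NSFW_THRESHOLD
    if flagged:
        # A heavy Gaussian blur hides the content until the user taps "view".
        return image.filter(ImageFilter.GaussianBlur(radius=24)), True
    return image, False
```

The key design choice is that nothing is deleted: the recipient, not the platform, makes the final call, which is what distinguishes this kind of safety tool from ordinary takedown moderation.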
Regulation alone, though, may not be enough to keep women from logging off in great numbers. If they do, not only will they miss the benefits of being online, but our online communities will suffer too.
Editor’s Note: The author, a former public policy director at Facebook, now works at the Bipartisan Policy Center. BPC accepts funding from some tech companies, including Meta and Google, for its work helping them get authoritative information about elections to their users. The views expressed in this commentary are the author’s own. Read more opinion articles on CNN.
Every day, the people who work on content moderation have to balance competing interests and different views of the world, and there is rarely one obviously good choice.
It is not enough to look at an individual piece of content and try to solve the problem there. A multi-pronged approach is needed: one that considers not just the content itself but also the behavior of people on the platform, how much reach content should get, and ways to give users more control over what they see in their feeds.
Everyone has the right to free speech, and it is important that platforms make that possible. But every platform has to moderate content, even if free speech is its number one value.
First, some content, such as child sexual abuse material, must be removed under the law. But users — and advertisers — also don’t want some legal-but-awful content, such as spam or hate speech, in their feeds.
Moreover, no one likes being harassed by an online mob; all that does is drive people away or silence them, which is no victory for free speech. A recent example comes from Twitter itself, where the former head of trust and safety fled his home because of the threats he received after Elon Musk criticized him. Meta, for its part, has stepped up efforts to shut down brigading, in which users coordinate to harass someone online.
Second, there are more options than simply leaving content up or taking it down. Meta calls this “remove, reduce, inform”: instead of taking down potentially problematic but non-violating content, platforms can reduce its reach or add labels that give users more context.
This option is necessary because many of the most engaging posts are borderline, meaning they go right up to the line of the rules without crossing it. A platform may not be comfortable removing clickbait outright, but it will still want to take other action, because many users would rather not see it. A sketch of this decision logic follows.
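As a rough illustration of the remove/reduce/inform framework, consider the sketch below. The single violation score, the thresholds, and the function names are assumptions for illustration; real platforms run many classifiers and apply per-policy rules rather than one number.

```python
from enum import Enum, auto


class Action(Enum):
    REMOVE = auto()  # violating content: take it down
    REDUCE = auto()  # borderline content: leave it up, limit its reach
    INFORM = auto()  # potentially misleading content: add a context label
    ALLOW = auto()


# Illustrative thresholds; a real system would tune these per policy area.
REMOVE_THRESHOLD = 0.95
REDUCE_THRESHOLD = 0.70


def moderate(violation_score: float, needs_context: bool) -> Action:
    """Map a policy classifier's confidence to a remove/reduce/inform action."""
    if violation_score >= REMOVE_THRESHOLD:
        return Action.REMOVE
    if violation_score >= REDUCE_THRESHOLD:
        # Borderline: demote in ranking instead of deleting.
        return Action.REDUCE
    if needs_context:
        return Action.INFORM
    return Action.ALLOW
```

The point of the middle branch is exactly the clickbait case above: content that stays on the platform but earns less distribution.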
Some argue that reducing reach is itself a scandal. But as Renée DiResta of the Stanford Internet Observatory has written, free speech does not mean free reach.
This points to the need for more transparency. Who is making these decisions, and how are they weighing competing priorities? The controversy around shadow banning (the term many use for content being shown to fewer people than it otherwise would be, without the creator knowing) isn’t just about people being upset that their content is getting less reach.
They are upset because they don’t know what happened. Platforms need to do more here, for example by letting users see on their own accounts whether they are eligible to be recommended to people who don’t follow them. Under such rules, accounts that share pornographic material, clickbait and certain other types of content won’t be recommended to non-followers.
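A minimal sketch of what such an eligibility rule, plus the transparency check that surfaces it to the account owner, might look like; the label names and the disqualifying set are hypothetical.

```python
# Hypothetical labels that disqualify an account from recommendations.
DISQUALIFYING_LABELS = {"pornographic_material", "clickbait"}


def recommendable(account_labels: set[str]) -> bool:
    """An account is recommended to non-followers only if it carries
    none of the disqualifying labels."""
    return DISQUALIFYING_LABELS.isdisjoint(account_labels)


def account_status(account_labels: set[str]) -> str:
    """The transparency piece: show creators *why* their reach changed
    instead of leaving them to guess."""
    if recommendable(account_labels):
        return "Eligible for recommendation to non-followers."
    blocked = sorted(DISQUALIFYING_LABELS & account_labels)
    return f"Not eligible for recommendation. Reason(s): {', '.join(blocked)}"
```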
Lastly, platforms can give users more control over the types of moderation they are comfortable with. Political scientist Francis Fukuyama calls this “middleware.” Given that every user enjoys different content, middleware would let people decide what kinds of content appear in their feeds and what level of online safety matters to them. Some platforms, for example, already allow users to switch from a ranked feed to a chronological one.
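Here is a small sketch of the middleware idea, in which each user chooses the ranking function applied to their own feed. The post shape, the two rankers, and the user table are invented for illustration.

```python
from datetime import datetime
from typing import Callable

# For this sketch a post is just (timestamp, predicted_engagement, text).
Post = tuple[datetime, float, str]
Ranker = Callable[[list[Post]], list[Post]]


def chronological(posts: list[Post]) -> list[Post]:
    """Newest first: the classic timeline."""
    return sorted(posts, key=lambda p: p[0], reverse=True)


def engagement_ranked(posts: list[Post]) -> list[Post]:
    """Highest predicted engagement first: the default algorithmic feed."""
    return sorted(posts, key=lambda p: p[1], reverse=True)


# Each user picks their middleware; the platform just applies it.
USER_RANKERS: dict[str, Ranker] = {
    "alice": chronological,
    "bob": engagement_ranked,
}


def build_feed(user: str, posts: list[Post]) -> list[Post]:
    """Apply the user's chosen ranker to their candidate posts."""
    return USER_RANKERS.get(user, engagement_ranked)(posts)
```

In Fukuyama’s fuller vision, third parties, not just the platform itself, could supply these rankers, letting users pick a curator whose values they trust.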
In 2023, the UK is expected to pass legislation aimed at tackling similar harms, finally making progress on a regulator for tech companies. But as it stands, the Online Safety Bill will not provide sufficient protection for vulnerable people online.
Other countries have already passed legislation to force platforms to change. Germany’s NetzDG, enacted in 2017, made it the first country in Europe to take a stance against hate speech on social networks: platforms with more than 2 million users have a seven-day window to remove illegal content or face fines of up to 50 million euros. In 2021, EU lawmakers set out a package of rules for Big Tech through the Digital Markets Act, which stops platforms from giving their own products preferential treatment. And in 2022, the EU AI Act made progress through extensive consultation with civil society organizations to address concerns around marginalized groups and technology, a working arrangement that campaigners in the UK have been calling for. In Nigeria, the federal government issued a new internet code of practice to address misinformation and cyberbullying, with specific clauses to protect children from harmful content.
The biggest companies have been allowed to mark their own homework for the last ten years. They have protected their power by hiding behind the adage, “Move fast and break things.”
The Carnegie UK Trust has noted that while the bill uses the term “significant harm,” it sets out no specific process for defining what this is or how platforms would have to measure it. There is also alarm over the bill’s proposal to drop a requirement that Ofcom encourage the use of technologies and systems for regulating access to electronic material. Other groups have raised concerns about the removal of clauses on education and future-proofing, which makes the legislation reactive and risks rendering it ineffective, since it won’t be able to account for harms caused by platforms that haven’t yet gained prominence.
More broadly, campaigners, think tanks, and experts in this area have raised numerous concerns about the bill’s effectiveness as it currently stands. It does not specifically mention minoritized groups, such as women and the LGBTQIA community, even though they are disproportionately affected by online abuse.
I run a charity, and our survey found that nearly two million people were threatened online in the past year. Twenty-three percent of those surveyed were members of the LGBTQIA community, and 25 percent said they had experienced racist abuse online.
Researchers and practitioners who study the responsible use of technology and work with social media companies call this chronic abuse because there is no single triggering moment, debate, or position that sparks the steady blaze of attacks. That distinguishes it from acute abuse, which is what most online abuse cases, and most of the tools built to address them, involve. Acute abuse is a reaction to a debate, a position, or an idea: a new book, article, or public statement. It flares up and then dies down.