How Section 230 Protects Targeted Content and Personalized Technology
A set of rulings against the tech industry could significantly narrow Section 230 and its legal protections for websites and social media companies. If that happens, online platforms will face new lawsuits over how they present content to users. Such a result would represent the most consequential limitation ever placed on a legal shield that predates today’s biggest social media platforms and has allowed them to nip many content-related lawsuits in the bud.
The case was brought by the family of a college student, Nohemi Gonzalez, who was killed in a restaurant during the November 2015 terrorist attacks in Paris. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers, using the information that the company had collected about them.
Lawyers for the family argued in their petition for Supreme Court review that the videos people viewed on YouTube were a central way the organization enlisted support and recruited members from areas outside of Syria and Iraq.
Google’s lawyers responded that Section 230 bars claims that treat websites as the publishers of third-party content, and that a publisher’s central function is curating and displaying content of interest to users. Section 230 is an important statute, they argued, and accepting the petition’s theory would gut it.
Section 230, written in 1995 and passed in early 1996, unsurprisingly does not explicitly mention algorithmic targeting or personalization. Yet a review of the statute’s history reveals that its proponents and authors intended the law to promote a wide range of technologies to display, filter, and prioritize user content. Removing Section 230’s protections for targeted content and personalized technology would therefore require Congress to change the law.
These attacks on the First Amendment are already affecting some of the most vulnerable Americans, but they have far-reaching implications for everyone. The rules written by Texas and Florida are not drafted in a way that ensures they apply only to Big Tech. Under some interpretations, Texas’ law means Wikipedia wouldn’t be allowed to remove edits that violate its standards. Republican lawmakers are even attacking spam filters as biased, so the effects aren’t just theoretical: if courts rule the wrong way, your inbox may be about to get a lot messier.
The bigger issue, though not the only one, is that the internet lets people speak at a scale never before seen in human history. The shortcomings and tradeoffs of the laws governing that speech have never been so evident, and their troublesome edge cases never so numerous. And many of the people who make and enforce those laws have abdicated their responsibilities in favor of political point-scoring.
For years, much of the criticism of Section 230 has come from conservatives who say that the law lets social media platforms suppress right-leaning views for political reasons.
The First Amendment Protects Even False Claims About Covid or Satanic Panics: Joe Biden and Amy Klobuchar
The law’s core provision states that no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Courts have interpreted the law broadly since it was passed. It effectively means that web services — as well as newspapers, gossip blogs, listserv operators, and other parties — can’t be sued for hosting or reposting somebody else’s illegal speech. The law was passed after a pair of seemingly contradictory defamation cases, and it’s since been found to cover everything from harassment to gun sales. It also means courts can dismiss most lawsuits over web platform moderation, particularly since a second clause protects the removal of “objectionable” content.
The word “illegal” is key. Many well-deserved critiques of the internet and social media, such as that it facilitates false stories about covid or satanic panics, enables huge crowds of people to pile on with angry messages, or spreads hate speech at scale, don’t involve illegal speech at all. Defamation is one of the main exceptions, and it’s still being tested in cases like the suits against Fox News, but its bar isn’t easy to meet. Joe Biden claimed on the campaign trail that Section 230 allowed Facebook to distribute misinformation. Last year, he took Facebook to task for “killing people” by allowing the spread of covid vaccine misinformation; shortly thereafter, Democratic senator Amy Klobuchar proposed stripping Section 230 protections for health misinformation.
Making false claims about science isn’t necessarily illegal, which is one reason misinformation is so hard to litigate away. There’s a good reason the First Amendment protects shaky scientific claims: imagine researchers and news outlets getting sued for publishing good-faith conclusions that were later proven incorrect, like the early belief that covid was not airborne, given how constantly our understanding of the virus shifted.
Removing Section 230 protections is a sneaky way for politicians to get around the First Amendment. Without 230, the cost of operating a social media site in the United States would skyrocket due to litigation. Unable to invoke a straightforward 230 defense, sites could face protracted lawsuits over even unambiguously legal content. Even if they would ultimately have won in court, web platforms would be incentivized to remove any post that might conceivably be illegal, such as a bad review of a restaurant or an accusation of sexual harassment. All of this would burn time and money in perhaps existential ways. It’s no wonder platform operators do what it takes to keep 230 alive; when politicians gripe, the platforms respond.
The Limits of Defamation Law: Alex Jones, Johnny Depp, and Amber Heard
It’s also not clear how much even a successful defamation suit matters. Alex Jones’ declaration of corporate bankruptcy tied up a large amount of his money and left the Sandy Hook families struggling to collect it. He treated the court proceedings contemptuously and used them to hawk dubious health supplements to his followers. Legal fees and damages have almost certainly hurt his finances, but the legal system has conspicuously failed to meaningfully change his behavior. If anything, it gave him another chance to declare himself a martyr.
Then there is the defamation case brought by Johnny Depp against Amber Heard, who had publicly identified herself as a victim of abuse. Heard was nothing like Jones, but she lacked Depp’s level of celebrity and social media savvy. The case turned into a ritual public humiliation of Heard, fueled partly by the incentives of social media but also by courts’ utter failure to respond to the way things like livestreams contributed to the media circus. Defamation claims can meaningfully hurt people who have a reputation to maintain, while the worst offenders are already beyond shame.
“I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” said Rhode Island Democratic Sen. Sheldon Whitehouse at a hearing last week of the Senate Judiciary Committee. “Our problem is that we want more than repeal: we want ‘230-plus.’ We want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what the ‘XYZ’ are.”
The speech reforms proposed by the Republicans are ludicrously bad. We’ve learned just how bad over the past year, after Republican legislatures in Texas and Florida passed bills effectively banning social media moderation because Facebook and Twitter were using it to ban some posts from conservative politicians, among countless other pieces of content.
The First Amendment is likely to render these bans unconstitutional. They are government speech regulations! Florida’s law was blocked, but the Fifth Circuit Court of Appeals let Texas’ law stand without explaining its reasoning. Months later, that court actually published its opinion, which legal commentator Ken White called “the most angrily incoherent First Amendment decision I think I’ve ever read.”
The Supreme Court temporarily blocked the Texas law, but its recent statements on speech have not been reassuring. Either the Texas or the Florida case will most likely end up before a court that includes Justice Clarence Thomas, who believes the government should be allowed to treat social media like a public utility. (Leave aside that conservatives previously raged against treating ISPs like a public utility in order to regulate them; it will make your brain hurt.)
Three justices, including Thomas, voted against putting the law on hold. (Liberal Justice Elena Kagan did, too, but some have interpreted her vote as a protest against the “shadow docket” where the ruling happened.)
But only a useful idiot would support the laws in Texas and Florida on those grounds. The rules are rigged to favor political targets. They attack “Big Tech” platforms for their power while conveniently ignoring the near-monopolies of other companies, like the internet service providers that control the chokepoints letting anyone access those platforms. The movement is so intellectually bankrupt that Florida exempted Disney from its speech law because of the company’s spending power in the state, then proposed blowing the exemption up to penalize the company for stepping out of line.
And even as they rant about tech platform censorship, many of the same politicians are trying to effectively ban children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. On top of getting books pulled from schools and libraries, a Republican state delegate in Virginia dug up a rarely used obscenity law to try to stop Barnes & Noble from selling the graphic memoir Gender Queer and the young adult novel A Court of Mist and Fury — a suit that, in a victory for a functional American court system, was thrown out earlier this year. The panic over “grooming” affects all Americans, not just its immediate targets. Even as Texas is trying to stop Facebook from kicking off violent insurrectionists, it’s going after Netflix for distributing the Cannes-screened film Cuties under a constitutionally dubious law against “child erotica.”
Meanwhile, if you take all software code to be speech protected by the First Amendment, software-based services become nearly impossible to regulate. Courts haven’t fully worked out whether Section 230 can be used to defend against claims over faulty physical goods and services, but the question remains open for companies whose core offering isn’t speech, just software.
Balk’s Law, the maxim that everything you hate about the internet is actually everything you hate about people, is obviously an oversimplification. Internet platforms change us: they incentivize specific kinds of posts, subjects, linguistic quirks, and interpersonal dynamics. And even if the internet is humanity at scale, it is humanity crammed into a few places owned by powerful companies. It turns out that humanity at scale can be very ugly. A single person’s terrible abuse can spread into a campaign of threats, lies, and terror involving thousands of other people, little of which adds up to a viable legal case.
Two internet moderation cases are on the Supreme Court’s docket, with hearings scheduled for February 21st and February 22nd, respectively.
The author is a fellow at the Bipartisan Policy Center and a former public policy director at Facebook. BPC accepts funding from some tech companies, including Meta and Google, for work supporting their efforts to get authoritative information about elections to users. The views expressed in this commentary are the author’s own. Read more opinion at CNN.
Social media companies doing content moderation have a lot of options to choose from, but their job is usually to make the best choice possible out of a lot of bad options.
When thinking about this problem, it’s important not to tackle it solely by looking at what any given piece of content says. Instead, a multi-pronged approach is needed: one that looks not just at the content itself but also at the behavior of people on the platform, at how much reach content should get, and at giving users more control over what they see in their newsfeeds.
First, every platform must moderate content, even those that claim free expression is their number one value, because moderation is part of what lets everyone safely express what they think.
Why Every Platform Moderates: Spam, Hate Speech, and Harassment
Some content, like child pornography, must be removed under the law. However, users — and advertisers — also don’t want some legal but horrible content in their feeds, such as spam or hate speech.
No one likes being harassed by an online mob. All that does is drive people away or silence them, and that is not a platform for free speech. The former head of trust and safety at Twitter fled his home due to the number of threats he received after Elon Musk criticized him. Some platforms, such as Meta, have increased their efforts to shut down online harassment.
Second, there are more options beyond leaving the content up or taking it down. Meta, for example, says platforms can reduce the distribution of potentially problematic but non-violating content, or give users more context by adding informative labels, instead of removing it.
This option is needed because many of the most engaging posts are borderline, meaning they go right up to the line of the rules without crossing it. A platform may not be comfortable removing content such as clickbait outright, but it may still want to take some action on it.
Free Reach, Shadow Banning, and Transparency
Some people argue that this reduction in reach is a scandal. But others, such as Renée DiResta of the Stanford Internet Observatory, have famously argued that free speech does not mean free reach.
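To make the “reduce reach, don’t remove” option concrete, here is a minimal sketch of a feed-ranking step that demotes posts a borderline-content classifier flags instead of deleting them. Every name, score, and the demotion factor here is an illustrative assumption, not any platform’s actual system.

```python
def rank_feed(posts, borderline_score, demotion_factor=0.1, threshold=0.8):
    """Order posts by engagement, demoting (not removing) borderline ones."""
    def effective_score(post):
        score = post["engagement"]
        if borderline_score(post) >= threshold:
            score *= demotion_factor  # still visible, just shown to fewer people
        return score
    return sorted(posts, key=effective_score, reverse=True)

posts = [
    {"id": "news", "engagement": 50, "borderline": 0.1},
    {"id": "clickbait", "engagement": 90, "borderline": 0.95},
]
ranked = rank_feed(posts, borderline_score=lambda p: p["borderline"])
# The clickbait post stays in the feed but drops below the news post,
# despite having higher raw engagement.
```

The design choice worth noticing is that nothing is deleted: the borderline post keeps its place in the corpus, and only the ordering (its reach) changes.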
This leads to the third point: transparency. Who is making these decisions, and how are they weighing competing priorities? The issue around shadow banning (the term many use for content being shown to fewer people than it otherwise would be, without the creator knowing) isn’t just that someone is upset their content is getting less reach.
They are upset that they don’t know what has happened, and platforms need to do more on this front. Some platforms now let users see on their accounts whether they are eligible to be recommended, and publish rules making accounts that share things like sexually explicit material or clickbait ineligible for recommendation.
Lastly, platforms can give users more control over the types of moderation they are comfortable with. Political scientist Francis Fukuyama calls this “middleware”: software that would let people decide the kinds of content they see in their feeds and determine what they need to feel safe online. Some platforms, such as Facebook, already allow people to switch from a ranked feed to a chronological one.
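The middleware idea can be sketched as a pluggable ranking layer: the platform supplies the raw feed, and the user (or a third party) supplies the policy that orders and filters it. The policy names and post fields below are hypothetical, chosen only to illustrate the separation.

```python
def chronological(feed):
    # Newest first, no algorithmic ranking.
    return sorted(feed, key=lambda p: p["posted_at"], reverse=True)

def engagement_ranked(feed):
    # Platform-style ranking by an engagement signal.
    return sorted(feed, key=lambda p: p["likes"], reverse=True)

def strict_filter(feed):
    # A third-party policy a user might opt into: rank, then drop adult content.
    return [p for p in engagement_ranked(feed) if not p["adult"]]

# The "middleware" registry: the user, not the platform, picks the policy.
POLICIES = {
    "chronological": chronological,
    "ranked": engagement_ranked,
    "strict": strict_filter,
}

def render_feed(feed, choice):
    """Render the same raw feed under whichever policy the user selected."""
    return POLICIES[choice](feed)

feed = [
    {"id": "a", "posted_at": 1, "likes": 9, "adult": False},
    {"id": "b", "posted_at": 2, "likes": 3, "adult": True},
]
```

A user switching from "ranked" to "chronological" here mirrors the feed toggle some platforms already offer; the point of middleware is that this choice, and potentially the policy code itself, moves from the platform to the user.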
Tackling the problem of speech and safety is extremely difficult. We are still in the middle of developing societal norms for what speech we are okay with online and how we hold people accountable for it.
If we are to figure this out, we will need more information about how platforms make these decisions. Regulators, civil society, and academics outside these platforms need to weigh in on how they would make some of these difficult calls; governments need to find appropriate ways to regulate platforms; and users need more ways to control the types of content they see.
Tech companies that are involved in litigation cite the statute to argue that they should not have to answer for helping terrorists by hosting or recommending terrorist content.
The law’s central provision states that websites cannot be treated as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.
The Bipartisan Hatred of Section 230: The Executive Order, the FCC, and the Courts
The executive order targeting Section 230 ran into a number of legal and procedural problems, including the fact that the FCC isn’t part of the judicial branch and doesn’t regulate social media or moderation decisions.
The result is a bipartisan hatred for Section 230, even if the two parties cannot agree on why Section 230 is flawed or what policies might appropriately take its place.
After that deadlock threw much of the momentum for change into the courts, the US Supreme Court now has the chance this term to decide how far Section 230’s protections reach.
Tech critics have called for added legal exposure and accountability. “The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law. It is highly irregular for a global industry that wields staggering influence to be protected from judicial inquiry,” wrote the Anti-Defamation League in a Supreme Court brief.
Defenders of the law say narrowing it would be a bad idea not just for the tech giants but for all of Big Tech’s competitors, because it would undermine what has allowed the internet to flourish. It would potentially put many websites and users into unwitting and abrupt legal jeopardy, they say, and it would dramatically change how some websites operate in order to avoid liability.
Recommendations are the very thing that makes the site a vibrant place, according to the company: users determine which posts gain prominence and which fade into obscurity by upvoting and downvoting content.
The brief argued that such a legal regime would put moderators at risk whenever they recommended a post that turned out to be defamatory, and that people would cease using the site as a result.
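The upvote/downvote mechanism the brief describes can be sketched in a few lines. This is a purely illustrative model of community-driven ranking (net score decides prominence, heavily downvoted posts fade), not the site’s actual algorithm.

```python
def visible_posts(posts, min_score=0):
    """Rank posts by net votes; posts the community votes below the
    threshold fade out of view entirely."""
    scored = [(p["ups"] - p["downs"], p["id"]) for p in posts]
    return [pid for score, pid in sorted(scored, reverse=True) if score >= min_score]

posts = [
    {"id": "helpful answer", "ups": 120, "downs": 4},
    {"id": "off-topic rant", "ups": 3, "downs": 40},
    {"id": "decent comment", "ups": 10, "downs": 2},
]
# The heavily downvoted post drops out; the rest are ordered by net score.
```

In this toy model, the "recommendation" is nothing more than the aggregate of user votes, which is exactly why the brief worried that liability for recommendations would fall on ordinary users and volunteer moderators.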