The Supreme Court will hear arguments in a pair of cases that could reshape the internet
Gonzalez v. Google: the family of a Paris attack victim takes on YouTube’s recommendations
Passed in 1996 in the early days of the World Wide Web, Section 230 of the Communications Decency Act was meant to nurture startups and entrepreneurs. The legislation’s text recognized that the internet was in its infancy and risked being choked out of existence if website owners could be sued for things that other people posted.
According to the plaintiffs in the case (the family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris), YouTube’s targeted recommendations violated a US antiterrorism law by helping to radicalize viewers and promote ISIS’s worldview. The suit seeks to carve out content recommendations so that they do not receive protections under Section 230.
Lawyers for the family petitioned the Supreme Court to review a lower-court ruling, arguing that the platform helped the organization recruit and train followers from outside Syria and Iraq.
The seemingly simple language of Section 230 belies its sweeping impact. Courts have repeatedly accepted Section 230 as a defense against claims of defamation, negligence and other allegations. In the past, it’s protected AOL, Craigslist, Google and Yahoo, building up a body of law so broad and influential as to be considered a pillar of today’s internet.
Conservatives are still looking for opportunities to end Section 230, even though the Trump-era efforts never came to fruition. They are not alone. Since 2016, when social media platforms’ role in spreading Russian election disinformation broke open a national dialogue about the companies’ handling of toxic content, Democrats have increasingly railed against Section 230.
Some of the most vulnerable Americans stand to be hurt worst by these attacks on First Amendment protections. The Texas and Florida laws are so poorly written that they don’t stop at Big Tech: under some interpretations, Texas’ law means Wikipedia wouldn’t be allowed to remove edits that violate its standards. Republican lawmakers are even attacking spam filters as biased, so the effects aren’t just theoretical. If courts rule the wrong way, your inbox may be about to get a lot messier.
Tech freedom advocates have fought for years against laws that would stifle online communication, a project based on the assumption that this communication is a social good. The limits of this assumption have never been clearer, and the backlash threatens to make things even worse.
Many of the actions that infuriated Republicans were shielded by the First Amendment to the US Constitution, which guarantees free speech. So lawmakers targeted the part of the platforms’ protection that wasn’t constitutionally unassailable: Section 230.
What Section 230 actually says, and what it means for web platform moderation
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The law was passed in 1996, and courts have been interpreting it ever since. It means that websites, newspapers, and other parties can’t be sued for hosting or reposting someone else’s illegal speech. The law was passed in response to a pair of defamation cases, but it has been found to cover everything from harassment to gun sales. It also means courts can dismiss most lawsuits over web platform moderation, particularly since a second clause protects the removal of “objectionable” content.
The internet has changed since then. Extremists and terrorists have weaponized platforms, using them to incite antisemitic, racially, and religiously motivated attacks across the US. And Americans are paying the price in lives lost.
What the Alex Jones and Amber Heard verdicts reveal about defamation law
But making false claims about pandemic science isn’t necessarily illegal, so repealing Section 230 wouldn’t suddenly make companies remove misinformation. The First Amendment protects questionable scientific claims. Imagine if researchers and news outlets could be sued for publishing good-faith conclusions that were later proven wrong, like early claims that covid wasn’t airborne.
If the Court finds that the tech companies can’t be sued under the antiterrorism law, the Section 230 debate could become moot, because the underlying claims would be thrown out either way.
It isn’t clear whether any of it matters. Jones declared corporate bankruptcy during the proceedings, tying up much of his money indefinitely and leaving the Sandy Hook families struggling to chase it. He treated the court proceedings contemptuously and used them to hawk dubious health supplements to his followers. Legal fees and damages have almost certainly hurt his finances, but the legal system has conspicuously failed to meaningfully change his behavior. If anything, it provided yet another platform for him to declare himself a martyr.
Contrast this with the year’s other big defamation case: Johnny Depp’s lawsuit against Amber Heard, who had publicly identified as a victim of abuse (implicitly at Depp’s hands). Heard’s case was less cut-and-dried than Jones’, and she lacked his social media savvy. The trial turned into a ritual public humiliation of Heard, fueled partly by the incentives of social media but also by the courts’ utter failure to reckon with how things like livestreams fed the media circus. Defamation claims can meaningfully hurt people who have a reputation to maintain, while the worst offenders are already beyond shame.
Bipartisan fury, no consensus: Congress, the states, and the fight over Section 230
Senator Sheldon Whitehouse, a Rhode Island Democrat, has said he would be prepared to take a vote on a plain Section 230 repeal, which he believes would clear his committee with virtually every vote. “The problem, where we bog down, is that we want 230-plus. We want to get rid of 230 and have XYZ. We don’t agree on what the XYZ are.”
Republican proposals to regulate online speech are ludicrously bad. Over the past year, Texas and Florida have passed bills effectively banning social media moderation because platforms were allegedly using it to suppress posts from conservative politicians.
As it stands, the First Amendment should almost certainly render these bans unconstitutional: they are government speech regulations! While Florida’s law was blocked by the Eleventh Circuit Court of Appeals, the Fifth Circuit made the strange decision to uphold Texas’ law. Months after that ruling, the court actually published its opinion, which legal commentator Ken White called “the most angrily incoherent First Amendment decision I think I’ve ever read.”
The Supreme Court temporarily blocked the Texas law, but its recent statements on speech haven’t been terribly reassuring. Justice Clarence Thomas has argued that the government should have the right to treat social networks like public utilities, a theory he could press if the Texas or Florida case reaches the Court. Conservatives once argued against treating internet service providers as public utilities precisely to avoid regulating them; trying to reconcile the two positions will make your brain hurt.
Thomas and two other justices voted against putting the law on hold. (Liberal Justice Elena Kagan did, too, but some have interpreted her vote as a protest against the “shadow docket” where the ruling happened.)
But only a useful idiot would support the Texas and Florida laws on those grounds. The rules are transparently rigged to punish political targets at the expense of basic consistency. Internet service providers control the chokepoints through which anyone reaches Big Tech platforms, yet the laws attack the platforms alone, because that’s where the political payoff is. There is no saving a movement so intellectually bankrupt that it exempted media juggernaut Disney from speech laws because of its spending power in Florida, then subsequently proposed blowing up the entire copyright system to punish the company for stepping out of line.
And even as they rant about tech platform censorship, many of the same politicians are trying to effectively ban children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. On top of getting books pulled from schools and libraries, a Republican state delegate in Virginia dug up a rarely used obscenity law to try to stop Barnes & Noble from selling the graphic memoir Gender Queer and the young adult novel A Court of Mist and Fury (a suit that, in a victory for a functional American court system, was thrown out earlier this year). The panic over “grooming” affects all Americans: Texas is trying to stop Facebook from kicking violent insurrectionists off its platform even as the state pursues Netflix for distributing the film Cuties under a constitutionally dubious law that prohibits depictions of “child erotica.”
This points to a genuine tradeoff: because almost everything an online service does involves third-party speech, Section 230 can shield services that have little to do with speech at all. Airbnb and Amazon have both used Section 230 to defend against claims of providing faulty physical goods and services, an approach that hasn’t always worked but that remains open to companies whose core business is software, not speech.
Balk’s Law (“everything you hate about the Internet is actually everything you hate about people”) is a simplification: internet platforms shape our behavior by rewarding specific kinds of posts. But the internet is still humanity at scale, crammed into spaces owned by a few powerful companies, and humanity at scale can be very ugly. Vicious abuse can come from a single person or be spread across a campaign of threats, lies, and terrorism involving thousands of people, none of it rising to the level of a viable legal case.
The Supreme Court has scheduled arguments for two major internet moderation cases in February 2023. As noted by Bloomberg reporter Greg Stohr, hearings for Gonzalez v. Google and Twitter v. Taamneh are scheduled for February 21st and February 22nd, respectively.
In its petition, Twitter argues that merely failing to ban terrorists who use a general-purpose platform isn’t a violation of antiterrorism law. Under the lower court’s reasoning, the company says, it is difficult to see how any provider of ordinary services could avoid liability for terrorism.
The law’s central provision holds that websites (and their users) cannot be treated legally as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.
The case against Section 230: the FCC, the Biden administration, and the courts weigh in
The Trump-era executive order targeting Section 230 faced two glaring legal problems: the FCC isn’t part of the judicial branch that interprets the law, and it doesn’t regulate social media or content moderation decisions.
The result is a bipartisan hatred for Section 230, even if the two parties cannot agree on why Section 230 is flawed or what policies might appropriately take its place.
The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.
Tech critics have called for added legal exposure and accountability. “The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law,” the Anti-Defamation League wrote in a brief, arguing that it is irregular for a global industry that wields staggering influence to be protected from judicial inquiry.
The closely watched Twitter and Google cases carry significant stakes for the wider internet. An expansion of apps and websites’ legal risk for hosting or promoting content could lead to major changes at sites including Facebook, Wikipedia and YouTube, to name a few.
Recommendations are essential to what makes Reddit a vibrant place, according to the company: users decide which posts gain prominence and which fade into obscurity.
People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise tortious post that was created by someone else.”
On Wednesday, the Court will hear Twitter v. Taamneh, which will decide whether social media companies can be sued for aiding and abetting a specific act of international terrorism when the platforms have hosted user content that expresses general support for the group behind the violence without referring to the specific terrorist act in question.
The Gonzalez plaintiffs, meanwhile, seek to carve out content recommendations from Section 230’s protections, potentially exposing tech platforms to far more liability for how they run their services.
The Biden administration has also weighed in on the case. In its brief, the government argued that Section 230 of the Communications Decency Act protects internet companies from lawsuits over third-party content they host or decline to remove, but that a platform’s own recommendations are its own speech, which doesn’t merit the same protection.
In Taamneh, Twitter argues that any assistance it provided the terrorist group wasn’t a violation of the antiterrorism law because it bore no relation to the attack at issue. The Biden administration, in its brief, largely agreed with that view.
Musk’s Twitter, Google, and the other cases waiting on the Supreme Court
Several petitions seeking Supreme Court review of the Texas law are currently pending; the court has delayed a decision on whether to hear those cases and asked the Biden administration to submit its views.
Twitter’s new owner, Elon Musk, will be watching to see how the company’s legal strategy fares compared with that of his predecessors. Like Gonzalez, the suit concerns an ISIS attack, this one in Turkey, and whether the group received material support from the platform. Before Musk purchased Twitter, the company petitioned the court to take up its case in the event the justices ruled against Google in Gonzalez.
Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.
“Now most of the money is made by advertisements, and social media companies make more money the longer you are online,” he says, adding that one way to do that is by algorithms that recommend other related material to keep users online longer.
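To make that mechanism concrete, here is a minimal sketch, in Python, of the kind of engagement-driven ranking Schnapper describes. Every name and number in it is invented for illustration; no platform’s actual system is public or this simple.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # hypothetical engagement estimate
    similarity_to_history: float    # hypothetical relevance estimate, 0 to 1

def rank_recommendations(candidates: list[Video]) -> list[Video]:
    # Score each candidate by relevance times expected engagement.
    # Note what's absent: nothing asks whether the content is harmful.
    # The objective is purely time-on-site.
    return sorted(
        candidates,
        key=lambda v: v.similarity_to_history * v.predicted_watch_minutes,
        reverse=True,
    )

feed = rank_recommendations([
    Video("Cooking basics", 4.0, 0.2),
    Video("Extreme rant compilation", 12.0, 0.7),
    Video("Local news recap", 3.0, 0.6),
])
print([v.title for v in feed])
# ['Extreme rant compilation', 'Local news recap', 'Cooking basics']
```

The most engaging match floats to the top regardless of what it contains, and that design choice, not any individual video, is what the plaintiffs want courts to be able to scrutinize.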
In the plaintiffs’ view, social media companies should no longer get to have their cake and eat it too: they must be held to account when they fail to use their ample resources and technology to prevent incitement to violence, even as they earn enormous sums from their platforms.
The tech industry’s defense, and the justices’ doubts
“The attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff . . .” he says, listing the government officials he says he warned.
Google’s general counsel, Halimah DeLaine Prado, says that there is no place for extremist content on any of the company’s products and platforms, and that Google has invested heavily in human review to make sure of that.
Prado maintains that today’s social media companies are, in the eyes of the law, no different from the interactive computer services of 1996, when the industry was in its infancy. But, she says, if the law is to change, that is something for Congress to do, not the courts.
There are many “strange bedfellows” among the tech companies’ allies in this week’s cases. The Chamber of Commerce and the American Civil Liberties Union were two of 48 groups that asked the court to leave the status quo in place.
The Biden administration had a different position. Columbia law professor Timothy Wu summarizes the administration’s position this way: “It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230.”
Under that view, providing links, grouping related content together, and sorting through billions of pieces of data for search engines would remain protected; affirmatively recommending content, especially content that urges illegal conduct, would not.
If the Supreme Court were to adopt that position, it would threaten the economic model of today’s social media companies, and the tech industry argues there is no principled way to draw a line between recommendation and aggregation.
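A toy sketch suggests why the industry says the line is blurry. The data and names below are invented, and real ranking systems are vastly more complex, but the structural point holds: the same sorting code can serve a search results page (“organizing information”) or an unprompted home feed (“recommending content”), with only the input differing.

```python
def rank(items, score):
    # Order items from highest to lowest score.
    return sorted(items, key=score, reverse=True)

posts = [
    {"id": 1, "text": "knitting tips", "clicks": 120},
    {"id": 2, "text": "propaganda clip", "clicks": 900},
    {"id": 3, "text": "cat video", "clicks": 450},
]

# "Aggregation": ranking items that match an explicit user query.
search_results = rank(
    [p for p in posts if "video" in p["text"] or "clip" in p["text"]],
    score=lambda p: p["clicks"],
)

# "Recommendation": ranking everything for a user who asked for nothing.
home_feed = rank(posts, score=lambda p: p["clicks"])

print([p["id"] for p in search_results])  # [2, 3]
print([p["id"] for p in home_feed])       # [2, 3, 1]
```

Under the administration’s proposed line, the first call might keep its Section 230 cover while the second loses it, even though the ranking logic is identical.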
Even then, the companies would likely defend their conduct in court, and filing suit is different from clearing the hurdle of showing enough evidence to justify a trial, a hurdle the Supreme Court itself has made difficult to jump. The court hears the second case, Twitter v. Taamneh, on Wednesday.
The US Supreme Court erupted in laughter on February 21st when Justice Elena Kagan admitted that the court wasn’t familiar with all of these issues: “We are not, like, the nine greatest experts on the internet.”
One prominent example of this supposedly “biased” enforcement is Facebook’s 2018 decision to ban Alex Jones, the host of the right-wing Infowars website, who was later hit with roughly $1.5 billion in damages for defaming the families of the victims of the Sandy Hook mass shooting.
Editor’s Note: Former Amb. Marc Ginsberg is the founder and president of the Coalition for a Safer Web, a nonprofit organization dedicated to developing technologies and policies to expedite the permanent de-platforming of hate and extremist incitement on social media platforms. The views expressed in this commentary are his own.
Section 230 treats the platforms as benign providers of digital space, with limited liability to the customers who choose to use it. Without that shield, it was thought, internet companies would be in big trouble, facing a torrent of lawsuits over defamatory content posted by their users.
Advertisers can either rely on the platforms to remove offensive content or hope that watchdog groups will flag extremist material they wouldn’t want their brands associated with. In the meantime, days or even months can go by before a platform removes offensive accounts. In other words, advertisers have no ironclad assurance from social media companies that their ads won’t wind up sponsoring extremist accounts.