The Supreme Court hears the internet speech cases

A digital rights group’s defense of Section 230, and a case brought by the family of a college student killed in the Paris attacks that included the Bataclan shooting

“The free and open internet as we know it couldn’t exist without Section 230,” the Electronic Frontier Foundation, a digital rights group, has written. “Important court rulings on Section 230 have held that users and services cannot be sued for forwarding email, hosting online reviews, or sharing photos or videos that others find objectionable. It helps to quickly resolve lawsuits that have no legal basis.”

The case was brought by the family of a college student who was killed in Paris during a series of terrorist attacks that also struck the Bataclan concert hall. Attorneys for the family argued that YouTube used the information the company collected about its users to push Islamic State videos to interested viewers.

“Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” lawyers for the family argued in their petition seeking Supreme Court review.

Section 230, Defamation, and the Lessons of the Sandy Hook Verdict

At the center of two cases to be argued over two days is Section 230 of the 1996 Communications Decency Act, passed by Congress when internet platforms were just beginning. Section 230 draws a distinction between interactive computer service providers and other sources of information. It says that websites are not publishers or speakers and cannot be sued for material that appears on those sites. Essentially, the law treats web platforms the way it treats the telephone: like phone companies, websites that host speech cannot be sued for what the speakers say or do.

Section 230 was passed in 1996 and does not mention personalization. But a review of the statute’s history shows that its authors intended the law to promote a variety of technologies to display, filter, and prioritize user content. On that reading, Section 230’s protections extend to targeted content and personalization technology, and removing them would require Congress to change the law.

Last month, longtime fabulist Alex Jones was hit with a judgment of nearly $1 billion for lying about the 2012 shooting at Sandy Hook Elementary School in Newtown, Connecticut. His behavior — and I say this as a reporter, one of the groups most leery of libel lawsuits — is exactly why defamation isn’t protected by the First Amendment. Jones targeted private citizens who had suffered a terrible loss with ridiculous, years-long, totally unsupported claims. His lies caused harm that continues a decade later, and in theory, the damages should outweigh the tens of millions he made off those claims.

The bigger issue, though not the only one, is that the internet allows people to speak to each other at a scale unprecedented in human history. The laws governing that speech have never been under more strain, and their problematic edge cases have never been so common. And instead of trying to reckon with a new world, the people who make and enforce those laws have abdicated their principles and responsibilities in favor of wielding raw power — and, often, abandoned a lot of their common sense as well.

There are some proposed changes to speech law that are upfront about their aims, like New York Attorney General Letitia James’ call to ban distributing live videos filmed by mass shooters. Legal experts like Danielle Citron have also proposed fixing specific problems created by Section 230, like its de facto protections for small sites that solicit nonconsensual pornography or other illegal content. There are serious criticisms of these approaches, but they are honest attempts to address real legal tradeoffs.

Defamation and the First Amendment: The Case of Johnny Depp and Amber Heard

The law’s key provision reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The law was passed in 1996, and courts have interpreted it expansively since then. It means that websites, newspapers, gossip blogs, listserv operators, and other parties can’t be sued for hosting or reposting illegal speech created by someone else. The law was passed after a pair of seemingly contradictory defamation cases, but it’s been found to cover everything from harassment to gun sales. And because a second clause protects the removal of objectionable content, courts can dismiss most lawsuits over web platform moderation.

The thing is, these complaints get a big thing right: in an era of unprecedented mass communication, it’s easier than ever to hurt people with illegal and legal speech. And the legal system has become part of the problem, one that encouraging more people to sue Facebook wouldn’t fix.

Making false claims about science is not necessarily a crime, so companies wouldn’t be forced to remove them; the First Amendment protects shaky scientific claims. That’s for good reason: given how constantly our understanding of covid shifted, imagine if researchers and news outlets could be sued for publishing good-faith conclusions that were later proved incorrect, like the early belief that covid wasn’t airborne.

Meta and Google, among other tech platforms, have argued in the Twitter case that if the court finds the companies cannot be sued under US antiterrorism law, at least under these circumstances, it could avoid the Section 230 debate altogether, because the claims at issue in both cases would be tossed out.

It’s also not clear whether any of this matters. The Sandy Hook families are struggling to collect from Jones, who declared corporate bankruptcy during the proceedings, tying up much of his money indefinitely. He marketed dubious health supplements off the attention from the court proceedings. Legal fees and damages have almost certainly hurt his finances, but the legal system has conspicuously failed to meaningfully change his behavior, and it gave him another platform to proclaim himself a martyr.

Contrast this with the year’s other big defamation case: Johnny Depp’s lawsuit against Amber Heard, who had identified publicly as a victim of abuse (implicitly at the hands of Depp). Amber Heard’s case was less cut-and-dried than Jones’, but she lacked Jones’ shamelessness or social media acumen. The case turned into a ritual public humiliation of Heard — fueled partly by the incentives of social media but also by courts’ utter failure to respond to the way that things like livestreams contributed to the media circus. Defamation claims can meaningfully hurt people who have to maintain a reputation, while the worst offenders are already beyond shame.

Do we really want to treat social media as a utility? The Republican speech laws in Texas and Florida

“I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” said Rhode Island Democratic Sen. Sheldon Whitehouse at a hearing last week of the Senate Judiciary Committee. The problem, he went on, is that senators want more than a repeal: they want to repeal 230 and then have “XYZ,” and they don’t know what the “XYZ” is.

Republican-proposed speech reforms are ludicrously, bizarrely bad. Over the past year, Republican legislatures in Texas and Florida effectively banned social media moderation because they believed it was being used to censor conservative politicians, giving us a preview of how bad things can get.

As it stands, the First Amendment should almost certainly render these bans unconstitutional. They are government speech regulations! But while an appeals court blocked Florida’s law, the Fifth Circuit Court of Appeals threw a wrench in the works with a bizarre surprise decision letting Texas’ law stand without explaining its reasoning. Months later, that court actually published its opinion, which legal commentator Ken White called “the most angrily incoherent First Amendment decision I think I’ve ever read.”

The Supreme Court temporarily prevented the Texas law from taking effect, but its recent statements on speech have not been reassuring. Clarence Thomas, who has argued that the government should be able to regulate web platforms like utilities, will likely weigh in if the court takes up the Texas or Florida case. (Leave aside that conservatives previously raged against the idea of treating ISPs like a public utility in order to regulate them; it will make your brain hurt.)

Thomas, along with two other conservative justices, voted against putting the law on hold. So did Elena Kagan, though some court watchers believe her vote was a protest against deciding the issue on the emergency docket rather than an endorsement of the law.

The Texas and Florida laws are indefensible on their own terms. The rules are rigged so that political targets are punished at the expense of consistency. They attack “Big Tech” platforms for their power, conveniently ignoring the near-monopolies of other companies like internet service providers, which control the chokepoints letting anyone access those platforms. There is no saving a movement so intellectually bankrupt that it exempted media juggernaut Disney from its speech laws because of the company’s spending power in Florida, then proposed blowing up the entire copyright system to punish the company for stepping out of line.

And even as they rant about tech platform censorship, many of the same politicians are trying to effectively ban children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. A Virginia Republican state delegate dug up a rarely used obscenity law to try to stop Barnes & Noble from selling two books, including the graphic memoir Gender Queer; a judge ultimately threw the case out. And the disingenuous panic over “grooming” doesn’t only affect LGBTQ Americans: even as Texas fights to keep platforms from banning violent insurrectionists, it is attempting to use a constitutionally dubious law against child “erotica” to block content on Facebook.

But once again, there’s a real and meaningful tradeoff here: if you take the First Amendment at its broadest possible reading, virtually all software code is speech, leaving software-based services impossible to regulate. Airbnb and Amazon have both used Section 230 to defend against claims of providing faulty physical goods and services, an approach that hasn’t always worked but that remains open for companies whose core services have little to do with speech, just software.

Balk’s Law (“Everything you hate about The Internet is actually everything you hate about people”) is obviously an oversimplification. Internet platforms change us — they incentivize specific kinds of posts, subjects, linguistic quirks, and interpersonal dynamics. But still, the internet is humanity at scale, crammed into spaces owned by a few powerful companies. And it turns out humanity at scale can be unbelievably ugly. Vicious abuse might come from one person, or it might be spread out into a campaign of threats, lies, or stochastic terrorism involving thousands of different people, none of it quite rising to the level of a viable legal case.

Two Section 230 Cases: Who Is Responsible for Content Posted on Facebook, Wikipedia, and YouTube?

The Supreme Court is scheduled to hear back-to-back oral arguments in two cases that could have a major impact on online speech and moderation.

Tech companies involved in the litigation cite the statute to argue that they shouldn’t face lawsuits over user content they host or recommend.

The law’s main provision states that websites and their users can’t be treated as publishers or speakers of other people’s content. The person or entity that creates the content bears the legal responsibility for publishing it, not the platforms on which the content is shared or the users who re-share it.

The FCC is an independent agency and doesn’t regulate social media, which was one of the legal problems facing the executive order that asked it to reinterpret Section 230.

The result is a bipartisan hatred for Section 230, even if the two parties cannot agree on why Section 230 is flawed or what policies might appropriately take its place.

The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.

Tech critics have called for more legal exposure. “The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law. It is highly irregular for a global industry that wields staggering influence to be protected from judicial inquiry,” wrote the Anti-Defamation League in a Supreme Court brief.

The closely watched Twitter and Google cases carry significant stakes for the wider internet. An expansion of apps and websites’ legal risk for hosting or promoting content could lead to major changes at sites including Facebook, Wikipedia and YouTube, to name a few.

Twitter v. Taamneh: Can Platforms Be Sued for Aiding and Abetting an Act of International Terrorism?

Recommendations are the very thing that makes Reddit a vibrant place, according to the company: users voting posts up and down determine which ones gain prominence and which fade into obscurity.

People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise tortious post that was created by someone else.”
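What the brief describes is, at bottom, a ranking algorithm. As a rough illustration, here is a minimal Python sketch of vote-plus-recency scoring in the spirit of Reddit’s long-published “hot” formula; the constants, field names, and sample posts are assumptions made for the example, not Reddit’s production code.

```python
import math
from datetime import datetime, timezone

# Illustrative only: a simplified vote-plus-recency ranker in the spirit of
# Reddit's long-published "hot" formula. Not Reddit's actual code.

EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)  # arbitrary reference date

def hot_score(ups: int, downs: int, posted_at: datetime) -> float:
    """Score a post by net votes, with newer posts getting a steady boost."""
    net = ups - downs
    # Votes count logarithmically: the first 10 matter about as much
    # as the next 100.
    order = math.log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    age_seconds = (posted_at - EPOCH).total_seconds()
    # Recency term: a post needs roughly 10x the votes of one posted
    # 45,000 seconds (12.5 hours) later to outrank it.
    return sign * order + age_seconds / 45000

posts = [
    {"title": "old but popular", "ups": 5000, "downs": 200,
     "at": datetime(2023, 2, 20, tzinfo=timezone.utc)},
    {"title": "new and modest", "ups": 40, "downs": 5,
     "at": datetime(2023, 2, 21, tzinfo=timezone.utc)},
]
ranked = sorted(posts, key=lambda p: hot_score(p["ups"], p["downs"], p["at"]),
                reverse=True)
print([p["title"] for p in ranked])  # ['old but popular', 'new and modest']
```

Under the legal regime the brief warns about, every design choice in a function like this, from the log scale to the recency bonus, becomes a potential basis for a suit over what it “recommended.”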

On Wednesday, the Court will hear Twitter v. Taamneh, which will decide whether social media companies can be sued for aiding and abetting a specific act of international terrorism when the platforms have hosted user content that expresses general support for the group behind the violence without referring to the specific terrorist act in question.

The litigation seeks to carve content recommendations out of Section 230’s protections, potentially exposing tech platforms to more liability for how they run their services.

The Biden administration has also weighed in on the case. Section 230 of the Communications Decency Act protects both internet platforms and their users from lawsuits over failing to remove third-party content, according to a brief the government filed in December. But, the brief argued, those protections do not extend to Google’s recommendation algorithms, because targeted recommendations represent the company’s own speech, not that of others.

Twitter has said that the fact that ISIS happened to use the company’s platform to promote itself does not constitute Twitter’s “knowing” assistance to the terrorist group, and that in any case the company cannot be held liable under the antiterrorism law because the content at issue in the case was not specific to the attack that killed Alassaf. The Biden administration, in its brief, agreed with that view.

Twitter v. Taamneh: How Social Media Companies Were Warned About ISIS Videos

A number of petitions are currently pending asking the Court to review the Texas law and a similar law passed by Florida. The Court last month delayed a decision on whether to hear those cases, asking instead for the Biden administration to submit its views.

Twitter v. Taamneh, meanwhile, will be a test of Twitter’s legal performance under its new owner, Elon Musk. Like Gonzalez, the suit concerns an Islamic State attack, in this case a separate one carried out in Turkey. Twitter filed its petition before Musk bought the platform, aiming to shore up its legal defenses in case the court took up Gonzalez and ruled unfavorably for Google.

Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.

To keep users online longer, he said, companies built recommendation systems that automatically surface related material a user is likely to engage with.
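As a sketch of the kind of system he describes, here is a deliberately tiny content-based recommender in Python. The videos, tags, and tag-overlap scoring are hypothetical stand-ins; real platforms use large learned models rather than tag counting, but the shape of the loop, in which past engagement drives what gets surfaced next, is the same.

```python
from collections import Counter

# Hypothetical catalog: video IDs mapped to topic tags. Illustrative only.
VIDEOS = {
    "v1": {"cooking", "baking"},
    "v2": {"baking", "desserts"},
    "v3": {"politics", "news"},
}

def recommend(watch_history: list[str], k: int = 2) -> list[str]:
    """Suggest unwatched videos that best overlap the user's viewing history."""
    seen_tags = Counter(tag for vid in watch_history for tag in VIDEOS[vid])
    scores = {vid: sum(seen_tags[t] for t in tags)
              for vid, tags in VIDEOS.items() if vid not in watch_history}
    # The more a candidate resembles what the user already watched,
    # the more prominently it is pushed.
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(["v1"]))  # ['v2', 'v3']: v2 shares the "baking" tag
```

Nothing in a loop like this knows whether the related material is benign cooking videos or terrorist propaganda; it only knows what resembles past engagement.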

He says that modern social media company executives knew the risks of what they were doing. In 2016, he says, they met with high government officials who told them of the dangers posed by ISIS videos, and how they were used for recruitment, propaganda, fundraising, and planning.

“The attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff . . . those government officials . . . told them exactly that,” he says.

The Tech Companies’ Defense: Extremist Content and the Second Case Before the Court

Google says there is no place for extremist content on its products or platforms, and that it has invested in human review and smart detection technology to keep such content off them, according to its general counsel, Halimah DeLaine Prado.

Prado acknowledges that social media companies today are nothing like the social media companies of 1996, when the interactive internet was an infant industry. But, she says, if there is to be a change in the law, that is something that should be done by Congress, not the courts.

The tech companies’ allies in this week’s cases make for some strange bedfellows: the Chamber of Commerce, the American Civil Liberties Union, and dozens of other groups filed briefs urging the court to keep the status quo.

But the Biden administration has staked out a narrower position. In its view, passively presenting or even organizing information is protected, but when a platform crosses the line into actively recommending content, it leaves the protections of 230 behind.

Sorting through billions of pieces of data to return search results, or linking related pieces of content together, would be acceptable on this theory; affirmatively recommending content that encourages illegal conduct would not.

If the Supreme Court were to adopt that position, it would threaten the economic model of today’s social media companies, because there is no easy way to distinguish between recommendation and aggregation.
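A toy example makes the line-drawing problem concrete. In the sketch below, which uses entirely hypothetical data and a deliberately crude scoring function, the same ranking code serves both an explicit search query and a platform-inferred “recommended for you” feed; only the source of the query changes.

```python
# Toy illustration of why "aggregation" and "recommendation" blur together:
# one relevance scorer can answer a user-typed search or fill a recommended
# feed. Hypothetical data and scoring -- not any real platform's system.

DOCS = {
    "a": "how to bake sourdough bread at home",
    "b": "supreme court hears section 230 arguments",
    "c": "sourdough starter troubleshooting guide",
}

def score(query_terms: set[str], text: str) -> int:
    """Crude relevance: count how many query terms appear in the text."""
    return len(query_terms & set(text.split()))

def rank(query_terms: set[str]) -> list[str]:
    """Order all documents by relevance to the query, best first."""
    return sorted(DOCS, key=lambda d: score(query_terms, DOCS[d]), reverse=True)

# "Aggregation": the user supplies the query.
print(rank({"sourdough", "bread"}))    # ['a', 'c', 'b']

# "Recommendation": the platform supplies a query inferred from history.
inferred_interests = {"sourdough", "guide"}
print(rank(inferred_interests))        # ['c', 'a', 'b']
```

If liability turns on whether the platform “recommended” a result, everything hinges on the provenance of `query_terms`, a distinction the ranking code itself never draws.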

These companies would likely defend their actions in court successfully. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things. What’s more, the Supreme Court has made it much more difficult to jump that hurdle. The second case the court hears this week, on Wednesday, deals with just that problem.

Supreme Court justices appeared broadly concerned Tuesday about the potential consequences of allowing websites to be sued for their automatic suggestions of user content, highlighting the challenges faced by attorneys who want to hold Google accountable for suggesting terrorist videos on YouTube.

How – or if – the court draws that line could have significant implications for the way websites choose to rank, display and promote content to their users as they seek to avoid a litigation minefield.

But Eric Schnapper, representing the plaintiffs, argued that a ruling for Gonzalez would not have far-reaching effects because even if websites could face new liability as a result of the ruling, most suits would likely be thrown out anyway.

At one point, Kagan suggested that Schnapper was trying to gut the entire statute: “Does your position send us down the road such that 230 can’t mean anything at all?” she asked.

“You’re creating a world of lawsuits,” she said. “Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit.”

Schnapper countered that such suits would have little chance of prevailing even if the court makes clear that a recommendation can be a basis for liability.

Several justices pressed Schnapper to clarify how the court should treat recommendation software that promotes a terrorist’s video to people who aren’t particularly interested in terrorism.

Schnapper attempted several explanations, including at one point digressing into a hypothetical about the difference between YouTube videos and video thumbnail images, but many of the justices seemed lost as to what rule he was proposing.

Roberts added: “It may be significant if the algorithm is the same across … the different subject matters, because then they don’t have a focused algorithm with respect to terrorist activities… Then it might be harder for you to say that there’s selection involved for which you can be held responsible.”

Barrett raised the issue again in a question for Justice Department lawyer Malcolm Stewart, asking whether, under his position, individual users who like or share someone else’s content would lose Section 230’s protections in that situation.

Stewart said there was a distinction between an individual user making a conscious decision to amplify content and an algorithm making those choices on a systemic basis, but he didn’t provide a clear answer about how changes to Section 230 would affect individual users.

“The anti-terrorism act is the point that’s at issue in this case. But I suspect there will be many, many times more defamation suits,” Chief Justice John Roberts said, pointing to other types of claims that might also flood the legal system if tech companies no longer had broad Section 230 immunity.

Justice Samuel Alito posed a scenario for Schnapper in which a restaurant’s competitor created a video making false claims about the restaurant violating the health code, and YouTube refused to take the video down despite knowing it was defamatory.

Kagan built on the hypothetical, asking what would happen if the platform went further and recommended the false restaurant video, touting it as the greatest video of all time.

Still, the tough grilling that Schnapper and Stewart received suggests some justices are uneasy about narrowing the courts’ longstanding interpretation of Section 230.

Jackson, for her part, questioned whether Congress’s concern was really that the internet would never get off the ground if everybody could sue.

The brief, written by Oregon Democratic Sen. Ron Wyden and former California Republican Rep. Chris Cox, explained that Section 230 arose as a response to early lawsuits over how websites managed their platforms, and was intended to shelter the nascent internet.

Social Media and the Justices: What the Future of the Internet Could Look Like if the Court Narrows Section 230

Justice Sonia Sotomayor asked whether platforms could face liability if they built an algorithm that was itself unlawful, offering the hypothetical of a dating site that refused to match individuals of different races. Justice Barrett returned to the hypothetical in her questioning of Google’s lawyer, Lisa Blatt.

Multiple justices, even ones who seemed unsympathetic to the tech companies’ opponents, suggested that Google and its allies were playing Chicken Little in the stark warnings they gave the court about how a ruling against Google would transform the internet.

Alito skeptically questioned whether the internet really would be ruined if a site could be sued over a defamatory video that it refused to take down.

“Do we need to reach Section 230 if you lose tomorrow? Would you concede that you couldn’t win on that ground?” Barrett asked Schnapper, referring to the Twitter case set for argument the next day.

The justices wrestled Tuesday with what the future of the internet would look like if the Supreme Court narrowed the scope of the law that helped create the age of social media.

On several occasions, the justices said they were confused by the arguments before them – a sign that they may find a way to dodge weighing in on the merits or send the case back to the lower courts for more deliberations. At the very least they seemed spooked enough to tread carefully.

The family sued under a federal law that allows US nationals injured by an act of international terrorism to seek damages.

Questions About Artificial Intelligence, Thumbnails, and Other Technologies

Questions about artificial intelligence, thumbnail images, and other technologies were raised throughout oral arguments. By the end of the day, the justices seemed deeply frustrated with the scope of the arguments before them and unsure of the road ahead.

Justice Samuel Alito said at one point that he was completely confused by the argument Schnapper was making. At another point, Justice Ketanji Brown Jackson said that she was thoroughly confused. Justice Clarence Thomas was still confused halfway through the arguments.

Justice Elena Kagan suggested that Congress, not the court, should step in. “We’re a court. We really don’t know about these things. You know, these are not, like, the nine greatest experts on the internet,” she said, to laughter.

Chief Justice John Roberts tried an analogy to a bookseller, suggesting that Google recommending certain information is no different from a bookseller directing a reader to a table of books with related content.
