The social media world goes before the Supreme Court

The Gonzalez family and the families of other terrorism victims have petitioned the U.S. Supreme Court to review Section 230 of the Communications Decency Act

At the heart of the legal battle is Section 230 of the Communications Decency Act, a nearly 30-year-old federal law that courts have repeatedly said provides broad protections to tech platforms but that has come under scrutiny amid growing criticism of Big Tech’s content moderation decisions.

This week’s cases attempt to thread that needle. The Gonzalez family and the families of other terrorism victims have sued social media companies, saying they aided and abetted terrorism. The families allege that the companies did more than simply provide platforms for communication. Rather, they contend that by recommending ISIS videos to users who might be interested, the companies were seeking to attract more viewers and increase their ad revenue.

“Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” lawyers for the family argued in their petition seeking Supreme Court review.

Section 230’s language is simple, even if its consequences are not. Courts have repeatedly accepted it as a defense against claims of defamation, negligence and other allegations, building up a body of law so broad and influential that it has become a pillar of today’s internet.

Tech critics have called for added legal exposure and accountability. The social media industry has been largely shielded from the courts and from the normal development of a body of law, and the Anti-Defamation League argued in a court brief that it is highly unusual for a global industry with such enormous influence to be so insulated.

Section 230, written in 1995 and passed in early 1996, unsurprisingly does not explicitly mention algorithmic targeting or personalization. Yet a review of the statute’s history reveals that its proponents and authors intended the law to promote a wide range of technologies to display, filter, and prioritize user content. If targeted content or particular personalization technologies are to lose Section 230’s protections, that change would have to come from Congress.

The First Amendment doesn’t matter if the legal system fails

The First Amendment can’t succeed if the legal system doesn’t work. It means little if people can’t be meaningfully penalized for serious violations, or if verdicts are vestigial afterthoughts in contests of clout. It is useless if the courts do not take it seriously.

Rather than seriously grappling with the effects of technology on democracy, lawmakers and courts often use a cultural backlash against “Big Tech” to wage political warfare. Scratch the surface of supposedly “bipartisan” internet regulation, and you’ll find a mess of mutually exclusive demands fueled by reflexive outrage. Some of the First Amendment’s professed defenders are also the ones most open to letting it be dismantled.

Virtually every American politician professes to love the First Amendment. Many of them profess to hate another law: Section 230 of the Communications Decency Act. But the more they say about 230, the clearer it becomes that they actually hate the First Amendment and think Section 230 is just fine.

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law was passed in 1996, and courts have interpreted it expansively since then. It effectively means that websites, newspapers, gossip blogs, and other parties cannot be sued for hosting or reposting someone else’s illegal speech. The law was passed after a pair of seemingly contradictory defamation cases, but it’s been found to cover everything from harassment to gun sales. It also means courts can dismiss lawsuits over web platform moderation, especially since a second clause protects the removal of objectionable content.

The thing is, these complaints get a big thing right: in an era of unprecedented mass communication, it’s easier than ever to hurt people with illegal and legal speech. The legal system’s failure to address that harm is part of why so many people now resort to suing Facebook.

Making false claims isn’t necessarily illegal, so repealing Section 230 wouldn’t make companies remove misinformation. There’s a good reason why the First Amendment protects shaky scientific claims. Think of how constantly our early understanding of covid shifted, and imagine researchers and news outlets getting sued for publishing good-faith assumptions that were later proven incorrect, like covid not being airborne.

Removing Section 230 protections is a sneaky way for politicians to get around the First Amendment. Litigation would drive up the cost of operating a social media site in the United States: without a 230 defense that can be invoked with a straight face, sites would face lengthy lawsuits even over legal content. And when it comes to dicier categories of speech, web platforms would be incentivized to remove posts that might be illegal, anything from unfavorable restaurant reviews to MeToo allegations, even if they would have ultimately prevailed in court. It would burn time and money in other ways, too. It’s no wonder platform operators do what it takes to keep 230 alive. When politicians gripe, the platforms respond.

Even when defamation law works, it may not matter: Alex Jones and Depp v. Heard

It’s not clear that even a successful lawsuit matters. The Sandy Hook families were left struggling to collect from Alex Jones after his declaration of corporate bankruptcy left much of his money tied up indefinitely. He used the court proceedings to sell dubious health supplements to his followers. Legal fees and damages will likely hurt his finances, but the legal system does not seem to have changed his behavior. It simply gave him another platform to declare himself a martyr.

Johnny Depp’s defamation lawsuit against Amber Heard, who had publicly identified herself as a victim of abuse, told a similar story this year. Heard’s case was not as cut-and-dried as Jones’, and she did not display the same level of social media savvy. The trial turned into a ritual public humiliation of Heard, fueled partly by the incentives of social media but also by the courts’ utter failure to respond to the way that things like livestreams contributed to the media circus. Defamation claims can meaningfully hurt people who have a reputation to maintain, while the worst offenders are already beyond shame.

I’ve focused on Democratic and bipartisan proposals to reform Section 230 because they have at least some substance to them.

Republican-proposed speech reforms do not. We’ve learned just how bad they are over the past year, after Republican legislatures in Texas and Florida passed bills effectively banning social media moderation because Facebook and Twitter had used it to remove some posts from conservative politicians, among countless other pieces of content.

As it stands, the First Amendment should almost certainly render these bans unconstitutional. They are government speech regulations! A federal appeals court blocked Florida’s law, but the Fifth Circuit Court of Appeals upheld Texas’ law without initially explaining its reasoning. Months later, that court published its opinion, which legal commentator Ken White called “the most angrily incoherent First Amendment decision I think I’ve ever read.”

The Texas law was temporarily blocked by the Supreme Court, but the court’s recent statements on speech aren’t reassuring. It’s almost certain to take up either the Texas or Florida case, and the case will be heard by a court that includes Clarence Thomas, who has gone out of his way to argue that the government should be able to treat Twitter like a public utility, an approach conservatives vehemently rejected when it was proposed for regulating internet service providers.

Thomas and two other conservative justices voted against putting the law on hold, as did Elena Kagan, whose vote some observers read as a protest against the “shadow docket” process through which the ruling happened.

The Florida and Texas laws are hard to defend on their own terms. They sacrifice basic consistency because they are rigged to punish political targets. They attack Big Tech platforms for their power while ignoring the near-monopolies of other companies, like the internet service providers that control the chokepoints letting anyone access those platforms. There is no saving a movement so intellectually bankrupt that it exempted media juggernaut Disney from Florida’s speech laws because of its spending power in the state, then proposed blowing up the entire copyright system to punish the company for stepping out of line.

And even as they rant about tech platform censorship, many of the same politicians are trying to effectively ban children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. A Republican state delegate in Virginia dug up an obscure obscenity law in a bid to stop Barnes & Noble from selling the graphic memoir Gender Queer and the young adult novel A Court of Mist and Fury. A disingenuous panic over “grooming” doesn’t only affect LGBTQ Americans, either. Even as Texas tries to stop Facebook from kicking off violent insurrectionists, it’s suing Netflix for distributing the Cannes-screened film Cuties under a constitutionally dubious law against “child erotica.”

And that’s without even taking the First Amendment to its broadest possible meaning, under which almost all software code is speech, making software-based services nearly impossible to regulate. Airbnb and Amazon have both used Section 230 to defend against claims of providing faulty physical goods and services, an approach that hasn’t always worked but that remains open to companies whose core services have little to do with speech, just software.

Obviously, blaming a single law is an oversimplification. Internet platforms change us: they incentivize specific kinds of posts, subjects, linguistic quirks, and interpersonal dynamics. But at bottom, the internet is humanity at scale, crammed into spaces owned by powerful companies, and humans at scale can be very ugly. Vicious abuse might come from one person, or it might be spread out into a campaign of threats, lies, or stochastic terrorism involving thousands of different people, none of it quite rising to the level of a viable legal case.

The Supreme Court will hear the two internet moderation cases in February 2023. As noted by Bloomberg reporter Greg Stohr, hearings for Gonzalez v. Google and Twitter v. Taamneh have been scheduled for February 21st and February 22nd, respectively.

In its petition, Twitter argues that regardless of the outcome for Google on Section 230, it’s not a violation of anti-terrorism law to simply fail at banning terrorists who use a platform’s general-purpose services. It’s difficult to say what a provider of ordinary services could do to avoid terrorism liability under that framework, since a lawsuit could always argue the platform should have worked harder to flush out criminals.

The law’s central provision holds that websites (and their users) cannot be treated legally as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.

Section 230 and the First Amendment: the Biden administration weighs in on Gonzalez and Twitter v. Taamneh

Among the legal and procedural problems faced by the Trump administration’s executive order on Section 230: the FCC is not part of the judicial branch, and it does not regulate social media or moderation decisions.

Hatred of Section 230 is bipartisan, even if the two parties can’t agree on what should happen to the law.

The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.

For the tech companies, and even for many of their competitors, gutting Section 230 would be a bad idea because it would undermine the workings of the internet. It could put a lot of websites and users in legal jeopardy and force sites to change how they operate in order to avoid liability.

In a brief written by the company and several volunteer subreddit moderators, Reddit argued that recommendations are central to making the website a vibrant place. Users determine which posts gain prominence and which fade into obscurity by upvoting and downvoting content.

The brief argued that if users faced a serious risk of being sued for recommending a defamatory post, they would stop using the site.

The second case, Twitter v. Taamneh, will decide whether social media companies can be sued for aiding and abetting a specific act of international terrorism when the platforms have hosted user content that expresses general support for the group behind the violence without referring to the specific terrorist act in question.

If such suits can proceed, tech platforms may face far more liability in areas where Section 230’s protections don’t apply.

The Biden administration weighed in on the case as well. In a brief filed in December, it argued that Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.” But the government argued that those protections do not extend to the company’s own speech.

In Taamneh, Twitter argues it cannot be blamed for aiding the terrorist group because the content on its platform was not connected to the attack at issue in the case, a view the Biden administration endorsed in its brief.

Still to come: the Texas law and Twitter’s first big test under Musk

The court has also been petitioned to review the Texas law. Rather than deciding whether to hear those cases, it asked the Biden administration to submit its views.

Twitter v. Taamneh will also be a test of the platform’s legal operation under its new owner, Elon Musk. Like Gonzalez, the suit concerns an Islamic State attack, in this instance one in Turkey. Twitter filed its petition before Musk bought the platform, aiming to shore up its legal defenses in case the court took up Gonzalez and ruled unfavorably for Google.

The lawyer for the terrorism victims will tell the Supreme Court that the economic model has changed: when Section 230 was passed, social media companies generally wanted people to pay for their services.

“Now most of the money is made by advertisements, and social media companies make more money the longer you are online,” he says, adding that one way to keep users online longer is through algorithms that recommend other, related material.

He says that modern social media executives were aware of the dangers posed by their platforms. In 2016, he says, they met with high-ranking government officials who warned them about ISIS videos and how they were being used for recruitment, propaganda, and planning.

Those officials, he says, included the then-White House chief of staff, the director of national intelligence, and the attorney general.

Google’s response: no place for extremist content

“We believe that there’s no place for extremist content on any of our products or platforms,” says Google’s general counsel, Halimah DeLaine Prado, noting that the company has “heavily invested in human review” and “smart detection technology” to “make sure that happens.”

Prado acknowledges that when the internet was an infant industry, social media companies were nothing like they are today. But she says any change in the law should come from Congress, not the courts.

The tech companies’ allies in this week’s cases make for strange bedfellows. A range of groups, including the conservative Chamber of Commerce, have submitted briefs to the court in favor of keeping the status quo.

The Biden administration has staked out a narrower position. Its stance is that Section 230 should still protect more passive conduct, like organizing and displaying information, but that the protections should fall away when a platform actively recommends content.

In short, hyperlinks, grouping certain content together, and sorting through billions of pieces of data for search engines would all be protected; actually recommending content that shows or urges illegal conduct would be another matter.

If the Supreme Court adopted that position, the economic model of social media companies would be in danger, and there is no easy way to distinguish between recommending content and merely organizing it.

And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things. What’s more, the Supreme Court has made it much more difficult to jump that hurdle. The second case being heard this week is about that problem.
