The Supreme Court has taken up a challenge to the legal shield protecting social media platforms

Politicians Say They Hate Section 230. What They Really Hate Is the First Amendment

At the center of the legal battle is Section 230 of the Communications Decency Act, enacted in 1996, which is widely understood to provide broad liability protections to tech platforms.

The case against Google asks whether the company can be sued over its platform's promotion of terrorist videos.

In their petition to the Supreme Court, the family's lawyers argued that ISIS used the videos to recruit members from areas far beyond the territory it held in Syria and Iraq.

Section 230 explicitly protects not only platforms but also users from being held liable for content posted by third parties. According to several amicus briefs, a change that exposes tech companies to new lawsuits could therefore have implications for ordinary users as well.

Platform moderation has become one of the few enforcement mechanisms for punishing bad behavior online. When bad behavior goes unpunished and unaddressed, spiraling out into something worse and worse, people look for something to blame. Politicians can't easily blame the First Amendment, so they blame Section 230 instead. That's the takeaway: whenever politicians talk about regulating Big Tech or changing 230, they are almost always talking about imperiling the First Amendment.

The concerns (and concern-trolling) around "cancel culture" and "illiberalism" have never been louder. Yet the civil liberties the First Amendment protects are being gutted by legislators and judges.

Tech freedom advocates have fought for years against laws that would stifle online communication, a project built on the assumption that this communication is a social good. The limits of that assumption have never been clearer, and the backlash is threatening to make things worse.

A lot of American politicians love the First Amendment. Many of them hate Section 230 of the Communications Decency Act. But the more they say about 230, the clearer it becomes that they actually hate the First Amendment and think Section 230 is just fine.

What Section 230 Actually Does

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law was enacted in 1996, and courts have interpreted it broadly since. It effectively means that internet services can't be sued for hosting or reposting someone else's illegal speech. Though passed in the wake of defamation lawsuits, it has been found to cover everything from harassment to gun sales. It also means courts can dismiss most lawsuits over web platform moderation, particularly since a second clause protects the removal of "objectionable" content.

In an era of unprecedented mass communication, it is easier than ever to hurt people with both illegal and legal speech. The legal system has struggled to respond, and people have been encouraged to fill the gap by suing platforms like Facebook.

But making false claims about pandemic science isn’t necessarily illegal, so repealing Section 230 wouldn’t suddenly make companies remove misinformation. The First Amendment protects shaky scientific claims. Think of how constantly our early understanding of covid shifted — and now imagine researchers and news outlets getting sued for publishing good-faith assumptions that were later proven incorrect, like covid not being airborne.

Removing Section 230 protections is a sneaky way for politicians to get around the First Amendment. Litigation costs would make running a social media site in the US far more expensive. Without a simple 230 defense, sites could face lengthy lawsuits over perfectly legal content, like negative restaurant reviews. Even if the platforms ultimately won those cases, the fights would burn time and money, so they would be incentivized to preemptively remove anything legally risky. It's no wonder platform operators do what it takes to keep 230 alive. When politicians gripe, the platforms respond.

What Defamation Law Couldn't Fix: Alex Jones and Depp v. Heard

It's also not clear whether any of it mattered. Jones declared corporate bankruptcy during the proceedings, tying up much of his money indefinitely and leaving Sandy Hook families struggling to chase it. He used the court proceedings to promote his health supplement business. Legal fees and damages have almost certainly hurt his finances, but the legal system has conspicuously failed to meaningfully change his behavior. If anything, it provided yet another platform for him to declare himself a martyr.

Johnny Depp filed a lawsuit against Amber Heard, who he knew would publicly identify as a victim of abuse, in an apparent attempt to silence her. Heard's case was less cut-and-dried than Jones', and she lacked Jones' shamelessness and social media acumen. The case turned into a ritual public humiliation, thanks partly to the incentives of social media but also to courts' utter failure to respond to the way things like livestreams fed the media circus. People who have a reputation to maintain are the ones most likely to be hurt by defamation claims.

Up until this point, I’ve almost exclusively addressed Democratic and bipartisan proposals to reform Section 230 because those at least have some shred of substance to them.

Republican-proposed speech reforms are ludicrously, bizarrely bad. We’ve learned just how bad over the past year, after Republican legislatures in Texas and Florida passed bills effectively banning social media moderation because Facebook and Twitter were using it to ban some posts from conservative politicians, among countless other pieces of content.

These bans should be flatly unconstitutional under the First Amendment. They are government speech regulations! But while an appeals court blocked Florida's law, the Fifth Circuit Court of Appeals threw a wrench in the works with a bizarre surprise decision upholding Texas' law without explaining its reasoning. Months later, that court finally published its opinion, which legal commentator Ken White called "the most angrily incoherent First Amendment decision I think I've ever read."

According to the company, its policies have not changed, but its approach to enforcement will rely more on de-amplifying tweets that violate them, something the platform was already doing. "Freedom of speech," the blog post stated, "not freedom of reach."

Humanity at Scale: The Incoherence of the Texas and Florida Laws

Thomas and two other conservative justices voted against putting the Texas law on hold. (Liberal Justice Elena Kagan did, too, but some have interpreted her vote as a protest against the "shadow docket" on which the ruling happened.)

The Texas and Florida laws are hard to defend in good faith. The rules are rigged to punish political targets at the expense of consistency. They attack "Big Tech" platforms for the power they hold while ignoring the internet service providers that control the choke points letting anyone access those platforms. There is no saving a movement so intellectually bankrupt that it exempted media juggernaut Disney from its speech laws because of the company's spending power in Florida, then subsequently proposed blowing up the entire copyright system to punish the company for stepping out of line.

Many of the same politicians are trying to restrict children from accessing media that acknowledges the existence of trans, gay, or gender nonconforming people. A Virginia Republican state delegate dug up a rarely used obscenity law to try to stop Barnes & Noble from selling the graphic memoir Gender Queer and a young adult novel. The panic over "grooming" does not fall on all Americans equally. And even as Texas tries to force social networks to carry the posts of violent insurrectionists, it is trying to keep pornography off of them.

The tradeoff here is that almost all software code is speech, which could render software-based services nearly impossible to regulate. Both Amazon and Airbnb have invoked Section 230 to fight claims that they provided faulty physical goods and services, an approach that has not always worked but remains open to companies whose business has little to do with speech and everything to do with software.

Balk’s Law is obviously an oversimplification. Internet platforms change us — they incentivize specific kinds of posts, subjects, linguistic quirks, and interpersonal dynamics. But still, the internet is humanity at scale, crammed into spaces owned by a few powerful companies. And it turns out humanity at scale can be unbelievably ugly. Vicious abuse might come from one person, or it might be spread out into a campaign of threats, lies, or stochastic terrorism involving thousands of different people, none of it quite rising to the level of a viable legal case.

The release of internal documents comes as Musk tries to change perceptions of the platform. The billionaire has said he wanted to do away with permanent user bans, and the social network recently restored the accounts of thousands of previously banned users. But Musk has also said he doesn't want Twitter to "become a free-for-all hellscape" and plans to moderate content in a way that appears largely consistent with Twitter's prior policies.

Musk said Twitter is working on a software update that will show users their true account status, so that anyone who has been shadow banned can see why and knows how to appeal. He didn't give further details or a timetable.

His announcement came amid a new release of internal Twitter documents on Thursday, sanctioned and cheered by Musk, that once again placed a spotlight on the practice of limiting the reach of certain, potentially harmful content — a common practice in the industry that Musk himself has seemingly both endorsed and criticized.

Last month, Musk announced a new policy of "freedom of speech, not freedom of reach" at the social network: negative and hate tweets will be de-boosted and demonetized so that no revenue can come from them.

With that announcement, Musk, who has said he now votes Republican, prompted an outcry from some conservatives, who accused him of continuing a practice they opposed. The clash reflects an underlying tension at Twitter under Musk, as the billionaire simultaneously has promised a more maximalist approach to “free speech,” a move cheered by some on the right, while also attempting to reassure advertisers and users that there will still be content moderation guardrails.
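Mechanically, "freedom of speech, not freedom of reach" describes a ranking intervention rather than removal. Below is a minimal sketch of the idea, with hypothetical field names and weights that are not Twitter's actual enforcement code: flagged posts stay visible but are pushed down the feed and cut off from ad revenue.

```python
def apply_reach_limits(posts: list[dict]) -> list[dict]:
    """De-amplification sketch: flagged posts remain visible
    ("freedom of speech") but lose ranking weight and ad revenue
    ("not freedom of reach")."""
    for post in posts:
        if post["flagged"]:
            post["rank_score"] *= 0.1  # de-boost: push far down the feed
            post["monetized"] = False  # demonetize: no ads attach to it
    # Feed order is determined by the adjusted scores.
    return sorted(posts, key=lambda p: p["rank_score"], reverse=True)

feed = [
    {"text": "ordinary post", "flagged": False, "rank_score": 1.0, "monetized": True},
    {"text": "hateful post", "flagged": True, "rank_score": 1.2, "monetized": True},
]
for post in apply_reach_limits(feed):
    print(post["text"], post["rank_score"], post["monetized"])
```

A real system would weigh far more signals, but the basic shape (demote and demonetize rather than delete) is what both the blog post and the Twitter Files describe.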

The second installment of the so-called Twitter Files was shared by journalist Bari Weiss on the platform and focused on how the company has restricted the reach of certain accounts and topics it deemed potentially harmful.

Weiss said the actions were taken without users' knowledge. Twitter has said that when accounts break its rules, it may in some cases apply "strikes" that correspond with suspensions; users receive a notification when their accounts have been temporarily suspended.

From the Twitter Files to the Supreme Court: Social Media and Terrorism

In both instances, the internal documents appear to have been given to the journalists by Musk himself. On Friday, Musk shared Weiss' thread in a tweet, adding "The Twitter Files, Part Deux!!" alongside a pair of popcorn emojis.

Weiss offered several examples of right-leaning figures whose accounts faced moderation actions, but it's not clear whether comparable actions were taken against left-leaning or other accounts.

She pointed to the tech case the court will hear Wednesday, in which the justices will consider whether an anti-terrorism law applies to internet platforms that fail to adequately remove terrorism-related content. The same law is being used by a small group of plaintiffs to sue the platforms.

Tech companies involved in the litigation have cited the statute as a reason they shouldn't have to face such lawsuits.

The law’s central provision holds that websites (and their users) cannot be treated legally as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.

The executive order faced a number of legal and procedural problems, not least of which was the fact that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency that, by law, does not take direction from the White House.

The result is a bipartisan hatred for Section 230, even if the two parties cannot agree on why Section 230 is flawed or what policies might appropriately take its place.

The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.

Gonzalez v. Google: Section 230 Reaches the Supreme Court

Tech critics want legal exposure and accountability. The social media industry has largely been shielded from the courts and the normal development of a body of case law. The Anti-Defamation League called it irregular for a global industry to be protected from judicial inquiry.

Gutting those protections would be a bad idea, the tech giants and many of their competitors argue, because it would undermine the internet itself: websites would change how they work to avoid being sued, and many sites and users would be put in legal jeopardy.

Recommendations are what make the site a vibrant place, wrote Reddit and several of its volunteer moderators in one brief. "It is users who upvote and downvote content, and thereby determine which posts gain prominence and which fade into obscurity."

The brief warns of a legal regime that could threaten ordinary people with ruin simply for recommending a post.
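The voting mechanism the brief describes can be sketched in a few lines. This is a minimal illustration with hypothetical names, not Reddit's actual ranking code; the point is that a post's prominence is just an aggregate of users' votes, which is why liability for "recommending" content would land on the users doing the voting.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

def rank_posts(posts: list[Post]) -> list[Post]:
    """The "recommendation" is just an aggregate of user votes: the
    highest net score gains prominence, the lowest fades into obscurity."""
    return sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

front_page = rank_posts([
    Post("Homemade pilaf recipe", upvotes=120, downvotes=4),
    Post("Grainy flyer for a local event", upvotes=3, downvotes=18),
])
for post in front_page:
    print(post.title)
```

On this model, the platform's own code only tallies votes; the editorial judgment, such as it is, comes from the users.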

The arguments today were a relief after all the drama of the past year. Even Justice Clarence Thomas, who's written some spine-tinglingly ominous opinions about "Big Tech" and Section 230, spent most of his time wondering why YouTube should be punished for providing an algorithmic recommendation system that covered terrorist videos alongside ones about cute cats and "pilaf from Uzbekistan." As reassurances go, that might be the best anyone could hope for.

The lawsuit seeks to carve content recommendations out of Section 230's protections, potentially exposing tech platforms to far more liability for how they run their services.

The facts in the Twitter and Google cases are similar, even if they pose different legal questions. And that's why, as Barrett suggested, a finding that Twitter is not liable under the ATA might also resolve the Google case without the need to weigh in on Section 230.

Twitter argues it cannot be held responsible for the attack, because merely hosting terrorist content, none of it tied to the attack itself, did not amount to giving assistance to the terrorist group. The Biden administration, in its brief, has agreed with that view.

Twitter v. Taamneh and the Challenges to the Texas and Florida Laws

A number of petitions are currently pending asking the Court to review the Texas law and a similar law passed by Florida. The Court last month asked the Biden administration to submit its views on the cases, delaying a decision on whether to hear them.

Twitter v. Taamneh, meanwhile, will be a test of Twitter's legal operation under its new owner, Elon Musk. Like Gonzalez, the suit concerns an Islamic State attack, in this case one in Turkey, but it turns on whether the platform gave material aid to terrorists rather than on Section 230. Twitter filed its petition before Musk bought the platform, seeking to shore up its legal defenses in case the court ruled against Google.

Under Schnapper’s interpretation, could liking, retweeting or saying “check this out” expose individuals to lawsuits that they could not deflect by invoking Section 230?

He said that most of the money in social media comes from advertisements, and that the companies make more money the longer users stay online.

Social media executives, he argued, were well aware of the dangers of what they were doing: they met with government officials who warned them about the threat posed by the videos and how they were being used for recruitment.

Among the government officials he said delivered those warnings were the attorney general, the director of the FBI, the director of national intelligence, and the then White House chief of staff.

Google's Defense and the Biden Administration's Narrower Position

"We believe that there's no place for extremist content on any of our products or platforms," says Google general counsel Halimah DeLaine Prado, noting that Google has "heavily invested in human review" and "smart detection technology" to "make sure that happens."

Prado acknowledges that today's social media companies bear little resemblance to those of 1996. But, she says, if the law is to change, that is something Congress should do, not the courts.

There are many "strange bedfellows" among the tech companies' allies in this week's cases. Groups ranging from the conservative Chamber of Commerce to the civil-libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.

But the Biden administration has a narrower position. Columbia law professor Timothy Wu summarizes the administration’s position this way: “It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230.”

In short: hyperlinks, grouping certain content together, and sorting through billions of pieces of data for search engines are all OK, but actually recommending content that shows or urges illegal conduct is another matter.

If the Supreme Court adopted that position, it would threaten the viability of social media companies today. The tech industry says there is no easy way to distinguish between aggregating and recommending.
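The industry's difficulty is easier to see in code. In the hedged sketch below (hypothetical names, not any company's actual system), one ranking function does the "sorting" when it answers an explicit search query and the "recommending" when it runs against interests inferred from a user's history; the code is identical, and only the input differs.

```python
def score(item: dict, interests: set[str]) -> int:
    """Score an item by how many of its topic tags overlap the interests."""
    return len(interests & set(item["tags"]))

def rank(items: list[dict], interests: set[str]) -> list[dict]:
    # One ranking function. Is this "sorting through billions of pieces
    # of data" or "recommending content"? The logic is the same either way.
    return sorted(items, key=lambda it: score(it, interests), reverse=True)

catalog = [
    {"title": "Cute cat compilation", "tags": ["cats", "funny"]},
    {"title": "Pilaf from Uzbekistan", "tags": ["cooking", "rice"]},
]

# "Aggregating": the user typed an explicit search query about cooking.
print(rank(catalog, interests={"cooking"}))

# "Recommending": the same interests were inferred from watch history.
print(rank(catalog, interests={"cats", "funny"}))
```

Under the administration's proposed line, the second call would arguably lose Section 230 protection while the first kept it, even though nothing in the function changed.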

Adopting the administration's position would also likely mean that these companies would constantly be defending their conduct in court. Still, filing suit is not the same as clearing the hurdle of showing enough evidence to justify a trial, and the Supreme Court has made that hurdle much harder to jump. The second case the court hears this week, on Wednesday, deals with just that problem.

Supreme Court justices appeared broadly concerned Tuesday about the potential unintended consequences of allowing websites to be sued for their automatic recommendations of user content, highlighting the challenges facing attorneys who want to hold Google accountable for suggesting YouTube videos created by terrorist groups.

Wherever the court draws that line, it will have significant implications for how websites choose to rank, display, and promote content to their users as they try to avoid litigation.

Eric Schnapper, the attorney for the Gonzalez family, argued that a ruling for Gonzalez would have no far-reaching effects because most such suits would be thrown out anyway.

At one point Kagan suggested that Schnapper was trying to gut the entire statute: “Does your position send us down the road such that 230 can’t mean anything at all?” she asked.

"You are creating a world of lawsuits," she continued. "Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit."

“I wouldn’t necessarily agree with ‘there would be lots of lawsuits’ simply because there are a lot of things to sue about,” Stewart said, “but they would not be suits that have much likelihood of prevailing, especially if the court makes clear that even after there’s a recommendation, the website still can’t be treated as the publisher or speaker of the underlying third party.”

Multiple justices asked Schnapper to clarify how the court should treat recommendation algorithms that behave the same way for a user interested in terrorism as for one interested in cooking.

Schnapper attempted several explanations, including at one point digressing into a hypothetical about the difference between YouTube videos and video thumbnail images, but many of the justices seemed lost as to what he was calling for.

One justice suggested that if the selection algorithm works the same way across the board, it might be harder to say there is a choice for which the platform can be held responsible.

Barrett raised the issue again in a question for Justice Department lawyer Stewart. She asked: “So the logic of your position, I think, is that retweets or likes or ‘check this out’ for users, the logic of your position would be that 230 would not protect in that situation either. Correct?”

Stewart drew a distinction between an individual user who makes a conscious decision to amplify content and an automated system that makes such decisions on a systemic basis, but he did not give a clear answer on how changes to Section 230 would affect users.

Tech law experts believe that if Section 230's protections are weakened, a wave of defamation litigation will follow.

Justice Samuel Alito posed a hypothetical to Schnapper: a competitor of a restaurant makes a video falsely accusing it of violating the health code, and the platform refuses to take the video down even though it knows the accusation is false.

Alito then asked what happens if the platform recommends that false video to the public as "the best video ever" while saying nothing about its content.

Though Google’s attorney, Lisa Blatt, did not get the tough grilling that Schnapper and Stewart received, some justices hinted at some discomfort with how broadly Section 230 has been interpreted by the courts.

Justice Jackson pushed back on claims about Congress' intent when it enacted the relevant provision in 1996. Blatt had argued that Congress' move to broadly protect tech platforms from legal liability is what got the internet off the ground.

Section 230 was written in response to lawsuits over how websites manage their platforms, and it was intended to shelter the growing internet, as explained in the brief written by the law's co-authors, Ron Wyden and Chris Cox.

Hypotheticals and Chicken Little Warnings: The Justices Press Google

Justice Sotomayor suggested platforms could be held liable for building a discriminatory algorithm, offering the example of a dating site that would not match individuals of different races. Justice Barrett returned to the hypothetical in her questioning of Blatt.

Multiple justices, even those unsympathetic to the tech critics' arguments, suggested that Google and its allies were playing Chicken Little in the stark warnings they gave the court about how a ruling against Google would transform the internet.

"Would Google collapse and the internet be destroyed if YouTube and therefore Google were liable for posting and refusing to take down videos that it knows are defamatory and false?" Alito asked Blatt.

"If you lose tomorrow, do we even have to reach the Section 230 question here? Would you concede that you would lose on that ground?" Barrett asked Schnapper, alluding to the Twitter case being argued the next day.

On Tuesday, the nine justices began the work of figuring out what the future of the internet will look like if the Supreme Court narrows the scope of a law that some believe created the age of modern social media.

Because the justices were wading into this territory for the first time, the court seemed wary of issuing a sweeping decision with unknown ramifications in the case at hand.

The Antiterrorism Act of 1990 authorizes such lawsuits for injuries arising from an act of international terrorism.

"Not the Nine Greatest Experts on the Internet"

The oral argument raised issues from artificial intelligence to thumbnail pop-ups to endorsements and even restaurant reviews. But at the end of the day, the justices seemed deeply frustrated with the scope of the arguments before them and unclear about the road ahead.

“I’m completely confused by whatever you’re saying at the moment,” Justice Alito said early on. “So I guess I’m thoroughly confused,” Justice Ketanji Brown Jackson said at another point. Justice Clarence Thomas was still confused halfway through the arguments.

There was a ripple of laughter in the US Supreme Court on February 21 when Justice Elena Kagan said: “We are a court—we really don’t know about these things. We are not, like, the nine greatest experts on the internet.”

Roberts reached for a bookstore comparison, likening a platform's recommendations to a bookseller sending a reader to a table of books with related content.

Supreme Court Justice Elena Kagan made the wryly self-deprecating comment early in oral arguments for Gonzalez v. Google, a potential landmark case covering Section 230 of the Communications Decency Act of 1996. The remark was a nod to many people's worst fears about the case: Gonzalez could potentially undermine core legal protections for the internet, and it will be decided by a court that has shown itself willing to revisit longstanding speech law.

Introducing liability for algorithms raises many questions. Should Google be punished for returning search results that link to defamation or terrorist content, even if it's responding to a direct search query for a false statement or a terrorist video? Is there a clear answer for a hypothetical website whose algorithm was deliberately written to put it in cahoots with ISIS? While it (somewhat surprisingly) didn't come up in today's arguments, at least one ruling has found that a site's design can make it actively discriminatory, regardless of whether the result involves information filled out by users.

One prominent example of this supposedly "biased" enforcement is Facebook's 2018 decision to ban Alex Jones, the host of the right-wing Infowars website, who was later slapped with $1.5 billion in damages for harassing the families of the victims of a mass shooting.
