The Supreme Court has heard arguments in a pair of cases that could reshape speech on the internet.

Section 230 and Algorithmic Recommendations: YouTube, Twitter, and the ISIS Cases at the Supreme Court

Narrowing Section 230 so that it no longer covers tools that help users find videos — or help videos find users — would create problems for sites across the internet. The Gonzalez case will show whether the Supreme Court believes that recommendations are an extension of user-generated content, or a separate form of speech by the platform itself. And for Twitter, the companion case will be a test of new owner Elon Musk's appetite for defending his platform in court.

Justice Clarence Thomas pressed the plaintiffs' lawyer to explain how a recommendation system that is standard on YouTube for practically any interest a user might have could suddenly amount to aiding and abetting terrorism simply because the videos watched fall into the ISIS category.

In their petition to the Supreme Court, the family's lawyers argued that YouTube videos were the central way the militant group enlisted support and recruits outside the portions of Syria and Iraq it controlled.

Section 230 attempted to solve the problem that was highlighted in the 1995 ruling in the $200 million defamation lawsuit against Prodigy. A New York trial court judge ruled that because Prodigy had reviewed user messages before posting, used technology that prescreened user content for “offensive language,” and engaged in other moderation, its “editorial control” rendered it a publisher that faced as much liability as the author of the posts. A few years earlier, a New York federal judge had reasoned that because CompuServe did not exert sufficient “editorial control,” it was considered a “distributor” that was liable only if it knew or had reason to know of the allegedly defamatory content.

Section 230, written in 1995 and passed in early 1996, unsurprisingly does not explicitly mention algorithmic targeting or personalization. A review of the statute’s history shows that it was intended to promote a wide range of technologies to display, filter, and prioritize user content. This means that eliminating Section 230 protections for targeted content or types of personalized technology would require Congress to change the law.

These attacks on the First Amendment are already affecting some of the most vulnerable Americans, but they have far-reaching implications for everyone. The rules in Texas and Florida aren’t written carefully enough to only apply to “Big Tech.” Under some interpretations, Texas’ law means Wikipedia wouldn’t be allowed to remove edits that violate its standards. Republican lawmakers are even attacking spam filters as biased, so the effects aren’t just theoretical — if courts rule the wrong way, your inbox may be about to get a lot messier.

Tech freedom advocates have spent years fighting laws that would stifle online communication, arguing that an open internet is a social good. The limits of that assumption have never been clearer, and the backlash threatens to make things even worse.

American politics is built on professed love for the First Amendment. Many politicians also profess to hate another law: Section 230 of the Communications Decency Act. But the more they say about 230, the clearer it becomes that they actually hate the First Amendment and think Section 230 is just fine.

What Section 230 Actually Says — and What It Protects

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law was passed in 1996, and courts have interpreted it expansively since then. It effectively means that web services — as well as newspapers, gossip blogs, listserv operators, and other parties — can’t be sued for hosting or reposting somebody else’s illegal speech. The law was passed after a pair of seemingly contradictory defamation cases, but it’s been found to cover everything from harassment to gun sales. In addition, it means courts can dismiss most lawsuits over web platform moderation, particularly since there’s a second clause protecting the removal of “objectionable” content.

The oft-neglected key here is illegal speech. There are many well-deserved critiques of the internet and social media: that they help spread false stories about supposed witches, let huge crowds dogpile teachers or nurses with angry messages, or facilitate hate speech at scale. Some defamation cases remain viable — like the lawsuits against Fox News over false statements about voting machine manufacturers — but defamation is a high legal bar to meet, and much of what people object to online isn't illegal at all. Joe Biden, for instance, claimed on the campaign trail that Section 230 let Facebook host disinformation. He later took Facebook to task for allowing the spread of vaccine misinformation, and soon afterwards, a senator proposed stripping Section 230 protection for health misinformation.

And making false claims about science — something Section 230 is often blamed for enabling — is not necessarily illegal in the first place. There's a good reason the First Amendment protects shaky scientific claims: imagine if researchers and news outlets could be sued for publishing good-faith conclusions that were later proven incorrect, like the early assumption that covid wasn't airborne.

Section 230 matters for the First Amendment. Without 230, the cost of operating a social media site in the United States would skyrocket due to litigation. Lacking a straightforward 230 defense, platforms could face lawsuits even over legal content, and they would be incentivized to remove any post that drew a legal threat — because even if they would have won in court, fighting each suit would burn time and money. For platform operators, little is more important than keeping 230 alive, which is one reason platforms respond when politicians complain.

Source: https://www.theverge.com/23435358/first-amendment-free-speech-midterm-elections-courts-hypocrisy

Defamation in Practice: Alex Jones, Johnny Depp, and the Courts

It's also not clear whether any of it matters. The Sandy Hook families were left unable to chase down Jones' money after he declared corporate bankruptcy mid-proceedings. He used the court proceedings to promote his health supplements and treated them with open contempt. For all the legal fees and damages hurting his finances, the legal system has not meaningfully changed his behavior. If anything, it handed him yet another platform to declare himself a martyr.

Contrast this with the year's other big defamation case: Johnny Depp's lawsuit against Amber Heard, who had identified publicly as a victim of abuse (implicitly at the hands of Depp). Heard's case was less cut-and-dried than Jones', and she lacked Jones' shamelessness and social media acumen. The case turned into a ritual public humiliation of Heard — fueled partly by the incentives of social media but also by courts' utter failure to respond to the way that things like livestreams contributed to the media circus. Defamation claims can meaningfully hurt people who have to maintain a reputation, while the worst offenders are already beyond shame.

Over the course of the last few years, I have mostly addressed Democratic and bipartisan proposals to fix Section 230 because they have some semblance of substance to them.

Republican-proposed speech reforms are ludicrously, bizarrely bad. We’ve learned just how bad over the past year, after Republican legislatures in Texas and Florida passed bills effectively banning social media moderation because Facebook and Twitter were using it to ban some posts from conservative politicians, among countless other pieces of content.

As it stands, the First Amendment should almost certainly render these bans unconstitutional; the government is rarely allowed to regulate private speech this directly. But while an appeals court blocked Florida's law, the Fifth Circuit Court of Appeals threw a wrench in the works with a bizarre surprise decision upholding Texas' law without explaining its reasoning. Months later, that court finally published its opinion, which legal commentator Ken White called "the most angrily incoherent First Amendment decision I think I've ever read."

The Supreme Court temporarily blocked the Texas law, but its recent statements on speech haven't been terribly reassuring. It's almost certain to take up either the Texas or Florida case, which would be heard by a bench that includes Clarence Thomas, who has gone out of his way to argue that the government should be able to treat Twitter like a public utility. In an irony that will make your brain hurt, conservatives have spent years raging against the idea of regulating internet service providers like utilities.

Thomas, along with two other conservative justices, voted against putting the law on hold. Elena Kagan did as well, though some observers read her vote as a protest against deciding the issue on the shadow docket.

Anyone defending the Texas and Florida laws as principled free speech measures is playing the useful idiot. The rules are rigged to punish companies with the wrong political leanings. They attack the Big Tech platforms because of their power while ignoring the other companies that control the chokepoints through which anyone accesses those platforms. And there is no saving a movement that exempted Disney from Florida's speech law because it had money in the state — then moved to penalize the company the moment it stepped out of line.

Many of the same politicians are trying to stop children from finding media that acknowledges the existence of trans, gay, or gender-nonconforming people. Beyond pulling books from schools and libraries, a Republican state delegate in Virginia dug up a rarely used obscenity law to try to stop Barnes & Noble from selling the graphic memoir Gender Queer. And the disingenuous panic over "grooming" doesn't stop with LGBTQ Americans: even as Texas tries to stop Facebook from kicking off violent insurrectionists, it's suing Netflix for distributing the Cannes-screened film Cuties under a constitutionally dubious law against "child erotica."

But once again, there's a real and meaningful tradeoff here: if you take the First Amendment at its broadest possible reading, virtually all software code is speech, leaving software-based services impossible to regulate. Companies have long invoked Section 230 — not always successfully — to deflect claims over faulty physical goods and services, and the question remains open for companies whose business has less to do with speech than with software.

Complaints about the law often run into an oversimplified version of Balk's Law: everything you hate about the internet is actually everything you hate about people. Platforms do change us and encourage specific types of posts. But the internet is still humanity at scale, crammed into spaces owned by a few powerful companies. At scale, humans can be incredibly ugly — and that ugliness might come from a single person or be spread across a campaign of threats and lies, none of it rising to the level of a viable legal case.

The Supreme Court has scheduled arguments for two major internet moderation cases in February 2023. As noted by Bloomberg reporter Greg Stohr, hearings for Gonzalez v. Google and Twitter v. Taamneh are set for February 21st and February 22nd, respectively.

Tech companies involved in the litigation have cited the 27-year-old statute as part of an argument for why they shouldn’t have to face lawsuits alleging they gave knowing, substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.

The law holds that websites can’t be treated as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.

How the Cases Reached the Court: A Deadlocked Congress and a Failed Executive Order

A Trump-era executive order tried to enlist the FCC against the law, but the FCC is an independent agency that doesn't answer to the president — and it doesn't regulate social media or moderation decisions — which created a number of legal problems for the order.

Both parties still hate Section 230, but they can't agree on what policies should take its place.

That deadlock has thrown the question to the courts, and the US Supreme Court now has the opportunity this term to decide how far the law should reach.

The lawyer representing the plaintiffs repeatedly failed to offer a limiting principle for his argument — one that could spark a flood of lawsuits against powerful sites like Google or threaten the very survival of smaller ones. But some justices also seemed skeptical of the sweeping positions put forward by the platforms' advocates.

A ruling against Google could undermine much of the internet, potentially exposing many websites and users to legal liability and forcing sites to change how they operate to avoid it.

“‘Recommendations’ are the very thing that make Reddit a vibrant place,” wrote the company and several volunteer Reddit moderators. Users determine which posts gain prominence and fade into obscurity by upvoting and downvoting content.
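As a rough illustration of that mechanism — a hypothetical sketch in Python, not Reddit's actual ranking code — vote-driven prominence can be as simple as sorting posts by net score:

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        # Net community vote determines prominence.
        return self.upvotes - self.downvotes

def rank(posts: list[Post]) -> list[Post]:
    # Highest-scoring posts surface first; heavily downvoted ones sink.
    return sorted(posts, key=lambda p: p.score, reverse=True)

front_page = rank([
    Post("Plov recipe from Samarkand", upvotes=120, downvotes=4),
    Post("Spam link", upvotes=2, downvotes=48),
])

The brief's point is that every entry in that sorted list reflects choices made by users, not by the company.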

If casting an upvote could expose someone to a lawsuit, the brief argued, people would stop using Reddit's voting features and community moderators would stop volunteering.

The legal questions in the two cases are different, but the underlying facts are essentially the same. That's why, as Justice Barrett suggested, a finding that Twitter is not liable under the Antiterrorism Act might also resolve the Google case without the need to weigh in on Section 230.

Google has asserted that it's protected by Section 230, but the plaintiffs argue that the law's boundaries are unsettled — the statute, they said in yesterday's legal filing, provides no standard governing recommendations. They're asking the Supreme Court to find that some recommendation systems amount to direct publication — as do some pieces of metadata, including the hyperlinks generated for an uploaded video and the notifications alerting people to it. By extension, they hope, that could make services liable for promoting the underlying content.

There are lots of tricky questions about the limits of liability for algorithmic recommendations. An extreme version, for instance, would make websites liable for delivering search results (which, like almost all computing tasks, are powered by algorithms) that include objectionable material. The suit tries to allay that fear by arguing that search results are meaningfully different because they deliver information the user directly asked for. Even so, it's an attempt to police a core piece of the present-day social media landscape — and not just for terrorism-related content.

Twitter v. Taamneh will also be a test of the company's legal operation under Musk. The suit concerns an Islamic State attack in Turkey and asks whether the company provided material aid to terrorists by hosting their content. Twitter filed its petition before Musk bought the platform, aiming to shore up its legal defenses in case the court took up Gonzalez and ruled against Google.

How the court rules could transform American law, American society, and the social media platforms that rank among the most valuable businesses in the world.

Lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies expected to make money by charging users for their services — but the business model has since changed.

Today, he said, most of the money comes from advertising, so the longer companies can keep users online, the more money they make.

Modern social media executives, he contends, knew the dangers of what they were doing. In 2016, he says, they met with high-ranking government officials who warned them of the dangers posed by ISIS videos and how the group used them for recruitment, propaganda, fundraising, and planning.

Those officials, he says — "the attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff" — told the companies exactly that.

What Happens When You Leave Behind the Protections of 230?

Google's general counsel, Halimah DeLaine Prado, says the company believes there is no place for extremists on its products and platforms, and that it invests in "smart detection technology" to make sure of it.

Prado acknowledges that the companies of 1996 are not the same as today’s social media companies. But, she says, if there is to be a change in the law, that is something that should be done by Congress, not the courts.

The tech companies' allies make for strange bedfellows: the US Chamber of Commerce and the American Civil Liberties Union have both urged the court to preserve the status quo.

But the Biden administration has a narrower position. Columbia law professor Timothy Wu summarizes the administration’s position this way: “It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230.”

In short: hyperlinks, grouping certain content together, sorting through billions of pieces of data for search engines — that sort of thing is OK. Actually recommending content that shows or urges illegal conduct is another matter.
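To see why that line is hard to locate in practice, here is a minimal hypothetical sketch in Python — not any real platform's code — in which "organizing" and "recommending" are both plain sort operations that differ only in their sort key:

from datetime import datetime

posts = [
    {"title": "A", "created": datetime(2023, 2, 1), "tags": {"news"}},
    {"title": "B", "created": datetime(2023, 2, 2), "tags": {"cooking"}},
]

def organize(posts):
    # "Passive" presentation: newest first, no model of the user.
    return sorted(posts, key=lambda p: p["created"], reverse=True)

def recommend(posts, user_interests):
    # Targeted presentation: rank by overlap with one user's interests.
    return sorted(posts, key=lambda p: len(p["tags"] & user_interests), reverse=True)

Under the administration's theory, the first function would keep Section 230's protection while the second might lose it — even though the two differ by a single line.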

If the Supreme Court adopted that position, it would be a serious blow to the economic model of social media companies — and the tech industry says there is no easy way to draw that distinction.

At the very least, it would likely mean these companies have to defend their conduct in court. Filing suit is one thing; getting over the hurdle of showing enough proof to justify a trial is another — and the Supreme Court has made that hurdle much harder to clear. The second case the court hears this week, on Wednesday, deals with just that problem.

Still, the arguments today were a relief after the past year’s nightmare legal cycle. Even Justice Clarence Thomas, who’s written some spine-tinglingly ominous opinions about “Big Tech” and Section 230, spent most of his time wondering why YouTube should be punished for providing an algorithmic recommendation system that covered terrorist videos alongside ones about cute cats and “pilaf from Uzbekistan.” It may be the best we can expect for now.

According to Eric Schnapper, even if a ruling for Gonzalez created new liability for websites, most suits would be thrown out anyway.

Justice Elena Kagan warned that narrowing Section 230 could unleash a wave of lawsuits, even if many of them would ultimately be thrown out.

"You are creating a world of lawsuits," Kagan said. "Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit."

Stewart countered that even where a recommendation is involved, few such suits would have much likelihood of prevailing.

The justices pressed for clarity on how the court should treat recommendation systems, given that the same algorithm that promotes an IS video to someone interested in terrorism is also likely to recommend a pilaf recipe to someone interested in cooking.
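A toy version of what "the same algorithm" means here — purely illustrative, not YouTube's system — is a matcher that ranks items solely by tag overlap with a user's interests, with no awareness of what any tag signifies:

def recommend(user_interests: set[str], catalog: dict[str, set[str]]) -> str:
    # Content-neutral: pick the item whose tags best overlap the user's
    # interests; the function has no notion of what any tag means.
    return max(catalog, key=lambda item: len(catalog[item] & user_interests))

catalog = {
    "Pilaf from Uzbekistan": {"cooking", "rice", "travel"},
    "ISIS recruitment video": {"extremism", "isis"},
}

recommend({"cooking", "rice"}, catalog)  # -> "Pilaf from Uzbekistan"
recommend({"isis"}, catalog)             # -> "ISIS recruitment video"

The same function surfaces the recipe or the recruitment video depending only on the interests fed into it — precisely the neutrality the justices were probing.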

Schnapper tried several different explanations, but many of the justices said they still weren't clear on what he was arguing.

Roberts said: “It might be harder for you to say that there is a selection involved for which you can be held responsible if they have a focused algorithm with respect to terrorist activities.”

Justice Barrett put the question to Justice Department lawyer Stewart: wouldn't the logic of his position mean that Section 230 offers no protection in that situation?

Stewart said there was a distinction between an individual user making a conscious decision to amplify content and an algorithm making such choices on a systemic basis, though he didn't say how changes to Section 230 would affect individual users.

Tech law experts say an onslaught of defamation litigation is the real threat if Section 230's protections are weakened, and the justices seemed to agree, posing several questions and hypotheticals that turned on defamation claims.

Justice Samuel Alito posed a scenario for Schnapper in which a restaurant's competitor created a video making false claims that the restaurant violated the health code, and YouTube refused to take the video down despite knowing it was defamatory.

In another hypothetical, Alito asked about a platform that recommended the competitor's false video as the greatest video ever while saying nothing about its content.

Though Google’s attorney, Lisa Blatt, did not get the tough grilling that Schnapper and Stewart received, some justices hinted at some discomfort with how broadly Section 230 has been interpreted by the courts.

Justice Ketanji Brown Jackson pushed back on Blatt's claim that Congress's move to broadly protect tech platforms from legal liability, when it enacted the relevant provision in 1996, is what got the internet off the ground.

Section 230 was written to keep the internet free from lawsuits over how websites managed their platforms, according to a brief from Ron Wyden and Chris Cox, who wrote the provision.

Hypotheticals From the Bench: Sotomayor, Barrett, and the Future of the Internet

Justice Sotomayor suggested platforms could be held liable if they created a discriminatory search engine, offering the example of a dating site whose algorithm refused to match individuals of different races. Justice Barrett pressed on the same hypothetical.

Several justices suggested the tech companies were playing Chicken Little in warning the court that a ruling against them would break the internet.

Would Google collapse and the internet be destroyed, Alito asked, if YouTube were potentially liable for posting and refusing to take down videos it knows are false and defamatory?

"So if you lose tomorrow, do we even have to reach the Section 230 question here? Do you think that you would lose on that ground?" Barrett asked Schnapper.

Nine justices set out Tuesday to determine what the future of the internet would look like if the Supreme Court were to narrow the scope of a law that some believe created the age of modern social media.

That the justices were wading into this area for the first time suggests the court is unlikely to issue a sweeping decision in one of the most closely watched disputes of the term.

The family sued under a federal law called the Antiterrorism Act of 1990, which authorizes such lawsuits for injuries "by reason of an act of international terrorism."

A Maze of Issues: Trending Algorithms, Thumbnails, Emojis, and Yelp Reviews

Oral arguments drifted into a maze of issues, raising concerns about trending algorithms, thumbnail pop-ups, artificial intelligence, emojis, endorsements, and even Yelp restaurant reviews. The justices seemed frustrated by the scope of the arguments and unsure which road to take in the case.

"I'm afraid I'm completely confused by whatever argument you're making at the present time," Justice Samuel Alito said early on. At one point, Justice Ketanji Brown Jackson said she, too, was confused. "I'm still confused," Justice Clarence Thomas said halfway through arguments.

Justice Elena Kagan even suggested that Congress should step in. "I mean, we're a court. We have no idea about these things," she said, chuckling that the justices "are not like the nine greatest experts on the internet."

Chief Justice John Roberts reached for a bookstore analogy, suggesting that Google recommending certain information is no different from a bookseller directing a reader to a table of books with related content.

Supreme Court Justice Elena Kagan made the wryly self-deprecating comment early in oral arguments for Gonzalez v. Google, a potential landmark case covering Section 230 of the Communications Decency Act of 1996. The remark was a nod to many people's worst fears about the case: the court that will decide Gonzalez's fate is known for being willing to overturn legal precedent and reexamine long-standing speech law.

The hearing focused on thumbnails, a term used by Gonzalez family attorney Eric Schnapper to describe the combination of a user-generated image and a YouTube-generated web address. Several justices seemed dubious that creating a URL and a sorting system for recommendations should strip sites of Section 230 protections, particularly because thumbnails didn't play a major part in the original brief. It also wasn't clear how a site could escape the thumbnail problem by simply changing a video's title or the picture it displays.

The Countervailing Winds: Nohemi Gonzalez, ISIS Videos, and the Antiterrorism Act

On the other side are multi-billion-dollar companies like Google and Facebook, along with smaller companies that together make up a huge share of the US economy.

Justice Elena Kagan seemed to sum up the countervailing winds when discussing how the EU deals with these issues, including levying a huge fine against Google. But, she noted, that fine was not levied by a court.

After joking with her colleagues on the bench, she commented, “you know, these are not like the nine greatest experts on the internet.”

The justices searched for a line between what internet platforms should and shouldn't be allowed to do when organizing content on their platforms.

Lawyer Eric Schnapper, representing the family of Nohemi Gonzalez, the young woman killed in Paris, said the algorithms are the same, but when it comes to ISIS videos, the result is that companies are encouraging illegal conduct covered by the Federal Antiterrorism Act—a law that bars material aid to terrorist groups.

YouTube is not merely hosting content passively, he said. "I type in ISIS video and they're sending me to a catalogue of thumbnails which they created."

Source: https://www.npr.org/2023/02/21/1158628409/supreme-court-section-230-arguments

Defending Topic Headings: Google Makes Its Case

Topic headings, Blatt argued, are basic features of organizing information — features "we would say are core, inherent," she said. "They're no different than expressing what is implicit in any publishing."

There are 3.5 billion searches every day, she noted, all of them displays of other people's information — exactly what Congress wanted platforms to be able to do. A ruling preventing it would make for a very different internet.

While the justices indicated that it might be better for Congress to take on the task of modifying the 1996 law, several fired pointed shots across the bow, hinting at limited patience with internet platforms. Today's case could well end in a stalemate, but more cases are expected next term.
