The internet’s future is looking bleaker by the day

Hateful Conduct at Meta: A Response to Mark Zuckerberg’s Announcement

It’s usually a comment on a TikTok video of a teenager singing out of key that brings it to my attention. Even more upsettingly, you’ll find that comment, word for word, on videos of people with disabilities simply existing. The commenters say this because they believe it’s easier to be nasty to people on Instagram and other Meta platforms. The subtext is that if these users posted to Reels instead of TikTok, they’d receive the harassment the commenters believe they deserve.

Yet it is TikTok that is staring down the barrel of a nationwide ban, while Meta’s Mark Zuckerberg has decided to make his platforms more dangerous to appease the incoming president.

As you’ve likely already heard, Meta announced on Tuesday that it would be ending its third-party fact-checking program, replacing it with X-style Community Notes, and “restoring free expression” to its platforms. To accomplish the latter, the company will move its trust and safety operations from California to Texas and focus moderation on high-severity violations rather than lower-severity content.

Kate Knibbs reported this week on the new Hateful Conduct policy at Meta, which allows users to make blatantly homophobic, transphobic, sexist, and racist posts without consequences. For Platformer, Casey Newton noted that, lost amongst the other changes, Meta removed a sentence from its guidelines “explaining that hateful speech can ‘promote offline violence.’” Quite a thing to do immediately after the anniversary of January 6.

Why are protections for Meta’s most vulnerable users being destroyed? In a video statement on Tuesday, Zuckerberg explained that the policies were “out of touch with mainstream discourse” and that “recent elections also feel like a cultural tipping point towards once again prioritizing speech.” (Zuckerberg, no one’s idea of a political theorist, didn’t really explain why fact-checking, itself the sort of speech that free-speech activists have long held is the appropriate response to bad speech, isn’t worth prioritizing, nor did he explain what he has against the many forms of speech that Meta will still suppress. Apparently, whatever Meta happens not to ban at a given moment counts as free expression.)

Source: The Internet’s Future Is Looking [Bleaker by the Day](https://lostobject.org/2025/01/03/the-death-of-net-neutrality-is-a-bad-thing/)

Mark Zuckerberg’s Facebook Problem: The “Fake News” That Helped Donald Trump

The Supreme Court will also take up TikTok’s lawsuit against the US government and its attempts to ban the app nationwide. We are less than two weeks from the deadline for a sale or extension, so the court doesn’t have a lot of time to save the app.

Zuckerberg blamed the Biden White House for Facebook’s moderation of the “lab leak” conspiracy theory. “Did the WH put the pressure on us to censor the lab leak theory?” he asked in a WhatsApp chat. Nick Clegg, his former president of global affairs, responded, “I don’t think they put specific pressure on that theory.”

In his letter to Jordan’s committee, Zuckerberg writes, “Ultimately it was our decision whether or not to take content down” (emphasis mine), adding, “I have made it clear to my teams that we should not compromise our content standards, and we are ready to push back if something like this happens again.”

You shouldn’t have to listen to Mark Zuckerberg whine, so let me tell you: his interview on The Joe Rogan Experience is full of lies.

Rogan lobbed a series of softballs at the Facebook founder, setting the tone by referring to moderation as censorship and advancing the idea that the government tried to curb news about covid, covid vaccines, and the election. The man who was reprimanded by the city of San Francisco for putting his name on a hospital while his platforms spread health misinformation generously allows that “on balance, the vaccines” were a good thing. Whew!

The problem wasn’t that the fact-checking was bad, it was that conservatives were more likely to share misinformation. That means conservatives are also more likely to be moderated. In this sense, perhaps it wasn’t Facebook’s fact-checking systems that had a liberal bias, but reality.

Fake news proliferated on Facebook before the 2016 election. Adam Mosseri, then VP of product management at Facebook, said in a statement after the election that there was more the company needed to do to combat fake news. Critics charged that Facebook had spread fake news that helped Donald Trump, a claim founder Mark Zuckerberg brushed off. “I do think there is a certain profound lack of empathy in asserting that the only reason someone could have voted the way they did is they saw some fake news,” Zuckerberg said.

Small Bullshit, Big Bullshit: What Mark Zuckerberg Tells Rogan, and What He Gets Away With

Every lawyer I know can tell you that the “you can’t yell fire in a crowded theater” line is wrong. It is not the law, and it never has been. (And if the theater actually is on fire, you can certainly yell “fire” in a crowded theater.) Rogan says nothing in response to this, and Zuckerberg knows he’s got a willing mark. If you can get away with the small bullshit, you can get away with the big bullshit, right?

Zuckerberg, CEO of Facebook’s parent company Meta, sets the tone at the very beginning: “I think at some level you only start one of these companies if you believe in giving people a voice, right?”

Unfortunately, I wasn’t born yesterday, and I remember Zuckerberg’s first attempt at getting rich: FaceMash, a clone of HotOrNot to which he uploaded photos of his fellow female students to be rated, without their consent. I suppose “giving people a voice” is one way of describing that. “Creep shit” is another.

But Mark wants us to believe this isn’t about politics at all. Getting Rogan’s listeners riled up about Zuckerberg’s enemies and handing Republicans a new tech-company target is just a coincidence, as is the timing of the changes allowing more hate speech on his platforms, changes that just happen to pacify Republicans. All of this has nothing to do with the incoming administration, Zuckerberg tells Rogan. “I think a lot of people look at this as like a purely political thing, because they kind of look at the timing and they’re like, hey, well, you’re doing this right after the election,” he says. “We try to have policies that reflect mainstream discourse.”

Here he is on the government’s supposed pressure campaign: “They were trying really hard, right? To, like, find a theory, but I don’t know. It just, it kind of, like, throughout the, the, the party and the government, there was just sort of, I don’t know if it’s, I don’t know how this stuff works. I mean, I’ve never been in government. I don’t know if it’s like a directive or it’s just like a quiet consensus that like, we don’t like these guys. They are not doing what we want. We’re going to punish them.” It’s not easy being at the other end of that, apparently.

This is a powerful demonstration that jujitsu, MMA training, and hunting pigs in Hawaii aren’t going to make you tough if you’re a bitch. Republicans have been attacking Facebook for years, and crying witch hunt over the Bureau of Consumer Protection’s enforcement is part of the act. The aim of this performance is to get the rest of the Republican party to lay off. After all, the Cambridge Analytica scandal cost Facebook just $5 billion (chump change, really). If Zuckerberg plays ball, his next privacy whoopsie could be even cheaper.

In fact, Zuckerberg even offers Republicans another target: Apple. According to Zuckerberg, the way Apple makes money is “by basically, like, squeezing people.” He then rattles off a list of complaints.

At least some of these Apple complaints actually matter; there is a legitimate DOJ antitrust case against the company. But that isn’t what’s really on his mind. What matters is the last item on his list: Zuckerberg has a longstanding grudge against Apple over the company’s anti-tracking features, changes Facebook protested in full-page newspaper ads. The policy cost social media companies almost $10 billion, according to The Financial Times, and Facebook lost the most money “in absolute terms.” You see, it turns out that if you ask people whether they want to be tracked, the answer is generally no, and that’s bad for Facebook’s business.

And did this work? Did Zuckerberg’s gambit to talk about how social media needed more “masculine energy” win over the bros? Dave Portnoy isn’t fooled by this shit.

Sander van der Linden, a social psychologist at the University of Cambridge, UK, who was an adviser on Facebook’s fact-checking programme, says that fact-checking does work: studies have shown that it can reduce people’s belief in false claims.

The company said the move was meant to counter fact-checkers’ political bias and censorship: “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how.”

Nature spoke to researchers about the value of fact-checking and where accusations of bias come from.

“Ideally, we’d want people to not form misperceptions in the first place,” adds van der Linden. “But if we have to work with the fact that people are already exposed, then reducing misperceptions is about as good as it’s going to get.”

Fact-checking is less effective when an issue is polarized, says Jay Van Bavel, a psychologist at New York University in New York City. “If you’re fact-checking something around Brexit in the UK or the election in the United States, that’s where fact-checks don’t work very well,” he says. People don’t want their party to look bad, so they resist believing corrections even when the corrections are true.

On Facebook, articles and posts deemed false by fact-checkers are currently flagged with a warning. The platform also shows flagged content to fewer people, so users are less likely to encounter it at all.

Flagging posts as problematic could also have knock-on effects on other users that are not captured by studies of the effectiveness of fact-checks, says Kate Starbird, a computer scientist at the University of Washington in Seattle. “Measuring the direct effect of labels on user beliefs and actions is different from measuring the broader effects of having those fact-checks in the information ecosystem,” she adds.

Conservative misinformation gets flagged more simply because there is more of it. “When one party, at least in the United States, is spreading most of the misinformation, it’s going to look like fact-checks are biased because they’re getting called out way more.”
