The studies found that tweaking Facebook feeds is no easy fix for political polarization

Facebook Users' Exposure to Politically Like-Minded Content in the 2020 US Presidential Election: What Have We Learned?

We begin by assessing the extent to which US Facebook users are exposed to content from politically like-minded users, Pages and groups in their Feed during the period 26 June to 23 September 2020 (see Supplementary Information, section 4.2, for measurement details). The estimates we present are for the population of US adults who had at least one login to Facebook in the 30 days preceding 17 August 2020.

The results were released in four papers in Science and Nature, but the company did not pay the researchers. Meta’s president of global affairs said in a statement that there is little evidence that key features of Meta’s platforms alone cause harmful affective polarization or have meaningful effects on political views.

It’s a sweeping conclusion. But the studies are actually much narrower. Even though researchers were given more insight into Meta’s platforms than ever before—for many years, Meta considered such data too sensitive to make public—the studies released today leave open as many questions as they answer.

“We don’t know what would have happened had we been able to do these studies over a period of a year or two years,” Guess said at a press briefing earlier this week. The findings also cannot account for the fact that many users have been on social media for a decade or more; had social media not existed for the past 10 to 15 years, the world might not look the way it does now.

Many factors may have undermined the interventions intended to reduce polarization, so the results do not let social media off the hook. The experiments were conducted near the end of the 2020 US presidential election campaign, when partisan opinions may already have been locked in.

“I think there are unanswered questions about whether these effects would hold outside of the election environment, whether they would hold in an election where Donald Trump wasn’t one of the candidates,” says Michael Wagner, a professor of journalism and communication at the University of Wisconsin-Madison, who helped oversee Meta’s 2020 election project.

Meta’s Clegg also said that the research challenges “the now commonplace assertion that the ability to reshare content on social media drives polarization.”

Researchers weren’t quite as definitive. One of the studies published in Science found that resharing elevates “content from untrustworthy sources.” The same study showed that most of the misinformation caught by the platform’s third-party fact checkers is concentrated among, and consumed exclusively by, conservative users, with no equivalent on the other side of the political aisle, according to an analysis of about 208 million users.

Our analysis of platform exposure and behaviour considers the population of US adult Facebook users (aged 18 years and over). We focus primarily on those who use the platform at least once per month, whom we call monthly active users. We measure aggregated usage levels for the subset of US adults who accessed Facebook at least once in the 30 days preceding 17 August 2020. During the third and fourth quarters of 2020, which encompass this interval as well as the study period for the experiment reported below, 231 million users in the USA accessed Facebook every month.
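For illustration, here is a minimal sketch of how the "monthly active user" definition above could be operationalized from login dates; the function name, data layout and example values are hypothetical, not the study's actual pipeline.

```python
from datetime import date, timedelta

# Hypothetical illustration: a user counts as "monthly active" if they logged in
# at least once in the 30 days preceding the reference date (here, 17 August 2020).
REFERENCE_DATE = date(2020, 8, 17)
WINDOW_START = REFERENCE_DATE - timedelta(days=30)

def monthly_active_users(logins_by_user):
    """logins_by_user: dict mapping user_id -> list of datetime.date login dates."""
    return {
        user_id
        for user_id, login_dates in logins_by_user.items()
        if any(WINDOW_START <= d < REFERENCE_DATE for d in login_dates)
    }

# Example: two users, only one active in the 30-day window.
example = {
    "u1": [date(2020, 8, 1), date(2020, 8, 10)],
    "u2": [date(2020, 6, 2)],
}
print(monthly_active_users(example))  # {'u1'}
```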

Chronological Feeds and Engagement: Implications for Social Media, Live Events and the Future of News Feeds

The chronological option was reintroduced last year, as it was on Facebook, after being dropped in 2016 amid user objections. Some users prefer a chronological feed to keep up with live events, and some lawmakers have promoted it as an antidote to opaque ranking algorithms that can seal people into information bubbles or drive them toward harmful content.

Yet the new data add to at least two internal Meta studies from the past decade which, leaks show, found that displaying posts chronologically caused people to log off. The new results help explain why Meta has made alternative feeds hard to find, despite regulatory and political pressure.

Dean Eckles, a sociologist and statistician at MIT who has worked for Meta and testified to US senators about feed design, said the ranked feed was designed primarily to maximize engagement and consumption, including how much time the viewer spends on the platform. Meta and Twitter train their ranking systems to promote content similar to what users have clicked on, liked, or commented on in the past. Because that approach has worked so well, any intervention is going to reduce engagement.
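As a rough illustration of the engagement-trained ranking Eckles describes, the sketch below scores candidate posts by their overlap with topics the user has previously clicked, liked or commented on; the topic-count representation and the scoring rule are assumptions for illustration, not Meta's or Twitter's actual system.

```python
# Minimal, hypothetical sketch of engagement-based feed ranking: posts are scored
# by how similar they are to content the user previously clicked, liked or commented on.

def engagement_score(post_topics, past_interactions):
    """past_interactions: dict mapping topic -> count of past clicks/likes/comments."""
    return sum(past_interactions.get(topic, 0) for topic in post_topics)

def rank_feed(candidate_posts, past_interactions):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(
        candidate_posts,
        key=lambda post: engagement_score(post["topics"], past_interactions),
        reverse=True,
    )

user_history = {"politics": 12, "sports": 3}
posts = [
    {"id": 1, "topics": ["politics"]},
    {"id": 2, "topics": ["sports", "politics"]},
    {"id": 3, "topics": ["cooking"]},
]
print([p["id"] for p in rank_feed(posts, user_history)])  # [2, 1, 3]
```

Under this kind of objective, demoting the content a user most reliably engages with will, almost by construction, lower measured engagement, which is the trade-off the interventions below run into.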

A spokesperson for the social media company did not respond to a request for comment; the company has said it constantly makes changes and improvements to its services.

The authors say that all of the data collected will be made available to other researchers. But some academics question this model, because it depends entirely on Meta’s willingness to cooperate.

Tucker hopes that the project will inspire further research, but he cautions that the decision still rests with Meta and other platforms. “We very much hope that society, through its policymakers, will take action to make sure that this kind of research continues in the future,” Tucker says.

The research represents an important leap forward, but scientists still had only a partial view into the Meta universe, says Wagner, who served as an independent rapporteur for the project. He notes that much of the individual-level data was off-limits, and even the data that the scientists were able to access came pre-packaged by Meta. He believes there is a need for a system that would allow access to raw data and give researchers incentives to work together.

Two other interventions, published in Science and Nature, also showed little effect: one limited “reshared” content — which comes from outside a user’s network but is reposted by a connection or a group to which the user belongs — and the other limited content from “like-minded” users and groups.

For Joshua Tucker, co-director of the Center for Social Media and Politics at New York University and a lead investigator on the project, the lesson is clear: some of the proposed solutions for reducing online echo chambers and improving social discourse would probably not have had much of an impact during the 2020 election. But Tucker acknowledges that there are limits to the inferences that can be drawn from the research.


Our field experiment shows that changes to social media algorithms can meaningfully alter the content that users see. The intervention substantially reduced exposure to content from like-minded sources, which also had the effect of reducing exposure to content classified as uncivil and to content from sources that repeatedly post misinformation. However, the tested changes to social media algorithms cannot fully counteract users’ proclivity to seek out and engage with congenial information: participants in the treatment group were exposed to less content from like-minded sources but were actually more likely to engage with such content when they encountered it.
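To make the intervention concrete, here is a hedged sketch of one way a feed could down-weight content from like-minded sources before re-ranking; the demotion factor, field names and scores are illustrative assumptions, not the experiment's actual implementation.

```python
# Hypothetical sketch of a feed intervention that demotes content from like-minded
# sources. The demotion factor and data structures are assumed for illustration only.

DEMOTION_FACTOR = 0.3  # assumed multiplier applied to scores of like-minded posts

def apply_intervention(ranked_posts, like_minded_sources):
    """Re-rank posts after down-weighting those from like-minded sources."""
    adjusted = []
    for post in ranked_posts:
        score = post["score"]
        if post["source"] in like_minded_sources:
            score *= DEMOTION_FACTOR
        adjusted.append({**post, "adjusted_score": score})
    return sorted(adjusted, key=lambda p: p["adjusted_score"], reverse=True)

feed = [
    {"id": 1, "source": "page_a", "score": 0.9},  # like-minded source
    {"id": 2, "source": "page_b", "score": 0.6},  # cross-cutting source
]
print(apply_intervention(feed, like_minded_sources={"page_a"}))
```

A demotion of this kind changes what is shown, but it does nothing about what users choose to click on once a like-minded post does appear, which is the behavioural pattern the paragraph above describes.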

Despite reducing exposure to content from like-minded sources by about a third over a period of weeks, we find no measurable effects on eight preregistered attitudinal measures. We can confidently rule out effects of ±0.12 s.d. or more on each of these outcomes. The estimated effects do not differ much by political ideology, political sophistication, digital literacy or pre-treatment exposure to political content.
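The ±0.12 s.d. bound can be read as an equivalence-style check: an effect is "ruled out" when the 95% confidence interval for the standardized estimate lies entirely inside the bound. A minimal sketch, with made-up estimates and standard errors:

```python
# Illustrative check of whether a 95% confidence interval for a standardized
# treatment effect lies entirely within ±0.12 s.d. (the bound quoted above).
# The estimates and standard errors below are invented example values.

def rules_out_effect(estimate, std_error, bound=0.12, z=1.96):
    """Return True if the 95% CI for the effect lies inside (-bound, +bound)."""
    lower = estimate - z * std_error
    upper = estimate + z * std_error
    return -bound < lower and upper < bound

print(rules_out_effect(estimate=0.01, std_error=0.03))  # True: CI within the bound
print(rules_out_effect(estimate=0.05, std_error=0.05))  # False: CI exceeds the bound
```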

Effects of the Facebook Feed Intervention on Exposure to Content from Like-Minded and Cross-Cutting Sources

Facebook users do not get much exposure to cross-cutting sources: only 32.2% have a quarter or more of their Facebook Feed exposures coming from cross-cutting sources (31.7% and 26.9%, respectively, for civic and news content).

The observed effects of the treatment on exposure to content from like-minded sources among participants are plotted in Fig. 2. The treatment reduced exposure to content from like-minded sources: during the treatment period from September to December of 2020, average exposure to content from like-minded sources declined to 36.1% in the treatment group while remaining stable at 53.7% in the control group. Exposure levels were relatively stable during the treatment period, apart from a brief increase in exposure to content from like-minded sources on 2 and 3 November due to a technical problem in the production server.
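As an illustration of the exposure measure plotted in Fig. 2, the sketch below computes the daily share of Feed exposures coming from like-minded sources, separately for treatment and control; the record format and example values are assumptions, not the study's actual data schema.

```python
from collections import defaultdict

# Hypothetical sketch: daily share of Feed exposures from like-minded sources,
# computed separately for treatment and control groups. Field names are assumed.

def daily_like_minded_share(exposures):
    """exposures: iterable of dicts with keys 'group', 'date', 'like_minded' (bool)."""
    totals = defaultdict(int)
    like_minded = defaultdict(int)
    for e in exposures:
        key = (e["group"], e["date"])
        totals[key] += 1
        like_minded[key] += e["like_minded"]
    return {key: like_minded[key] / totals[key] for key in totals}

sample = [
    {"group": "treatment", "date": "2020-11-02", "like_minded": True},
    {"group": "treatment", "date": "2020-11-02", "like_minded": False},
    {"group": "control", "date": "2020-11-02", "like_minded": True},
]
print(daily_like_minded_share(sample))
```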

Figure 3b shows that the intervention had no significant effect on the time spent on Facebook (−0.02 s.d., 95% confidence interval: −0.050, 0.004) but did decrease total engagement with content from like-minded sources. This decrease was observed for both active and passive engagement with content from like-minded sources. People who were in the treatment condition engaged more with cross-cutting sources. Finally, we observe decreased passive engagement but no decrease in active engagement with content from misinformation repeat offenders (for passive engagement, −0.07 s.d., 95% confidence interval: −0.10, −0.04; for active engagement, −0.02 s.d., 95% confidence interval: −0.05, 0.01).
