Facebook Employees Found a Simple Way To Tackle Misinformation. They ‘Deprioritized’ It After Meeting With Mark Zuckerberg, Documents Show

A video purporting to show House Speaker Nancy Pelosi drunk and slurring her words as she spoke at an event went viral on Facebook in May 2019. The footage had in fact been slowed down, to about 75% of its original speed.

The doctored video was shared more than 48,000 times and received over 3 million views on one Facebook page alone. It was quickly reuploaded to other Facebook pages and groups, and from there spread across social media. In thousands of comments on the pro-Trump and rightwing pages sharing the video, users called Pelosi “demented,” “messed up” and “an embarrassment.”

Two days after the video was first uploaded, and following angry calls from Pelosi’s team, Facebook CEO Mark Zuckerberg made the final call: the video did not break his site’s rules against disinformation or deepfakes, and therefore it would not be taken down. At the time, Facebook said it would instead demote the video in people’s feeds.

Facebook employees soon realized that the page that shared the Pelosi video exemplified a form of platform manipulation that allowed misinformation to spread unchecked. The page, and others like it, had built up a large audience not by posting original content, but by reposting content from elsewhere on the web that had already gone viral. Once they had established an audience, such pages often pivoted to pushing financial scams and misinformation to their followers. The tactic is similar to the way the Internet Research Agency, the Russian troll farm that interfered in the 2016 U.S. election, spread disinformation to American Facebook users. Facebook employees gave the tactic a name: “manufactured virality.” Some believed it was a major problem, since pages relying on it accounted for 64% of page-related misinformation views but only 19% of total page-related views.

In April 2020, a team at Facebook working on “soft actions,” solutions that stop short of removing problematic content, presented Zuckerberg with a plan to reduce the reach of pages that pursued “manufactured virality” as a tactic. The plan would down-rank such pages, making it less likely that their posts would appear in people’s News Feeds. It would have affected pages like those that shared the doctored Pelosi video, an example the employees cited explicitly in their presentation to Zuckerberg, and it could have significantly reduced misinformation coming from pages across the platform.

But in response to feedback given by Zuckerberg during the meeting, the employees “deprioritized” that line of work in order to focus on projects with a “clearer integrity impact,” internal company documents show.

This story is based in part on whistleblower Frances Haugen’s disclosures to the U.S. Securities and Exchange Commission (SEC), which her legal team also provided to Congress in redacted form. A consortium of news organizations, including TIME, reviewed the redacted versions. Many of the documents were first reported by the Wall Street Journal. They paint a picture of a company driven to increase user engagement even as its platform incited divisive anger and rewarded sensational content, and of a company that often ignored its own researchers’ warnings that it was causing societal harm.

A pitch to Zuckerberg, with few obvious downsides

Manufactured virality has long been used by bad actors to game the platform, according to Jeff Allen, a former Facebook data scientist who worked closely on the problem before leaving the company in 2019 and is now a co-founder of the Integrity Institute. Those exploiting the tactic have ranged from Macedonian teenagers, who discovered in 2016 that targeting ultra-partisan U.S. audiences was lucrative, to covert influence operations run by foreign governments, including the Kremlin. “Aggregating content that previously went viral is a strategy that all sorts of bad actors have used to build large audiences on platforms,” Allen told TIME. “The IRA did it, the financially motivated troll farms in the Balkans did it, and it’s not just a U.S. problem. It’s a tactic used across the world by actors who want to target various communities for their own financial or political gain.”

Read more: Why Some Facebook Users See Worse Content Than Others, According to Leaked Documents

In the April 2020 meeting, Facebook employees working in the platform’s “integrity” division, which focuses on safety, presented Zuckerberg with a raft of suggestions for reducing the virality of harmful content on the platform. Several of the suggestions, presented under the title “Big ideas to reduce prevalence of bad content,” had already been launched; some were still the subject of experiments being run on the platform by Facebook researchers. Others, including tackling “manufactured virality,” were early concepts that employees were seeking Zuckerberg’s approval to explore in more detail.

The employees noted that much “manufactured virality” content was already against Facebook’s rules. They claimed that Facebook was inconsistent in its enforcement of these rules. “We already have a policy against pages that [pursue manufactured virality],” they wrote. “But [we] don’t consistently enforce on this policy today.”

The employees’ presentation said that further research was needed to determine the “integrity impact” of taking action against manufactured virality. But they pointed out that the tactic disproportionately contributed to the platform’s misinformation problem. They had compiled statistics showing that nearly two-thirds of page-related misinformation came from “manufactured virality” pages, compared to less than one fifth of total page-related views.

Acting against “manufactured virality” would bring few business risks, the employees added. Doing so would not reduce the number of times users logged into Facebook per day, nor the number of “likes” that they gave to other pieces of content, the presentation noted. Neither would cracking down on such content impact freedom of speech, the presentation said, since only reshares of unoriginal content—not speech—would be affected.

But Zuckerberg appeared to discourage further exploration. After presenting the suggestion to the CEO, employees posted an account of the meeting on Workplace, Facebook’s internal employee forum. In the post, they said that based on Zuckerberg’s feedback they would now be “deprioritizing” the plans to reduce manufactured virality “in favor of projects that have a clearer integrity impact.” Zuckerberg did approve several of the other suggestions that the team presented in the same meeting, including “personalized demotions,” or demoting content for users based on their feedback.

Andy Stone, a Facebook spokesperson, denied that employees had been discouraged from studying manufactured virality. “Researchers pursued this and, while initial results didn’t demonstrate a significant impact, they were free to continue to explore it,” Stone wrote in a statement to TIME. He added that the company has put substantial resources into reducing bad content, including by down-ranking it. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” he wrote. “We recently published our Content Distribution Guidelines, which describe the types of content whose distribution we reduce in News Feed. And we’ve spent years standing up teams, developing policies and collaborating with industry peers to disrupt coordinated attempts by foreign and domestic inauthentic groups to abuse our platform.”

Even today, however, pages that share viral, unoriginal content in an attempt to drive traffic to questionable websites remain among the most widely viewed on the platform, according to Facebook’s own report from August.

Allen, the former Facebook data scientist, says Facebook and other platforms should focus on tackling manufactured virality, because doing so is a powerful way to make platforms more resilient against abuse. “Platforms need to ensure that building up large audiences in a community should require genuine work and provide genuine value for the community,” he says. “Platforms leave themselves vulnerable and exploitable by bad actors across the globe if they allow large audiences to be built up by the extremely low-effort practice of scraping and reposting content that previously went viral.”

Read more: Why Working at Facebook Is Like Playing Chess With an Alien, According to Leaked Documents

The internal Facebook documents show that some researchers noted that cracking down on “manufactured virality” might reduce Meaningful Social Interactions (MSI), a metric Facebook began using in 2018 to help rank its News Feed. That algorithm update was intended to show users more content from friends and family and less from news outlets and politicians. But an internal analysis from 2018, titled “Does Facebook reward outrage,” found that the more negative comments a Facebook post elicited, as with the altered Pelosi video, the more likely users were to click the link in the post. “The mechanics of our platform are not neutral,” one Facebook employee wrote at the time. Because content with more engagement was placed higher in users’ feeds, this created a feedback loop that incentivized the posts that drew the most outrage. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Oct. 25.

How “manufactured virality” led to trouble in Washington

Zuckerberg’s decision in May 2019 not to remove the doctored video of Pelosi seemed to mark a turning point for many Democratic lawmakers fed up with the company’s larger failure to stem misinformation. At the time, it led Pelosi—one of the most powerful members of Congress, who represents the company’s home state of California—to deliver an unusually scathing rebuke. She blasted Facebook as “willing enablers” of political disinformation and interference, a criticism increasingly echoed by many other lawmakers. Facebook defended its decision, saying it had “dramatically reduced the distribution of that content” as soon as its fact-checking partners flagged the video as misinformation.

Pelosi’s office did not respond to TIME’s request for comment on this story.

The circumstances surrounding the Pelosi video exemplify how Facebook’s pledge to show political disinformation to fewer users only after third-party fact-checkers flag it as misleading or manipulated—a process that can take hours or even days—does little to stop this content from going viral immediately after it is posted.

After Zuckerberg’s feedback led employees to deprioritize the fight against manufactured virality, hyper-partisan pages kept using the tactic to boost engagement in the run-up to the 2020 election. Another doctored video purporting to show Pelosi drunk went viral in August 2020. Pro-Trump and rightwing Facebook pages shared many similar posts, including doctored clips meant to make Joe Biden seem confused or lost while speaking at events, as well as edited videos purporting to prove voter fraud.

The same pages that had amassed millions of followers through these virality tactics then used their reach to propagate the falsehood that the election was stolen.
