
The 5 Most Important Revelations From the ‘Facebook Papers’

A more complete portrait of how aware Facebook was of its harmful effects came to light Monday, both through Frances Haugen’s testimony before the British Parliament and via a series of reports based on internal documents she leaked, dubbed “The Facebook Papers.” During the 2.5-hour question-and-answer session, Haugen repeatedly said that Facebook puts “growth over safety,” particularly in developing parts of the world where the company lacks the language and cultural expertise to moderate content without fostering division among users.

These are some of the most shocking revelations from her Oct. 25 testimony and the internal documents.

Facebook doesn’t moderate dangerous content in developing nations

Multiple news outlets confirmed that hate speech and disinformation problems are much worse in developing countries, where content moderation is weaker. In India, Facebook reportedly did not have enough resources or expertise in the country’s 22 officially recognized languages, leaving the company unable to grapple with a rise in anti-Muslim posts and fake accounts tied to the country’s ruling party and opposition figures. According to one document, 87% of Facebook’s global budget for time spent classifying misinformation goes toward the United States, while 13% is set aside for the rest of the world, even though North American users make up just 10% of its daily users.

To test the user experience of Facebook in Kerala, India, in 2019, a Facebook researcher created a dummy account. It quickly surfaced a shocking amount of hate speech, misinformation and violence. “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote. With 340 million users, India is the company’s largest market.

Facebook employees found that the company moves into new countries with little understanding of the potential impact on local communities, including their culture and politics, and then fails to provide sufficient resources to offset the negative effects.

During her testimony Monday, Haugen said that Facebook has a “strategy” of slowing down harmful content only when “the crisis has begun,” deploying its “glass break measures” instead of making the platform “safer as it happens.” She referred to the ongoing ethnic violence in Ethiopia and Myanmar as the “opening chapters of a novel that is going to be horrific to read.”

Facebook’s AI is unable to detect harmful content in non-English languages

Facebook’s algorithm mistakenly banned a hashtag referencing the Al-Aqsa Mosque in Jerusalem’s Old City because it confused it with the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party, documents show. The company later apologized, but the accidental removal stands as a stark example of how the social media giant’s algorithms can stifle political speech due to language barriers and a lack of resources outside of North America.

“Facebook says things like, ‘we support 50 languages,’ when in reality, most of those languages get a tiny fraction of the safety systems that English gets,” Haugen told British lawmakers. “UK English is sufficiently different that I would be unsurprised if the safety systems that they developed primarily for American English were actually [under-enforced] in the UK.”

The company has long relied on artificial-intelligence systems, in combination with human review, to remove dangerous content from its platforms. But automated content moderation has proven much more difficult for languages spoken outside of North America and Europe.

One document showed that, as of 2020, the company did not have screening algorithms to detect misinformation in Burmese or hate speech in the Ethiopian languages Oromo and Amharic.

Facebook labeled election misinformation as “harmful, non-violating” content

Multiple news outlets have confirmed that Facebook employees raised concerns about misinformation and inflammatory content posted on its platform throughout the 2020 presidential election, but company leadership did not address those issues. Posts alleging election fraud were labeled by the company as “harmful” but “non-violating” content, a problematic category that also includes conspiracy theories and vaccine hesitancy.

According to Facebook policies, this content does not break any rules. It’s a gray area that allows users to spread claims about a stolen election without crossing any lines that would warrant content moderation.

Although Facebook banned the Stop the Steal group on Nov. 5 for falsely casting doubt on the legitimacy of the election and calling for violence, the group had already amassed more than 360,000 members, and new Facebook groups filled with misinformation began popping up daily. Trump sympathizers and right-wing conspiracy theorists appear to have outmaneuvered the social network.

Many employees voiced their concerns on internal message boards as they watched the violent mob attack the U.S. Capitol on Jan. 6, with some writing that they wanted to quit working for the company because leaders had failed to heed their warnings.

While the documents provide insight into Facebook’s awareness of election misinformation, they do not reveal inside information behind the company’s decision-making process to label election misinformation as non-violating content.


Facebook knew that maids could be sold via its platform

Internal documents show that Facebook admitted it was “under-enforcing on confirmed abusive activity” when it failed to take action after Filipina maids complained of being abused and sold on the platform, according to the Associated Press.

“In our investigation, domestic workers frequently complained to their recruitment agencies of being locked in their homes, starved, forced to extend their contracts indefinitely, unpaid, and repeatedly sold to other employers without their consent,” one Facebook document read, the AP reports. “In response, agencies commonly told them to be more agreeable.”

Apple threatened to pull Instagram and Facebook from its App Store, but backed off after the social media company removed 1,000 accounts related to the sale of maids. Human rights activists note, however, that maids are still being sold on the platform, with images showing their price and age.

Facebook internally debated removing the “Like” button

In 2019, Facebook studied how users would react to Instagram content without a visible Like count, indicating that the company was well aware of the feature’s potential negative effects on users’ well-being. According to the documents, the Like button sometimes caused the platform’s youngest users “stress and anxiety” when their posts didn’t get many likes from friends; but when the count was hidden, users interacted less with posts and ads, and hiding it failed to alleviate social anxiety as the company thought it might.

When asked why Facebook hasn’t made Instagram safer for children, Haugen said during her testimony that the company knows “young users are the future of the platform and the earlier they get them the more likely they’ll get them hooked.”
