Facebook Accused of ‘Whitewashing’ India Human Rights Report

Facebook’s parent company Meta has been accused of “whitewashing” a long-awaited report on its human rights impact in India, which the company released in a highly summarized form on Thursday, drawing fire from civil society groups.

TIME first reported on Facebook’s human rights impact assessment (HRIA), a study examining the company’s role in the spread of hate speech online, in August 2020. Rights groups had been awaiting the report for more than two years, having long warned of the dangers the platform posed to minorities and of an erosion of civil liberties.

Ankhi Das, Facebook’s most senior executive in India, resigned in October 2020 after the Wall Street Journal reported she had intervened to prevent the platform from removing accounts of members of the country’s Hindu nationalist ruling party, some of whom had called for violence against India’s Muslim minority. India is Facebook’s largest market by users.

Continue reading: Facebook’s Ties to India’s Ruling Party Complicate Its Fight Against Hate Speech

The India HRIA was carried out by the independent law firm Foley Hoag, which interviewed more than 40 civil society activists and journalists to compile the report. But Facebook drew criticism from rights groups on Thursday after it released only its own four-page summary of the firm’s findings, a document almost bereft of meaningful detail.

Ritumbra Manuvie, an academic who was one of the civil society members interviewed by Foley Hoag for the report, said Facebook’s summary was a “cover up of its acute fault-lines in India,” and showed that its “commitment to human rights is rather limited.”

The Real Facebook Oversight Board, a pressure group made up of critics of the platform, said in a statement that the report was “a master-class in spin and obfuscation” and a “whitewashing” of the role Meta’s platforms have played in religious violence in India.

Facebook’s summary of the report, the full version of which was not made public, says that Foley Hoag made “recommendations” to the company on how to improve its human rights impact in India. But the summary does not disclose what those recommendations were.

The four-page summary says: “The HRIA developed recommendations covering implementation and oversight; content moderation; and product interventions; and other areas.” It then spends the following seven paragraphs detailing human rights measures that Facebook is already taking in India, including increasing its content moderation workforce and bolstering transparency.

Facebook says the full report does not pass judgment on the controversial allegation stemming from the 2020 Das controversy: that its moderation of hateful content in India is biased in favor of the ruling party in order to preserve its market access. “The assessors [Foley Hoag] noted that civil society stakeholders raised several allegations of bias in content moderation,” Facebook’s summary of the report says. “The assessors did not assess or reach conclusions about whether such bias existed.”

Foley Hoag and Facebook did not respond to requests for comment before publication.

“Facebook may as well have published a few blank pages on their human rights impact assessment (HRIA) on India,” Alaphia Zoyab, the director of campaigns and media for the progressive tech lobby group Luminate, said in a tweet. “I’ve never read so much bull—t in four short pages.”

“This is an insult to Indian civil society,” Zoyab added.

Manuvie, who is a legal scholar at the University of Groningen in the Netherlands, said that the foundation she runs, The London Story, had reported to Facebook more than 600 India-based pages that it says are hate accounts, but that the platform had removed only 16 of them.

“As stakeholders, we told [Foley Hoag] very clearly that Facebook has provided momentum for fringe groups to organize, hunt and doxx inter-faith marriage couples,” Manuvie told TIME. Facebook’s summary of the report contains no mention of this specific form of platform abuse.

In 2021, TIME reported that Facebook had allowed an Islamophobic conspiracy theory to thrive on its platform despite its own employees warning about the dangers. TIME reached out to Facebook last November after a video calling on Hindus to rise up and murder Muslims was viewed 1.4 million times.

Continue reading: Facebook Let an Islamophobic Conspiracy Theory Flourish in India Despite Employees’ Warnings

Earlier that year, TIME reported that Facebook banned a Hindu extremist group under its terrorism policies, but left most of its pages online for months after that ban, allowing them to share content depicting Muslims as green monsters with long fingernails with their more than 2.7 million total followers.

Write to Billy Perrigo at billy.perrigo@time.com.
