Facebook failed to detect blatant election-related misinformation in ads ahead of Brazil’s 2022 election, a new report from Global Witness has found, continuing what the group describes as an “alarming” pattern of failing to catch material that violates its policies.
The advertisements contained false information about the country’s upcoming election, such as the wrong election date and incorrect voting methods, and called into question the integrity of the election, including Brazil’s electronic voting system.
This is the fourth time that the London-based nonprofit has tested Meta’s ability to catch blatant violations of the rules of its most popular social media platform—and the fourth such test Facebook has flubbed. In the three prior instances, Global Witness submitted advertisements containing violent hate speech to see if Facebook’s controls—either human reviewers or artificial intelligence—would catch them. They didn’t.
“Facebook has identified Brazil as one of its priority countries where it’s investing special resources specifically to tackle election related disinformation,” said Jon Lloyd, senior advisor at Global Witness. “So we wanted to really test out their systems with enough time for them to act. And with the U.S. midterms around the corner, Meta simply has to get this right—and right now.”
Brazil’s national elections will be held on Oct. 2 amid high tensions and disinformation threatening to discredit the electoral process. Facebook is the nation’s most used social media platform. In a statement, Meta said it has “prepared extensively for the 2022 election in Brazil.”
“We’ve launched tools that promote reliable information and label election-related posts, established a direct channel for the Superior Electoral Court (Brazil’s electoral authority) to send us potentially-harmful content for review, and continue closely collaborating with Brazilian authorities and researchers,” the company said.
In 2020, Facebook began requiring advertisers who wish to run ads about elections or politics to complete an authorization process and include “paid for by” disclaimers on them, similar to what it does in the U.S. These increased security measures were implemented in response to the 2016 U.S. presidential election, in which Russia paid in rubles for political ads that aimed to create divisions among Americans.
Global Witness said it violated these rules when it submitted the test ads, which were approved for publication but never actually published. The group submitted the ads from outside Brazil—from London and Nairobi—which should have raised alarm bells.
It was also not required to put a “paid for by” disclaimer on the ads and did not use a Brazilian payment method—all safeguards Facebook says it has put in place to prevent misuse of its platform by malicious actors trying to intervene in elections around the world.
“What’s quite clear from the results of this investigation and others is that their content moderation capabilities and the integrity systems that they deploy in order to mitigate some of the risk during election periods, it’s just not working,” Lloyd said.
The group used ads as a test rather than regular posts because Meta claims to hold advertisements to an “even stricter” standard than regular, unpaid posts, according to its help center page for paid advertisements.
But judging from the four investigations, Lloyd said that’s not actually clear.
“We are constantly having to take Facebook at their word. And without a verified independent third-party audit, we just can’t hold Meta or any other tech company accountable for what they say they’re doing,” he said.
Global Witness sent Meta ten ads that clearly violated its policies on election-related advertisements. They included false information about when and where to vote, for instance, and called into question the integrity of Brazil’s voting machines—echoing disinformation used by malicious actors to destabilize democracies around the world.
In a separate study carried out by the Federal University of Rio de Janeiro, researchers identified more than two dozen ads on Facebook and Instagram during the month of July that promoted misleading information or attacked the country’s electronic voting machines.
The university’s internet and social media department, NetLab, which also participated in the Global Witness study, found that many of those ads had been financed by candidates running for seats in federal or state legislatures.
This will be Brazil’s first election since far-right President Jair Bolsonaro, who is seeking reelection, came to power. Bolsonaro has repeatedly attacked the integrity of the country’s electronic voting system.
“Disinformation featured heavily in its 2018 election, and this year’s election is already marred by reports of widespread disinformation, spread from the very top: Bolsonaro is already seeding doubt about the legitimacy of the election result, leading to fears of a United States-inspired January 6 ‘stop the steal’ style coup attempt,” Global Witness said.
In its previous investigations, the group found that Facebook did not catch hate speech in Myanmar, where ads used a slur to refer to people of East Indian or Muslim origin and call for their deaths; in Ethiopia, where the ads used dehumanizing hate speech to call for the murder of people belonging to each of Ethiopia’s three main ethnic groups; and in Kenya, where the ads spoke of beheadings, rape and bloodshed.
— Associated Press Writer Diane Jeantet contributed to this story.