Facebook boosted harmful content for 6 months – report
The problematic-content filter backfired, increasing the distribution of ‘misinformation’
Social media giant Facebook’s content filters, which are meant to downrank harmful content and ‘misinformation’, have been malfunctioning for six months, according to an internal report, raising the familiar question of who will watch the watchmen.
As many as half of News Feed views were subject to potential “integrity risks” over the past six months, according to an internal report circulated last month and seen by The Verge on Friday. The report states that engineers first noticed the problem in October but could not find its cause, observing flare-ups lasting weeks at a time until they were allegedly able to get the bug under control on March 11.
Those “integrity risks” weren’t limited to flare-ups of “fake news,” though that was the symptom that first tipped off the company’s engineers. Rather than suppressing repeat offenders previously red-flagged by Facebook’s fact-checkers for spreading ‘misinformation’, the News Feed was boosting their distribution, leading to as many as 30% more views for any given post.
Facebook’s filters were also failing to downrank nudity, violence, and Russian state media, which the platform treats as equally offensive. Following Meta’s announcement that it would no longer restrict calls for violence against Russians in the context of the Ukrainian conflict, Moscow’s media regulator Roskomnadzor blocked access to Facebook and Instagram in Russia – more than a week before Meta engineers were allegedly able to figure out why the platforms were boosting harmful content.
The internal report revealed that the filtering problem actually dates back to 2019. Meta spokesman Joe Osborne told The Verge that “five separate instances showed inconsistencies when downranking was done. These were correlated to small and temporary improvements in internal metrics,” but that the issue didn’t have a “noticeable impact” until October. Contrary to the report, Osborne insisted that “our metrics have not seen any significant, long-term impacts,” and that the bug didn’t apply to “content that met its system’s threshold for deletion.”
The confusion over the longstanding presence of the ‘bug’ shines a light on the growing body of content Facebook subjects to downranking. No longer suppressing just rule-breaking content, the platform’s algorithms also target “borderline” content that supposedly comes close to breaking its rules, plus other content its AI flags as potentially violating but requiring human review to confirm.
Even CEO Mark Zuckerberg has acknowledged that users are driven to engage with “more provocative and sensationalist” content, and the company announced last year that all political content would be downranked in News Feed.
Meta sends message to ‘ordinary Russians’
Facebook – now known as Meta – has recently come under the microscope for the $400 million it poured into the 2020 election, almost all of which went to districts won by then-candidate Joe Biden. The funds went to nonprofits that the IRS prohibits from supporting any particular political party or candidate, yet they were administered by Democratic operatives, including former strategists for Hillary Clinton and Barack Obama.
Congressional Republicans have also demanded documents and communications regarding Meta’s efforts to suppress the Hunter Biden laptop story, which would have made an incriminating “October surprise” against then-candidate Joe Biden had it not been unilaterally suppressed by Facebook and Twitter.