Why Tech Companies Couldn’t Stop the Buffalo Shooting Video From Spreading
Three years after social media companies promised to stop videos of terrorist attacks from going viral on their platforms, the attack in Buffalo, N.Y., revealed how limited their ability to do so remains.
On Saturday, a self-described white supremacist attacked a Buffalo grocery store, targeting Black customers. He killed 10 people and injured three more in what authorities called a racist attack, and livestreamed the assault on Twitch, an Amazon-owned streaming site. Twitch said it took down the broadcast within two minutes of the violence starting, but that was enough time for copies of the video to be downloaded and shared widely on social media platforms over the weekend.
Many versions of the video had been edited, with added text, blurring, or cropping, in apparently successful attempts to evade the platforms’ automated removal systems, according to Jacob Berntsson, head of policy and research at Tech Against Terrorism, a U.N.-backed project for countering online extremism. Numerous media reports said copies of the video circulated on Twitter and Facebook on both Saturday and Sunday.
“He knew that as long as there was time for people to watch and download, then this would spread [online] regardless of how quickly Twitch took it down,” said Berntsson in an interview on Monday, referring to the attacker. “Clearly an audience was ready and prepared to download this kind of stuff.”
Bloomberg reported that the Buffalo shooter had openly discussed his plan to kill people of color in a livestreamed mass shooting, posting details to the chat app Discord for several months and using it to direct people to his Twitch livestream. Discord did not respond to TIME’s request for comment.
Inspired by past attacks
Livestreamed attacks by white supremacists can be a potent radicalization tool for future extremists, and platforms are still struggling to remove edited copies of footage from previous attacks. In 2019, a self-described white supremacist gunman murdered 51 worshippers at two mosques in Christchurch, New Zealand, livestreaming the attack on Facebook. Later that year, a man wearing a helmet camera attacked a synagogue in Halle, Germany, killing two people and injuring two more while livestreaming on Twitch. Both videos were widely shared on social media, prompting a game of whack-a-mole between users reuploading them and the tech companies trying to take them down.
Read more: ‘A Game of Whack-a-Mole.’ Why Facebook and Others Are Struggling to Delete Footage of the New Zealand Shooting
By his own account, those videos played a direct role in the Buffalo shooter’s radicalization. In a manifesto posted online shortly before the attack and seen by TIME, he said he was inspired by the Christchurch attacker’s politics and had decided to livestream his own attack in order to inspire others. He also wrote that he chose to stream on Twitch because it had taken the platform 35 minutes to remove the Halle attacker’s livestream.
The two minutes it took Twitch to remove the Buffalo attack video is a marked improvement over the response to the Halle attack, and reflects the technical strides tech companies have made in the years since. “That’s a very strong response time considering the challenges of live content moderation, and shows good progress,” Twitch said in a statement to TIME on Monday, adding that it was working hard to stop copies of the video from being uploaded. Facebook’s parent company, Meta, and Twitter said they had designated the video under their violence-and-extremism policies shortly after the shooting and were removing copies from their platforms, as well as blocking links to external sites where it was hosted.
Read more: ‘There’s No Such Thing As a Lone Wolf.’ The Online Movement That Spawned the Buffalo Shooting
Still, despite that progress, tech companies’ efforts have not been enough to stop these videos from spreading, whether during the livestream itself or afterward, as copies are reuploaded elsewhere. “I’ll blame the platforms when we see other shooters inspired by this shooter,” says Dia Kayyali, associate director for advocacy at the digital rights group Mnemonic. “Once something is out there, it’s out there. That’s why the immediate response has to be very strong.”
How platforms work together to combat terrorist content
Platforms now collaborate far more closely than they did in the aftermath of earlier terror attacks.
In the immediate wake of the New Zealand attack, many of the world’s biggest social media platforms signed onto the “Christchurch Call,” a commitment to stamp out the spread of terrorist content online. Through an industry group, the Global Internet Forum to Counter Terrorism (GIFCT), the platforms are now sharing identifying data about the Buffalo shooter’s video and manifesto with one another, to make the content easier to remove from their sites.
GIFCT allows platforms to share digital fingerprints, known as hashes, of terrorist content they have taken down from their sites. This lets, for instance, Facebook swiftly take down a copy of a terrorist video that had previously appeared only on Twitter. Instead of sharing the actual file, hashing represents a video, photo, or other document as a short string of characters. A hash cannot be used to reconstruct the original content, but identical content run through the same algorithm will always return the same hash. When new content is uploaded, its hash can be matched against a database of known illegal content, allowing a company to remove it even if it has never seen the original before.
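To make that concrete, here is a minimal sketch of exact hash matching in Python. SHA-256 stands in for whatever algorithm the platforms actually use, and the file names and database are hypothetical, not GIFCT’s real infrastructure.

```python
import hashlib

def file_hash(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical shared database: hashes of videos that member
# platforms have already identified and removed.
shared_hash_db = {file_hash("known_terrorist_video.mp4")}

# A platform checks each new upload against the shared database.
upload = "new_upload.mp4"
if file_hash(upload) in shared_hash_db:
    print(f"{upload} matches known content and can be removed on sight")
```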
Hashing lets platforms share information about illegal content, such as terrorist propaganda and child abuse imagery, without having to distribute the files among themselves. GIFCT’s members include Facebook, YouTube, Amazon, and Twitter.
The problem with hashing, however, is that a bad actor only needs to alter a file a small amount (for example, by changing its color profile or cropping the picture) to produce a totally different hash, and thus evade the platforms’ automated removal mechanisms. So, three years after the Christchurch attack, the only tools required to fool the platforms’ automated systems for removing terrorist content are basic video editing software and some persistence. This is known as “adversarial” behavior, and it makes the problem of scrubbing terrorist content from the internet far more difficult, according to Kayyali and Berntsson.
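That brittleness is easy to demonstrate. In this sketch (again with SHA-256 as a stand-in, and synthetic bytes in place of a real video file), flipping a single bit yields a completely unrelated hash, so the edited copy sails past an exact-match database.

```python
import hashlib

original = b"fake video bytes" * 1000   # stand-in for a real video file
edited = bytearray(original)
edited[0] ^= 0x01                       # flip one bit, akin to a tiny re-edit

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(bytes(edited)).hexdigest())
# The two digests bear no resemblance to each other, so an exact
# hash lookup misses the altered copy entirely.
```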
Read more: These Tech Companies Were Able to Eliminate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes
While hashing’s shortcomings are not the root cause of the problem, some counterterrorism experts say they are one of the core weaknesses in the platforms’ current joint approach to terrorist content. “The patchy response from various platforms that are [members] of the hash-sharing database arguably suggests that improvements can be made,” Berntsson says. “Platforms should be able to handle this, but it still speaks to the fact that there are groups of people who are quite committed to circumventing the moderation tools that are in place.”
In a statement, a Meta spokesperson said hash-sharing is only a small part of the company’s approach to the Buffalo video. The company has assigned human moderators to search for copies, and has uploaded copies of both the video and the shooter’s manifesto to internal databases. It says it uses machine learning tools to catch lookalike copies of videos in those databases even when they have been altered, although it is clear from the video’s proliferation that these tools are not 100% accurate. And beyond hash-sharing, the gains from Meta’s computational resources and workforce only help remove copies from Meta’s own platforms, not from other sites like Twitter, Twitch, or TikTok. That leaves many companies duplicating the effort required to locate and take down the altered videos, and human bandwidth is often the biggest bottleneck when it comes to enforcing bans on terrorist content.
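Meta has not detailed how those tools work, but lookalike matching is commonly built on perceptual hashes, which change only slightly when content is lightly edited. Below is a minimal, hypothetical difference-hash (“dHash”) sketch over a single synthetic video frame, with a Hamming-distance threshold standing in for a real matching pipeline; none of this reflects Meta’s actual system.

```python
def dhash(frame, size=8):
    """Difference hash of a grayscale frame (2-D list of 0-255 ints).

    Downsamples the frame and records whether each sampled pixel is
    brighter than its right-hand neighbor, yielding size*size bits.
    """
    h, w = len(frame), len(frame[0])
    rows = [[frame[r * h // size][c * w // (size + 1)]
             for c in range(size + 1)] for r in range(size)]
    return [int(row[c] > row[c + 1]) for row in rows for c in range(size)]

def hamming(a, b):
    """Number of bit positions where two hashes disagree."""
    return sum(x != y for x, y in zip(a, b))

# A synthetic 64x64 frame, plus a copy brightened by 10: every byte
# differs, so a cryptographic hash of the two would be unrelated.
frame = [[(x * y) % 256 for x in range(64)] for y in range(64)]
edited = [[min(p + 10, 255) for p in row] for row in frame]

distance = hamming(dhash(frame), dhash(edited))
print(f"Hamming distance: {distance} / 64")
if distance <= 10:  # threshold is illustrative, not a production value
    print("Flag as a likely lookalike copy for human review")
```

Because the brightness shift barely changes which pixels are brighter than their neighbors, the edited frame stays within the threshold, whereas an exact hash would change completely.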
A GIFCT spokesperson told TIME on Monday that the group is exploring other ways to share information about terrorist content among platforms, but said those explorations have not advanced beyond their initial stages.
The platforms’ performance has been frustrating for some in the sector. “I’m sure there’s issues with people remixing content and only posting a clip of it, and all of the tricks that we know to try to evade automatic detection,” says Kayyali, who sits on civil-society advisory boards for both the Christchurch Call and GIFCT. “But still, I want to hear exactly the technical explanation from GIFCT about how it was possible that hours after [they shared hashes of the video among the platforms] the video was still out there.”
A bigger problem
Even if the big tech platforms could remove terrorist content completely, it could still thrive on smaller platforms. As the shooter’s open planning on Discord showed, many of the people circulating the video are likely collaborating through private messaging channels and smaller social networks. Only 22 people watched the Buffalo attacker’s Twitch stream in real time, according to the Washington Post. But that was all it took for some of them, presumably directed to the stream by the attacker himself on Discord, to download the video and spread it far and wide.
Experts say that, beyond the big platforms and media outlets, governments also have an important role to play. Current U.S. law does not designate domestic terrorists the way it does Islamist extremists, which means platforms lack the legal certainty they have when dealing with content from Al Qaeda and ISIS, which they have largely succeeded in scrubbing from their platforms. “Designation can help provide some more legal certainty and clarity for companies,” says Berntsson. His organization, Tech Against Terrorism, runs its own tool, similar to GIFCT’s, to alert platforms to terrorist content.
The shooter said in his manifesto that the video of the Christchurch attack had helped radicalize him. But online platforms are not the only media channels through which racist conspiracy theories have entered mainstream political discourse; cable TV has played a role too.
In his manifesto, the Buffalo shooter described his belief that white people were being intentionally replaced in the U.S. by people of other races, a conspiracy theory that has recently been picked up, and amplified, by Fox News host Tucker Carlson and Republican politicians.
“It doesn’t start when this individual presses ‘stream’ on Twitch,” says Berntsson. “It starts long before.”
– WITH REPORTING BY VERA BERGENGRUEN/WASHINGTON, D.C.