
Midterm Misinformation: How TikTok, Meta, and Twitter Are Preparing

With less than three months until the U.S. midterm elections, social media companies are preparing for another fierce fight against misinformation. TikTok, Meta, and Twitter say they’re ready for the challenge this time, and each has drafted a similar playbook: more fact-checking, more careful labeling of misinformation, and tighter restrictions on political advertising.

But experts who study social media and politics point out that these new policies aren’t that different from those in place in 2020—which could have consequences, since video content may play a larger role in the spread of misinformation this year.

“They say they tested these processes during the 2018 midterms and 2020 [general election] and that they have it under control, but this is anything but the case,” says Jennifer Grygiel, an associate professor at Syracuse University who researches social media. “We don’t exist in a media environment today that looks like anything we’ve known, and we need to talk about that more.”

Video-based platforms like TikTok and Instagram Reels could be especially problematic, says Zeve Sanderson, the founding executive director of NYU’s Center for Social Media and Politics. Media experts fear that automated moderation systems will struggle to catch misinformation-filled videos as they gain popularity.

“No election that we’ve seen in the last decade has been like the one before,” he says. “We need to remain vigilant across this online ecosystem.”

Since the 2016 election, when misleading information spread widely on social media, the companies have spent heavily to educate users about the dangers of misinformation and have expanded their moderation policies. Even so, many harmful posts have fallen through the cracks.

Here’s what major social media platforms are doing this time to stop election misinformation.

TikTok launches ‘election center’

The video platform has launched an election center where users can find “authoritative information” about voting sites and election results in 45 languages. Election-related posts, such as videos from politicians or parties, will carry clickable labels directing viewers to information about their state’s elections.

This may be the first election cycle in which TikTok has the time and experience to combat the kind of misinformation that garnered so much attention during the 2020 election, when the platform first took off. Grygiel notes that TikTok is no longer just a home for catchy songs: the platform has seen a rise in posts spreading political and social misinformation. During Germany’s last national election, TikTok accounts impersonated prominent political figures. In Colombia, misleading posts falsely attributed quotes from a candidate to a cartoon villain, and in the Philippines, videos amplified myths about the country’s former dictator, whose son became president.

This level of influence suggests that TikTok (owned by Chinese tech company ByteDance) could be a significant player in spreading misleading information. Six hashtags promoting conspiracy theories about the 2020 election, for example, had amassed more than 38 million views by July. The company has, however, already blocked #StopTheSteal hashtags.

Video content can be more challenging than text for platforms’ automated moderation systems to police. “We haven’t seen platforms respond to the ascendancy of video content,” Sanderson says. “They’re going to run into similar issues—human labeling isn’t going to be enough here.”

TikTok banned paid political advertising in 2019, unlike Meta, which pauses such ads only around elections. Some influencers have found loopholes, however, posting political content in exchange for payment. Campaigns and political groups, including President Joe Biden’s team, have collaborated with TikTok influencers in 2020 and since.

The company also says it will lean heavily on fact-checkers, keeping videos out of users’ recommended feeds while they are still under review. Videos that can’t be verified will be labeled as such, and users will see a warning before sharing them.

Still, videos that slip through can be potent. “Video and images are much more powerful in terms of persuading people or making them have emotional reactions,” says Shannon McGregor, an assistant professor of journalism and media at the University of North Carolina. “It’s something that we need to be paying a lot more attention to.”

Meta bans election-week political ads

Just like in 2020, Facebook’s parent company Meta is implementing a “restriction period” for new ads about social issues, elections, or politics in the United States. The ban will run from Nov. 1 through Nov. 8, the week leading up to Election Day, prohibiting any new ads on these topics and barring advertisers from editing ads that are already running.

Meta took a similar approach around the 2020 general election, though that self-imposed ban on political ads remained in effect for about five months rather than one week. The company’s decision to once again pause political advertising is significant in digital politics: it cuts off what campaign strategists describe as a massive pipeline for reaching the potential supporters who fuel fundraising.

“Our rationale for this restriction period remains the same as 2020,” Meta Global Affairs President Nick Clegg said in a statement. “In the final days of an election, we recognize there may not be enough time to contest new claims made in ads. This restriction period will lift the day after the election and we have no plans to extend it.”

Meta also says it has more than 400 people across over 40 teams working on the midterms, including 10 U.S. fact-checking partners to combat viral misinformation; half of them will cover content in Spanish. The company plans to partner with Telemundo and Univision to launch a fact-checking service on WhatsApp, its private messaging platform.

Across its platforms, Meta will remove election-related content that misleads people about the voting process or calls for violence over the results. The company will also reject ads that discourage people from voting or question the legitimacy of the upcoming election, tactics that surfaced during the 2020 cycle.

Twitter labels false and misleading election information

Twitter will label misinformation about elections and other civic events as misleading and offer links to reliable information, warning users before they share or like tweets that fall into this category.

This isn’t a new approach for Twitter. The company rolled out redesigned misinformation warning labels last year and found they increased click-through rates by 17%. Tweets carrying the new labels also saw notable declines in engagement: 13% fewer replies and 10% fewer retweets.

Twitter will also bring back its “prebunks” from 2020: messages that appear at the top of users’ feeds to pre-emptively debunk misinformation. The company has been rolling out state-specific election information hubs during the primaries and now plans to make a national event page accessible to all Americans.

What are the challenges?

Political analysts and social media experts fear misinformation will be an even bigger problem in this election, as the stakes have never been higher. “There already is more misinformation, and things will get worse as we get closer to election day,” McGregor says.

All 435 House seats and 34 Senate seats will be on the ballot as Democrats seek to hold their slim majorities in both chambers. Strategies such as multilingual warning labels and election information hubs may help, but more fundamental problems could prove harder to solve.

Sanderson points to the Jan. 6 attack on the U.S. Capitol, where the problem was that platforms were used to mobilize people, much of it out of public view. “How do you moderate closed groups?” Sanderson asks. “Far too many of the social platform announcements focus on content, and don’t really discuss the potential connection between content and mobilization.”

Individual politicians’ accounts are also a source of misinformation, McGregor says, a problem that “none of these policies address.”

Analysts also worry that social media platforms have eased their focus on fighting misinformation over the past year, including around elections. The New York Times reported that Facebook has shifted attention to the metaverse and cut the number of employees working on election integrity from 300 to 60 this year.

And some of these policies weren’t in place year-round, which means misinformation is already circulating. “It seems like Facebook took a break,” McGregor says. “If the policies are not enforced all the time, then they’re not going to be successful.”

But for all the bad, there’s plenty of good. For one thing, McGregor says, “social media is great at bringing in new voters.”


Write to Nik Popli at nik.popli@time.com.
