Twitter Inc.’s highest-profile users—those with lots of followers or particular prominence—often receive a heightened level of protection from the social network’s content moderators under a secretive program that seeks to limit their exposure to trolls and bullies.
The internal program code-named Project Guardian includes thousands of accounts that are most vulnerable to harassment or attack on Twitter, such as politicians, journalists and musicians. When someone flags abusive posts or messages related to those users, the reports are prioritized by Twitter’s content moderation systems, meaning the company reviews them faster than other reports in the queue.
Twitter says its rules are the same for all users, but Project Guardian ensures that potential issues related to prominent accounts—those that could erupt into viral nightmares for the users and for the company—are dealt with ahead of complaints from people who aren’t part of the program. This VIP group, which most members don’t even know they’re a part of, is intended to remove abusive content that could have the most reach and is most liable to spread on the social-media site. It also helps protect the Twitter experience of those prominent users, making them more likely to keep tweeting—and perhaps less apt to complain about abuse or harassment issues publicly.
“Project Guardian is just the internal name for one of many automated tools we deploy to identify potentially abusive content,” Katrina Lane, vice president for Twitter’s service organization, which runs the program, said in a statement. “The techniques it uses are the same ones that protect all people on the service.”
The list of users protected by Project Guardian changes regularly, according to Yoel Roth, Twitter’s head of site integrity, and doesn’t only include famous users. The program is also used to increase protection for people who unintentionally find the limelight because of a controversial tweet, or because they’ve suddenly been targeted by a Twitter mob.
That means some Twitter users are added to the list temporarily while they have the world’s attention; others are on the list at almost all times. “The reason this concept existed is because of the ‘person of the day’ phenomenon,” Roth says. “And on that basis, there are some people who are the ‘person of the day’ most days, and so Project Guardian would be one way to protect them.”
The program’s existence raises an obvious question: If Twitter can more quickly and efficiently protect some of its most visible users—or those who have suddenly become famous—why couldn’t it do the same for all accounts that find themselves on the receiving end of bullying or abuse?
The short answer is scale. With more than 200 million daily users, Twitter receives too many abuse reports to review them all immediately, so it prioritizes reports based on multiple data points, including how many followers a user has, how many views a tweet gets, and whether the reported content is actually likely to constitute abuse. An account’s inclusion in Project Guardian is just one of those signals, though people familiar with the program say it’s a powerful one.
Roth said the special treatment can’t apply to everybody; if it did, there would be no point in having a list.
“If the list becomes too big, it stops being valuable as a signal,” he added. “We really want to focus on the people who are getting an exceptional or unprecedented amount of prominence in a specific moment…this is really focused on a small slice of accounts.”
Project Guardian has protected users in a variety of professions. Recent examples include James Charles, a YouTuber and makeup artist who was targeted by online harassment; Wael Ghonim, the Egyptian internet activist; and Scott Gottlieb, the former U.S. Food and Drug Administration commissioner, who has tweeted frequently about Covid-19 vaccines. The program has also included journalists, even news interns, who write about topics that can result in harassment, like 8chan or the Jan. 6 riot at the U.S. Capitol.
Twitter has also used Project Guardian to safeguard its own employees, including Roth. After the company first fact-checked then-President Donald Trump’s tweets in May 2020, Roth was singled out by Trump and his supporters as the employee behind the decision, leading to attacks and death threats. Roth, who wasn’t actually the employee who made the call, says he was temporarily added to the Project Guardian list at the time. “All of a sudden I became a lot more famous than I was the day before,” he said. He was removed from the program once the harassment died down.
Twitter employees can recommend accounts for the database. In some cases, a famous user’s manager or agent will approach the company and ask for extra protection for their client; news organizations’ social-media managers have made similar requests on behalf of journalists writing high-profile, controversial stories. Users in the program don’t necessarily know they are receiving any extra attention.
“We look at it as, who are the people who we know have been the targets of abuse or who are predicted to be likely targets of abuse?” Roth said.
Twitter said it is getting better at detecting abuse and harassment automatically, meaning it doesn’t need to wait for a user to report a problem before sending it to a human moderator. The company says its technology now surfaces 65% of the abusive content it removes or asks users to delete before anyone reports it.
Lane said Twitter uses both technology and human review “to proactively monitor Tweets and Trends, especially when someone is put in the spotlight unexpectedly or there is a significant uptick in abuse or harassment.”
It’s not clear whether there was any one event or incident that sparked Project Guardian, though it has existed for at least a couple of years, people familiar with the program said.
The list doesn’t just protect prominent users; it also helps protect Twitter’s reputation.
In years past, Twitter’s image has suffered when high-profile users publicly criticized the service, or abandoned it entirely, over its failure to combat abuse and harassment. That’s been particularly common with famous women. After being overwhelmed by negative tweets or messages, model Chrissy Teigen, singer Lizzo, New York Times reporter Maggie Haberman and actor Leslie Jones all publicly quit the service. (They’ve all since returned.)
Such high-profile celebrity harassment episodes appear to have become less frequent, and some insiders believe Project Guardian may be one reason.
Project Guardian is another example of the different treatment social-media platforms give certain pre-eminent users and accounts. A September Wall Street Journal investigation found that Meta Platforms Inc., which owns Facebook and Instagram, exempted some high-profile users from certain rules, allowing them to post content that would have been removed or flagged had it come from anyone else.
Twitter representatives insist Project Guardian is different: all users must adhere to the same standards, and reports related to users in the program are judged the same way as all other content reports. The process usually just happens faster.
While Twitter’s rules may apply to everyone, punishments for breaking them aren’t always equal. World leaders, for example, get more leeway than most users when they break Twitter’s rules. Meta and Twitter have spent years building relationships with prominent users and creating teams to help them use their products. Twitter even stopped showing advertisements to some prominent users in 2016 in hopes of improving their experience.