4 Ways to Fix Social Media That Don’t Involve Elon Musk

Elon Musk’s bid to acquire Twitter in a deal worth $44 billion has sparked debate and concern about what the platform will look like once he takes over. The world’s richest man will steer the future of a small but influential platform of more than 200 million users, inheriting a heated debate over how to tackle hate speech, misinformation and polarization.

Musk has alarmed Twitter employees and digital safety experts by describing himself as a “free speech absolutist,” sparking fears that Twitter’s content moderation regime, while imperfect, could be jettisoned.

How to improve online spaces is a constant conversation, and one that often plays out on Twitter itself. The internet can be a dangerous place for people of color, women, and other marginalized groups. Amnesty International’s research found that 29% of U.S. female users had been subjected to threats of sexual or physical violence online.

Read More: Twitter has helped build social movements around the world. Organizers are worried about its future

Here are four suggestions from experts to reform social media that don’t cost $44 billion.

Embrace radical transparency

Twitter and Facebook have the most serious problems with harassment, abuse, and the spread of misinformation.

Twitter has taken steps internally to get these problems under control. But currently, “it’s a black box,” according to Michael Kleinman, director of Amnesty International’s Silicon Valley Initiative. “It’s still unclear after all these years how they decide what is allowable and what is not,” he tells TIME. In some cases, it is also unclear which of the company’s interventions are working and which aren’t.

One radical transparency measure would be for platforms to share how their curation and content moderation algorithms work, disclose moderation decisions and the outcomes of user reports, and publish the number of appeals, their results, and the effectiveness of actions taken. Musk has expressed willingness to “open source” Twitter’s algorithm, although some digital safety experts caution that transparency is no substitute for good content moderation.

This would help accrue “a kind of common law,” says Lucas Graves, assistant professor at the University of Wisconsin-Madison. Right now, he argues, platforms operate like secret courts, where the judges’ reasoning is hidden and the judgments are lost forever. Transparency “pushes back against the arbitrariness” of executives, Graves says.

Professionalize content moderation

“I would immediately scale up the number of human moderators,” says Matthias Kettemann, professor of innovation law at the University of Innsbruck. There are clear limits to relying on algorithmic moderation. “Humans are better than machines at recognizing nuances in speech, distinguishing between humor and hate,” and at assessing posts in context, says Kettemann.

Moderators should act more like planners than firefighters, anticipating risks and heading them off before they materialize.

Moderation also needs an injection of local expertise. “We see a need for companies to address a chronic underinvestment in user safety outside of North America and Western Europe,” Deborah Brown, senior researcher and advocate on digital rights at Human Rights Watch, tells TIME.

These steps would cost significant money, particularly for smaller platforms such as Reddit, which relies largely on volunteer moderators and has had problems with hate speech, and Snapchat. But for Kettemann, it’s worth it. “The sheer amount of money that Elon is spending, if a fraction of that was spent on content moderation, it would go extremely far towards creating a better online experience.”

Professionalized moderation also means enforcing existing guidelines, particularly when public officials break the rules. Twitter repeatedly labeled Donald Trump’s tweets for spreading false information about the vote-counting process before eventually banning him.

One model for professionalizing moderation, Graves says, is to put it in the hands of independent experts whom tech companies do not control. One example is Facebook’s partnership with fact-checkers certified by the International Fact-Checking Network.

Musk will discover quickly that moderation on social media is complicated. Rasmus Kleis Nielsen, director of the University of Oxford’s Reuters Institute, says that political polarization means there is no agreement on what good and bad moderation, or even “harmful” content, looks like.

It is also important to consider the wellbeing of the moderators themselves. Content moderators for Meta-owned Facebook and WhatsApp have raised concerns over their working conditions and treatment. “It can be quite traumatic work,” Human Rights Watch’s Brown says.

Read More: Inside Facebook’s African Sweatshop

Empower people and place human rights at the center

Platforms were designed to capture user attention in order to sell advertising, says Samuel Woolley, an assistant professor at the Moody College of Communication at the University of Texas at Austin. Without a reorientation, he says, “user wellbeing, trust, safety” will remain secondary.

Giving users control over their privacy and the content they see is crucial. It will also require “cutting off a revenue stream based on pervasive surveillance,” Brown says.

Platforms have typically “rushed to capture new markets,” Brown adds, without “carrying out human rights due diligence.” That has had some catastrophic consequences, like Facebook’s promotion of posts supporting Myanmar’s military, which has been widely accused of committing genocide in the country.

Regulate robustly on a global scale

Ultimately, there’s a limit to what platforms will do voluntarily, and the European Union has begun to force social media companies to change.

This spring, the E.U. reached agreement on two landmark pieces of legislation: the Digital Services Act, which will require platforms to tackle misinformation and disclose how they amplify divisive content, and the Digital Markets Act, which aims to stop big tech companies from abusing their dominance of the digital market. And there’s a stick attached: platforms that skirt the legislation could be fined billions of dollars.

Similar U.S. measures to police online speech would likely run into First Amendment problems, and congressional polarization has hindered progress on antitrust. But if the E.U. forces companies to change their business practices in one region, those changes may well be implemented elsewhere, since it is easier for companies to standardize procedures. “A rising tide lifts all boats,” Kleinman says.

Others have called for more aggressive antitrust measures, such as breaking up large tech companies. The argument is that platforms like Meta or Alphabet, by owning digital marketplaces while also using user data to generate ad revenue, are “both player and referee,” says Gennie Gebhart, director of activism at the Electronic Frontier Foundation.

Nielsen points out that some lawmakers are pursuing the opposite path. Brazil’s President Jair Bolsonaro, for instance, has banned social media companies from removing certain content. Meanwhile, India’s government has imposed rules on social media companies that have been branded a step toward “digital authoritarianism” by international NGOs. The regulations force social media companies to remove posts that the government says are illegal, including content that threatens “the interests of the sovereignty and integrity of India,” public order, decency, morality, or incitement to an offense.

For now, a universal standard for regulating social media looks a long way off.
