How to Make a Better System for Regulating Social Media

Is it sensible to try to regulate social media platforms? Is it even possible? In almost every democracy, these questions have been a source of frustration for lawmakers. And finally, after years of debate, some answers are coming into view.

Prior to 2016, online regulation wasn’t high on the political agenda. That changed with the 2016 US presidential election and the Brexit referendum: in each case the losing side believed, with some justification, that it had been manipulated by shady online forces. Today powerful voices on the right also argue for regulation, to stop platforms from “censoring” conservative voices.

In fact, the basic argument for legislative intervention is non-partisan. It is simply that, as more and more of our discourse migrates online, social media platforms are increasingly entrusted with drawing the boundaries of free expression. They order, filter and present the world’s information. They determine what may be said and by whom. They ban and approve ideas and speakers. In doing so, they inevitably apply their own rules, beliefs, biases and principles. That’s not a criticism (sometimes the right will be aggrieved, sometimes the left), but it does mean that the choice is not between regulating speech and leaving it alone. Speech is already regulated, by the platforms themselves.

They have formidable enforcement powers: they can silence voices or suppress ideas with a single click. The case for regulation does not rest on the often simplistic claim that particular platforms are biased in one direction or another. It rests on the fact that platforms wield growing power over democratic discourse without appropriate checks. They will make mistakes, some of which will offend the principles of a free society. They may inadvertently build systems that damage the democratic process. Like others in positions of social responsibility (lawyers, doctors, bankers, pilots), those who assume the power to shape the speech environment ought to be subject to a degree of oversight. Why should a pharmacist be held to higher standards and qualifications than someone who manages a large social network?

A second and more challenging question is whether it is practicable to regulate social media platforms. Here there are at least three overlapping challenges.

First, there is the underlying concern that governments might become too involved in the regulation and monitoring of speech. History shows that even democratic regimes can be tempted to over-censor, whether in the name of religious orthodoxy, moral propriety, political correctness, national security, public order, or even (with the connivance of their supporters) political expediency. A sound system of social media governance must not hand too much power to the state. That principle is fundamental to the U.S. Constitution.

The second challenge is scale. Platforms come in very different sizes. For small platforms, burdensome regulation could make survival difficult. For the largest companies, the challenge lies in the sheer size of their operations. Facebook hosts billions of posts every day. After a British teenager took her own life in 2017 (the tragedy that prompted the UK Parliament to review its laws), Facebook and Instagram removed around 35,000 posts relating to self-harm and suicide every day. At that volume, mistakes are inevitable, even with sound rules and properly resourced moderation. As Monika Bickert, Facebook’s Head of Global Policy Management, has put it: “A company that reviews a hundred thousand pieces of content per day and maintains a 99 per cent accuracy rate may still have up to a thousand errors.” And even that hypothetical example understates the scale of the challenge.
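A rough back-of-envelope calculation shows why. The first line restates Bickert’s hypothetical; the second scales it up to a billion posts a day, an illustrative assumption based on the “billions of posts” figure above rather than an official statistic:

\[
100{,}000 \times 1\% = 1{,}000 \ \text{errors per day}
\]
\[
1{,}000{,}000{,}000 \times 1\% = 10{,}000{,}000 \ \text{errors per day}
\]

At that scale, even near-perfect accuracy still leaves an enormous absolute number of wrong calls.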

The final problem is that people cannot agree on what an “ideal” online speech environment would look like. Some goals, like stopping the dissemination of child pornography, command broad consensus. Others are harder to pin down. Consider the issue of online disinformation. It is up for debate whether the best way to counter it is (a) to remove it entirely; (b) to stop algorithms from amplifying it; or (c) simply to rebut it with the truth. There is no philosophically correct answer here; reasonable people may disagree. The same goes for questions about how to regulate speech that is incendiary but not unlawful (such as claims that the 2020 US presidential election was “stolen”), speech that is offensive but not unlawful (for example, mocking a religious prophet), and speech that is harmful but not illegal (such as content encouraging young girls to starve their bodies, or quack theories about COVID-19). Should this kind of speech be banned? Suppressed? Merely rebutted? Even in countries with strong free speech norms, no single policy will be universally accepted as correct.

These challenges have led many commentators to conclude that regulating social media is futile. But it is worth remembering that no system of regulation will ever be perfect. Speech is by nature chaotic. There will always be controversy and tumult. Lies and slanders will never disappear. And on social media, conflict travels faster than consensus: moral outrage in a tweet has been found to increase its retweet rate by around 17%.

Instead of regulatory perfection, the sensible aim is harm reduction. Rather than trying to eliminate every online harm, which is impossible, we should try to reduce their incidence and impact. Progress means incremental gains that do not create new harms along the way. The question is not “would this system be perfect?” but “would it be better than what we’ve got?”

How would you design a better system?

A better system would rank platforms according to the social risk they pose. At the lowest end sit small online communities, fan sites and hobbyist groups. These spaces should be subject to only minimal regulation and should remain relatively free from liability for the content they host. That is not because small platforms are always pleasant places (many are dens of iniquity) but because they are easy to leave and easy to replace, and the harms they generate do not usually spill over into wider society. Burdensome regulation here would be needlessly restrictive. At the other extreme sit large, systemically important platforms such as Facebook or Twitter, which can set the political agenda, spread content at speed, and shape the views and behaviour of millions. Users cannot easily leave them and rivals cannot easily challenge them. These spaces are central to both civic and commercial life. Platforms of this kind require more rigorous oversight.

Of course, size would not be the only guide to risk (small platforms can pose real social risks if they become hotbeds of extremism, for example), but it would be an important one. The Digital Services Act, adopted by the European Parliament in July, takes a similar approach, distinguishing between “micro or small enterprises” and “very large online platforms” that pose “systemic” risks.

The next step would be to regulate sufficiently risky platforms at the system or design level (as proposed for the UK’s Online Safety Bill, which is currently on ice). For example, lawmakers could require platforms to have proportionate or reasonable systems in place to reduce online harassment, or proportionate or reasonable systems to reduce foreign interference in political processes. Enforcement action would back these requirements: platforms could face sanctions for failing to meet the standards, with fines and even criminal liability reserved for serious misconduct. On the flipside, if platforms’ systems were certified as adequate, they would enjoy a high degree of immunity from lawsuits brought by individual users. A trade-off, in other words.

This brand of regulation (system-level oversight, graded according to social risk, with an emphasis on outcomes) means the regulator would not be expected to interfere with on-the-ground operational decisions. There would be no government “censor” scrutinizing individual moderation decisions or pieces of content. As long as their systems were adequate, platforms would be permitted to make mistakes. They would bear the creative responsibility of working out how best to achieve goals that have been democratically agreed, and they would have an incentive to devise new interfaces, algorithms and business models to do so. This is a good thing.
