States Could Let Parents Sue Big Tech for Addicting Kids. Here’s What That Really Means.

Lawmakers in the United States are looking for ways to clamp down on platforms like YouTube, TikTok and Instagram, which have been accused of using addictive social media algorithms to exploit children. Last week, legislators in California and Minnesota advanced proposed legislation that would hold companies accountable for the toll their platforms take on young people’s mental health. The moves echo growing calls in Washington for meaningful oversight of Big Tech to protect children’s safety.

The California bill would let parents sue companies that don’t take steps to avoid addicting children. The proposal is the first of its kind in the U.S. and the most aggressive state-level effort yet to rein in Big Tech’s use of algorithmic tools that draw on children’s personal data to generate recommendations and otherwise maximize their engagement. It would hold social platforms legally accountable for features designed to be addictive to children, such as “like” buttons and endless scroll. Violators could face civil penalties of up to $25,000 per child, or damages of $1,000 or more per child in a class-action suit, according to the University of San Diego School of Law Children’s Advocacy Institute, a co-sponsor of the bill.

Still, if passed, this type of liability law likely wouldn’t be very successful at reining in Big Tech, says Abbey Stemler, an associate professor of business law and ethics at Indiana University who specializes in internet law, regulatory theory, and Big Tech data. “This law isn’t really saying anything,” she tells TIME. “It’s too vague to actually be actionable.”

What are the challenges?

Dubbed the Social Media Platform Duty to Children Act, the proposal was advanced in the California Assembly on March 15 by a bipartisan pair of lawmakers, Republican Jordan Cunningham of Paso Robles and Democrat Buffy Wicks of Oakland, with support from the Children’s Advocacy Institute. Cunningham told the Los Angeles Times that these companies deliberately design apps to keep kids coming back to them. “Who should pay the social cost of this?” he asked. “Should it be borne by the schools and the parents and the kids, or should it be borne in part by the companies that profited from creating these products?”

California’s bill advanced the same day that Minnesota made strides on its own measure aimed at protecting young people from social media. A state House committee approved a bill that would ban social media sites from using algorithmic recommendations to suggest content to users under 18; the House Judiciary Finance and Civil Law Committee is set to vote on the measure on March 22. If the bill passes, violating companies would face civil penalties of $1,000 and damages. “The bill would require anyone operating a social media platform with more than one million users to require that algorithm functions be turned off for accounts owned by anyone under the age of 18,” the bill summary reads.

While these types of proposals are intended to force social platforms to bear some responsibility for the damages inflicted by their algorithms, Stemler says a more effective strategy would be to enact measures that address companies’ ability to access the data that fuels those algorithms in the first place.

“The reason why algorithms work is because they suck in as much data as possible about what these young people are doing,” she says. “And once they have that data, they can use it. So instead of saying, ‘Hey, don’t create addictive systems,’ we really should be focused on [preventing platforms from] learning this data. The easiest way to do that is just to limit access to the data itself.”

Cunningham and Wicks took a similar approach in a separate bill they introduced in February, the California Age-Appropriate Design Code Act, which would restrict social platforms’ collection of children’s personal and location data.

What’s happening in Congress

Members of Congress have also introduced federal legislation to reduce online dangers for children. Senators Marsha Blackburn and Richard Blumenthal introduced the Kids Online Safety Act in February. The bipartisan bill would give parents and children options to safeguard their data, disable addictive product features, and opt out of algorithmic recommendations, and it would require platforms to enable the most protective settings by default.

Much of this push for regulatory action has been driven by internal documents leaked by Facebook whistleblower Frances Haugen. Those documents showed that Meta, the parent company of Facebook and Instagram, downplayed its own research on the harmful effects of its platforms on young people, including eating disorders, depression, and suicidal thoughts. The revelations led to a series of Congressional hearings and growing calls for social media’s biggest players to face accountability for how they keep young users scrolling through content for as long as possible.

Features that encourage endless scrolling are among the most harmful to young people, according to the company’s own research. “Aspects of Instagram exacerbate each other to create a perfect storm,” one report leaked by Haugen read.

Popular video sites like YouTube and TikTok have also come under fire for their algorithmic recommendation systems. The New York Times reported in December that the inner workings of TikTok’s algorithm were leaked by a source who was “disturbed by the app’s push toward ‘sad’ content that could induce self-harm.”

As state and federal efforts grow, Stemler says it’s crucial that lawmakers get it right—and fast.

“My concern for this generation’s mental health is serious,” she says. “There are deep problems coming from the pandemic and isolation… tech has become the way that young people interact with the world.”
