I am going to confess something. I am not an expert on the Kids Online Safety Act. Here’s what I know. There’s broad consensus that social media can have harmful mental health effects on children. This was explained by Surgeon General Vivek Murthy in a June editorial in the New York Times.
The mental health crisis among young people is an emergency — and social media has emerged as an important contributor. Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms, and the average daily use in this age group, as of the summer of 2023, was 4.8 hours. Additionally, nearly half of adolescents say social media makes them feel worse about their bodies.
It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents. A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe.
Murthy laid out a variety of other concerns that he wanted to see addressed with legislation.
Legislation from Congress should shield young people from online harassment, abuse and exploitation and from exposure to extreme violence and sexual content that too often appears in algorithm-driven feeds. The measures should prevent platforms from collecting sensitive data from children and should restrict the use of features like push notifications, autoplay and infinite scroll, which prey on developing brains and contribute to excessive use.
Additionally, companies must be required to share all of their data on health effects with independent scientists and the public — currently they do not — and allow independent safety audits.
I also know that a month after Murthy’s editorial appeared in the New York Times, the U.S. Senate passed the Kids Online Safety Act in an overwhelming bipartisan 91-3 vote. The bill acted on many of Murthy’s concerns. Only Republicans Mike Lee of Utah and Rand Paul of Kentucky and Democrat Ron Wyden of Oregon opposed passage.
It should have sailed through the U.S. House of Representatives, but that didn’t happen. Mark Zuckerberg, whose company Meta owns Facebook and Instagram, hired an army of lobbyists to fight the bill, and he showered House Republican leadership and members with donations. As a result, Speaker of the House Mike Johnson killed the legislation, and it never came up for a vote.
While I was reading Ruth Reader’s retelling of this story in Politico, I kept hoping for an education on what the bill would actually do, but I was disappointed. Instead, I learned some of the arguments that Zuckerberg’s lobbyists and congressional allies made in opposition to the bill. They said it would “threaten free speech by allowing Washington regulators to squelch conservative and religious voices.” They argued that it would strip “American parents and guardians of their authority and choice, replacing them with a council of bureaucrats to parent their kids online.” There were even some lobbyists who told members of the House that the bill would pose a threat to the “pro-life movement.” But what is not explained by Reader, even to disprove it, is what legislative language is being referred to in these dire predictions.
Frustrated, I scanned through the legislative language myself, and it’s not obvious why conservatives or “the pro-life movement” would be disproportionately targeted, or even targeted at all. I see language that provides increased parental controls to protect children from harmful content, but nothing sticks out as stripping parents of authority or choice.
In an effort to get more information, I turned to an Associated Press article that Barbara Ortutay wrote back in July when the Kids Online Safety Act originally passed the Senate. Here’s her quick and concise summary of what the bill would provide. Ask yourself, which of these provisions would stifle the free speech of religious conservatives?
What does KOSA do?
KOSA would create a “duty of care” — a legal term that requires companies to take reasonable steps to prevent harm — for online platforms minors will likely use.
They would have to “prevent and mitigate” harms to children, including bullying and violence, the promotion of suicide, eating disorders, substance abuse, sexual exploitation and advertisements for illegal products such as narcotics, tobacco or alcohol.
Social media platforms would also have to provide minors with options to protect their information, disable addictive product features, and opt out of personalized algorithmic recommendations. They would also be required to limit other users from communicating with children and limit features that “increase, sustain, or extend the use” of the platform — such as autoplay for videos or platform rewards. In general, online platforms would have to default to the safest settings possible for accounts it believes belong to minors.
To gain any clue about plausible objections to the bill, I had to go to the Electronic Frontier Foundation (EFF). They raise a lot of good questions, but much of their objection rests on what appear to be deliberate misreadings of the proposed law. To give but one example, the law says the duty of care requires a platform to take “reasonable care in the creation and implementation of any design feature to prevent and mitigate… anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.”
But the EFF interprets that to mean that minors will not be allowed to see content on, e.g., eating disorders or suicide. That’s just not what this legislative language aims to do. It is focused on “design features.” It says explicitly that it’s talking about a design feature that “evidence-informed medical information” indicates will cause mental health issues in minors, like depression, drug addiction, and suicide. These are the things Surgeon General Murthy identified as risks for “adolescents who spend more than three hours a day on social media.” No one is suggesting that kids are developing eating disorders because they read about eating disorders. The problem is rather a byproduct of the way adolescents interact with these platforms. Social media platforms make kids anxious, depressed and even suicidal, and if there is evidence-based scientific information that implicates some specific design feature as a culprit, that impact must be mitigated or prevented.
The same is true of a provision of the law that goes after design features that create “patterns of use that indicate or encourage addiction-like behaviors by minors.” This targets what Murthy described as “the use of features like push notifications, autoplay and infinite scroll, which prey on developing brains and contribute to excessive use.” The provision does not, as EFF suggests, try to prevent teens from accessing information about substance abuse. It tries to prevent what Ortutay called “addictive product features” that are deliberately designed by social media companies to create teen social media junkies.
Anyway, I said I am not an expert on the bill. It may well have had unintended consequences that stifled free speech or invited too much litigation and liability. But the reason it didn’t progress from being an overwhelmingly bipartisan bill to a law is that Zuckerberg’s lobbyists succeeded in convincing a bunch of conservatives to argue that the law would target them. I could find no evidence whatsoever to support this contention. What I found is Facebook and Instagram showering House Republicans with cash.
And the end result is that parents are still left to fend for themselves in trying to protect their kids from the known harms of social media. And, in fairness, I’ll also note that both Elon Musk and Donald Trump actively supported this bill, and it still died.