If you haven’t seen the short video of a dancing ballerina with a human body and giant cappuccino cup for a head, consider yourself lucky. The nonsense meme is just one of countless pieces of AI-generated content that are flooding popular online platforms such as YouTube, TikTok, and Instagram, encouraged by the apps’ engagement-hungry algorithms — and targeted at children under 18.
Often repetitive and overstimulating, these short-form videos and memes — sometimes referred to as “brain rot” — have been increasingly cropping up on platforms for the last decade or so, incentivized and even promoted by the tech companies themselves, says Leah Plunkett ’06, the Meyer Research Lecturer on Law at Harvard Law School and a faculty associate with the Berkman Klein Center for Internet & Society at Harvard University.
And now, artificial intelligence tools, which allow content creators to make and post clips even more quickly, are adding “rocket fuel” to concerns about addictive, low-quality content aimed at young people, says Plunkett, an expert in youth digital privacy issues.
Plunkett believes that digital companies such as YouTube have largely transformed from social media networks to something closer to mass entertainment — think Hollywood rather than community square — and that the law must evolve to reflect that.
“Imagine if someone wanted to offer 24/7 entertainment services in your neighborhood, and those services were going to feature kids on a stage, brushing their teeth, going to school, putting on makeup. Anyone who comes by can sit there and watch, and if you’re a kid in the audience, there are robots that repeatedly come over and tap you on the shoulder and say, ‘Do you like this? You should keep watching,’” she says.
“People would understandably be outraged about the violations of child labor laws and the idea of pushing child audiences in ways that are not healthy for them. But that’s essentially what we’re doing online right now.”
In an interview with Harvard Law Today, Plunkett shared why she thinks the digital landscape can be unhealthy for youth, how AI could make it worse — and what we can do about it.
Harvard Law Today: Let’s start with the big picture. From your perspective, how prepared is the law to deal with AI-generated content popping up on popular online platforms used by kids, such as YouTube and TikTok?
Leah Plunkett: We have not yet come up with ethical, practical, comprehensive laws around youth engagement with online platforms — especially social media platforms. And the addition of AI-generated content across these platforms takes the gaps we already have in our legal system and makes them more urgent, yet more difficult to fix.
HLT: Can you describe what’s been happening?
Plunkett: Since the 2010s, before the AI moment we’re in now, social media platforms have been functioning as a digital entertainment industry that is largely unregulated in the ways that traditional brick-and-mortar entertainment industries, notably Hollywood and Broadway, have been regulated. That’s showing up in a number of ways.
One of those ways is content featuring youth talent being monetized by content creators. An example is a parent filming a child in their home and then turning the footage into influencer content that is monetized through sponsorships, brand placements, or the platform’s content creator compensation framework. Only recently have states begun to pass child labor law reforms that protect the kids being put to work on these platforms, protections that have long existed in more traditional entertainment industries, like film and television.
Another way in which this new entertainment industry is going unregulated is in terms of the content presented. Unlike Hollywood or Broadway, where you have gatekeepers for when, where, what, how, and on what terms a production appears, you don’t have anywhere close to the level of gatekeeping on social media. That’s the whole point of it. And in fact, thanks to Section 230 of the Communications Decency Act, you have a lot of legal protection for social media companies to let their platforms be open stages for anyone, anywhere with a device and connectivity.
HLT: The law you referred to, Section 230, provides wide-ranging protection for online platforms from liability for content created by their users. How does that provision come into play here?
Plunkett: The law lets platforms ignore many, but not all, questions about what is in the content that people post. The tech companies don’t have to do much gatekeeping or oversight over the content posted, even if it is aimed at minors. It also lets them largely ignore the situations of the child actors who may be featured in that content.
HLT: What role do the platforms’ algorithms play in the dangers you’re seeing?
Plunkett: Before we even talk about content, AI-generated or not, we have issues stemming from the design and operation of the platforms themselves. One of the biggest concerns we hear from parents, teachers, pediatricians, and even kids themselves is that the platforms are designed in a way that makes them risky, if not dangerous, for children. There are allegations that platforms are designed to be addictive through ongoing notifications, recommendations, and even nudges toward increasingly provocative content. Regardless of what they’re watching, when minors sit in front of a screen that pushes out notifications telling them there’s something they need to see, or that just keeps playing videos on autoplay, they seem particularly prone to being sucked in.
HLT: What concerns do experts have about the content itself?
Plunkett: The big concerns relate to things like depictions of activities that may harm minors’ physical and mental health and social relationships, or content with poor information quality. Content that evokes strong emotional or physiological responses, even negative ones, may be some of the most engrossing, hardest-to-look-away-from content, potentially even addictive content. And when it’s pushed out on a platform that is designed and operated to keep child and teen viewers engaged, no matter what content is being shared, it can be even harder for the youth audience of this digital entertainment industry to look away.
HLT: And what happens when you add AI-generated content into the mix?
Plunkett: AI turbocharges everything I just talked about, because it’s like giving rocket fuel to content creators to turn out content much more quickly.
AI as a toolkit can be used to create content of all types. Just because content is made with AI doesn’t mean it will necessarily be negative or risky or harmful. But using AI to make content for kids and teens does mean that more content can be made, and made more quickly. So my concern about the AI moment is less about AI as a toolkit and more about the lack of legal regulation of the digital entertainment industry overall. In this environment, we should expect to see more and more of what people are calling “brain rot” content aimed at kids, because we haven’t come up with ethical, practical, comprehensive legal regulation of the digital entertainment industry without AI, and now we’re having to do that with AI in the mix as well.
HLT: I understand that platforms are largely protected from liability under Section 230. But are there any legal cases you’re watching right now in which people are trying to hold companies accountable for content they host?
Plunkett: There is a heartbreaking case called Anderson v. TikTok, out of the Third Circuit Court of Appeals in 2024, brought by a mother whose young daughter hanged herself in her closet after watching a TikTok challenge that promoted strangulation; the content had been recommended to her daughter via TikTok’s “For You” page. The Third Circuit didn’t let TikTok use Section 230 as a shield, because although TikTok didn’t make the videos, it recommended them to minors. It was important to the court that the daughter hadn’t sought the videos out; rather, they had been recommended to her. It’s a really interesting and impactful decision and approach, but it is likely not the final word on the question of when a social media platform can be held responsible for recommending content to a minor that encourages self-harm.
HLT: You’ve identified several levels of concern — from the way platforms are designed to promote “brain rot,” to the content itself, whether AI-generated or not. In your view, how do we start to address these problems?
Plunkett: Here are a couple of important ideas for reform. The first is on the actor side, related to the kids who are featured in content, whether it is the child themselves or their name, image, or likeness, which could then be fed into an AI tool. For those kids, we need comprehensive, fair labor laws. This would likely happen at the state level.
At the federal level, the next big thing would be to take a look at Section 230, ideally through law reform rather than case-by-case judicial interpretation. We could address when online platforms that know they have minors on them, or that are targeting minors, are being designed and operated in ways that are unreasonably risky to those minors. Of course, we would want to be mindful that the law still supports innovation and respects the First Amendment and all other constitutional rights.
I also think it would be a good idea to get a coordinated, trustworthy set of private entities that play meaningful governance functions to help parents, caregivers, and others make decisions about platforms. What might the online platform equivalent of ratings by the Motion Picture Association be? As a parent, if I take my kids to the local movie theater, I can see before I walk in what the movie has been rated. I recognize this will never be perfect, because a local movie theater at the mall is different in many key and relevant ways from a two-year-old in a living room with an iPad. But we need to get the digital entertainment industry a lot closer to what already exists for legacy entertainment industries, like movies or television, where a parent can see upfront, consistently and clearly, what age range the content has been rated appropriate for, along with a high degree of certainty that the rating has been fairly and thoughtfully administered. Similarly, I’m interested in standardized, user-friendly, legally mandated family tool centers that let parents efficiently, consistently, and with a high degree of trust understand how the platforms treat minors and how to set controls.