This story was co-published with The 74, a nonprofit, independent news organization focused on education in America. Sign up for their early learning Substack.
In a video that has been played almost 50,000 times since it was posted five months ago, two cartoon children sing along as they guide viewers through the experience of riding in a car amid a vividly colored, utopian backdrop.
At first, the video seems harmless. The song is upbeat and informative. The animation aligns with the promised subject.
Except, hold on a second, did those lyrics just say, “Red means stop, and green means right”? And why are the characters changing in every frame—different hairstyles and colors, slightly different outfits for the girl and boy?
Worst of all, for a video that purports to be “educational,” the visuals are sending precisely the wrong message about riding in a car.
The video opens with the children riding, without seatbelts, in the front seat of a moving vehicle. The next scene shows the girl defying physics, floating alongside a moving car, while the boy sits on what appears to be the hood of the vehicle as it travels backward down a busy street.
The third and fourth scenes show the children walking in the middle of the road with moving cars behind them.
It’s not hard to imagine how the video could have gotten so many views.
Maybe a parent needs to complete a task—fold some laundry, get dinner ready, hop in the shower—and is searching for an age-appropriate video on YouTube to entertain their toddler during that short time. Perhaps that toddler, increasingly independent and prone to running off, needs a better grasp of road safety. “Vroom Vroom! Car Ride Song | Educational Nursery Rhyme for Kids” presents itself as a win-win solution.
But children’s media experts say this is AI-generated “slop,” and that it has infiltrated the internet, preying on young children and their unsuspecting caregivers.
“We’re at the beginning of a monster problem, and we have to get hold of it quickly,” said Kathy Hirsh-Pasek, a professor of psychology and neuroscience at Temple University and senior fellow at the Brookings Institution who studies child development.
She and other researchers, including Dr. Dana Suskind, a professor of surgery and pediatrics at the University of Chicago, have warned that AI-derived products for babies and children need to be reined in.
“This is not neutral content,” said Suskind, author of the forthcoming book Human Raised: Nurturing Connection, Curiosity, and Lifelong Learning in the Age of AI. “I think of this as toddler AI misinformation at an industrial scale. It’s very risky for the developing brain.”

It’s hard to say just how pervasive this type of content is, but it’s clear the problem is widespread and getting worse. One report published by video-editing company Kapwing in November 2025 found that about 21 percent of YouTube’s feed consists of low-quality, AI-generated videos.
Jo Jo Funland, the creator of the “Vroom Vroom! Car Ride Song,” has posted more than 10,000 videos since its first release just seven months ago, in August 2025. That’s an average of about 50 new videos each day. Sesame Street, meanwhile, has published about 3,900 videos on YouTube in its entire 20 years on the platform.
The cognitive decline associated with the consumption of AI slop—such as a shortened attention span, decreased focus, and mental fog—is sometimes referred to as “brainrot.” But when the audience is children, there’s not much to rot, Suskind said. Because a child’s brain is still in its early development, still being built, what you get instead, she said, is “brain stunt.”
“Every experience is building a million new neural connections,” Suskind said of children who are still in their early years. “You will be unintentionally wiring the brain in incorrect ways.”
That comes at a cost. A child may absorb the implicit messages of something like the Vroom Vroom video and end up mimicking the “downright dangerous” behaviors they saw depicted there, said Carla Engelbrecht, who has created digital experiences for children’s media brands such as Sesame Street, PBS Kids, and Highlights for Children and considers herself an AI educator and creator.
Engelbrecht is also something of a whistleblower when it comes to child-targeted AI slop. She has found countless examples of AI-generated videos that could cause real physical harm.
“The more content I find,” she said, “the more horrified I get.”
They include videos of a scared child being chased by a T-Rex; a crawling baby biting into an apple that appears bloody, swallowing whole grapes (a major choking hazard), and eating honey (which carries the potentially fatal risk of infant botulism); and a teacher eating raw elderberries (which are toxic when uncooked).

But there’s another category of AI slop in kids’ media, she said, with consequences that are more difficult to capture. These videos claim to pertain to learning and development, focusing on topics like literacy and numeracy, but because of the speed with which they are produced and the lack of quality checks, they end up introducing or reinforcing the wrong lessons. And sometimes, the errors don’t appear until midway through the content. That means if a parent previews only the first few seconds of a video, they may miss the unreliable information that comes later in the clip.
A video about vowels includes visuals of consonants. It also depicts letters on screen that don’t align with the audio overlay. A video promising to teach the 50 U.S. states sings along as butchered state names appear in text at the bottom of the screen — Ribio Island, Conmecticut, Oklolodia, Louggisslia. A video about the seven continents frequently shows a compass with more than four points and indecipherable symbols where the “N,” “S,” “E” and “W” should be.
These may seem like silly slips from a machine, but for a child, every “input” is part of their learning process, Engelbrecht explained. “Mixed signals means you are delaying them learning the cause and effect of a thing,” she said. “If you learn that red is blue and blue is red, that’s a delay.”
“If you’re inconsistent, it takes that much longer to learn,” she added. “Every delay they have means everything else gets pushed back. That’s taking their executive function offline to go learn nonsense.”
Amid all of this internet muck, the question of responsibility is a tricky one.
“Fundamentally, everybody has a responsibility,” Engelbrecht said, including platforms like YouTube; companies that operate large language models, like OpenAI, Google, and Anthropic; the people creating and publishing these poor-quality videos intended to reach kids; and parents.
YouTube’s current policy requires creators to disclose videos that have been generated by or altered with AI when that content “seems realistic.” This does not apply to cartoons and animated content—which seems to be the majority of what’s reaching children—because it has long been assumed to be fictional content, Engelbrecht explained.
The platform does have stricter “quality principles” for content targeting children than it does for its general viewership, said Boot Bullwinkle, a YouTube spokesperson, in a statement. It also has a “child safety policy.” (These web pages, however, do not specifically address the use of AI.)
Due to the volume of content on the platform, YouTube does not catch every video that violates its policies. (It did take action against at least seven channels on the platform in response to The 74’s reporting, including terminating two.)
“The trust that parents and families put in YouTube is a responsibility we take very seriously, and we’ve invested deeply in age-appropriate environments that empower parents,” Bullwinkle wrote in the statement. “YouTube Kids, for instance, offers industry-leading parental controls and rigorous quality principles designed to provide a safer experience for families.”
YouTube Kids is a distinct version of the platform with content that has been curated for children from birth to 12. Many families continue to use the main YouTube platform to view children’s content, though, which means many creators still have an audience and earning opportunities there. None of the AI-generated videos reviewed for this story were found on YouTube Kids, although recent reporting in The New York Times found AI videos had penetrated that space as well.
Sierra Boone, executive producer of Boone Productions, a children’s media production company that makes original content for children ages 2 to 6, noted that kid-friendly competitors to YouTube, such as Sensical by Common Sense Media and Meevee, do exist. But they have struggled to break through to families.
“Overcoming that juggernaut is extremely difficult,” Engelbrecht said of YouTube. “There’s a graveyard full of failed attempts to create a safe YouTube alternative.”
Boone suggested that some effective labeling would go a long way, not unlike the “content credentials” LinkedIn is phasing in, which aim to disclose when media has been created or edited by AI, in part or in whole.
Engelbrecht thinks labels are a good idea, not least because they would be important for AI literacy, though she worries they could also penalize creators like her who use AI “thoughtfully” in their work. (She is developing, among other projects, an AI tool that detects AI slop in children’s videos on YouTube.)

As for who’s behind the videos, some of it is coming from overseas, but plenty of it is home-grown, created by Americans with access to phones or computers who are just trying to “make a quick buck,” as Boone put it.
These people are often using AI at every step of the process — to develop themes and scripts for children’s videos, to generate the videos, and to automate the process of publishing the content regularly on “faceless” YouTube channels, in which the creator is anonymous and has no on-camera presence, Engelbrecht explained.
A little over a year ago, a popular content creator posted a video to YouTube in which she raves about a “huge opportunity” that would lead to “many millionaires.” The opportunity? AI-generated animated videos that inexperienced users could create with a simple prompt in just minutes. The target audience? Young children.
That video has been viewed more than 335,000 times.
“AI in general isn’t inherently good or bad, but it exposes people’s intentions,” said Boone, whose production studio is responsible for The Naptime Show.
The flood of AI-generated content, she added, reveals how many people have “no regard for children or how they’re impacted,” as long as it benefits them.

For Boone, who works painstakingly with her team on every episode of The Naptime Show — researching, writing the script, editing the script, placing props, doing table reads, going to set, filming, editing the video, publishing and promoting the final product — creating children’s media is an “honor” that should be taken seriously.
“The very foundation of creating children’s media is you are creating something that a child, in their core developmental years, is going to be consuming,” Boone said. “So what is the level of intention that you’re bringing to that? I think we need to be holding the people who are uploading this content more accountable.”
Ultimately, though, in the absence of more regulation or content moderation, the burden falls on parents.
Parents are likely putting YouTube videos in front of their children in the first place because “they are already so stretched,” said Suskind, who still sees patients in her pediatric practice and interacts with families often. So it’s inherently challenging to ask them to more closely monitor the content that is coming through their children’s screens.
Yet that is what must be done, Hirsh-Pasek said. Until a better solution emerges, the onus is on parents to separate the slop from “the good stuff.”
“We owe it to our kids to protect them,” said Hirsh-Pasek. “That’s what they look to parents for, to keep them in safe spaces. If we don’t deal with that or do anything about that, we’ve absconded [from] our responsibility.”
