The investigation by NRK found that after a child watches typical children's content for 20 minutes, YouTube's algorithm begins recommending YouTube Shorts videos that fall into harmful categories. If the user then clicks on one of these videos, the resulting feed contains, on average, one third content of the same harmful type. Advocacy groups and experts have condemned YouTube for serving low-quality AI-generated videos to children.
NRK conducted its investigation using automated tests that simulated child users searching for popular children's content on regular YouTube without being logged in. The tests were performed five times for each search term using a Python script to automatically control a browser, with each test run from a separate Norwegian IP address. The search terms included children's songs, Bluey, Cocomelon, fairy tales, Peppa Pig, Mario, Minecraft, Sabeltann, Vennebyen, and Badebussen.
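NRK has not published its test script, but the aggregation step it describes, measuring what fraction of a recommended feed falls into harmful categories and averaging over repeated runs, can be sketched as follows. All function names, category labels, and feed data below are hypothetical illustrations, not NRK's actual code or classifications:

```python
# Illustrative sketch (not NRK's actual code): compute the average share of
# harmful recommendations across repeated test runs for one search term.
# Category labels and feed contents here are invented for demonstration.

SEARCH_TERMS = ["children's songs", "Bluey", "Cocomelon", "fairy tales",
                "Peppa Pig", "Mario", "Minecraft", "Sabeltann",
                "Vennebyen", "Badebussen"]

# Categories inspired by the Norwegian Media Authority's criteria.
HARMFUL_CATEGORIES = {"violence", "horror", "sexual innuendo", "body focus"}

def harmful_share(feed):
    """Fraction of one recommended feed whose labels are harmful."""
    if not feed:
        return 0.0
    flagged = sum(1 for label in feed if label in HARMFUL_CATEGORIES)
    return flagged / len(feed)

def average_share(runs):
    """Mean harmful share over repeated test runs (NRK used five per term)."""
    return sum(harmful_share(feed) for feed in runs) / len(runs)

# Example: three simulated runs for a single search term.
runs = [
    ["normal", "violence", "normal"],
    ["horror", "normal", "violence"],
    ["normal", "normal", "normal"],
]
print(round(average_share(runs), 2))  # → 0.33, i.e. one third
```

The browser-automation layer (loading YouTube while logged out, issuing searches, capturing the recommended feed from each Norwegian IP address) would sit in front of this and is omitted here, since the article does not specify which tooling NRK used.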
The categories used to evaluate videos—violence, horror, sexual innuendo, and clear body focus—were inspired by criteria established by the Norwegian Media Authority for assessing whether video content is harmful to children. In response to these findings and broader concerns about AI-generated content, the children's advocacy group Fairplay sent a letter to YouTube CEO Neal Mohan and Google CEO Sundar Pichai expressing serious concern about the spread of AI-generated videos on YouTube and YouTube Kids. The letter was signed by more than 200 organizations and individual experts, including 135 organizations such as the American Federation of Teachers and the American Counseling Association, along with approximately 100 individual experts like author Jonathan Haidt.
The Fairplay letter calls on YouTube to implement several specific measures to protect children from potentially harmful AI-generated content. These demands include clearly labeling all AI-generated content, banning AI-generated content entirely from YouTube Kids, barring AI-generated videos from being recommended to users under 18, and implementing an option for parents to turn off AI-generated content. YouTube's current policy requires creators to disclose when realistic content is made with altered or synthetic media, including generative AI, but does not mandate such disclosure when content is clearly unrealistic.
YouTube has acknowledged some of these concerns, stating that it is actively working on developing labels specifically for YouTube Kids. A YouTube spokesperson emphasized that the platform has high standards for content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels. YouTube also points to additional safeguards available to parents, including the option to block specific channels and transparency measures for AI content.
According to the spokesperson, YouTube prioritizes transparency when it comes to AI content, labeling content from its own AI tools and requiring creators to disclose realistic AI content. Despite YouTube's assurances about high standards and protections, the NRK investigation reveals a significant contradiction between the platform's stated safeguards and the actual performance of its recommendation algorithm. Several key unknowns remain regarding how YouTube will address these concerns.
YouTube has not disclosed what specific actions it will take in response to the Fairplay letter and the NRK investigation's findings. The exact percentage or volume of AI-generated content currently available on YouTube Kids and regular YouTube is also unclear. Further unknowns include how YouTube's algorithm determines recommendations for children and whether it will be adjusted to reduce harmful suggestions based on the investigation's findings.
The timeline for implementing new labels for AI-generated content on YouTube Kids has not been specified. Additionally, whether regulatory bodies in Norway or other countries are investigating YouTube's practices based on these reports remains an open question. The implications of these findings suggest a potential need for stronger safeguards and greater transparency in how children's content is recommended on digital platforms.
The investigation highlights the ongoing challenges in balancing algorithmic recommendations with child safety, as platforms like YouTube continue to evolve their content policies and parental controls.