Engagement Over Education: How AI Slop Captured the Toddler Media Diet

The egg is too big. That is the first thing you notice. It rolls out from behind a barn door that is itself the wrong shape, on a farm whose perspective keeps sliding a degree or two out of true, and when the egg cracks open the horse that emerges is proportioned like a child's drawing of a horse attempted by a committee. Its legs are the wrong length. Its eyes are in slightly the wrong place. The soundtrack, set to an off-key rendition of “Old MacDonald Had a Farm”, is already on to the next verse before the animal has finished hatching. The whole sequence lasts about eleven seconds. Then the video cuts to a letter of the alphabet, a new animal, a new impossibility, the same music climbing a half-step higher than your ear wants it to.
This is not, on any sensible definition, a children's video. It is a machine's hallucination of one, optimised for the half-second in which a thirteen-month-old stops crying and reaches for the screen. According to the New York Times, which in February 2026 spent weeks reviewing more than a thousand videos that YouTube's algorithm recommended to accounts set up as children's accounts, it is what vast stretches of the modern toddler's media environment now look like. After a single viewing of a legitimate CoComelon video, the Times reported, more than 40 per cent of the YouTube Shorts subsequently recommended to the test account contained synthetic visuals. The videos carried titles that promised to teach the alphabet and animals and colours. They did no such thing. They were, in the exacting formulation the internet has settled on, slop.
On 1 April 2026 a coalition organised by the American advocacy group Fairplay sent a letter, addressed jointly to Sundar Pichai, chief executive of Google's parent Alphabet, and Neal Mohan, chief executive of YouTube, asking the companies to do something about it. The letter was signed by more than 230 organisations and individual experts, including the American Federation of Teachers, the American Counseling Association, the National Black Child Development Institute, the Canadian Centre for Child Protection, Mothers Against Media Addiction and ParentsSOS. Among the individuals were Jonathan Haidt, author of The Anxious Generation, and the developmental behavioural paediatrician Jenny Radesky of the University of Michigan, who co-directs the American Academy of Pediatrics' Center of Excellence on Social Media and Youth Mental Health. The letter asked for six things: clear labelling of all AI-generated content across YouTube; an outright ban on AI content on YouTube Kids; a prohibition on AI-generated “made for kids” content on the main platform; a rule against recommending AI content to users under eighteen; a parental toggle that switches AI off by default; and a halt to further investment in AI-generated children's content.
Two weeks later, on 13 April, an arXiv paper titled “Who Gets Flagged? The Pluralistic Evaluation Gap in AI Content Watermarking” added an uncomfortable footnote. Even the parts of those demands that look most technical (labelling, detection, identifying machine-made video at the point of delivery) cannot be done reliably with the tools platforms currently deploy. The three major regulatory frameworks that mandate watermarking (the EU AI Act, US Executive Order 14110 and China's Measures for Labeling AI-Generated Synthetic Content) all require some form of traceability; none requires that detection performance be evaluated across languages, cultural content types or demographic groups. The governance gap is not that watermarking is impossible. It is that watermarking, as currently specified, can be present and still fail silently in exactly the contexts in which it is supposed to protect the most vulnerable users.
All of which amounts to a problem with a particular shape. The first media environment that many of today's infants and toddlers encounter is increasingly generated by systems with no developmental mandate, governed by recommendation algorithms that optimise for watch time, delivered through platforms whose detection tools cannot reliably distinguish synthetic from human-made content, to children whose brains are at their most plastic. The question is not whether this matters. The question is what the mattering consists of, and what the companies whose infrastructure produces it are actually obliged to do.
The shape of the slop
To understand why the Fairplay letter exists, it helps to understand what has happened to the economics of children's video. The old attention economy, the one that produced Sesame Street and Bluey and, for better or worse, CoComelon, was expensive. It involved writers and animators and composers and, crucially, child-development consultants. A single episode of a flagship preschool programme could take a year and cost upwards of a million dollars. The cost structure was a filter: people who could not afford specialists did not tend to make the shows.
That filter has collapsed. According to a December 2025 Fortune profile, a 22-year-old entrepreneur named Adavia Davis runs a YouTube network whose videos are almost entirely generated by a proprietary pipeline called TubeGen, built by his partner Eddie Eizner. Scripts and visuals come out of Anthropic's Claude; narration from ElevenLabs; editing is automated. The results can run as long as six hours and cost as little as sixty dollars to produce. Davis told Fortune the network was taking in forty to sixty thousand dollars a month in advertising against about six and a half thousand in operating costs.
Zoom out, and the shape gets more alarming. A December 2025 study by the video-editing company Kapwing examined fifteen thousand trending YouTube channels and isolated 278 that produced nothing but AI-generated content fitting the slop profile. Those 278 channels had collectively amassed 63 billion views, 221 million subscribers, and an estimated 117 million US dollars in annual ad revenue. A single South Korean channel, Three Minutes Wisdom, had accumulated 2.02 billion views on its own. The broader Kapwing analysis suggested between a fifth and a third of the typical YouTube recommendation feed was now AI slop. In some children's categories, independent investigators have reported that only around five per cent of content in a given niche appears to be human-made.
These numbers describe a particular economic equilibrium. Each slop video earns a fraction of what a well-made children's programme earns per viewer, but the marginal cost of producing the next one is close to zero, and YouTube's algorithmic plumbing, which rewards raw watch time, does not meaningfully penalise the difference in quality. Neal Mohan listed “managing AI slop” among YouTube's priorities for 2026 in his February annual letter. The inauthentic content policy, clarified in July 2025, now explicitly targets “templated, low-effort videos at scale”. But YouTube has been clear, via creator liaison Rene Ritchie, that AI itself is not banned, and channels using AI remain eligible for monetisation. The company is trying to have an AI industry and a brand-safe children's platform in the same room at the same time.
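The arithmetic behind that equilibrium is short enough to write down. A back-of-envelope sketch, using only the figures reported above (the fifty-thousand-dollar monthly revenue is the midpoint of Fortune's reported range, an assumption on my part; the rest are the published estimates):

```python
# Back-of-envelope economics of the slop pipeline, using the Kapwing
# (Dec 2025) and Fortune (Dec 2025) figures quoted in the text.

KAPWING_ANNUAL_AD_REVENUE = 117_000_000   # USD, across the 278 identified channels
KAPWING_CHANNELS = 278

COST_PER_VIDEO = 60        # USD, TubeGen's low end
MONTHLY_REVENUE = 50_000   # USD, midpoint of the reported $40k-60k range
MONTHLY_COSTS = 6_500      # USD, reported operating costs

# Mean ad revenue per identified slop channel.
print(f"per channel: ${KAPWING_ANNUAL_AD_REVENUE / KAPWING_CHANNELS:,.0f}/year")  # ~$420,863

# Gross margin on the Fortune network.
print(f"margin: {(MONTHLY_REVENUE - MONTHLY_COSTS) / MONTHLY_REVENUE:.0%}")  # 87%

# Share of one month's revenue a single $60 video must earn back.
print(f"breakeven share: {COST_PER_VIDEO / MONTHLY_REVENUE:.2%}")  # 0.12%
```

Nothing in that calculation depends on the videos being any good. That is the collapsed filter, restated as arithmetic.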
What the videos actually show
The specifics are worse than the abstraction. The Times catalogued them at length. A gooey liquid squeezed into a glass of water before turning into animals representing each letter of the alphabet, except the animals were chimeras with mermaid tails. An impossibly proportioned horse hatching from an egg. Faces that warped mid-frame. Extra body parts appearing and disappearing within a single shot. Garbled text purporting to spell words. None of it was longer than about thirty seconds in Shorts form, which, as developmental researchers pointed out to the paper, allows no time for the repetition and narrative scaffolding that underlie actual learning.
Worse, some of the content was not merely incoherent but materially dangerous. A March 2026 follow-up reported by Futurism and the Los Angeles edition of National Today, drawing on videos flagged by the Times and by researchers at Children's Health Defense, documented AI-generated “educational” clips that depicted characters walking in the middle of a road with cars approaching as if this were normal; clips that taught road rules by informing children that “green means right” instead of “go”; clips that showed a baby swallowing whole grapes, a well-documented choking hazard; clips that showed a baby eating honey, which can cause infant botulism and kill children under one. These were not fringe horror videos surfaced by dedicated investigators. They were recommended by the platform, under thumbnails and titles that promised conventional toddler content.
The reason this happens is partly that the models do not know better and partly that, by the time the content has been generated, no part of the delivery pipeline is looking. Generative video systems trained on human-made footage can reproduce the surface characteristics of a children's cartoon (the saturated colours, the nursery-rhyme cadence, the toothy grin on a farm animal) without any internal representation of which of those surface features is meant to model appropriate behaviour. A script that says “baby eats a snack” can be rendered with any snack the model has seen enough of. If the model has seen honey, it can render honey. The video will pass every surface test the platform applies, because the surface is all there is.
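A toy illustration makes the point concrete. What follows is not YouTube's moderation pipeline, which is proprietary; it is a minimal sketch, with invented field names, of what any purely surface-level screen necessarily misses:

```python
# Illustrative only: a surface-level "kids content" screen of the kind
# described above. Every check inspects presentation; none inspects
# what the video actually depicts. All field names are hypothetical.

DANGEROUS_DEPICTIONS = {
    "infant eating honey",
    "baby swallowing whole grapes",
    "walking in traffic",
}  # the knowledge a semantic check would need, and a surface check lacks

def surface_screen(video: dict) -> bool:
    """Approve if the video merely *looks* like toddler content."""
    return (
        "learn" in video["title"].lower()
        and video["duration_seconds"] <= 30
        and video["colour_saturation"] > 0.7
        and video["has_nursery_rhyme_audio"]
    )  # no field here describes what the characters do

honey_clip = {
    "title": "Learn ABCs with Baby! Snack Time",
    "duration_seconds": 22,
    "colour_saturation": 0.9,
    "has_nursery_rhyme_audio": True,
    "depicts": "infant eating honey",  # invisible to the screen above
}

print(surface_screen(honey_clip))                     # True: passes every surface test
print(honey_clip["depicts"] in DANGEROUS_DEPICTIONS)  # True: the harm lives here
```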
Recommendation as developmental hazard
The coalition letter is careful not to suggest that every individual AI-generated video, considered on its own, is harmful. Its central argument is about system effects, and the part of the system that matters most is not creation but distribution. On YouTube, what a child watches next is a function of what the recommender predicts will maximise some combination of watch time, session length and ad impressions. In practice, as the Times documented and as a March 2026 EU Today investigation confirmed, the recommender steers children toward AI content because AI content is optimised, accidentally or deliberately, for exactly the features the recommender rewards: high novelty, short duration, bright colours, rapid cuts, sticky audio. It is the over-stimulating profile that holds a young child's gaze for longer than a slower, more coherent programme would.
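The dynamic can be caricatured in a few lines. The real recommender is a learned model, not a hand-written formula, and the weights below are invented; but any scorer trained to maximise watch time per impression acquires something with this structure, and the structure is the problem:

```python
# Caricature of an engagement-maximising ranker. Weights are invented;
# the structural point is that every rewarded feature is one that slop
# is built to saturate, and nothing measures coherence or educational value.

def engagement_score(video: dict) -> float:
    return (
        2.0 * video["novelty"]                  # unfamiliar visuals hold the gaze
        + 1.5 * video["cuts_per_minute"] / 30
        + 1.2 * video["colour_saturation"]
        + 1.0 * video["audio_stickiness"]
        - 0.8 * video["duration_minutes"] / 10  # short clips win on swipe feeds
    )

slop_short = dict(novelty=0.95, cuts_per_minute=28, colour_saturation=0.95,
                  audio_stickiness=0.9, duration_minutes=0.5)
coherent_episode = dict(novelty=0.3, cuts_per_minute=6, colour_saturation=0.6,
                        audio_stickiness=0.5, duration_minutes=12)

print(engagement_score(slop_short))        # ~5.30
print(engagement_score(coherent_episode))  # ~1.16: the slower programme loses
```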
This is the failure mode Radesky's American Academy of Pediatrics policy statement “Media and Young Minds”, published originally in 2016 and updated in 2024, really points at. Children under about two, according to the research literature her statement summarises, exhibit what psychologists call the “video deficit”. They learn language and behaviour from live human interaction far more readily than from video of the same speaker. The video deficit narrows if an adult co-views and scaffolds the material, asking questions, directing attention, connecting what is on the screen to the world around the child. It narrows further if the video involves genuine back-and-forth, as in a video call. It does not narrow if the child is alone in front of a screen that is alone in front of an algorithm.
Patricia Kuhl's 2003 work at the University of Washington is the canonical study. Nine-month-old American infants exposed to Mandarin speakers in person retained sensitivity to Mandarin phonemes they would otherwise have lost by twelve months. Infants exposed to the same speakers on video, saying the same things in the same way, showed no such retention. The human face, the shared gaze, the social contingency of the exchange, were doing work the pixels alone could not do. Two decades of follow-up research, including work on video chat by Lauren Myers and colleagues in Child Development (2014) and the 2018 PNAS study “Two are better than one” on peer-assisted video learning, has consistently found that whatever makes human interaction educationally potent for very young children is not reliably reproduced by a screen.
Now consider the environment the Fairplay letter is describing. A toddler sits in a high chair or a buggy or a back seat, alone with a tablet. The tablet is showing content no adult has vetted, because no adult knew what was going to appear next, because nobody knows, because the recommender is choosing in the same moment the child is watching. The content is generated by a model that has no mental model of a child. The audio is pitched to hold attention by exploiting the same perceptual features that attention-capturing advertising exploits in adults: the sudden colour change, the unexpected musical interval, the face that does not quite resolve. In place of joint attention, there is the machine's attention to engagement metrics. In place of a parent pointing at a horse and saying “horse”, there is a model that cannot quite render a horse and does not know why that matters.
The likely developmental cost is not that any single child will be ruined by any single video. It is that the aggregate environment is nudging language acquisition, narrative cognition and the early scaffolding of theory of mind toward corners they do not want to be in. Narrative cognition depends on following a story that unfolds over time, with characters whose goals and states change in recognisable ways. The thirty-second AI clip does not have characters in that sense; it has arrangements of pixels that a model has stitched together to maximise retention on a swipe feed. Parasocial bonds, which the Georgetown Center on Digital Media and Children's Development has shown can be genuinely educationally useful when they form with coherent characters like Daniel Tiger or Elmo, cannot form in the usual way with characters who mutate between shots because no stable character was ever authored. Theory of mind, the slow developmental achievement of understanding that other beings have beliefs, desires and intentions different from your own, depends on encountering minds in a reliable enough way to build a model of what a mind is. The AI chimera on the Shorts feed has no mind, never had one, and does not cohere long enough to pretend to.
The governance gap in detection
The coalition letter's demand for labelling is the one that sounds most technically tractable and is, in practice, the hardest. This is the territory the April 2026 arXiv paper stakes out. “Who Gets Flagged?” argues that the existing policy scaffolding around watermarking (Article 50 of the EU AI Act, US Executive Order 14110, China's content-labelling measures) shares a common flaw. Each framework obliges producers of generative AI systems to mark their outputs in some way, and each assumes, without requiring evidence, that the marks can be reliably recovered downstream. Recoverability varies with the statistical properties of the content itself. For text, with language. For images, with visual style. For audio, with compression, pitch and rate. None of the three frameworks requires evaluation of recoverability across those axes, which means a watermarking regime can be formally compliant and still be invisible on exactly the subset of content the regulator most needed to flag.
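The paper's methodological complaint translates directly into an evaluation harness the frameworks could, but do not, require. A minimal sketch (the detector and the grouping are stand-ins for whatever a real audit would plug in), whose essential move is to report the worst-performing group rather than the mean:

```python
# Disaggregated watermark-detection evaluation, in the spirit of "Who
# Gets Flagged?". `detect` stands in for any real watermark detector;
# samples and group labels (language, visual style, compression level)
# would come from a real audit corpus.
from collections import defaultdict

def evaluate_by_group(samples, detect):
    """samples: iterable of (content, group, is_watermarked) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for content, group, is_watermarked in samples:
        if not is_watermarked:
            continue                    # measuring recall: did the mark survive?
        totals[group] += 1
        hits[group] += int(detect(content))
    rates = {g: hits[g] / totals[g] for g in totals}
    aggregate = sum(hits.values()) / sum(totals.values())
    return aggregate, rates, min(rates, key=rates.get)

# A regime that reports only `aggregate` can be formally compliant while
# the rate for the worst group (say, heavily re-compressed children's
# Shorts) sits near zero. Gating compliance on the minimum closes that gap.
```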
The paper does not single out children's content, but the implications are stark. Children's video is aggressively re-compressed, up-pitched, cut into Shorts, remixed with stock footage, re-uploaded across accounts. Every step degrades the signal. By the time a video reaches YouTube Kids, the original watermark a conscientious model provider might have embedded at generation time is, with high probability, unreadable. A related position paper, “Watermarking Without Standards Is Not AI Governance”, published on arXiv by Carnegie Mellon and partner institutions in May 2025 and updated in March 2026, makes the same point from the other direction: in the absence of shared standards, different providers produce different, mutually invisible watermarks, and platforms that want to act on the information cannot, because each provider's scheme requires a different decoder.
This does not mean YouTube cannot label AI content. It means the platform cannot label it purely by detecting a watermark in the file. It has to rely on other signals: account behaviour, metadata, manual review, creator flags. YouTube's 2024 disclosure requirement, updated through 2025, asks creators to tick a box in YouTube Studio when they upload “realistic altered or synthetic” content. The honest question is how much self-disclosure it really produces among operators whose business model depends on the audience not realising the videos are AI-generated. The same issue surfaced when the platform tried to rely on creator self-identification of child-directed content after the 2019 FTC settlement that required Google and YouTube to pay 170 million US dollars for alleged violations of the Children's Online Privacy Protection Act. Self-identification is a floor; the operators with the most reason to lie are the ones the label is supposed to constrain.
The obligations platforms are not yet meeting
The legal scaffolding around children's content has been quietly maturing while this environment has been forming beneath it. COPPA, the 1998 Children's Online Privacy Protection Act, is still the US baseline, and the 2019 settlement forced YouTube to build the “made for kids” designation that now, ironically, makes it easier for AI operators to flag their own slop into the child-directed pipeline. The UK's Online Safety Act 2023, whose Protection of Children codes came into force on 25 July 2025, imposes a statutory duty of care on user-to-user services accessed by children, backed by fines of up to 18 million pounds or 10 per cent of global turnover, enforced by Ofcom, which by March 2026 had opened more than eighty investigations under the Act's age-assurance provisions. The Act requires proportionate measures against content “harmful to children”, a category Ofcom's January 2026 guidance interprets to include content presenting risks of physical harm.
The EU's Digital Services Act, whose Article 28 guidelines on the protection of minors were finalised by the European Commission on 14 July 2025, obliges very large online platforms including YouTube to ensure “a high level of privacy, safety and security of minors”. The guidelines specifically address recommender systems: minors should have the ability to reset their recommended feeds; non-profiling recommender options should be available and, where appropriate, set as default; feedback mechanisms such as “show me less” should directly influence content visibility. The guidelines are non-binding in the strict sense, but the Commission has described them as a “significant and meaningful benchmark” for Article 28 compliance, and the enforcement powers are real.
Against this landscape, the Fairplay letter's demands look less like activism and more like a preview of what compliance will, within a few years, require. Clearly labelling all AI-generated content is a corollary of Article 50 once detection becomes reliable enough to enforce. Banning AI-generated content on YouTube Kids is implicit in a UK Online Safety Act duty of care that treats the cultivation of cognitive harm to under-fives as a foreseeable consequence of the current recommender configuration. Barring recommender systems from pushing AI content to under-eighteens is effectively what the DSA Article 28 guidelines describe. A parental toggle is the minimum user-control affordance those same guidelines enumerate.
What the platforms are not currently meeting is less the letter of these regimes than the spirit. YouTube's inauthentic content policy will remove a channel that produces templated videos at scale, but reactively, after views have accumulated and revenue has been disbursed and children have watched. YouTube Studio's disclosure tool flags AI content if the creator opts in, which is exactly what creators with a business model built on opacity will not do. The 2019 COPPA “made for kids” designation was never intended as a machine-readable tag for quality control, and is now being inverted by AI operators who flag their own output into the children's pipeline to collect child-directed advertising revenue that complies with COPPA's behavioural-targeting rules. None of this is outright illegal. A great deal of it is, in the terms of the Online Safety Act, arguably not reasonably practicable to prevent. That is the hinge on which every conversation about platform responsibility now swings.
Why the platforms have not acted
The political economy is mundane. The first reason is revenue. The 117 million US dollars in annual ad revenue across 278 slop channels is a small slice of YouTube's take, but non-trivial and growing, produced by channels whose cost of adding the next video is measured in dollars rather than thousands. Banning AI slop from YouTube Kids the way Fairplay has asked would require YouTube either to build a reliable classifier (exactly the problem the arXiv paper says is not solvable with watermarking alone) or to default to human review of new children's channels, which is expensive, slow and the opposite of the platform's operating philosophy.
The second reason is that YouTube is not the primary villain of its own economics, and the company knows it. The slop pipeline extends upstream to Anthropic's Claude, OpenAI's video models, ElevenLabs' voice clones, Runway's generative video, and downstream to advertising networks that pay for watch time regardless of whether the watcher is sixteen months old. Each of those players faces regulatory pressure to label outputs; each, on present technology, cannot guarantee a label will survive the distribution pipeline. For YouTube to ban AI on Kids in a way that actually worked, it would need cooperation from every upstream provider and a detection regime that outperformed every adversarial operator with a financial incentive to evade it. That is not a problem any single platform can solve unilaterally.
The third reason is that there has historically been no political cost to inaction. COPPA enforcement is slow. The FTC's 170-million-dollar 2019 penalty was extracted for a decade of behavioural advertising violations, not for the content itself. The Online Safety Act has not yet been tested in a children's-content case at precedent level. The DSA is still in its enforcement adolescence. None of this is an excuse, but it explains why the board of Alphabet has not yet been presented with a memo saying the quiet part out loud: that the expected value, under the current regime, of cleaning up children's AI content is smaller than the expected value of continuing to run the platform roughly as it is.
The coalition letter is trying to change that calculus by making the political cost of inaction legible. Getting more than 230 organisations and experts on the same page is a logistical feat. Getting the American Federation of Teachers, the National Black Child Development Institute, the Canadian Centre for Child Protection and a Jonathan Haidt onto the same letter makes the story harder to dismiss as activist overreach or parental paranoia. It builds a record that will be cited at the next Ofcom enforcement action, the next DSA systemic-risk assessment, the next COPPA rulemaking. Those are the places where the cost-benefit maths for YouTube will change.
What a responsible regime would look like
The contours of a workable regime, extrapolating from the coalition demands and the regulatory trajectory, are not mysterious. They are just expensive.
On the content side, YouTube Kids, the walled-garden app, should not contain AI-generated video at all for children under a threshold set by reference to the developmental literature Radesky and her colleagues have been producing for a decade. The argument that this is technically infeasible because detection is unreliable is, on closer inspection, an argument for whitelisting rather than blacklisting: allow only content from a finite set of human creators whose identities and production processes are verified, and default to refusal for everything else. This is, in effect, the model the BBC's CBeebies app already runs. The cost is a dramatic reduction in the library. The benefit is a dramatic reduction in the harm surface.
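The operational difference is simply the direction of the default. A blacklist asks whether the platform can prove a video is synthetic, and fails open when it cannot; a whitelist asks whether the uploader is verified, and fails closed. A minimal sketch, with a hypothetical registry of verified producers:

```python
# Default-deny admission for a walled-garden kids app. The registry and
# field names are hypothetical; what matters is the direction of the default.

VERIFIED_PRODUCERS = {"bbc-cbeebies", "sesame-workshop", "ludo-studio"}  # example IDs

def admit_to_kids_app(video: dict) -> bool:
    """Fail closed: only verified human producers are admitted."""
    return video.get("producer_id") in VERIFIED_PRODUCERS

def blacklist_admit(video: dict, ai_probability: float, threshold: float = 0.9) -> bool:
    """The main platform's effective posture today: fail open under uncertainty."""
    return ai_probability < threshold  # an undetectable slop video sails through
```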
On the main platform, the recommender should not push AI-generated content to accounts logged in as under-eighteen, and should offer a non-profiling recommender by default to those accounts. This is the DSA Article 28 default; Fairplay has simply asked YouTube to apply it to the specific class of content the data suggests is most hazardous. A parental toggle goes further, giving adult users a single control for AI content across the household account, on by default, shifting the cognitive labour of supervision back onto the platform.
On disclosure, the watermarking regime needs the interoperability the May 2025 arXiv paper has already laid out: a shared, audited, open-standard scheme every major model provider is contractually obliged to implement and every platform is contractually obliged to decode. Article 50 is the right vehicle. In the interim, platforms should rely on stronger signals than creator self-declaration: account behaviour, production cadence, upload patterns, and the content-forensics tooling newsrooms have been developing since the first deepfakes appeared.
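In code, that interim stance looks less like a single decoder call and more like an ensemble of weak signals. The sketch below is entirely illustrative (the decoders, thresholds and feature names are invented), and the loop over per-provider decoders is where the standards gap the position paper describes becomes visible:

```python
# Interim synthetic-content detection: try every known provider decoder,
# then fall back to behavioural signals. All thresholds are invented.

def looks_synthetic(video: dict, channel: dict, provider_decoders) -> bool:
    # 1. Watermarks. With no shared standard, each provider's scheme
    #    needs its own decoder, invisible to the others, tried in turn.
    for decode in provider_decoders:
        if decode(video):
            return True

    # 2. Behavioural fallbacks, as enumerated above.
    signals = [
        channel["uploads_per_day"] > 10,          # production cadence
        channel["account_age_days"] < 90,         # fresh bulk-upload accounts
        video["near_duplicate_score"] > 0.8,      # templated output
        video["forensic_artifact_score"] > 0.7,   # content-forensics tooling
    ]
    return sum(signals) >= 2                      # require two signals to agree
```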
On advertising, the monetisation pipeline for child-directed AI content should be closed. The post-COPPA made-for-kids designation was always meant to constrain commercial exploitation of child viewers. Treating AI-generated made-for-kids content as ineligible for the advertising revenue that funds the slop economy is the single regulatory change with the largest expected effect on the supply side. It does not require new legislation. It requires the platform to interpret its existing obligations in the direction of children rather than operators.
What the mattering consists of
Return, finally, to the hatching horse. There is a version of this story in which the debate is a fuss about nothing. Children have always watched odd television. The Teletubbies were strange. Peppa Pig is, on close inspection, relentlessly repetitive. No generation raised on the BBC test card turned out wholly deranged. The calculator analogy (the claim that AI is to cognition what the calculator was to arithmetic, a harmless labour-saving device), which developmental psychologists have spent three years tearing down, is not the only deflection available; there is also a cynical one, which says commercial children's media has always been a cognitive hazard and AI has merely lowered the production cost.
The answer the Fairplay coalition is giving is that scale and direction both matter. Scale, because the difference between a child watching one strange programme a week and a child being served an endless algorithmic feed of strange synthetic content is not a difference of degree but of what the child's media environment is. Direction, because every previous generation of children's media, even the commercially cynical ones, involved adults at some point making decisions about what was appropriate for a six-year-old to see, and those decisions, however imperfect, were adults' decisions. The AI slop pipeline removes those decisions. It replaces them with engagement metrics. It delegates to a model the things a responsible children's producer, even a merely competent one, would have been contractually obliged to think about.
The long-term implications are legible even if they are not yet measurable. A cohort of children whose earliest language exposure is dominated by content with no consistent narrator, no stable characters, no coherent grammar of the world, will build internal models of narrative, language and social reality that reflect that input. A cohort whose parasocial bonds form with mutating AI chimeras rather than with Daniel Tiger or Bluey or Elmo will have parasocial bonds that are less developmentally productive. A cohort whose first encounters with educational content are not educational, because they teach that green means right and that babies eat honey, will start formal schooling with a different set of priors than a cohort whose first encounters were produced by adults trying, in however flawed a way, to teach something true. These are not apocalyptic predictions. They are base-rate extrapolations from a developmental psychology literature that has been remarkably consistent for fifty years.
The obligations platforms are not currently meeting are also not exotic. They are the obligations that have existed, in some form, since television was invented and any regulator decided children's television should be treated differently from adult television. Broadcast regulators in every developed country, from the FCC to Ofcom to ACMA, have always operated on the premise that the developmental vulnerability of young children creates a special duty for the people who transmit content into their homes. YouTube has argued for twenty years that it is not a broadcaster, that it is a platform, that the content on it is produced by its users and that its role is infrastructural rather than editorial. The AI slop crisis is the moment at which that argument becomes unsustainable. When the content is not being produced by users in any meaningful sense, when it is produced by models operated by commercial entities whose relationship with the child is purely extractive, the platform is not hosting user content. It is running a delivery system for a synthetic children's media industry it declined to regulate because declining to regulate was profitable.
The coalition letter does not put it in quite those terms. It asks, politely and in the language of child welfare, for YouTube and Google to behave as if the children on the other end of the recommendation algorithm were their own. The question the letter is really posing is the one the next five years of regulatory enforcement will answer: whether a platform whose products meaningfully shape the cognitive development of a generation of infants and toddlers can continue to claim, with a straight face, that the cognitive development of that generation is somebody else's problem. It is not a question that has a technical answer. It is a question about what kind of industry we want children's media to be, and who, in the end, is responsible for the horse that hatches from the egg.
References & Sources
- Metz, C. and Mickle, T. “How A.I.-Generated Videos Are Distorting Your Child's YouTube Feed.” The New York Times (via dnyuz.com), 26 February 2026. https://dnyuz.com/2026/02/26/how-a-i-generated-videos-are-distorting-your-childs-youtube-feed/
- Fairplay for Kids. “YouTube: Stop 'AI Slop' for Kids.” 1 April 2026. https://fairplayforkids.org/youtube-stop-ai-slop-for-kids-says-letter-from-fairplay-over-200-experts-including-jonathan-haidt/
- Fairplay for Kids. Open letter to Sundar Pichai and Neal Mohan. March 2026. https://fairplayforkids.org/wp-content/uploads/2026/03/YouTube-Letter-AI-Slop.pdf
- “Who Gets Flagged? The Pluralistic Evaluation Gap in AI Content Watermarking.” arXiv:2604.13776, April 2026. https://arxiv.org/abs/2604.13776
- “Watermarking Without Standards Is Not AI Governance.” arXiv:2505.23814, May 2025 (rev. March 2026). https://arxiv.org/abs/2505.23814
- “Missing the Mark: Adoption of Watermarking for Generative AI Systems.” arXiv:2503.18156, March 2025. https://arxiv.org/abs/2503.18156
- Futurism. “'Educational' YouTube AI Slop Encourages Kids to Play in Traffic.” March 2026. https://futurism.com/artificial-intelligence/educational-youtube-ai-slop-play-in-traffic
- National Today (Los Angeles). “AI-Generated YouTube Videos Teach Children Dangerous Behaviors.” 30 March 2026. https://nationaltoday.com/us/ca/los-angeles/news/2026/03/30/ai-youtube-videos-teach-children-dangerous-behaviors/
- EU Today. “YouTube steers children towards AI-made videos disguised as educational content.” 2026. https://eutoday.net/youtube-steers-children-towards-ai-made-videos/
- Children's Health Defense. “AI 'Slop' Puts Young Brains at Risk.” 2026. https://childrenshealthdefense.org/defender/ai-slop-misleading-videos-youtube-children-safety-brain-development-risks/
- Fortune. “This 22-year-old college dropout makes $700,000 a year from 'AI slop'.” 30 December 2025. https://fortune.com/2025/12/30/ai-slop-faceless-youtube-accounts-adavia-davis-user-generated-content/
- MediaNama. “AI Slop Videos Make Up 33% of YouTube Feed, Says Study.” December 2025. https://www.medianama.com/2025/12/223-ai-slop-videos-youtube-algorithmic-recommendations/
- YouTube Blog. “Neal Mohan's 2026 Letter: The Future of YouTube.” February 2026. https://blog.youtube/inside-youtube/the-future-of-youtube-2026/
- Flocker. “YouTube Inauthentic Content Policy: AI Enforcement Wave 2026.” 2026. https://flocker.tv/posts/youtube-inauthentic-content-ai-enforcement/
- Radesky, J. and Christakis, D. “Media and Young Minds.” AAP Policy Statement, Pediatrics, 2016 (updated 2024).
- Kuhl, P., Tsao, F. and Liu, H. “Foreign-language experience in infancy.” PNAS, 2003.
- Myers, L. et al. “Skype me! Socially Contingent Interactions Help Toddlers Learn Language.” Child Development, 2014. https://pmc.ncbi.nlm.nih.gov/articles/PMC3962808/
- Lytle, S., Garcia-Sierra, A. and Kuhl, P. “Two are better than one.” PNAS, 2018. https://www.pnas.org/doi/10.1073/pnas.1611621115
- Brunick, K. et al. “Children's future parasocial relationships with media characters.” Georgetown CDMC, 2016. https://cdmc.georgetown.edu/wp-content/uploads/2016/04/Brunick-et-al-2016.pdf
- FTC. “Google and YouTube Will Pay Record $170 Million.” 4 September 2019. https://www.ftc.gov/news-events/news/press-releases/2019/09/google-youtube-will-pay-record-170-million-alleged-violations-childrens-privacy-law
- UK Government. “Online Safety Act 2023.” https://www.legislation.gov.uk/ukpga/2023/50
- Ofcom. “Protection of children duties under the Online Safety Act.” 2025. https://www.ofcom.org.uk/online-safety/protecting-children/protection-of-children-duties-under-the-online-safety-act
- Ofcom. “Online safety industry bulletin, March 2026.” https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/online-safety-industry-bulletins/online-safety-industry-bulletin-march-2026
- European Commission. “Guidelines on the protection of minors.” 14 July 2025. https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-protection-minors
- Children's Online Privacy Protection Act of 1998 (COPPA), 15 U.S.C. §§ 6501-6506.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
Listen to the free weekly SmarterArticles Podcast