The Internet Nobody Wrote: Autonomous Content Floods the Web

On 20 March 2026, WordPress.com flipped a switch that most of the internet did not notice but probably should have. The platform, built on the WordPress software that powers more than 43 per cent of all websites globally according to figures presented at Automattic's State of the Word event in December 2025, enabled AI agents to autonomously write, edit, publish, and manage entire websites. Not draft suggestions. Not autocomplete. Full publishing control, handed to machines through a protocol that lets Claude, ChatGPT, Cursor, and any other compatible AI client operate a WordPress site the way a human editor once did.
The update added 19 new writing capabilities across six content types: posts, pages, comments, categories, tags, and media. From a single natural-language prompt, an AI agent can now draft and publish a post, build a landing page using a site's existing theme and block patterns, approve and reply to comments, reorganise category structures, or fix missing alt text across an entire media library. The agent even understands your site's design system, inheriting its colours, fonts, spacing, and patterns so that everything it produces looks as though a human built it with care.
WordPress.com users already publish 70 million new posts every month. That is 1,600 new blog posts every minute, or roughly 26 every second. Now imagine what happens when you remove the bottleneck of human typing speed, human fatigue, and human doubt from that equation entirely.
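The arithmetic behind those rates is easy to check. A quick sketch, assuming a 30-day month (a 31-day month brings the per-second figure down to about 26):

```python
# Sanity-checking the publishing rate: 70 million posts a month,
# assuming a 30-day month for the conversion.
posts_per_month = 70_000_000
per_minute = posts_per_month / (30 * 24 * 60)   # minutes in a 30-day month
per_second = per_minute / 60
print(round(per_minute))      # ~1620 posts per minute
print(round(per_second, 1))   # ~27 posts per second
```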
Welcome to the age of autonomous publishing. The question is no longer whether AI can write for the web. It is whether anyone will be able to tell the difference, or whether it will even matter.
WordPress Hands Over the Keys
The technical architecture behind this shift is worth understanding, because it reveals how deliberately the infrastructure was built. WordPress.com's AI agent capabilities run on the Model Context Protocol, an open standard that governs how applications provide context to large language models. Automattic first introduced MCP on WordPress.com in October 2025, but at that stage it was read-only. Agents could query a site, read its content, analyse its structure, but they could not touch anything.
A second update in January 2026 added OAuth 2.1 authentication, making it simpler to connect AI clients securely. In February, Automattic launched an official Claude Connector, still read-only. The March update was the step the company had been building towards all along: full write access.
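For readers unfamiliar with MCP, the wire format is JSON-RPC 2.0, and operations are exposed to the AI client as callable tools. The sketch below shows roughly what a write request could look like once a client is authorised; the tool name `create_post` and its argument schema are assumptions for illustration, not WordPress.com's published interface.

```python
import json

# Roughly what an MCP write request might look like after OAuth 2.1
# authorisation. MCP uses JSON-RPC 2.0 with a "tools/call" method; the
# tool name and arguments here are illustrative, not the real schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_post",               # hypothetical tool name
        "arguments": {
            "title": "Autumn planting guide",
            "content": "<p>Drafted by the agent for human review.</p>",
            "status": "draft",               # new posts default to draft
        },
    },
}
print(json.dumps(request, indent=2))
```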
Matt Mullenweg, the co-creator of WordPress and CEO of Automattic, has been vocal about his vision for an AI-native web. In a February 2026 blog post, he laid out a roadmap for “agentic usability,” arguing that WordPress should strengthen its APIs, command-line tools, and machine-friendly interfaces so that personal AI agents can safely operate WordPress tasks without brittle user-interface automation. He called for WordPress.org to provide markdown versions of every page, covering not just documentation but forums, directories, and bug trackers, making WordPress content more easily parseable by AI agents.
“How perfect is that for AI to work with?” Mullenweg wrote, describing how WordPress Playground can spin up fully containerised WordPress instances in 20 to 45 seconds, allowing AI to test code changes across more than 20 environments simultaneously. His stated ambition: to take WordPress “from millions of WordPresses in the world to billions.”
Automattic has built in safety mechanisms, and they are worth enumerating because they reveal how the company is thinking about the tension between automation and oversight. New posts default to draft status, giving users a chance to review before anything goes live. If you update a published post, the agent warns that changes will be visible immediately. Deletions of posts, pages, comments, and media move to trash and remain recoverable for 30 days. Permanent taxonomy deletions require a second confirmation. All agent activity appears in the site's existing Activity Log. The agent inherits standard WordPress user-role restrictions, so an Editor cannot change site settings and a Contributor cannot publish. Each of the 19 operations can be individually toggled on or off per site through the MCP dashboard at wordpress.com/me/mcp.
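The layered gating described above, a per-site toggle for each of the 19 operations combined with the standard WordPress role model, can be sketched as a simple two-check function. The capability names and data shapes here are illustrative assumptions, not WordPress internals.

```python
# Toy model of the two permission gates: an agent operation runs only if
# the site-level MCP toggle is on AND the acting user's role permits it.
# Capability names below are invented for illustration.
ROLE_CAPABILITIES = {
    "administrator": {"publish_post", "manage_settings", "delete_media"},
    "editor":        {"publish_post", "delete_media"},
    "contributor":   {"draft_post"},
}

def agent_allowed(operation: str, role: str, site_toggles: dict) -> bool:
    """Both gates must pass; operations are off unless explicitly enabled."""
    if not site_toggles.get(operation, False):
        return False                                  # toggle gate
    return operation in ROLE_CAPABILITIES.get(role, set())  # role gate

toggles = {"publish_post": True, "manage_settings": True}
print(agent_allowed("publish_post", "contributor", toggles))   # False
print(agent_allowed("manage_settings", "editor", toggles))     # False
print(agent_allowed("publish_post", "editor", toggles))        # True
```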
But the fundamental shift is unmistakable: the platform behind the software that powers nearly half the web has decided that machines should be allowed to run it.
The Numbers Behind the Flood
The WordPress announcement did not arrive in a vacuum. It landed in a digital landscape already saturated with machine-generated text, and the data paints a picture that would have seemed absurd even three years ago.
In April 2025, Ahrefs analysed nearly 900,000 newly created English-language web pages, one per domain, using its “botornot” detection tool. The finding was stark: 74.2 per cent of those pages contained AI-generated content, while only 25.8 per cent were classified as purely human-written. Within that 74.2 per cent, almost all of it, 71.7 per cent of all pages, was a hybrid of human and AI work; just 2.5 per cent was identified as “pure AI” with no human editing whatsoever. The study also found that 86.5 per cent of top-ranking pages in search results contained some amount of AI-generated content, and that 91.4 per cent of pages cited in Google's AI Overviews did as well.
A separate study by Graphite, which analysed 65,000 English-language URLs from Common Crawl, found that as of November 2024, 50.3 per cent of new web articles were generated primarily by AI. That figure had risen from just 5 per cent before ChatGPT launched in late 2022. The percentage briefly surpassed human-written articles in November 2024 before settling into a rough equilibrium where human and AI content exist in near-equal proportions.
Meanwhile, the Imperva Bad Bot Report, published in April 2025 by Thales subsidiary Imperva, revealed that for the first time in a decade, automated traffic had surpassed human activity online, accounting for 51 per cent of all web traffic. Malicious bots alone now represent 37 per cent of internet traffic, up from 32 per cent the previous year. The report attributed much of this surge to the rapid adoption of AI and large language models, which have made bot development accessible to people with limited technical skills. Simple, high-volume bot attacks have soared, now accounting for 45 per cent of all bot attacks, up from 40 per cent in 2023.
The picture is even more striking in specific sectors. NewsGuard, the misinformation tracking organisation, has been cataloguing what it calls “AI Content Farm” websites since May 2023, when it identified just 49 such sites. By February 2024, the count had reached 713. By November 2024, it was 1,121. As of March 2026, NewsGuard has identified 3,006 AI Content Farm sites spanning 16 languages, with Pangram Labs, its detection partner, reporting that between 300 and 500 new AI content farm sites emerge every month. That represents roughly a 60-fold increase in under three years.
These are not fringe blogs. NewsGuard found 141 major brands advertising on AI content farms during one two-month observational period, with an estimated $2.6 billion in advertising revenue per year being unintentionally directed towards misinformation news sites. In August 2025, NewsGuard also found that leading generative AI tools repeat false news claims 35 per cent of the time on average.
When Conspiracy Becomes Measurement
There was a time, not long ago, when suggesting that the internet was mostly bots talking to other bots would have marked you as a conspiracist. The Dead Internet Theory, which first appeared in a 2021 post on Agora Road's Macintosh Cafe by a user called “IlluminatiPirate,” posited that most online content was generated by automated systems rather than real people, with authentic human interaction quietly displaced. It was treated as paranoid speculation, circulated across subreddits and tech forums but never taken seriously by the mainstream.
By 2025, it had moved to the centre of industry discourse. Sam Altman, the CEO of OpenAI, wrote on X: “i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now.” At TechCrunch Disrupt in October 2025, Reddit co-founder Alexis Ohanian told Kevin Rose that “the dead internet theory is real.” Digg, relaunched in January 2026 under the co-leadership of Ohanian and Rose, shut down just two months later in March, citing an “unprecedented bot problem” among other issues.
The numbers validate what was once dismissed as paranoia. On X, approximately 64 per cent of accounts are estimated to be bots. LinkedIn's long-form posts are reportedly 54 per cent AI-generated. AI-generated reviews have been growing at 80 per cent month-over-month since June 2023, and by 2025, 23.7 per cent of real estate agent reviews on Zillow were likely created by AI, up from 3.63 per cent in 2019.
In 2022, Europol's Innovation Lab published a report titled “Law enforcement and the challenge of deepfakes” that included the widely cited claim that experts estimated 90 per cent of online content might be synthetically generated by 2026. That figure has been contested. Some analysts have pointed out that the original report focused specifically on deepfake technology's impact on law enforcement, not on broad AI content generation forecasts, and that for AI content to reach 90 per cent of total online material, it would need to dwarf three decades of accumulated human content. But the directional thrust of the prediction, if not its precise figure, appears increasingly difficult to dismiss.
Gartner, the technology research firm, added fuel to this narrative in February 2024 when it predicted that traditional search engine volume would drop 25 per cent by 2026, with search marketing losing market share to AI chatbots and other virtual agents. Gartner's VP Analyst Alan Antin stated that generative AI solutions were “becoming substitute answer engines, replacing user queries that previously may have been executed in traditional search engines.” Whether or not that specific prediction proves accurate, the shift in how people discover and consume content is undeniable.
The Ouroboros Problem
If the web is filling with AI-generated content, and AI models are trained on data scraped from the web, then a troubling feedback loop emerges. Researchers call it model collapse, though it has also acquired more colourful names: “AI inbreeding,” “AI cannibalism,” and “Habsburg AI.”
The landmark study on this phenomenon was published in Nature in 2024 by Ilia Shumailov of the University of Oxford, Zakhar Shumaylov of the University of Cambridge, Yiren Zhao of Imperial College London, Nicolas Papernot of the University of Toronto, and their colleagues. They investigated what happens when training data inevitably includes content produced by prior AI models, and their findings were sobering.
The team discovered that indiscriminately training generative AI on both real and generated content causes irreversible defects. Models first lose information from the tails of the data distribution, which they termed “early model collapse,” meaning that unusual, minority, or less-represented data disappears first. In later iterations, the data distribution converges so dramatically that it bears almost no resemblance to the original, a phase they called “late model collapse.” Within a few generations of recursive training, original content is replaced by what they described as unrelated nonsense.
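The tail-loss dynamic is easy to reproduce in miniature. The toy simulation below illustrates the general mechanism rather than reproducing the paper's experiments: a trivial Gaussian “model” is fitted to its own output over many generations, and its estimated spread shrinks, so rare tail values vanish first, mirroring early model collapse.

```python
import numpy as np

rng = np.random.default_rng(42)

def recursive_training(n_samples=100, generations=1000):
    """Each generation trains only on the previous generation's output."""
    data = rng.normal(0.0, 1.0, n_samples)       # generation 0: "human" data
    spread = [data.std()]
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()      # "train" on current data
        data = rng.normal(mu, sigma, n_samples)  # publish the next "web"
        spread.append(data.std())
    return spread

s = recursive_training()
print(f"spread, generation 0:    {s[0]:.3f}")
print(f"spread, generation 1000: {s[-1]:.3f}")   # far narrower: tails are gone
```

Each fit slightly underestimates the spread, and with no fresh human data to correct it, the errors compound until the distribution bears little resemblance to the original.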
The implications for an AI-saturated web are profound. If 74 per cent of newly published web pages already contain AI-generated content, as the Ahrefs data suggests, then the training data for next-generation models is increasingly contaminated with the output of current-generation models. Each cycle introduces small statistical distortions that compound over time, making outputs more homogeneous, less diverse, and increasingly prone to hallucinations. The phenomenon hits minority and less-represented data hardest, meaning that the voices and perspectives most at risk of being erased from AI training data are precisely those that the web was supposed to amplify.
Some researchers have pushed back against the most catastrophic framing. A response paper argued that if synthetic data accumulates alongside human-generated data rather than replacing it, model collapse can be mitigated. They contend that data accumulating over time is a more realistic description of how the web actually works than the assumption that all existing data is deleted and replaced each year. But there is broad agreement across the field that indiscriminate training on AI-generated data degrades model quality, and that the contamination of web data is accelerating faster than mitigation strategies can keep pace.
The practical consequence is that companies are now racing to secure access to verified human-generated content. Reddit signed a licensing deal with Google. News Corp signed one with OpenAI. The market for pre-2022 training data, collected before generative AI flooded the web, has become intensely competitive, and some observers have warned that this could entrench existing AI players who already possess large stores of uncontaminated data over newcomers who do not. Human-written text, once so abundant it was treated as a free resource, has become a strategic asset.
Google Does Not Care How You Made It
Search engines sit at the nexus of this transformation, and Google's response has been more nuanced than many expected. The company's official position, articulated by Google Search Liaison Danny Sullivan and consistent since the March 2024 helpful content guidance update, is straightforward: Google cares about whether content is helpful, not how it was produced.
Appropriate use of AI or automation is not against Google's guidelines. What triggers penalties is low-quality content produced at scale, regardless of whether a human or a machine wrote it. Google's enforcement actions typically result from mass production of thin, low-value pages, persistent factual inaccuracies, or republishing identical or near-identical AI output across multiple sites.
The data suggests this policy is having mixed effects. According to Ahrefs, 86.5 per cent of top-ranking pages now contain some amount of AI-generated content. Yet 86 per cent of the top-ranking pages in Google Search are still primarily human-written, with only 14 per cent classified as predominantly AI-generated. The two figures are compatible: most successful pages weave AI assistance into predominantly human writing. Among AI assistants like ChatGPT and Perplexity, the ratio is similar: 82 per cent human to 18 per cent AI. The message from search algorithms appears to be that AI-assisted content is fine, but AI-only content still struggles to reach the top.
Google's E-E-A-T framework, which evaluates Experience, Expertise, Authoritativeness, and Trustworthiness, remains the central ranking signal. AI content that incorporates original research, firsthand experience, clear author credentials, and comprehensive coverage performs similarly to traditional content. AI content that lacks these elements does not, regardless of how polished its prose might be.
But there is a deeper structural shift at play. Google's AI Overviews now appear in over 60 per cent of all searches, up from just 25 per cent in mid-2024. Traditional SEO metrics like domain authority have declined dramatically in importance. And 47 per cent of AI Overview citations now come from pages ranking below position five in traditional search results, suggesting that AI Overviews operate on fundamentally different ranking logic. The gatekeeping function of search, which once determined what content reached human eyes, is itself being reshaped by AI.
Labelling the Synthetic Web
If the web is becoming a place where distinguishing human from machine content matters, then provenance becomes the critical infrastructure. The most significant industry-wide effort on this front is the Coalition for Content Provenance and Authenticity, or C2PA, formed in 2021 through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic, unifying two earlier initiatives: Adobe's Content Authenticity Initiative and Microsoft and the BBC's Project Origin.
C2PA's technical standard, called Content Credentials, functions like a nutrition label for digital content. Each asset carries cryptographically hashed and signed metadata that records when and where it was created, what tools were used, whether generative AI was involved, and what modifications were made along the way. The system is designed to be tamper-evident, meaning that any changes to the asset or its metadata are exposed. A small “CR” icon, the official Content Credentials mark of transparency, lets users hover over it to reveal the full provenance chain.
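The mechanics can be shown with a drastically simplified sketch. Real Content Credentials use X.509 certificate chains and the C2PA manifest format; the toy below swaps those for an HMAC over a JSON manifest purely to demonstrate the tamper-evidence idea, and every name in it is an assumption for illustration.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # stand-in for a real issuer's signing credential

def sign_manifest(asset: bytes, claims: dict) -> dict:
    """Bind provenance claims to the asset's hash, then sign the bundle."""
    manifest = dict(claims, asset_sha256=hashlib.sha256(asset).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset: bytes, manifest: dict) -> bool:
    """Tamper-evident: any change to the asset or the claims breaks the check."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False                     # claims were edited after signing
    return hashlib.sha256(asset).hexdigest() == claims["asset_sha256"]

image = b"...image bytes..."
m = sign_manifest(image, {"tool": "ExampleGen 1.0", "generative_ai": True})
print(verify_manifest(image, m))             # True
print(verify_manifest(b"edited bytes", m))   # False: the asset changed
m["tool"] = "Photoshop"
print(verify_manifest(image, m))             # False: the claims changed
```

The design choice this illustrates is that the claims are bound to the asset's hash before signing, so neither the pixels nor the provenance record can be altered independently without detection.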
The standard has gained significant institutional backing. The U.S. National Security Agency published guidance in January 2025 recommending Content Credentials as part of a multi-faceted approach to content transparency. Google has integrated C2PA metadata into its Search and advertising systems, allowing users to see whether an image was created or edited with AI tools through the “About this image” feature. The C2PA specification is expected to be adopted as an ISO international standard, marking a milestone in content authenticity governance.
But provenance labelling faces the same challenge as every other transparency initiative in the history of the internet: voluntary adoption. Content Credentials are opt-in. Creators choose whether to apply them. Platforms choose whether to display them. And the incentive structure for AI content farms, which exist precisely because they can produce convincing content at negligible cost, does not favour transparency. The 3,006 AI content farm sites tracked by NewsGuard are unlikely to label their output as synthetic. The NSA's own guidance acknowledged this limitation, recommending that Content Credentials be deployed alongside education, policy, and detection rather than as a standalone solution.
The Human Cost of Infinite Content
The original appeal of the web was the presence of real perspectives, lived experience, and genuine stakes in a conversation. Someone who learned something and wanted to share it. Someone who built something and wanted to show it. Someone who suffered something and wanted to be heard. AI content can simulate all of these with increasing sophistication, but the simulation is, by definition, hollow. There is no person behind it who experienced anything at all.
This is not a theoretical concern. Researchers have begun studying the psychological impact of AI content in sensitive contexts. A study discussed in the Journal of Cancer Education examined what happens when patients in online cancer support forums discover that the support they received came from a large language model rather than a fellow human being. The findings suggest that the perception of authenticity matters enormously to people in vulnerable situations, and that the erosion of trust in online spaces has real consequences for mental health and community resilience.
The economic consequences are equally tangible and already measurable. Writing projects on Upwork declined 32 per cent year over year in 2025, the largest drop of any category on the platform. Within eight months of ChatGPT's launch, freelance writing jobs had dropped 30 per cent. The “Ramp Payrolls to Prompts” study from February 2026 found that more than half the businesses that spent on freelance platforms in 2022 had stopped entirely by 2025. Freelance marketplace spending as a share of total company spend fell from 0.66 per cent to 0.14 per cent, while AI model spending rose from zero to 2.85 per cent of total budgets.
The market has bifurcated. Entry-level project availability fell below 9 per cent, down from 15 per cent the year prior. The $40 blog post and the generic product description have been effectively automated out of existence. But at the top end, something unexpected is happening. Niche specialists report rising demand, with clients explicitly requesting subject-matter expertise and original content without AI involvement. AI-specialised freelancers on Upwork command 25 to 60 per cent higher rates than general practitioners, and AI-related freelance work crossed $300 million in annualised value by late 2025.
The pattern is clear: AI eliminates the floor while raising the ceiling. The writers who can offer what machines cannot, genuine expertise, original reporting, firsthand experience, and authentic voice, are more valuable than ever. Everyone else is competing against a system that works for free.
Ahrefs' research illustrates the acceleration. Websites that use AI content saw a median year-over-year growth rate of 29.08 per cent, compared to 24.21 per cent for sites that did not. AI use allows companies to publish 42 per cent more content each month: a median of 17 articles versus 12 for those not using AI. The productivity advantage is real, and it compounds over time.
Building for Billions of Machine-Run Sites
Matt Mullenweg's vision is not shy about where this leads. He wants WordPress to become the “Web OS” for AI agents, the default platform through which machines interact with and publish to the internet. The WordPress AI Team has been shipping rapidly: the Abilities API shipped in WordPress 6.9, the WP AI Client and Workflows API are coming to WordPress 7.0, WordPress Agent Skills recently moved to an official WordPress repository, and WP-Bench launched in mid-January 2026.
Plugin submissions are accelerating towards 100,000 and beyond, with WordPress planning editorial curation to manage the AI-driven increase in development. Mullenweg has described a future in which billions of WordPress instances exist, many of them spun up and managed entirely by AI agents acting on behalf of individuals, businesses, or other AI systems. While he acknowledges the power of what he calls “vibey vibe coding,” where users prompt AI without deep technical understanding, he argues this approach “will pale in comparison to what the folks who can prompt and vibe code with a knowledge and understanding of what the agents are doing.”
The write capabilities announced on 20 March are available on all paid WordPress.com plans at no additional cost. Users enable them through the MCP dashboard, toggling on the specific operations they want to permit on each site. The barrier to autonomous publishing is now a toggle switch.
This is not a fringe experiment. WordPress holds a 60.5 per cent share of the content management system market. When the dominant platform for web publishing decides that AI agents should have full operational control, the rest of the industry faces a choice: follow WordPress into the age of autonomous publishing, or insist that humans remain in the loop. That answer, as multiple observers have noted, could define how the web works for the next decade.
The Web We Are Building
The honest answer to the question at the heart of this story, whether the internet could soon become a place where the vast majority of content was never touched by a human hand, is that it is already happening. The data from Ahrefs, Graphite, Imperva, and NewsGuard converges on the same conclusion: machine-generated content has become the default mode of web publishing. The WordPress announcement does not create this reality. It formalises it.
What remains uncertain is whether this matters. If an AI agent writes a perfectly accurate, well-structured, beautifully designed blog post about the best hiking trails in the Lake District, and a human being reads it and finds it useful, has something been lost? The information is real. The formatting is professional. The reader got what they came for.
But zoom out. If a thousand AI agents publish a thousand posts about Lake District hiking trails, each slightly rephrasing the same information scraped from the same sources, the web becomes a hall of mirrors. The diversity of perspective that once made the internet extraordinary, the idiosyncratic voice of someone who actually walked those trails in the rain and had a terrible time and wrote about it anyway, gets buried under an avalanche of competent sameness.
The mitigations being developed are real but incomplete. Content Credentials offer provenance but rely on voluntary adoption. Google's quality signals reward expertise but cannot distinguish authentic experience from convincing simulation. WordPress's safety controls default to drafts but do not prevent a determined operator from automating everything. Model collapse research warns of degradation but cannot halt the economic incentives driving synthetic content production.
The web is not dead. But it is changing in ways that demand attention. The machines are publishing now, and they are publishing at scale, with the full support of the platforms that host the internet's infrastructure. The question for the next decade is not whether AI content will dominate the web. It is whether the humans who still care about what they write, and what they read, can build the tools, standards, and cultural norms to ensure that authenticity retains its value in a world of infinite synthetic supply.
That is not a technical problem. It is a civilisational one.
References and Sources
WordPress.com Blog, “AI agents can now create and manage content on WordPress.com,” published 20 March 2026. Available at: https://wordpress.com/blog/2026/03/20/ai-agent-manage-content/
TechCrunch, “WordPress.com now lets AI agents write and publish posts, and more,” published 20 March 2026. Available at: https://techcrunch.com/2026/03/20/wordpress-com-now-lets-ai-agents-write-and-publish-posts-and-more/
The Next Web, “WordPress.com lets AI agents write, publish, and manage your site,” March 2026. Available at: https://thenextweb.com/news/wordpress-com-mcp-write-capabilities-ai-agent
Matt Mullenweg, “WP & AI,” personal blog, February 2026. Available at: https://ma.tt/2026/02/wp-ai/
Matt Mullenweg, “WP.com MCP,” personal blog, March 2026. Available at: https://ma.tt/2026/03/wp-com-mcp/
Ahrefs, “74% of New Webpages Include AI Content (Study of 900k Pages),” 2025. Available at: https://ahrefs.com/blog/what-percentage-of-new-content-is-ai-generated/
Graphite, analysis of 65,000 English-language URLs from Common Crawl, findings reported across multiple outlets including eWeek, “AI Now Writes Half of the Internet, but Still Ranks Behind Humans,” 2025. Available at: https://www.eweek.com/news/ai-writes-half-internet/
Imperva (Thales), “2025 Bad Bot Report,” published April 2025. Available at: https://www.imperva.com/resources/resource-library/reports/2025-bad-bot-report/
Thales Group press release, “AI-Driven Bots Surpass Human Traffic – Bad Bot Report 2025,” 2025. Available at: https://cpl.thalesgroup.com/about-us/newsroom/2025-imperva-bad-bot-report-ai-internet-traffic
NewsGuard, “Tracking AI-enabled Misinformation: 3,006 AI Content Farm sites (and Counting),” March 2026. Available at: https://www.newsguardtech.com/special-reports/ai-tracking-center/
NewsGuard, “Watch Out: AI 'News' Sites Are on the Rise,” 2024. Available at: https://www.newsguardtech.com/insights/watch-out-ai-news-sites-are-on-the-rise/
Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R. and Gal, Y., “AI models collapse when trained on recursively generated data,” Nature, volume 631, pages 755-759, 2024. Available at: https://www.nature.com/articles/s41586-024-07566-y
Europol Innovation Lab, “Law enforcement and the challenge of deepfakes,” 2022. Referenced across multiple outlets including Futurism, “Experts: 90% of Online Content Will Be AI-Generated by 2026.” Available at: https://futurism.com/the-byte/experts-90-online-content-ai-generated
Google Search Central Blog, “Google Search's guidance about AI-generated content,” February 2023, updated 2024. Available at: https://developers.google.com/search/blog/2023/02/google-search-and-ai-content
CMSWire, “Automattic Boosts WordPress.com with Anthropic, OpenAI & AI Agents,” March 2026. Available at: https://www.cmswire.com/digital-experience/wordpresscom-enables-ai-agents-to-write-manage-content/
C2PA (Coalition for Content Provenance and Authenticity), official website and technical specification, 2025. Available at: https://c2pa.org/
U.S. Department of Defense / NSA, “Strengthening Multimedia Integrity in the Generative AI Era,” published January 2025. Available at: https://media.defense.gov/2025/Jan/29/2003634788/-1/-1/0/CSI-CONTENT-CREDENTIALS.PDF
Google Blog, “How Google and the C2PA are increasing transparency for gen AI content,” 2025. Available at: https://blog.google/technology/ai/google-gen-ai-content-transparency-c2pa/
TIME, “Sam Altman Voices Concern Over Dead Internet Theory,” 2025. Available at: https://time.com/7316046/sam-altman-dead-internet-theory/
Wikipedia, “Dead Internet theory,” accessed March 2026. Available at: https://en.wikipedia.org/wiki/Dead_Internet_theory
WebProNews, “WordPress Hands the Keys to AI Agents – and the Implications for Publishing Are Enormous,” March 2026. Available at: https://www.webpronews.com/wordpress-hands-the-keys-to-ai-agents-and-the-implications-for-publishing-are-enormous/
Ahrefs, “Websites Using AI Content Grow 5% Faster [+ New Research Report],” 2025. Available at: https://ahrefs.com/blog/websites-using-ai-content-grow-faster/
Ahrefs, “80+ Up-to-Date AI Statistics for 2025,” 2025. Available at: https://ahrefs.com/blog/ai-statistics/
Gartner, “Gartner Predicts Search Engine Volume Will Drop 25% by 2026, Due to AI Chatbots and Other Virtual Agents,” published February 2024. Available at: https://www.gartner.com/en/newsroom/press-releases/2024-02-19-gartner-predicts-search-engine-volume-will-drop-25-percent-by-2026-due-to-ai-chatbots-and-other-virtual-agents
Mediabistro, “Freelance Writing Jobs & AI in 2026: Real Data,” 2026. Available at: https://www.mediabistro.com/go-freelance/freelance-writing-jobs-in-the-age-of-ai-what-the-data-says-and-how-to-position-yourself/
Winvesta, “AI cut freelance rates 30%: How top earners fight back in 2026,” 2026. Available at: https://www.winvesta.in/blog/freelancers/ai-cut-freelance-rates-30-how-top-earners-fight-back
NewsGuard, “NewsGuard Launches Real-time AI Content Farm Detection Datastream,” 2026. Available at: https://www.newsguardtech.com/press/newsguard-launches-real-time-ai-content-farm-detection-datastream-to-counter-onslaught-of-ai-slop-in-news/
Harvard Journal of Law and Technology, “Model Collapse and the Right to Uncontaminated Human-Generated Data,” 2025. Available at: https://jolt.law.harvard.edu/digest/model-collapse-and-the-right-to-uncontaminated-human-generated-data

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
