Six to One: Gore, AI, and the Hollowing of Democratic Discourse

On the evening of 7 April 2026, in a ballroom at the Moscone Center in San Francisco, Al Gore shared a stage with the cardiologist and digital-medicine evangelist Eric Topol at HumanX, the AI industry's answer to Davos. The panel was billed, with characteristic conference-speak grandiosity, as “What We Choose to Hyper-Scale”. Gore, 78 years old, greying but still given to the slow, pastoral cadence that a generation of American voters once found either reassuring or exasperating, chose to hyper-scale a single number: six to one.
That is the ratio, roughly, of public relations professionals to working journalists in the United States. It is not a new figure. It has been creeping up the vertical axis of industry infographics for more than a decade, a minor-key statistic reliably deployed by media trade publications to make a well-worn point about the sickening of the information ecosystem. But Gore, who has been circling this terrain since he published The Assault on Reason in 2007, was not deploying it as a media-trade curiosity. He was using it as an entry wound. If six narrators of commercial interest already compete with every one professional explainer of the world, he argued, and if artificial intelligence now enables anyone with a credit card and a prompt window to manufacture persuasive copy at the speed of electricity and the price of a cup of coffee, then the informational substrate on which democratic decision-making depends is not merely strained. It is being dismantled in real time, and the institutions meant to protect it are moving at the speed of committee.
The question Gore left hanging over the Moscone ballroom, and the question that has haunted every serious conversation about AI and democracy since, runs as follows. If a healthy democracy requires a shared, trustworthy information commons, and if AI is systematically degrading the conditions that make such a commons possible, then what governance mechanisms, if any, can operate at the speed and scale required to respond? And when we finally reach the bottom of that question, is what we find a problem of technology, a problem of economics, or a problem of political will?
The Number, Honestly
First, the ratio. The 6:1 figure has a provenance worth pinning down, because it is the sort of statistic that travels better than it verifies. The original analysis comes from the public-relations software company Muck Rack, whose analysts have spent most of the last decade cross-referencing the US Bureau of Labor Statistics' Occupational Employment Statistics series. In 2016, Muck Rack calculated that there were just under five PR specialists for every reporter in the country, itself a near doubling from a decade earlier. By 2018, the figure had crept up to something close to six. By 2021, the company's updated analysis reported a ratio of 6.2 PR professionals per journalist, an increase driven by parallel trends: steady hiring in communications departments on one side, and continued attrition in newsrooms on the other.
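For readers who want the arithmetic rather than the infographic, the ratio is nothing more exotic than one occupational headcount divided by another. The sketch below, in Python, uses illustrative round figures in the neighbourhood of the published BLS series rather than Muck Rack's exact counts; the point is the shape of the trend, not the decimals.

```python
# Rough reconstruction of the PR-to-journalist ratio from occupational
# employment counts. The figures below are illustrative placeholders in
# roughly the right range, not the exact BLS or Muck Rack numbers.
# (In the BLS taxonomy the relevant occupations are public relations
# specialists and news analysts, reporters and journalists.)

employment = {
    2006: {"pr_specialists": 166_000, "journalists": 66_000},   # assumed
    2016: {"pr_specialists": 230_000, "journalists": 47_000},   # assumed
    2021: {"pr_specialists": 275_000, "journalists": 44_000},   # assumed
}

for year in sorted(employment):
    counts = employment[year]
    ratio = counts["pr_specialists"] / counts["journalists"]
    print(f"{year}: {ratio:.1f} PR professionals per journalist")
```

Nothing in the calculation is controversial; what is contested is which occupations get counted on each side, which is why the ratio is best read as an order-of-magnitude indicator rather than a precise census.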
The attrition side of the equation is, if anything, the more unsettling half. According to Pew Research Center, newsroom employment in the United States fell by 26 per cent between 2008 and 2020, with newspapers absorbing the heaviest losses. The newspaper sector alone shed tens of thousands of jobs over that period; by one Bureau of Labor Statistics measure, newspaper-publisher employment dropped by roughly 79 per cent between 2000 and 2024. The 2024 State of Local News Report from Penny Abernathy's research group at the Medill School at Northwestern University, which has tracked the decline of American local journalism more doggedly than any other single project, found that the loss of local newspapers was continuing at what the report called an alarming pace, that “ghost” papers operating in name only had become a recognisable category in their own right, and that the creation of genuine news deserts, counties with no reliable local coverage at all, was accelerating rather than slowing.
What Gore was gesturing at in San Francisco is the compound result of these two curves. The supply of professional, institutionally accountable explanation has been falling for twenty years. The supply of professionally produced persuasion, most of it paid for and directed towards specific commercial or political ends, has been rising for the same period. Well before any large language model wrote a single press release, the information ecosystem was already lopsided by an order of magnitude.
The Abernathy data make the ecological analogy genuinely apt rather than merely rhetorical. Local-newspaper closures do not distribute themselves evenly. They concentrate in places that are already economically and politically marginalised, so that the communities with the thinnest democratic capacity lose their mirrors first. A county without a newspaper is not a county with slightly less information; it is a county in which the civic feedback loop has been severed, which tends to correlate with lower voter turnout, higher borrowing costs for local government, and a measurable uptick in corruption. News deserts, like food deserts, do not advertise themselves.
Into this already depleted landscape, the tooling of synthetic persuasion has arrived, and arrived fast.
What AI Actually Changes
It is tempting, particularly in a WIRED-adjacent vocabulary, to talk about AI's impact on the information environment in eschatological terms. Gore, notably, did not. His rhetorical move at HumanX was subtler and more effective. He treated AI as a forcing function on pre-existing trends: the same patient degradation we have been observing for two decades, now running at ten times the clock speed. That framing is borne out by the numbers.
NewsGuard, the New York-based media monitoring outfit that has been tracking AI-generated content sites with a combination of analyst review and automated detection, reported in November 2024 that its team had identified 1,121 AI-generated news and information websites operating across more than a dozen languages. By the time the group announced its Pangram Labs collaboration and updated its tracker, the number had more than doubled, exceeding 3,000 sites, with new domains being spun up at a rate of 300 to 500 per month. The sites are crude, largely ad-revenue driven, and often trivially identifiable on close inspection. Their function is not to convince the discerning reader; it is to saturate search results and social feeds with plausible-looking copy that algorithms treat as indistinguishable from human-produced journalism until challenged.
“Pink slime” journalism, a term coined by the journalist Ryan Smith in 2012 to describe partisan sites that mimic the visual grammar of local papers while functioning as distribution pipes for undisclosed political backers, has undergone a similar transformation. NewsGuard reported in June 2024 that the number of known pink-slime domains had reached 1,265, quietly overtaking the 1,213 daily newspapers still publishing across the United States. In the final months before the November 2024 general election, the investigative outlet ProPublica traced a cluster of newspapers branded with the word “Catholic” and distributed across five swing states back to Brian Timpone, a figure long associated with the pink-slime operator network. Most of the content undermined Vice President Kamala Harris and boosted Donald Trump. None of it disclosed the chain of ownership or the political intent.
The point is not that AI created pink slime. The point is that AI has driven the marginal cost of producing another thousand plausible articles from a stringer's day rate to something very close to the electricity bill. What the philosopher Joseph Heath has called “Goodhart's law on steroids” applies at once: when the metric that governs distribution is engagement, and the cost of producing engagement-optimised content collapses, the observable ecology of published text becomes a function of whoever is most willing to flood it.
The 2023 Slovak parliamentary election, which European analysts have come to treat as an early warning system, demonstrated what this looks like in a contested democratic moment. Two days before polling day, during Slovakia's legally mandated pre-election silence period, a manipulated audio clip surfaced in which Michal Šimečka, the pro-European leader of the Progressive Slovakia party, could apparently be heard discussing vote-buying schemes with Monika Tódová, a well-known reporter for the independent outlet Denník N. Both Šimečka and Tódová denied the recording was real, and the fact-checking team at the French news agency AFP concluded it bore the hallmarks of AI generation. Because of the moratorium on election coverage, mainstream Slovak outlets could not set the record straight in the hours that mattered. The pro-Russian Smer party of Robert Fico won the election. Whether the clip was decisive is impossible to say. What is not in doubt is that the response infrastructure, regulatory, journalistic, and platform-based, was hours to days slower than the thing it needed to counter.
What Slovakia previewed, and what subsequent election cycles in India, Indonesia, the Philippines, the United Kingdom and the United States have elaborated, is that the interesting threshold is not technical. It is economic.
The Economics of Persuasion After Zero Marginal Cost
Classic political economy assumed that producing persuasive speech was expensive. Pamphlets required a printer. Broadcast required an FCC licence. Even the early digital era assumed that while distribution was cheap, production still cost something, whether measured in writers, ad buys, or opportunity cost. Goodhart's law, broadly stated, says that when a measure becomes a target, it ceases to be a good measure. When the target is attention, and the cost of producing another targeted message falls to zero, the entire information environment becomes an exercise in saturation.
This is where AI's contribution to the crisis becomes both distinctive and, arguably, irreversible. The newsroom collapse of the last two decades was a supply-side story: the advertising-funded model that had quietly subsidised accountability journalism since the late nineteenth century was cannibalised by Google and Meta, and local papers had nothing to replace it with. The AI-slop story is a cost asymmetry: while the production of high-quality, verifiable, labour-intensive journalism remains expensive, the production of plausible-seeming alternative content has collapsed to near zero. You can still commission a 1,500-word investigative piece for several thousand pounds. You can also commission a thousand 1,500-word pieces for the price of a large pizza, and nothing at the level of the distribution layer distinguishes them.
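To see how lopsided the economics have become, a back-of-the-envelope comparison helps. Every figure in the sketch below is an assumption chosen to be plausible rather than a quote from any rate card or pricing page.

```python
# Back-of-the-envelope comparison of the cost of commissioned journalism
# versus machine-generated copy. All figures are illustrative assumptions.

WORDS_PER_ARTICLE = 1_500

# Commissioned investigative piece: writer's fee plus editing, fact-checking
# and legal review, rolled into one assumed all-in figure (in pounds).
human_cost_per_article = 4_000.00

# Generated copy: large models are priced per million tokens. Assume roughly
# 1.3 tokens per word and an assumed blended price of 5 pounds per million.
TOKENS_PER_WORD = 1.3
PRICE_PER_MILLION_TOKENS = 5.00
ai_cost_per_article = (WORDS_PER_ARTICLE * TOKENS_PER_WORD / 1_000_000) * PRICE_PER_MILLION_TOKENS

print(f"Generated article:  £{ai_cost_per_article:.4f}")
print(f"A thousand of them: £{1_000 * ai_cost_per_article:.2f}")
print(f"Generated articles per commissioned piece: "
      f"{human_cost_per_article / ai_cost_per_article:,.0f}")
```

Under these assumptions, a thousand generated articles come in at under ten pounds, and a single commissioned investigation buys roughly four hundred thousand of them; the pizza comparison above is, if anything, generous to the pizza.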
The implications of that asymmetry for the information commons are not subtle. If the underlying economics of good information and bad information are no longer comparable, and if the platforms on which the population encounters information optimise for engagement rather than for epistemic value, then the equilibrium state of the ecosystem is not a lively marketplace of ideas. It is a saturated swamp in which the professional journalist, the professional lobbyist, and the computationally generated partisan advocate are all trying to shout over one another, and the latter two are operating at fundamentally different scales from the first. The Reuters Institute's 2025 Digital News Report, which surveyed nearly 100,000 respondents across 48 countries, found that global trust in news had plateaued at 40 per cent for the third consecutive year, with 58 per cent of all respondents saying they were worried about telling real from fake online. In the United States, that anxiety level reached 73 per cent. The audience is not merely losing confidence in particular outlets. It is losing confidence in the category.
Jürgen Habermas, the German philosopher whose 1962 work on the bourgeois public sphere gave academics a vocabulary for this kind of argument, returned to the topic in a long 2022 essay in the journal Theory, Culture & Society, unsubtly titled “Reflections and Hypotheses on a Further Structural Transformation of the Political Public Sphere”. Habermas's thesis, stripped of its formal scaffolding, was that digital platforms have fragmented the public sphere to a degree that severs the feedback between informed opinion formation and political decision-making, and that the result is structurally bad for democracy. This is not a subtle man. In his early nineties when he published the piece, he effectively said that the experiment of social-media-mediated public discourse, having run for a full generation, had delivered a verdict, and the verdict was negative. An information commons that has been saturated beyond the capacity of any reasonable citizen to process it is functionally the same as an information commons that has been destroyed.
Gore, who is neither a philosopher nor a technologist by training, arrived at the Moscone stage with a version of this argument filtered through the lens of someone who has watched American deliberative democracy decay in real time. The difference is that he now has a quantitative handle on the asymmetry, and a rough sense of how much AI has worsened it.
The Governance Toolkit, Honestly Assessed
What, then, is being done about any of it?
The European Union's AI Act, which came into force in August 2024 with a staggered implementation schedule, includes in Article 50 a set of transparency obligations that are, on paper, the most ambitious regulatory intervention yet attempted. Providers of AI systems must ensure machine-readable marking of AI-generated or AI-manipulated content. Deployers must disclose when realistic synthetic content, including deepfakes, has been artificially generated. The Article 50 provisions become enforceable in August 2026, and in December 2025 the European Commission, working through the EU AI Office, published a first draft of the Code of Practice on Transparency of AI-Generated Content. A further draft was scheduled for March 2026, with a finalised code expected in June 2026 ahead of the Article 50 enforcement date. The draft code discusses watermarking, metadata, content detection, and interoperability standards.
The United Kingdom's Online Safety Act, passed in 2023 and now moving into full enforcement under the regulator Ofcom, takes a different approach, obliging platforms to assess and mitigate a long list of enumerated harms. By December 2025, Ofcom had opened 21 investigations, launched five enforcement programmes, and begun issuing fines. These included a £20,000 initial penalty against the imageboard 4chan in August 2025, a £50,000 fine against Itai Tech in November, and a £1 million fine against the AVS Group in December, all for failures around age verification and responses to statutory information requests. The pattern suggests a regulator that will use its powers briskly on procedural breaches and more hesitantly on substantive content decisions.
In the United States, the picture is messier. The NO FAKES Act, a bipartisan bill first introduced in 2024 by Senators Chris Coons, Marsha Blackburn, Amy Klobuchar and Thom Tillis, died in committee at the end of the 118th Congress. It was reintroduced in April 2025 with broader industry support, including from major record labels, SAG-AFTRA, Google and OpenAI. Its provisions cover unauthorised digital replicas of an individual's voice or likeness, with liability extending to platforms as well as creators. Civil-liberties groups, including the Foundation for Individual Rights and Expression, have argued that the bill's definitions sweep too broadly and would chill constitutionally protected speech. Separately, California's AB 2655, the Defending Democracy from Deepfake Deception Act of 2024, was struck down in August 2025 by Judge John Mendez of the Eastern District of California on Section 230 grounds in a case brought by Elon Musk's X platform. A companion law, AB 2839, fell in the same ruling.
On the technical side, the Coalition for Content Provenance and Authenticity, known as C2PA, has been developing content credential standards that attach verifiable metadata to images, video, and audio at the moment of creation. Version 2.3 of the specification was released in 2025, the year in which Samsung's Galaxy S25 became the first smartphone line with native C2PA support, and Cloudflare became the first major content delivery network to implement content credentials across roughly a fifth of the global web. The Content Authenticity Initiative, the advocacy and adoption arm of the project, crossed 5,000 members in 2025. Provenance standards work by making absence conspicuous: if camera manufacturers, editing software, distribution platforms, and end-user devices all implement the chain, then content without credentials becomes noticeable, and content with tampered credentials becomes detectable.
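The chain-of-custody idea at the heart of content credentials is simple enough to sketch in a few lines. What follows is a conceptual toy, not the actual C2PA manifest format: real content credentials rely on cryptographic signatures from hardware and software vendors, whereas this sketch uses bare hashes, but the structural point, that every edit appends a record bound to the content and to the previous record, survives the simplification.

```python
# Toy illustration of hash-linked provenance claims. Conceptual only:
# the real C2PA format adds digital signatures and a defined manifest schema.
import hashlib
import json

def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_claim(chain: list, content: bytes, actor: str, action: str) -> list:
    """Record an action on the content, bound to the previous claim."""
    prev = chain[-1]["claim_hash"] if chain else ""
    claim = {"actor": actor, "action": action,
             "content_hash": _digest(content), "prev_claim_hash": prev}
    claim["claim_hash"] = _digest(json.dumps(claim, sort_keys=True).encode())
    return chain + [claim]

def verify(chain: list, content: bytes) -> bool:
    """Check that the chain is internally consistent and ends at this content."""
    prev = ""
    for claim in chain:
        body = {k: v for k, v in claim.items() if k != "claim_hash"}
        if claim["prev_claim_hash"] != prev:
            return False
        if _digest(json.dumps(body, sort_keys=True).encode()) != claim["claim_hash"]:
            return False
        prev = claim["claim_hash"]
    return bool(chain) and chain[-1]["content_hash"] == _digest(content)

photo = b"raw sensor data"
chain = append_claim([], photo, actor="camera", action="captured")
edited = b"raw sensor data, cropped"
chain = append_claim(chain, edited, actor="editing app", action="cropped")

print(verify(chain, edited))                   # True: history matches the content
print(verify(chain, b"synthetic substitute"))  # False: content not in the chain
```

The fragility described later in this essay follows directly from this structure: the chain means something only if every device and platform along the way preserves and checks it, which is why adoption, rather than cryptography, is the binding constraint.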
Each of these interventions is credible, serious, and, taken in isolation, almost entirely outmatched by the scale and velocity of the problem.
The Speed and Scale Mismatch
To see why, consider the temporal asymmetry. The EU AI Act was first proposed in April 2021. Its transparency obligations become enforceable in August 2026, more than five years later. The associated Code of Practice, which will provide the operational detail for how synthetic media labelling is meant to work, will be finalised only a few weeks before enforcement begins. In the same five-year window, the total number of AI-generated content farm sites tracked by NewsGuard went from a figure too low to bother measuring to over 3,000, an expansion that continues at the rate of hundreds of new sites per month. Regulatory cycles in liberal democracies are measured in legislative sessions and court challenges, typically running one to three years for primary legislation and several more for implementation. Generative-AI content cycles are measured in seconds.
This is not a failure of any particular regulator. It is a structural property of the problem. Democratic lawmaking is, by design, deliberate. The slowness is a feature, intended to ensure that coercive state power is exercised with due process. But it means that by the time a regulatory regime is in place to address a given form of informational harm, the underlying technology has typically moved on by two or three generations, and the actors using that technology have migrated to jurisdictions, formats, or modalities the regime does not cover.
The scale mismatch compounds the speed mismatch. Take content provenance as a test case. The C2PA standard works only to the extent that it is universally adopted. One camera maker, one platform, one editing tool that does not honour the chain becomes the leaky boundary through which unprovenanced content flows. Major manufacturers including Leica, Nikon, Fujifilm, Canon, Panasonic and Sony have joined the initiative, but the standard has to contend with a global installed base of billions of devices, most of which will never be updated. Meanwhile, generative models capable of producing C2PA-free synthetic images are freely available and running on consumer hardware. Provenance systems can raise the cost of faking a high-value, closely scrutinised piece of content, the provenance of a front-page wire photo, say, but they cannot by themselves raise the floor on the mass-produced synthetic slop that saturates everyday feeds, because nobody is going to check.
Watermarking proposals run into a variant of the same problem. Any watermark that is robust enough to survive adversarial processing tends also to degrade the output, and any watermark that preserves quality tends to be strippable. Academic work from 2024 and 2025 has repeatedly demonstrated that, under realistic adversarial conditions, image and text watermarks are removable with modest computational effort. As a tool for high-confidence attribution, they are a useful layer. As a universal solution, they are not.
None of this means the governance toolkit is worthless. It means that each tool is operating at a scale of years and institutions while the underlying phenomenon is operating at a scale of seconds and networks. That asymmetry, left unaddressed, guarantees that the regulatory regime is always fighting the last battle.
Technology, Economics, or Political Will?
Which brings us back to the three-part question Gore posed in San Francisco. Is the crisis of the information commons fundamentally a problem of technology, a problem of economics, or a problem of political will?
The honest answer, the answer that anyone who has spent real time with the data arrives at, is that it is all three, but one of them dominates, and the other two are more tractable than they look.
The technological layer is, paradoxically, the most solvable part of the stack. Provenance standards, watermarking, authentication protocols and platform-level detection are engineering problems with engineering solutions, and the engineering is improving. C2PA's adoption curve in 2025 was steep. The issue is not that the technology cannot work; it is that it will only work if mandated, and mandates are a function of political will.
The economic layer is harder but still legible. The fundamental asymmetry is between the cost of producing accountability journalism and the cost of producing computationally generated persuasion. Closing that gap is a matter of subsidy, either directly, as in the Scandinavian model of public support for newspapers, or indirectly, through mechanisms such as the Australian News Media Bargaining Code, which forces platforms to pay publishers for content, through tax credits, philanthropic infrastructure, and public-service broadcasters, or through the similar bargaining regimes enacted in Canada and proposed in the United States. These mechanisms are imperfect, and several of them have backfired in interesting ways, but they demonstrate that the economics of journalism is a designed outcome rather than a natural one. Again, whether any of them happens at scale is a question of political will.
Political will, then, is where the analytical buck has to stop. It is the layer at which everything else either does or does not get done, and it is the layer at which Western democracies are most obviously failing. The European Union managed to pass the AI Act because a supranational technocratic bureaucracy is insulated from the worst effects of electoral politics; the United States, whose federal legislature is broken in ways that predate the AI crisis by a decade or more, has produced no comparable national framework, and the state-level efforts that do exist are being shredded in court. The United Kingdom managed the Online Safety Act in part because online safety had been framed as a child-protection issue rather than a speech regulation issue, which made it politically unkillable. That kind of coalition does not obviously exist for the harder problem of structural information-environment regulation.
There is also a second-order version of the political-will problem that Gore was too diplomatic to name directly. Some of the actors best positioned to degrade the information commons have every incentive to do so, and the governance mechanisms meant to constrain them have become, in some jurisdictions, the targets of active hostility from those same actors. When the owner of a major social platform is personally funding lawsuits against state deepfake laws, that is not a regulatory design problem. It is a political economy problem with no regulatory solution.
Yochai Benkler, the Harvard Law scholar who has been writing about networked public spheres since the early 2000s, has argued, with his collaborators, that the earlier, more optimistic story of the networked public sphere was always contingent on a particular configuration of platforms, incentives, and institutional counterweights, and that when those contingencies changed, the same networked structure could produce very different outcomes. The lesson is not that the public sphere was better in 1972 than in 2026, which would be a sentimental lie, but that open information ecosystems are sustained by the deliberate choices of the societies that host them, and that those choices are ultimately political rather than technical.
What Would Actually Work
If the diagnosis is correct, then the set of interventions that could in principle work is constrained but not empty.
First, the supply side of professional journalism has to be stabilised, and that almost certainly means public money. The argument that state subsidy compromises editorial independence is real, but the existing trajectory of the sector makes the argument academic: there will soon be very little independent journalism left to protect if current attrition rates continue. The Scandinavian models of direct press subsidy, insulated by arm's-length distribution mechanisms, have sustained viable media ecosystems for decades without obviously capturing editorial output. They are politically contingent, of course. They require a society that has decided journalism is worth paying for.
Second, the demand side has to be reshaped. This is a function of platform design, which is a function of liability rules, which is a function of political will. The EU's Digital Services Act, which imposes systemic risk assessments on very large online platforms, is probably the closest any jurisdiction has come to a framework that can address the structural problem rather than chasing individual pieces of content. Whether it delivers depends on how vigorously the European Commission enforces it and whether the political coalitions that supported its passage hold together under pressure from platform lobbying and from member states increasingly tempted by the authoritarian side of content regulation.
Third, and most importantly, content provenance and transparency standards need to be mandated rather than voluntary, and mandated across jurisdictions rather than in a single bloc. A universal C2PA-style regime, enforced through platform liability for unprovenanced content in high-stakes contexts such as political advertising and election coverage, would not solve the problem, but it would raise the cost of industrial-scale synthetic content to the point where the economic asymmetry becomes less catastrophic. This is probably the single intervention most amenable to multilateral coordination, and the one most immediately vulnerable to political sabotage.
Fourth, and least fashionable, is the rebuilding of the institutional middle layer of democratic information: libraries, public broadcasters, professional fact-checking organisations, local civic infrastructure. These are the civic equivalents of wetlands: unglamorous, slow-growing, and indispensable to the health of the larger system. The last two decades of policy discourse have treated them as legacy costs to be minimised. If Gore's argument is right, they are the only ballast democracies have against the saturation effects the rest of this essay has described.
A Closing That Does Not Cop Out
Gore's 6:1 ratio is not, in the end, the most important number in this story. The most important number is the one that describes the rate at which synthetic content can be produced relative to the rate at which human institutions can respond to it, and that number is moving in the wrong direction by orders of magnitude per year. Technology, economics, and political will are all layered problems, but political will is the load-bearing one. The technology is improving. The economics are tractable if anyone decides they are worth fixing. The political will to do either at the required scale is absent in most of the major democracies, and the absence is getting worse rather than better.
What makes Gore's framing useful, for all the former-vice-presidential cadence, is that he refused to rest on either of the two conventional consolations. He did not suggest that the problem would solve itself as users grew more sceptical; the Reuters Institute data make clear that scepticism has risen in lockstep with saturation, and the combined effect is not a healthier information environment but a more paralysed one. Nor did he suggest that a single technical fix, a watermark, a labelling regime, a platform feature, would be enough; he is old enough to remember the 1990s arguments about filtering and the 2000s arguments about fact-checking, and he has watched both get overtaken by the thing they were meant to contain.
The position he gestured at, and the position the evidence supports, is that the information commons is a public good that has to be maintained through deliberate, ongoing, political action, and that the only question worth arguing about is whether the societies that claim to value it are willing to pay for its maintenance in something other than retrospective regret. That argument is harder to make in a ballroom full of AI executives than almost anywhere else, because the incentives of the people in the room are, to a significant extent, aligned with the production side of the asymmetry rather than the mitigation side. Gore made it anyway.
There is a version of the optimistic tech-conference speech in which the speaker ends by asserting that the same tools that broke the information environment can be deployed to fix it, and everyone claps politely and goes to the evening reception. Gore did not give that speech. What he offered instead was closer to an invoice: the bill for two decades of neglect was being tallied in real time, the interest was compounding faster than the principal, and the creditor, in this metaphor, was democratic self-government itself. The bill will be paid. The only choice is in what currency.
Whether liberal democracies will choose to pay it in the form of regulation, subsidy, and institutional rebuilding, or in the form of the slow dissolution of the shared epistemic ground on which self-rule depends, is not a question any technologist can answer, and it is not a question any regulator can answer alone. It is the kind of question that gets answered, if it gets answered at all, one political coalition and one public decision at a time. In San Francisco on 7 April 2026, Al Gore did what Al Gore has always done, which is to keep asking it until someone listens.
References
- Muck Rack (2022). PR pros earned $10K more than journalists in 2021 and other must-know stats. Muck Rack Blog, April 2022.
- Muck Rack (2018). There are now more than 6 PR pros for every journalist. Muck Rack Blog, September 2018.
- Pew Research Center (2021). U.S. newsroom employment has fallen 26% since 2008. Pew Research Center, July 2021.
- US Bureau of Labor Statistics (2025). Industries with employment decreases from 2000 to 2024. The Economics Daily, 2025.
- Abernathy, P. and Medill Local News Initiative (2024). The State of Local News 2024. Northwestern University Medill School of Journalism, 2024.
- NewsGuard (2024-2025). Tracking AI-enabled Misinformation: AI Content Farm sites and Top False Claims Generated by Artificial Intelligence Tools. NewsGuard Special Reports, 2024-2025.
- NewsGuard and Pangram Labs (2025). NewsGuard Launches Real-time AI Content Farm Detection Datastream. NewsGuard Press Release, 2025.
- VOA News and Intel 471 (2024). In US, fake news websites now outnumber real local media sites. Voice of America, 2024.
- ProPublica (2024). Investigation into “Catholic”-branded pink-slime newspapers in swing states. ProPublica, October 2024.
- Harvard Kennedy School Misinformation Review (2024). Beyond the deepfake hype: AI, democracy, and “the Slovak case”. HKS Misinformation Review, 2024.
- Bloomberg (2023). AI Deepfakes Used In Slovakia To Spread Disinformation. Bloomberg, September 2023.
- Reuters Institute for the Study of Journalism (2025). Digital News Report 2025. University of Oxford, June 2025.
- Habermas, J. (2022). Reflections and Hypotheses on a Further Structural Transformation of the Political Public Sphere. Theory, Culture & Society, 39(4): 145-171.
- European Commission (2024-2026). Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Official Journal of the European Union.
- European Commission (2025). Draft Code of Practice on Transparency of AI-Generated Content. EU AI Office, December 2025.
- Ofcom (2025). Online Safety Act enforcement updates and investigations. Ofcom, 2025.
- US Congress (2024-2025). NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act). US Senate and House of Representatives, 2024-2025.
- Mendez, J., US District Court for the Eastern District of California (2025). Ruling in X Corp v. Bonta on AB 2655 and AB 2839. August 2025.
- Coalition for Content Provenance and Authenticity (2025). Content Credentials 2.3 Specification and Five Year Impact Report. C2PA, 2025.
- Content Authenticity Initiative (2025). 5,000 members milestone announcement. Adobe and partners, 2025.
- Gore, A. (2007). The Assault on Reason. Penguin Press, May 2007.
- Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, April 2006.
- HumanX Conference (2026). Agenda and speaker listings. HumanX, San Francisco, April 6-9, 2026.
- Cryptonomist (2026). AI Governance: Gore and Topol at HUMANX. Cryptonomist, 7 April 2026.

Tim Green
UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk








