Capture by Design: How Frontier Labs Wrote AI Rules Before Regulators Arrived

On 27 February 2026, the United States government declared war on one of its most politically peculiar citizens: an AI company founded by people who had left OpenAI because they thought AI was too dangerous, now blacklisted by a Republican administration because they thought AI was too dangerous. Within hours, Pete Hegseth and Donald Trump took to social media to accuse Anthropic of endangering national security. Federal agencies were ordered to stop using Claude. The Pentagon began the paperwork to brand the company a “supply chain risk to national security,” a designation normally reserved for firms with ties to adversary states. Dario Amodei, in an internal memo reported by The Information, told staff the President disliked Anthropic for failing to offer “dictator-style praise.” Trump called the company “radical left” and “woke.” It was, in its peculiar way, the most clarifying moment American AI governance has had in a decade.
On 26 March, Judge Rita Lin of the Northern District of California issued a preliminary injunction blocking the ban. Her language was unusually sharp for a federal district court opinion. “Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation,” she wrote, adding that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” The administration appealed within a week. As of today, 9 April 2026, the dispute is live, unresolved, and legally unprecedented.
It is tempting to read all of this as political melodrama, one more instalment in the Trump administration's habit of punishing companies that talk back. That reading is not wrong. It is just radically insufficient. What the Anthropic fight has exposed is not a Trump problem, or an Anthropic problem, or even an AI-safety-versus-national-security problem. It is something stranger: the firms building the most consequential computational systems of our era are simultaneously the dominant voices shaping how those systems will be governed, and the public clash between one of those firms and the White House has revealed just how few independent levers anyone else has.
A commentary published in early April in the policy trade press put it this way: the dispute reveals something structurally troubling, because it shows that the only place serious arguments about frontier AI are happening at all is inside the rooms of the companies that build it. Take the companies away and the rooms are empty. That is regulatory capture of a sort, but a kind the literature has never quite described. It is capture that formed before effective regulation existed to be captured. The frontier labs did not corrupt a mature regulatory apparatus. They grew up in a vacuum and then offered, helpfully, to fill it themselves.
The Shape of the Dispute
Stripped of its political theatre, the Anthropic fight is a contract dispute. The Department of Defense wanted access to Claude for “all lawful purposes,” a formulation broad enough to encompass fully autonomous lethal targeting, mass surveillance of US persons, and any other application a creative procurement officer might dream up. Anthropic, whose usage policy explicitly prohibits those applications, refused. The company offered workable alternatives: access for non-weaponised use cases, compartmentalised deployments with documented guardrails, joint review of edge cases. The Pentagon's position hardened. Anthropic went public. The administration retaliated. A federal judge found the retaliation probably illegal. The appeal is ongoing.
What makes the dispute so destabilising for the governance conversation is that Anthropic is not behaving as the capture literature would predict. The canonical story assumes that the regulated industry quietly lobbies for weaker rules, funds sympathetic experts, and ends up with a regulatory environment that looks stringent on paper and is toothless in practice. Anthropic is doing something almost the opposite. It is publicly advocating for stricter chip export controls that antagonise Nvidia, Microsoft, and much of the rest of the industry. It has argued for pre-deployment evaluation regimes that would bind it as tightly as its competitors. It has, at real commercial cost, walked away from contracts the Pentagon desperately wanted signed.
And yet the capture problem has not gone away. It has become harder to see. Because even when the “good” frontier lab fights the administration in court over model use policies, the underlying structural condition is unchanged: Anthropic is still the entity telling the public how dangerous its own models are. Anthropic is still defining what an acceptable evaluation methodology looks like. Anthropic is still running the red teams that decide which capabilities deserve disclosure. Anthropic is still writing the blog posts the policy community quotes back to itself. The dispute is not a case of capture failing. It is a case of capture succeeding so thoroughly that the public conversation happens entirely within the conceptual vocabulary set by the labs themselves.
A New Kind of Capture
Regulatory capture, as the economists George Stigler and Sam Peltzman formalised it in the 1970s, is a corruption of maturity. It happens after a regulator exists, after rules are written, after a bureaucratic routine sets in and the small, concentrated, informed industry learns how to extract rents from the large, diffuse, ignorant public. The paradigmatic examples are the Interstate Commerce Commission and the railroads, the Civil Aeronautics Board and the airlines, the state liquor boards and the wholesalers. These are stories of drift. Institutions designed to constrain powerful interests began to serve them, because the powerful interests were the only ones who showed up to the meetings.
The AI case is categorically different. There is no mature AI regulator. There is nothing to drift away from. Instead, what the industry has done is populate the pre-regulatory space with its own objects: voluntary commitments, self-administered evaluation regimes, multi-stakeholder forums, “model cards,” “system cards,” responsible scaling policies, frontier model forums. Each has legitimate merit on its own terms. Taken together, they form a lattice of quasi-governance that occupies the conceptual territory where independent regulation might otherwise live. By the time Congress or a European regulator shows up with the ambition to do something new, the intellectual infrastructure is already in place, and it has been built by the firms being regulated. The regulator is not captured. The regulatory idea is.
Call this capture-in-utero, or pre-regulatory capture, or, more bluntly, capture by design. The mechanism is not lobbying in the traditional sense. It is something closer to epistemic dominance. The labs hold the data, run the experiments, publish the papers, train the graduates, fund the think tanks, convene the conferences, and shape the vocabulary. When a newly arrived policymaker asks what the state of the art on dangerous capability evaluation is, the only answer available is the one the labs have written. There is no counter-literature, because there is no counter-infrastructure to produce it.
The United Kingdom's AI Security Institute is one of the few attempts anywhere in the world to build such counter-infrastructure. It is important, underfunded, and fragile. It is not yet large enough to change the overall picture.
The Voluntary Commitment Trap
To see the capture dynamic concretely, consider the July 2023 White House voluntary commitments, the document that came to define Biden-era AI governance before the Executive Order did. Seven companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) signed up to eight principles covering security, safety, and public trust. Eight more signed on in September. Apple joined in July 2024. For two years, the voluntary commitments have been the closest thing the United States has had to a national AI policy, cited in speeches, referenced in the Executive Order, and treated in the press as a kind of proto-statute.
An academic study published in 2025 attempted, probably for the first time, to evaluate how well the signatories had actually performed against their own commitments. The results were bleak. The average score across all companies was 53 per cent. The highest scorer, OpenAI, managed 83 per cent. On the commitment most relevant to catastrophic risk, model weight security, the average was 17 per cent. Eleven of the sixteen companies scored zero. Nobody had been penalised, because there were no penalties. Nobody had been publicly shamed, because the only people qualified to evaluate compliance were the companies themselves or the small network of nonprofits they funded. The commitments functioned as a legitimising device: a way for the industry to say governance was happening, and for the administration to say governance was happening, while almost nothing resembling governance was actually happening.
The Frontier Model Forum, founded by Anthropic, Google, Microsoft, and OpenAI the same summer, performed a similar legitimising role. It produced whitepapers on responsible scaling. It issued definitional statements about frontier models. It convened working groups. Its existence has been taken as evidence of self-regulation. And it may well be. But it is self-regulation in the most literal sense: regulation of the self, by the self, for the self, with no exit option for anyone who disagrees.
This is not a moral failure on the part of the individuals involved. Most of them, including the ones at Anthropic now fighting the Pentagon in court, are earnest and thoughtful and alarmed in the way safety-focused engineers tend to be alarmed. The problem is structural. When the same small group of organisations sets the agenda, runs the evaluations, writes the papers, convenes the meetings, and authors the voluntary commitments, the resulting governance architecture reflects their view of the world, including the things they cannot see from inside it.
NIST, CAISI, and the Voluntary Framework Problem
Across town from the White House, the National Institute of Standards and Technology has spent the last three years constructing what it calls the AI Risk Management Framework. The first version was released in January 2023. A generative AI profile followed in 2024. A March 2025 update emphasises model provenance, data integrity, and third-party assessment. Colorado's AI Act now gives organisations a legal affirmative defence if they can demonstrate alignment with the framework. Regulators at the FDA, SEC, and CFPB reference it with increasing frequency. It is, in many ways, the most serious piece of technical policy work the US government has produced on AI.
It is also, by design, voluntary. The framework is a menu of considerations, not a set of binding requirements. It is the product of a lengthy consultation process in which the firms best positioned to influence its development were, inevitably, the firms with the deepest technical staff and the most resources to commit to standards meetings. The resulting document is careful, impressively researched, and structurally unable to compel anyone to do anything. Its value, advocates argue, is that it provides a common vocabulary that future binding rules can rest on. Its critics respond that the vocabulary itself was shaped by the parties being regulated, and that the “future binding rules” slot remains empty.
In June 2025, the Trump administration renamed the US AI Safety Institute the Center for AI Standards and Innovation, or CAISI. Commerce Secretary Howard Lutnick's accompanying statement was unusually blunt: “For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards.” The institute kept most of its responsibilities and lost most of its claim to being a regulator-in-waiting. “Safety” was removed from the name. “Innovation” was added. The signal was received.
The rebrand matters because it demonstrates how thin the government's own regulatory identity turned out to be. The institute had been founded in 2023 to give the federal government an independent foothold in AI evaluation. It signed memorandums of understanding with OpenAI and Anthropic that granted formal pre-release model access. It participated in joint evaluations with the UK. When the political winds shifted, it was renamed in a morning, by press release, without legislation, without hearings. An institution that can be erased by a name change was not an institution. It was a vibe.
The Epistemic Monopoly Problem
Behind all of this sits the deepest issue in contemporary AI governance: the people who know how these systems behave are the people who built them. The frontier labs employ the overwhelming majority of researchers qualified to evaluate frontier models. They own the compute required to run meaningful evaluations. They hold the data about how their models respond to inputs at scale. They control the access terms under which external parties can test anything. If a regulator wants to know whether Claude Opus 4 will attempt to exfiltrate its own weights under pressure, the only empirically grounded answer comes from Anthropic's own red team, which ran the tests and wrote the system card.
This is the epistemic monopoly problem, and it is why the usual tools of regulatory design run out of road. An environmental regulator confronting an oil refinery can, in principle, send its own inspectors with their own instruments to measure stack emissions. A pharmaceutical regulator can demand raw trial data and reproduce the analyses. An aviation regulator can order a grounding and inspect every aircraft. These tools work because the underlying phenomena can be observed and measured by parties other than the regulated entity.
Frontier AI systems are harder. The behaviours that matter only emerge at scale, require enormous compute to probe, are sensitive to exact prompting and scaffolding, and change qualitatively from one model generation to the next. An independent evaluator who shows up with last year's tools and last year's concepts will produce last year's findings. Keeping up with the frontier requires being at the frontier. Being at the frontier requires resources only the frontier labs, and a handful of national governments, can marshal.
The UK AI Security Institute, formerly the AI Safety Institute, was founded in November 2023 as the first serious national attempt to build independent evaluation capacity. It has priority access to leading models under negotiated terms. It has recruited strong technical staff from industry and academia. It has published credible evaluations of major releases. It has entered joint work with the US institute and the European Commission. It is the most important institutional innovation in AI governance of the last three years. And it is still, structurally, operating on terms the labs agree to. The access arrangements can be renegotiated. The evaluation regimes depend on lab cooperation for weights and scaffolding. The institute's budget is a rounding error next to the compute expenditure of any frontier lab it evaluates.
If capture-in-utero is going to be broken anywhere, it will probably be broken in places that look like AISI, because no other institutional form is currently on offer. But the gap between what AISI has and what genuinely independent evaluation would require is vast, and closing it would cost money no democratic government has yet shown willingness to spend.
What Independent Regulation Would Actually Need
Here is the uncomfortable checklist. If you want an AI regulator that is not structurally dependent on the industry it regulates, you need, at minimum, the following.
First, independent model access. Not memorandums of understanding that can be withdrawn. Not voluntary pre-release previews. Statutory authority to compel access to any model above a defined capability threshold, including access to weights, training data summaries, evaluation logs, and internal red team results, on terms the regulator sets and the company must obey. This is how drug regulation works. It is not how AI regulation works anywhere.
Second, independent compute. A regulator that has to ask a lab for GPU hours is not independent. The UK's AISI has begun to build its own evaluation infrastructure. The US institute, before its rebranding as CAISI, was beginning to do the same. Neither has the compute budget of even a mid-tier training run. Building a genuinely independent evaluation stack at frontier scale would cost billions of pounds or dollars per year, and would have to be refreshed as the frontier moves.
Third, independent red-teaming capacity. Not just the compute to run evaluations, but the human expertise to design them. This means recruiting senior ML researchers at salaries that compete with industry, retaining them, and resisting the gravitational pull of the revolving door. The UK has had modest success. The US has struggled. No country has cracked this at scale.
Fourth, funding models that do not depend on industry fees or voluntary cooperation. A regulator funded by the companies it regulates is, by definition, captured. A regulator funded by general taxation, with budgets insulated from political pressure, is the only durable model. The closest analogues are the UK's Office of Communications and Germany's Bundesnetzagentur, neither perfect but both demonstrating the form.
Fifth, personnel pipelines that do not rotate through frontier labs. This is the hardest, because the labs are also where most of the relevant tacit knowledge is held. A system in which regulators are recruited from labs, serve a term, and return to labs at higher salaries will, on average, regulate in favour of labs. Partial solutions include lifetime bans on post-regulator employment at regulated entities, public-sector research salaries, and academic programmes designed to produce regulators rather than industry researchers. None of this is currently on offer anywhere.
Sixth, statutory authority that does not depend on industry consent. The current regime is almost entirely built on consent. The voluntary commitments are consensual. The NIST framework is consensual. The frontier model forum is consensual. Even the UK AISI's access to models rests on a cooperation agreement, not a statute. Genuine independence requires the ability to act against the wishes of the regulated party, with consequences the regulated party cannot unilaterally avoid. This is the ordinary meaning of regulation in every other sector. It is the exceptional, almost fantastical prospect in AI.
A regulator with all six of these attributes exists nowhere in the world. A regulator with even three of them, applied to frontier AI, exists nowhere in the world. The question the April commentary implicitly asked is whether the current trajectory is capable of producing such a regulator, or whether it is in fact foreclosing one.
Why the Current Trajectory Cannot Get There
There are three structural reasons to think the current model cannot produce genuinely independent regulation, and all three are visible in the Anthropic fight.
The first is that the language of governance has already been colonised. When the Pentagon demanded access to Claude for “all lawful purposes,” it was using a contract formulation rather than a regulatory one. There is no regulatory statute it could have cited, because none exists. The dispute played out in civil court, under general administrative-law principles, because the alternative regulatory forum did not exist. And when Anthropic responded, it invoked its own usage policy, its own responsible scaling policy, its own alignment commitments, because those are the governance artefacts that exist. Both sides were arguing inside a conceptual space built by the industry.
The second is that the institutional capacity to build an alternative space is being actively dismantled. The CAISI rebrand stripped “safety” from the name of the only federal body that had begun to accumulate independent evaluation credibility. The Trump administration's March 2025 Executive Order on AI emphasised deregulation and industry partnership. The Office of Science and Technology Policy's approach to frontier AI has been to convene rather than constrain. A modest but real build-out of independent regulatory capacity that began in 2023 has, over the past twelve months, been paused or reversed.
The third is that the epistemic monopoly is not dissolving. It is intensifying. As models get larger, the compute required to evaluate them grows. As training regimes get more idiosyncratic, the institutional knowledge required to interpret behaviour grows. As release cycles accelerate, the window for external evaluation shrinks. The gap between what the frontier labs know and what anyone else knows is widening, not narrowing, and a regulatory model that assumes eventual parity is planning for a world moving in the opposite direction.
Put the three together and you get something like this: the governance conversation is conducted in a vocabulary the industry wrote, the institutions that might have translated that conversation into law are being weakened, and the knowledge asymmetry that would make independent translation possible is getting worse.
The Alternatives Nobody Wants to Name
If the industry-led standards model cannot produce independent regulation, the honest question is what might. There are a handful of real options, and each is politically unpalatable for different reasons.
A public-option lab, funded by general taxation and operated on a non-profit basis with a mandate to produce open evaluations of frontier models, would break the epistemic monopoly at the cost of enormous public expenditure. Think of it as CERN for AI safety. The scientific precedent is sound: hard physics problems were addressed by pooling national resources into institutions too big for any single corporation to build. The political precedent is harder, because the relevant national governments are currently engaged in a race to attract private AI investment, not to compete with it.
An international body with teeth, possibly grafted onto the International Atomic Energy Agency or designed from scratch, would pool regulatory capacity across states that individually cannot afford it. The idea has been floated repeatedly, including by Amodei himself in slightly different form, and runs into the obvious problem that the only state whose participation would be decisive, the United States, is currently hostile to the very premise of international AI governance. China's participation is even more conditional. The UK, the EU, Canada, Japan, and others might form a coalition of the willing, but without US participation it has no authority over the labs, which are US-domiciled.
A pre-deployment licensing regime, in which models above a defined capability threshold cannot be deployed without regulatory approval, would replicate the model used for pharmaceuticals and civil aviation. The EU AI Act gestures at this for “general-purpose AI models with systemic risk,” though the actual technical standards defining those categories are being written, as it happens, by CEN-CENELEC committees heavily populated by industry. A study by scholars at the University of Birmingham published in late 2025 warned that the European standard-setting process is “open to influence by industry players.” A licensing regime that depends on industry-authored standards is not quite capture, but it is not independent regulation either.
Liability reform, which would expose frontier labs to damages for harms their models cause, would create market incentives for safety that do not require a functioning regulator to enforce them. The common-law position is uncertain. Federal pre-emption is being debated. The political economy is delicate, because any liability regime stringent enough to change behaviour would be, from the industry's perspective, indistinguishable from an existential threat. Expect ferocious resistance.
Antitrust as governance, the approach favoured by Lina Khan during her tenure as FTC chair and still championed by some legal scholars, would use competition law to prevent the consolidation of the frontier lab sector into a handful of firms whose scale makes independent evaluation impossible. The theory has merit. The practical obstacle is that the horse has bolted. OpenAI, Anthropic, Google DeepMind, Meta AI, and a handful of others already constitute the competitive landscape, and breaking them up would not obviously produce the diversified ecosystem the theory requires.
None of these options is a silver bullet. All would require political will, public expenditure, and institutional courage that no major democracy has yet displayed. And all would have to contend with the argument, which the industry will press at every opportunity, that serious independent regulation risks ceding the frontier to China. That argument is not baseless. It is also the argument that has been used to justify the current regulatory vacuum, which is producing, among other things, the Anthropic fight.
A Position, Because WIRED Articles Take Them
So here is where I land. The Anthropic dispute is not evidence that the system is working. It is not the hopeful story of a responsible company standing up to an authoritarian administration, though it is also that. It is evidence that the structural condition of contemporary AI governance has become untenable: the only serious arguments about frontier AI safety are happening inside, or between, a small number of commercial entities, and the institutional forms that would allow those arguments to be adjudicated by anyone else have been allowed to atrophy or have never been built.
Anthropic is behaving well by most reasonable measures. It has taken real commercial risks. Its leadership has refused to back down under political pressure that would have caused most firms to fold in an afternoon. Its safety research is serious. Its advocacy for stricter export controls is genuinely costly. None of that changes the underlying problem, which is that we are trusting a private company to behave well because we have no other mechanism left. That is not a sustainable model of governance. It is not even a model of governance. It is an improvisation we have convinced ourselves to call one.
The realistic programme for the next five years has to include, at minimum, a ten-fold increase in public funding for independent AI evaluation capacity; statutory authority for pre-deployment model access, modelled on pharmaceutical regulation and immune from administrative whim; the rebuilding of CAISI, or something like it, with a mandate protected by legislation rather than press release; the articulation of a meaningful liability regime for frontier model harms; and the slow, unglamorous work of building academic pipelines that produce regulators, not just researchers who will be hired away by labs at three times the salary. None of this will happen quickly. Some may not happen at all. But the alternative is a governance regime defined entirely by the companies being governed, revealed as fiction the moment one of those companies and one administration happen to disagree.
The techno-optimists will tell you the market will sort this out, that safety-focused labs will outcompete reckless ones, and that regulation is premature. They are wrong. The market did not sort out financial risk before 2008. It did not sort out vehicle safety before Ralph Nader. It did not sort out pharmaceutical risk before thalidomide. Markets do not sort out externalities. They produce them.
The doomers will tell you that nothing short of a global pause will suffice, and that any attempt at meaningful regulation is futile because the labs will route around it. They are also wrong. Regulation, when it is built on independent capacity and statutory authority, works. It worked for aviation. It worked for pharmaceuticals. It worked for broadcast spectrum. It works imperfectly, slowly, and often enough to justify the effort.
What the Anthropic fight has revealed is that the current model has delivered neither the market-based correction the optimists promised nor the regulatory architecture the doomers demanded. It has delivered a regime in which a responsible firm can only resist political pressure by going to federal court, a judge can only protect it by invoking general First Amendment principles, and the only governance artefacts invoked on either side are documents the firm itself wrote. That is not capture in the classical sense. It is something more peculiar: a regulatory conversation that has outsourced its own vocabulary, its own evidence base, and its own institutional memory to the entities it was supposed to govern. Capture by design. Capture before the fact. Capture that looks, from the right angle, indistinguishable from the absence of regulation it was built to describe.
The way out is not rhetorical. It is institutional. It requires spending money and writing statutes and training people and accepting that the frontier will always be a little ahead of the oversight, and that the task is to narrow the gap, not close it. It requires, above all, abandoning the polite fiction that what we currently have is a governance regime rather than a promise. The promise has been kept, intermittently, by companies acting in good faith. But good faith is not a regulatory design. It is a hope, and hope has never been the right instrument for managing industrial risk.
A decade from now, when the historians of AI governance try to explain how we ended up with the regime we ended up with, the Anthropic fight will appear in their footnotes as the moment the structure became visible. One company, one administration, one federal judge, and, underneath it all, the empty space where independent regulation was supposed to be. The space is still empty today. Whether it remains empty is the question we should be arguing about, in language we did not borrow from the firms that stand to benefit most from the answer.
References
- NPR. “Judge temporarily blocks Trump administration's Anthropic ban.” 26 March 2026. https://www.npr.org/2026/03/26/nx-s1-5762971/judge-temporarily-blocks-anthropic-ban
- CNBC. “Anthropic wins preliminary injunction in DOD fight as judge cites 'First Amendment retaliation'.” 26 March 2026. https://www.cnbc.com/2026/03/26/anthropic-pentagon-dod-claude-court-ruling.html
- Federal News Network. “Trump orders US agencies to stop using Anthropic technology in clash over AI safety.” February 2026. https://federalnewsnetwork.com/artificial-intelligence/2026/02/anthropic-refuses-to-bend-to-pentagon-on-ai-safeguards-as-dispute-nears-deadline/
- SiliconANGLE. “Anthropic's dispute with US government exposes deeper rifts over AI governance, risk and control.” 7 April 2026. https://siliconangle.com/2026/04/07/anthropics-dispute-us-government-exposes-deeper-rifts-ai-governance-risk-control/
- Axios. “Scoop: White House casts doubt on Pentagon-Anthropic reconciliation.” 4 March 2026. https://www.axios.com/2026/03/04/pentagon-anthropic-white-house-amodei
- The National. “Pentagon declares Anthropic AI 'supply chain risk to national security'.” 27 February 2026. https://www.thenationalnews.com/future/technology/2026/02/27/trump-anthropic-ai-dario-amodei/
- The Hill. “Anthropic CEO urges tighter AI chip export controls.” https://thehill.com/policy/technology/5504408-anthropic-ceo-dario-amodei-trump-chip-policy/
- Washington Technology. “Judge blocks DOD's ban on Anthropic, calls it First Amendment retaliation.” March 2026.
- National Institute of Standards and Technology. “AI Risk Management Framework.” https://www.nist.gov/itl/ai-risk-management-framework
- FedScoop. “Trump administration rebrands AI Safety Institute.” June 2025. https://fedscoop.com/trump-administration-rebrands-ai-safety-institute-aisi-caisi/
- TechPolicy.Press. “Renaming the US AI Safety Institute Is About Priorities, Not Semantics.” https://www.techpolicy.press/from-safety-to-security-renaming-the-us-ai-safety-institute-is-not-just-semantics/
- Broadband Breakfast. “AI Safety Institute Renamed Center for AI Standards and Innovation.” https://broadbandbreakfast.com/ai-safety-institute-renamed-center-for-ai-standards-and-innovation/
- UK AI Security Institute. https://www.aisi.gov.uk
- TIME. “Inside the U.K.'s Bold Experiment in AI Safety.” https://time.com/collections/davos-2025/7204670/uk-ai-safety-institute/
- Centre for Future Generations. “The AI safety institute network: who, what and how?” https://cfg.eu/the-ai-safety-institute-network-who-what-and-how/
- Bommasani et al. “Do AI Companies Make Good on Voluntary Commitments to the White House?” arXiv:2508.08345. https://arxiv.org/pdf/2508.08345
- MIT Technology Review. “AI companies promised to self-regulate one year ago. What's changed?” 22 July 2024. https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/
- GovAI Blog. “Putting New AI Lab Commitments in Context.” https://www.governance.ai/post/putting-new-ai-lab-commitments-in-context
- Cantero Gamito, Marta. “From Consensus to Exceptionality: What the EU's AI Standards Crisis Reveals About Delegated Technical Governance.” realaw.blog, 28 November 2025. https://realaw.blog/2025/11/28/from-consensus-to-exceptionality-what-the-eus-ai-standards-crisis-reveals-about-delegated-technical-governance-by-marta-cantero-gamito/
- CEPS. “With the AI Act, we need to mind the standards gap.” https://www.ceps.eu/with-the-ai-act-we-need-to-mind-the-standards-gap/
- University of Birmingham. “European technical standard-setting process open to influence by industry players, experts warn.” 2025. https://www.birmingham.ac.uk/news/2025/european-technical-standard-setting-process-open-to-influence-by-industry-players-experts-warn
- CMS Law. “Speed vs Safety: CEN-CENELEC fast-tracks AI standards.” https://cms.law/en/gbr/publication/speed-vs-safety-cen-cenelec-fast-tracks-ai-standards
- Amodei, Dario. “Machines of Loving Grace.” October 2024. https://www.darioamodei.com/essay/machines-of-loving-grace
- Amodei, Dario. “The Adolescence of Technology.” January 2026. https://darioamodei.com/essay/the-adolescence-of-technology
- TIME. “Anthropic's Big Washington Push.” https://time.com/7317553/anthropic-futures-forum-dc/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk