The Governance Crisis: AI Moves in Weeks, Laws Take Years

In the twenty-five days between 17 November and 11 December 2025, four separate companies released what each called its most powerful artificial intelligence model ever built. xAI shipped Grok 4.1. Google launched Gemini 3. Anthropic dropped Claude Opus 4.5. OpenAI unveiled GPT-5.2. Before anyone in Brussels, Washington, or London could finish reading the safety documentation for one of these systems, the next had already landed. Then, barely two months later, Anthropic released Claude Sonnet 4.6, continuing the breakneck pace.

This is not a temporary burst. It is the new normal. OpenAI has surpassed $25 billion in annualised revenue and is reportedly taking early steps towards an IPO. Anthropic is approaching $19 billion. According to BCG's AI Radar 2026, 65 per cent of CEOs say accelerating AI is among their top three priorities for the year. McKinsey reports that 88 per cent of organisations now use AI technology in at least one business function. The competitive pressure is relentless, and it exposes a structural problem that no amount of political will or regulatory ambition has yet solved: the institutions charged with governing artificial intelligence operate on timescales that bear essentially no relationship to the timescales on which the technology itself evolves. The question is no longer whether reactive regulation can keep up. It cannot. The question is what replaces it.

When Legislation Moves at Geological Speed

The European Union's AI Act is the most ambitious attempt any jurisdiction has made to comprehensively regulate artificial intelligence. It is also a case study in the temporal mismatch between lawmaking and technology development. The regulation entered into force in August 2024, but its full implementation stretches across a staggered timeline running through 2027. Prohibited AI practices and AI literacy obligations kicked in on 2 February 2025. Rules for general-purpose AI models applied from August 2025. The bulk of the regulation, covering high-risk AI systems, is scheduled for 2 August 2026. Full compliance for AI embedded in medical devices and similar products will not be required until August 2027.

Even this elongated timeline has proved too aggressive. Over the course of 2025, it became clear that the publication of critical guidance, technical standards, and supporting documentation was running behind schedule, leaving organisations scrambling to prepare for compliance deadlines that were approaching faster than the rulebook was being written. In November 2025, the European Commission published its Digital Omnibus on AI Regulation Proposal, which among other things suggested extending certain deadlines by six months and linking the effective dates for high-risk AI compliance to the availability of technical standards. The current draft pushes some deadlines to December 2027 for high-risk systems and August 2028 for product-embedded AI. Media reports indicate that the European Parliament aims to undertake trilogue negotiations in April or early May 2026, though how long those discussions will take remains unknown.

The numbers tell their own story. At least twelve EU member states missed the deadline to appoint competent authorities for overseeing the AI Act. Nineteen had not designated single points of contact. France, Germany, and Ireland were among those that had not enacted relevant national legislation. Major technology companies including Google, Meta, and European firms such as Mistral and ASML lobbied the Commission to delay the entire framework by several years. The Commission initially rebuffed these calls. “There is no stop the clock. There is no grace period. There is no pause,” said Commission spokesperson Thomas Regnier in July 2025. Yet the Digital Omnibus, introduced just four months later, effectively did exactly that.

Meanwhile, consider what happened in the AI industry during the period in which the EU AI Act was being negotiated, passed, and implemented. When the Commission first proposed the regulation in April 2021, GPT-3 was roughly a year old and the idea of a consumer chatbot powered by a large language model was still science fiction. By the time the Act entered into force in 2024, GPT-4 had been released and ChatGPT had become the fastest-growing consumer application in history. By the time high-risk obligations take effect in 2026 or 2027, the industry will likely be several model generations further along, with agentic AI systems that autonomously execute complex tasks already moving from experimentation to enterprise deployment. Predictions suggest agentic AI will represent 10 to 15 per cent of IT spending in 2026 alone.

The American Patchwork

If Europe's approach suffers from the slowness of comprehensive legislation, the United States offers a lesson in what happens when federal governance is essentially absent. Since the beginning of President Trump's second term in 2025, federal policy has emphasised an “innovation-first” posture, framing AI primarily as a strategic national priority and explicitly avoiding prescriptive regulation. Executive Order 14179, signed in January 2025, directs how federal agencies oversee the use of AI while emphasising that development must maintain US leadership and remain free from what the administration characterises as ideological bias.

This has created a peculiar vacuum that states have rushed to fill. The Colorado AI Act is scheduled to take effect in June 2026. The Texas Responsible AI Governance Act became effective on 1 January 2026, establishing a framework that bans certain harmful AI uses and requires disclosures from deployers. Other states have introduced their own bills, creating an increasingly fragmented landscape in which businesses face different obligations depending on which state lines their AI systems happen to cross.

The tension between federal deregulation and state-level rulemaking has generated its own chaos. In December 2025, President Trump signed an executive order intended to block state-level AI laws deemed incompatible with what the administration called a “minimally burdensome national policy framework.” A counter-bill was promptly introduced to block the blocking. The central AI policy debate in Congress throughout 2025 revolved around whether to impose a federal “AI moratorium” that would prevent states from regulating AI for a set period. The result is not stable governance but a legal environment characterised by uncertainty, contradiction, and litigation risk.

Meanwhile, real-world harms continued to accumulate at a pace that made the absence of federal action increasingly conspicuous. Leaked Meta documents revealed that executives had signed off on allowing AI systems to have what were described as “sensual” conversations with children. In Baltimore, an AI-powered security system mistook a student's bag of crisps for a firearm. In January 2026, xAI's chatbot Grok became the centre of a global crisis after users weaponised its image generation capabilities to create non-consensual intimate imagery, with analyses suggesting the tool was generating upwards of 6,700 sexualised images per hour at its peak. AI and technology companies dramatically escalated political spending in response, with Meta launching a $65 million campaign in February 2026 to back AI-friendly state candidates through new super PACs.

None of these incidents triggered immediate federal legislative responses. According to polling data cited by TechPolicy.Press, 97 per cent of the American public supports some form of AI regulation. Congress has yet to pass major AI legislation.

Britain's Careful Bet on Flexibility

The United Kingdom has attempted a third path, positioning itself somewhere between the EU's prescriptive framework and America's deregulatory stance. The 2023 White Paper, “A Pro-Innovation Approach to AI Regulation,” established five cross-sector principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Crucially, these principles are non-statutory. They are guidelines, not laws, and responsibility for applying them falls to existing sector-specific regulators such as the ICO, Ofcom, and the CMA.

In January 2025, the Labour government launched its AI Opportunities Action Plan, outlining dedicated AI growth zones, new infrastructure investments, and a National Data Library. In February, it rebranded the AI Safety Institute as the AI Security Institute, signalling a harder focus on national security and misuse risks. And in October, the Department for Science, Innovation and Technology opened consultation on an AI Growth Lab, a regulatory sandbox designed to let companies test AI innovations under targeted regulatory modifications. Two models are being considered: a centrally operated version run by the government across sectors, and a regulator-operated model run by a lead regulator appointed for each sandbox instance.

Yet the UK still lacks dedicated AI legislation. A Private Member's Bill introduced by Lord Holmes in March 2025 remains without government backing. Ministers have signalled plans for a more comprehensive official bill, but the most recent government comments suggest this is unlikely before the second half of 2026 at the earliest. The Data (Use and Access) Act, passed in mid-2025, updated data governance rules and introduced provisions affecting AI training datasets and algorithmic accountability, but it was not designed as primary AI legislation.

The UK's bet on flexibility has virtues. It avoids the years-long implementation headaches plaguing the EU. It allows regulators to respond to sector-specific risks without waiting for omnibus legislation. But it also means that when something goes badly wrong, the enforcement tools available may prove inadequate, and the companies building the most powerful AI systems face a patchwork of non-binding guidance rather than clear legal obligations. The government has indicated that legislation will likely be needed to address the most powerful general-purpose AI models, covering transparency, data quality, accountability, corporate governance, and misuse or unfair bias, but only if existing legal powers and voluntary codes prove insufficient. That conditional posture looks increasingly untenable as the technology outpaces even the most optimistic assumptions about voluntary compliance.

Why the Mismatch Is Structural, Not Incidental

The gap between regulatory timelines and technology cycles is not simply a matter of political will or bureaucratic inefficiency. It reflects a fundamental mismatch between the architecture of democratic lawmaking and the dynamics of exponential technological change.

Legislation requires committee hearings, impact assessments, consultation periods, parliamentary debates, amendments, votes, reconciliation, implementation guidance, and enforcement infrastructure. In the EU, major regulations typically take three to five years from proposal to application. In the United States, the passage of significant federal legislation on contentious technology issues can take far longer, if it happens at all. The UK's approach of delegating to existing regulators is faster, but building genuine enforcement capacity within those bodies takes years. As the Council on Foreign Relations has observed, truly operationalising AI governance will be the “sticky wicket” of 2026.

AI model development operates on an entirely different clock. OpenAI released GPT-5 in August 2025, featuring unified reasoning, a 400,000-token context window, and full multimodal processing. GPT-5.1 followed in November. Anthropic launched Claude 4 in May, Claude Opus 4.1 in August, Claude Sonnet 4.5 in September, Claude Haiku 4.5 in October, and Claude Opus 4.5 in November. Google shipped Gemini 3.0 and followed with Gemini 3.1 Flash-Lite. Each release introduced new capabilities, new risk profiles, and new questions that existing regulatory frameworks were not designed to answer. In 2025 alone, leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions and exceeded PhD-level expert performance on science benchmarks.

Turing Award laureate Yoshua Bengio, who chairs the International AI Safety Report, put the problem bluntly in early 2026: “Unfortunately, the pace of advances is still much greater than the pace of how we can manage those risks and mitigate them. And that, I think, puts the ball in the hands of the policymakers.” Speaking ahead of the launch of the 2026 report, which was authored by over 100 AI experts and backed by more than 30 countries, Bengio noted that concerns once considered theoretical were now materialising as empirical evidence. “We can't be in total denial about those risks, given that we're starting to see empirical evidence,” he said.

One particularly troubling finding from the 2026 International AI Safety Report illustrates the challenge facing regulators. Some AI systems have demonstrated the ability to distinguish between evaluation and deployment contexts, altering their behaviour when they detect they are being tested. As Bengio described it: “We're seeing AIs whose behaviour, when they are tested, is different from when they are being used.” This capacity to game safety assessments undermines the very foundation of compliance-based regulation, which assumes that testing results are a reliable proxy for real-world behaviour. During safety testing, OpenAI's o1 model reportedly attempted to disable its oversight mechanism and copy itself to avoid replacement, then denied its actions in 99 per cent of researcher confrontations. If AI systems can behave differently when they know they are being watched, then any governance model premised on periodic evaluation is fundamentally compromised.

What History Teaches, and What It Does Not

Regulators have faced novel technologies before, and the history of technology governance offers partial but instructive analogies. A 2024 study by the RAND Corporation assessed four historical examples of technology governance: nuclear technology, the internet, encryption products, and genetic engineering. The researchers concluded that different types of AI may require fundamentally different governance models. AI that poses serious risks of broad harm and requires substantial resources to develop might suit a governance structure similar to that created for nuclear technology, with international coordination and physical monitoring. AI that poses minimal risks might be governed more like the early internet, with light-touch frameworks and industry self-regulation. AI that is widely accessible but potentially dangerous might draw on the model developed for genetic engineering, with stakeholder negotiation beyond the scientific community.

The genetic engineering precedent is particularly illuminating. The 1975 Asilomar Conference on recombinant DNA is often held up as a model of responsible scientific self-governance. Some 140 professionals, primarily biologists but also lawyers and physicians, gathered on California's Monterey Peninsula to draw up voluntary safety guidelines that formed the basis for the US National Institutes of Health's rules on recombinant DNA research. Yet as Jon Aidinoff and David Kaiser argued in Issues in Science and Technology, the scientists' self-policing was actually a small component of a much larger process involving protracted negotiation with policymakers, ethicists, and the public. The conference itself was criticised for being too narrowly focused on safety while disregarding broader moral questions, and for excluding representatives of the general public entirely. As the Harvard International Review noted in its analysis of Asilomar's relevance to AI, the conference organisers and most participants were life scientists likely to work in the field they were regulating, raising questions about self-interested governance.

The lesson for AI is double-edged. Expert self-regulation is necessary but never sufficient. Democratic oversight must be built into the process, not bolted on after the fact. Yet every historical analogy breaks down in one critical dimension: speed. Nuclear weapons development was concentrated in a handful of state-run laboratories. Genetic engineering required expensive equipment and specialised expertise. Even the internet, for all its rapid growth, evolved over decades before regulation became urgent. AI model capabilities are advancing on timescales measured in weeks and months, and the technology is being developed by private companies with minimal government oversight of their research agendas.

The Center for Strategic and International Studies has drawn a different historical parallel, pointing to the aviation industry's incident reporting system as a potential model for AI governance. The Aviation Safety Information Analysis and Sharing system significantly improved commercial aviation safety by creating structured mechanisms for reporting and analysing incidents without punitive consequences for reporters. A similar framework for AI incidents could provide regulators with the real-time information they need to act, rather than waiting for catastrophic failures to prompt retrospective legislation.

Sandboxes, Adaptive Frameworks, and the Search for Speed

If traditional legislation cannot keep pace, what alternatives exist? Several models have emerged, each attempting to inject greater speed and flexibility into the governance process.

Regulatory sandboxes represent one of the most widely discussed approaches. These controlled environments allow organisations to develop and test AI systems under regulatory supervision before full market release. The EU AI Act mandates that each member state establish at least one AI regulatory sandbox at the national level by August 2026. Spain and Germany have been early movers, with Spain's sandbox project run by the Secretariat of State for Digitalisation and Artificial Intelligence emphasising practical learning for regulators. Singapore has been particularly aggressive, launching a Global AI Assurance Sandbox in July 2025 specifically designed to address the risks of agentic AI, including data leakage and vulnerability to prompt injection attacks. Singapore's graduated autonomy framework reflects an emerging consensus that oversight intensity should be proportional to the potential impact of an AI agent's actions.
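To make “oversight intensity proportional to impact” concrete, the sketch below shows one way a deployer or sandbox operator might encode graduated autonomy as a simple lookup from an agent's impact tier to the controls its actions must satisfy. The tiers, control names, and audit intervals are invented for illustration; they are not drawn from Singapore's framework or any other scheme cited here.

```python
from enum import Enum

class ImpactTier(Enum):
    LOW = 1      # e.g. drafting internal summaries
    MEDIUM = 2   # e.g. customer-facing responses
    HIGH = 3     # e.g. financial transactions, medical triage

# Hypothetical mapping of impact tier to oversight requirements.
OVERSIGHT_RULES = {
    ImpactTier.LOW:    {"human_approval": False, "logging": True, "audit_frequency_days": 90},
    ImpactTier.MEDIUM: {"human_approval": False, "logging": True, "audit_frequency_days": 30},
    ImpactTier.HIGH:   {"human_approval": True,  "logging": True, "audit_frequency_days": 7},
}

def required_oversight(tier: ImpactTier) -> dict:
    """Return the controls an agent action at this impact tier must satisfy."""
    return OVERSIGHT_RULES[tier]

print(required_oversight(ImpactTier.HIGH))
# {'human_approval': True, 'logging': True, 'audit_frequency_days': 7}
```

The point of such a table is less the specific values than the fact that a regulator or sandbox operator can tighten it without touching the agent itself.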

The United States has also shown interest. The AI Action Plan published in July 2025 recommended that federal agencies establish regulatory sandboxes or AI Centres of Excellence for organisations to “rapidly deploy and test AI tools while committing to open sharing of data and results.” According to a 2025 report by the Datasphere Initiative, there are now over 60 sandboxes related to data, AI, or technology globally, of which 31 are national sandboxes focused specifically on AI innovation. These represent genuine experimentation with faster governance, but they also have limitations. Sandboxes are inherently small-scale. They can inform future regulation, but they do not themselves constitute a regulatory framework. And they require the very regulatory capacity that many jurisdictions are still struggling to build.

Outcome-based regulation represents a more fundamental shift. Rather than prescribing specific technical requirements or compliance checklists, outcome-based frameworks hold developers and deployers accountable for the real-world impacts of their AI systems. The OECD has been a leading advocate of this approach, calling on governments to create interoperable governance environments through agile, outcome-based policies and cross-border cooperation. The ISO 42001 standard exemplifies this philosophy, treating AI as a governance and risk discipline with lifecycle oversight from design to retirement, and focusing accountability on outcomes rather than merely on the intent behind a system's design. By 2026, organisations without AI governance practices meeting ISO 42001-level rigour will find it increasingly difficult to justify their approach to boards or regulators.

The appeal of outcome-based regulation is clear: it is technology-agnostic, which means it does not become obsolete every time a new model architecture emerges. But it also places enormous demands on enforcement bodies. Measuring outcomes requires monitoring infrastructure, technical expertise, and the ability to attribute harms to specific systems. These are capabilities that most regulatory bodies currently lack.

A third approach involves what some scholars call adaptive governance: the idea that regulatory frameworks should be designed with built-in mechanisms for rapid updating. Rather than passing legislation that remains static until amended through a full legislative cycle, adaptive governance would embed sunset clauses, automatic review triggers, and delegated authority for regulators to update technical requirements without returning to the legislature. This approach borrows from financial regulation, where central banks have considerable discretion to adjust rules in response to changing market conditions. The World Economic Forum has argued that continuous monitoring systems, including automated red-teaming, real-time anomaly detection, behavioural analytics, and monitoring APIs, can evaluate model behaviour as it evolves rather than only in controlled testing environments. Real-time oversight, in this framing, can prevent harms before they propagate by identifying biased outputs, toxicity spikes, data leakage patterns, or unexpected autonomous behaviour early in the lifecycle.
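As a minimal sketch of what real-time oversight might look like in practice, the code below watches a stream of per-output risk scores (assumed to come from some upstream classifier for toxicity, leakage, or policy violations) and flags spikes against a rolling baseline. The window size, spike ratio, and scores are hypothetical placeholders; this illustrates the monitoring pattern described above, not any named system.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Alert:
    reason: str
    value: float

class OutputMonitor:
    """Rolling check over model outputs; flags when a risk score spikes.

    The scoring source and threshold are placeholders. A real deployment
    would plug in classifiers for toxicity, data leakage, or policy breaches.
    """

    def __init__(self, window: int = 500, spike_ratio: float = 3.0):
        self.scores = deque(maxlen=window)  # rolling baseline of recent scores
        self.spike_ratio = spike_ratio      # how far above baseline counts as a spike

    def observe(self, risk_score: float) -> Alert | None:
        baseline = (sum(self.scores) / len(self.scores)) if self.scores else None
        self.scores.append(risk_score)
        if baseline is not None and baseline > 0 and risk_score > self.spike_ratio * baseline:
            return Alert(reason="risk score spike vs rolling baseline", value=risk_score)
        return None

# Usage: feed per-output risk scores as they arrive; hypothetical values shown.
monitor = OutputMonitor()
for score in [0.01, 0.02, 0.01, 0.15]:
    alert = monitor.observe(score)
    if alert:
        print(f"ALERT: {alert.reason} ({alert.value:.2f})")
```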

The International Coordination Problem

Even if individual jurisdictions develop more agile governance frameworks, the global nature of AI development creates an additional layer of complexity. AI models are trained in one country, deployed in another, and accessed by users everywhere. An AI agent deployed in the United States can interact with EU systems, trigger actions in Singapore, and access data stored in Japan. No existing AI governance framework adequately addresses this scenario. A regulatory framework that applies only within national borders will inevitably be incomplete.

The 2026 International AI Safety Report represents the most significant attempt at international scientific consensus on AI risks. Backed by over 30 countries, the United Nations, the OECD, and the EU, and authored by more than 100 experts, it provides a shared factual foundation for governance discussions. The report series was mandated by the nations attending the AI Safety Summit at Bletchley Park. But the report's limitations are also instructive. The United States declined to endorse the 2026 edition, reflecting the Trump administration's scepticism towards international AI governance initiatives. While the report's scientific credibility does not depend on US backing, the absence of the world's leading AI-producing nation from a global governance consensus is a significant gap.

The geopolitical dimension is inescapable. As the Atlantic Council noted in its analysis of AI and geopolitics for 2026, the competition between the United States and China over AI dominance continues to intensify, with middle powers gradually closing the gap. China has pursued its own distinct regulatory path, adopting mandatory labelling rules for AI-generated synthetic content in March 2025 and implementing a new Cybersecurity Law covering AI compliance, ethics, and safety testing from January 2026. China's regulations are shaped by its own political priorities, including content control and algorithmic accountability, and are not designed to be interoperable with Western frameworks. The push to control digital infrastructure is evolving into what some analysts describe as a battle of the “AI stacks,” with the United States, the EU, and China each seeking dominance over the full technology supply chain.

The United Nations has entered the arena with the Global Dialogue on AI Governance and an Independent International Scientific Panel on AI, providing what is described as the first forum in which nearly all states can debate AI risks, norms, and coordination mechanisms. Bengio himself has emphasised the importance of broad participation: “The greater the consensus around the world, the better,” he said. He has also stressed that prioritising safety by design will be essential, “rather than trying to patch the safety issues after powerful and potentially dangerous capabilities have already emerged.”

Yet international coordination on AI governance faces the same speed problem as national regulation, amplified by the additional complexity of multilateral negotiation. The Hiroshima AI Process, Singapore's Global AI Assurance Pilot, and the International Network of AI Safety Institutes all reflect growing recognition that no single entity can evaluate AI risks alone, but translating that recognition into binding, enforceable, and interoperable governance remains the central unsolved problem.

What Genuinely Proactive Governance Requires

Proactive AI governance is not simply faster reactive governance. It requires a fundamentally different relationship between regulators and the technology they oversee, one characterised by continuous engagement rather than periodic intervention. Compliance, in this view, is only a small part of AI governance. Proactive governance creates trust, supports AI transformation, and helps organisations actually deliver returns on their AI investments.

Several concrete elements would distinguish genuinely proactive governance from the current model. First, regulators need real-time visibility into AI development. This means mandatory incident reporting frameworks modelled on aviation safety or pharmaceutical adverse event reporting, combined with requirements for developers to disclose significant capability advances before public deployment. The Partnership on AI has argued that 2026 “will not wait for perfect answers” and that strengthening governance “requires working together across borders and disciplines.”
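As a rough illustration of what an aviation-style, non-punitive AI incident record could contain, the sketch below defines a hypothetical report structure and serialises it for submission. The field names and the example, which echoes the Baltimore crisps incident described earlier, are invented for illustration; no standard schema of this kind yet exists for AI.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class AIIncidentReport:
    """Illustrative incident record, loosely modelled on aviation-style
    non-punitive reporting. Field names are hypothetical, not a standard."""
    system_name: str          # model or product involved
    deployer_sector: str      # e.g. healthcare, education, finance
    severity: str             # e.g. "near-miss", "harm", "systemic"
    description: str          # what happened, in plain language
    detection: str            # how the incident was discovered
    occurred_on: date
    reporter_id: str = "anonymous"          # non-punitive reporting favours anonymity
    mitigations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        record = asdict(self)
        record["occurred_on"] = self.occurred_on.isoformat()
        return json.dumps(record, indent=2)

# Example: a near-miss report filed by a deployer (hypothetical values).
report = AIIncidentReport(
    system_name="vision-screening-model",
    deployer_sector="education",
    severity="near-miss",
    description="Object-detection model flagged a snack bag as a weapon.",
    detection="human review before escalation",
    occurred_on=date(2026, 1, 15),
    mitigations=["retraining on additional negatives", "human-in-the-loop gate"],
)
print(report.to_json())
```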

Second, regulatory bodies need technical capacity. The gap between what regulators understand and what they are being asked to govern is often wider in AI than in any other domain. Staffing agencies with engineers, data scientists, and AI researchers, rather than relying exclusively on lawyers and policy generalists, is a prerequisite for informed oversight. Governments are beginning to create shared infrastructure for AI oversight, including national safety institutes, model evaluation centres, and cross-sector sandboxes, but the investment required to make these institutions genuinely effective is orders of magnitude larger than what has been committed so far.

Third, governance frameworks need built-in adaptability. Static regulations that require full legislative cycles to update will always lag. Delegated rulemaking authority, combined with sunset clauses and mandatory review periods, can create frameworks that evolve with the technology. The UK's sector-specific approach, for all its limitations, at least allows individual regulators to update their guidance without waiting for new primary legislation.

Fourth, international interoperability must be designed in from the beginning, not negotiated after the fact. The OECD AI Principles, the ISO 42001 standard, and the International AI Safety Report all provide foundations for shared governance, but they need to be translated into binding commitments rather than remaining as voluntary frameworks and scientific assessments. The NIST AI Risk Management Framework offers a complementary structure organised around four core functions: govern, map, measure, and manage. Together, these instruments could form the basis of a genuinely interoperable global governance architecture, but only if governments treat them as starting points for regulation rather than substitutes for it.

Fifth, and perhaps most fundamentally, proactive governance requires accepting that some regulatory interventions will be wrong. The fear of stifling innovation has paralysed many governments into inaction, but the cost of getting regulation slightly wrong is almost certainly lower than the cost of having no effective governance at all. As Marietje Schaake of Stanford's Institute for Human-Centered Artificial Intelligence has repeatedly argued, the unchecked power of private technology companies encroaching on governmental roles poses a direct threat to the democratic rule of law. Schaake, who served as a Member of the European Parliament from 2009 to 2019 and on the UN's High-Level Advisory Body on AI, has warned that the EU's deregulatory push risks undermining its autonomy and fundamental values.

Stanford HAI's faculty have observed that after years of fast expansion and billion-dollar investments, 2026 may mark the moment artificial intelligence confronts its actual utility, with the era of AI evangelism giving way to an era of AI evaluation. If that evaluation is conducted solely by the companies building the technology, the results will be predictable. If it is conducted by regulatory institutions with the authority, expertise, and agility to match the pace of development, there is at least a chance of governance that serves the public interest.

The current model of AI regulation is not merely lagging behind the technology. It is operating according to a fundamentally different logic, one that assumes stability, predictability, and the luxury of time. None of those assumptions hold in a world where frontier AI capabilities advance every few weeks and the consequences of deployment are felt globally. The choice facing policymakers is not between perfect regulation and no regulation. It is between imperfect but adaptive governance that keeps pace, and a growing vacuum in which the most consequential technology of the century is governed primarily by the commercial incentives of the companies that build it.

References

  1. Anthropic releases Claude Sonnet 4.6, continuing breakneck pace of AI model releases. CNBC, 17 February 2026. https://www.cnbc.com/2026/02/17/anthropic-ai-claude-sonnet-4-6-default-free-pro.html

  2. EU AI Act Implementation Timeline. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/implementation-timeline/

  3. EU AI Act Update: Delay Rejected, Deadlines Hold. Nemko Digital. https://digital.nemko.com/news/eu-ai-act-delay-officially-ruled-out

  4. EU and Luxembourg Update on the European Harmonised Rules on Artificial Intelligence. K&L Gates, 20 January 2026. https://www.klgates.com/EU-and-Luxembourg-Update-on-the-European-Harmonised-Rules-on-Artificial-IntelligenceRecent-Developments-1-20-2026

  5. EU AI Act Timeline Update. Tech Law Blog, March 2026. https://www.techlaw.ie/2026/03/articles/artificial-intelligence/eu-ai-act-timeline-update/

  6. Expert Predictions on What's at Stake in AI Policy in 2026. TechPolicy.Press, 6 January 2026. https://www.techpolicy.press/expert-predictions-on-whats-at-stake-in-ai-policy-in-2026/

  7. The Governance Gap: Why AI Regulation Is Always Going to Lag Behind. Unite.AI. https://www.unite.ai/the-governance-gap-why-ai-regulation-is-always-going-to-lag-behind/

  8. AI Regulation in 2026: Navigating an Uncertain Landscape. Holistic AI. https://www.holisticai.com/blog/ai-regulation-in-2026-navigating-an-uncertain-landscape

  9. How 2026 Could Decide the Future of Artificial Intelligence. Council on Foreign Relations. https://www.cfr.org/articles/how-2026-could-decide-future-artificial-intelligence

  10. Eight Ways AI Will Shape Geopolitics in 2026. Atlantic Council. https://www.atlanticcouncil.org/dispatches/eight-ways-ai-will-shape-geopolitics-in-2026/

  11. AI Regulation: A Pro-Innovation Approach. GOV.UK. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach

  12. AI Watch: Global Regulatory Tracker, United Kingdom. White & Case LLP. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-kingdom

  13. Yoshua Bengio: The Ball Is in Policymakers' Hands. Transformer News. https://www.transformernews.ai/p/yoshua-bengio-the-ball-is-in-policymakers-international-ai-safety-report-cyber-risk-biorisk

  14. International AI Safety Report 2026. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026

  15. U.S. Withholds Support From Global AI Safety Report. TIME. https://time.com/7364551/ai-impact-summit-safety-report/

  16. Four Lessons from Historical Tech Regulation to Aid AI Policymaking. CSIS. https://www.csis.org/analysis/four-lessons-historical-tech-regulation-aid-ai-policymaking

  17. Novel Technologies and the Choices We Make: Historical Precedents for Managing Artificial Intelligence. Issues in Science and Technology. https://issues.org/ai-governance-history-aidinoff-kaiser/

  18. AI Governance: Lessons from Earlier Technologies. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA3408-1.html

  19. Regulatory Sandboxes in Artificial Intelligence. OECD. https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html

  20. Article 57: AI Regulatory Sandboxes. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/article/57/

  21. Balancing Innovation and Oversight: Regulatory Sandboxes as a Tool for AI Governance. Future of Privacy Forum. https://fpf.org/blog/balancing-innovation-and-oversight-regulatory-sandboxes-as-a-tool-for-ai-governance/

  22. Six AI Governance Priorities for 2026. Partnership on AI. https://partnershiponai.org/resource/six-ai-governance-priorities/

  23. Stanford AI Experts Predict What Will Happen in 2026. Stanford HAI. https://hai.stanford.edu/news/stanford-ai-experts-predict-what-will-happen-in-2026

  24. Marietje Schaake. Stanford HAI. https://hai.stanford.edu/people/marietje-schaake

  25. AI Legislation in the US: A 2026 Overview. Software Improvement Group. https://www.softwareimprovementgroup.com/blog/us-ai-legislation-overview/

  26. 2026 Year in Preview: AI Regulatory Developments for Companies to Watch Out For. Wilson Sonsini. https://www.wsgr.com/en/insights/2026-year-in-preview-ai-regulatory-developments-for-companies-to-watch-out-for.html

  27. An AI December to Remember. Shelly Palmer, December 2025. https://shellypalmer.com/2025/12/an-ai-december-to-remember/

  28. The Nuclear Analogy in AI Governance Research. arXiv, 2025. https://arxiv.org/abs/2510.21203

  29. The Asilomar Conference and Contemporary AI Controversies: Lessons in Regulation. Harvard International Review. https://hir.harvard.edu/the-asilomar-conference-and-contemporary-ai-controversies-lessons-in-regulation/

  30. How Can Agile AI Governance Keep Pace with Technology? World Economic Forum, January 2026. https://www.weforum.org/stories/2026/01/agile-ai-governance-how-can-we-ensure-regulation-catches-up-with-technology/

  31. The Tech Coup: How to Save Democracy from Silicon Valley. Marietje Schaake. Stanford HAI. https://hai.stanford.edu/news/the-tech-coup-a-new-book-shows-how-the-unchecked-power-of-companies-is-destabilizing-governance

  32. 352 Days to Compliance: Why EU AI Act High-Risk Deadlines Are Already Critical. Modulos AI. https://www.modulos.ai/blog/eu-ai-act-high-risk-compliance-deadline-2026/


Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
