Autonomous AI in Legal Limbo: The Race to Write New Rules

When Anthropic released Claude's “computer use” feature in October 2024, the AI could suddenly navigate entire computers by interpreting screen content and simulating keyboard and mouse input. OpenAI followed in January 2025 with Operator, powered by its Computer-Using Agent model. Google deployed Gemini 2.0, powering Project Astra's low-latency multimodal perception. The age of agentic AI had arrived: systems capable of autonomous decision-making without constant human oversight. So had the regulatory panic.

Across government offices in Brussels, London, Washington, and beyond, policymakers face an uncomfortable truth: their legal frameworks were built for software that follows instructions, not AI that makes its own choices. When an autonomous agent can book flights, execute financial transactions, manage customer relationships, or even write and deploy code independently, who bears responsibility when things go catastrophically wrong? The answer, frustratingly, depends on which jurisdiction you ask, and whether you ask today or six months from now.

This regulatory fragmentation isn't just an academic concern. It's reshaping how technology companies build products, where they deploy services, and whether smaller competitors can afford to play the game at all. The stakes extend far beyond compliance costs: they touch questions of liability, data sovereignty, competitive dynamics, and whether innovation happens in regulatory sandboxes or grey market jurisdictions with looser rules.

The European Vanguard

The European Union's AI Act, which entered into force on 1 August 2024, represents the world's first comprehensive attempt to regulate artificial intelligence through binding legislation. Its risk-based approach categorises AI systems from prohibited to minimal risk, with agentic AI likely falling into the “high-risk” category depending on deployment context. The Act's phased implementation means some requirements already apply: prohibited AI practices and AI literacy obligations took effect on 2 February 2025, whilst full compliance obligations arrive on 2 August 2026.

For agentic AI, the EU framework presents unprecedented challenges. Article 9's risk management requirements mandate documented processes extending beyond one-time validation to include ongoing testing, real-time monitoring, and clearly defined response strategies. Because agentic systems engage in multi-step decision-making and operate autonomously, they require continuous safeguards, escalation protocols, and oversight mechanisms throughout their lifecycle. Traditional “deploy and monitor” approaches simply don't suffice when an AI agent might encounter novel situations requiring judgement calls.
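
The Act does not prescribe what such safeguards should look like in code, but a minimal sketch of a runtime oversight gate helps make the idea concrete. Everything here, the action names, the value threshold, and the escalation outcome, is an illustrative assumption rather than anything Article 9 mandates.

```python
from dataclasses import dataclass

# Illustrative autonomy boundaries: the action names and threshold are
# assumptions for this sketch, not values prescribed by the AI Act.
HIGH_IMPACT_ACTIONS = {"execute_payment", "sign_contract", "delete_records"}
MAX_UNSUPERVISED_VALUE_EUR = 500.0

@dataclass
class ProposedAction:
    name: str          # e.g. "execute_payment"
    value_eur: float   # estimated financial impact of the action
    rationale: str     # the agent's stated reason for acting

def requires_human_escalation(action: ProposedAction) -> bool:
    """Return True if the action should be routed to a human reviewer."""
    if action.name in HIGH_IMPACT_ACTIONS:
        return True
    return action.value_eur > MAX_UNSUPERVISED_VALUE_EUR

def run_with_oversight(action: ProposedAction) -> str:
    if requires_human_escalation(action):
        # A real deployment would notify a reviewer and block until an
        # approval (with its own audit record) is received.
        return "escalated_to_human"
    return "executed_autonomously"
```

The point of even a toy gate like this is that the check runs on every step the agent proposes, not once at deployment time.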

Documentation requirements under Article 11 prove equally demanding. Whilst the provision requires detailed technical documentation for high-risk AI systems, agentic AI demands comprehensive transparency beyond traditional practices like model cards or AI Bills of Materials. Organisations must document not just initial model architecture but also decision-making processes, reasoning chains, tool usage patterns, and behavioural evolution over time. This depth proves essential for auditing and compliance, especially when systems behave dynamically or interact with third-party APIs in ways developers cannot fully anticipate.

Article 12's event recording requirements create similar challenges at scale. Agentic systems make independent decisions and generate logs across diverse environments, from cloud infrastructure to edge devices. Structured logs including timestamps, reasoning chains, and tool usage become critical for post-incident analysis, compliance verification, and accountability attribution. The European Commission's proposed amendments even introduce “AIH Codes” covering underlying AI technologies, explicitly including agentic AI as a distinct category requiring regulatory attention.
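
Article 12 does not define a log schema, but the kind of record it implies for an agent is easy to sketch. The field names below are illustrative assumptions; in practice the schema would be shaped by the deployment environment and the harmonised standards still being drafted.

```python
import json
from datetime import datetime, timezone

def log_agent_event(decision_id: str, reasoning_summary: str,
                    tools_used: list[str], outcome: str) -> str:
    """Serialise one agent decision as a structured, timestamped record.

    Field names are illustrative assumptions; Article 12 mandates automatic
    event recording but does not prescribe this schema.
    """
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reasoning_summary": reasoning_summary,  # abbreviated reasoning chain
        "tools_used": tools_used,                # e.g. ["flight_search", "booking_api"]
        "outcome": outcome,                      # what the agent actually did
    }
    return json.dumps(record)

# Example: one entry in an append-only audit log (all values hypothetical)
print(log_agent_event(
    decision_id="dec-0042",
    reasoning_summary="Flight within budget; booked cheapest refundable fare",
    tools_used=["flight_search", "booking_api"],
    outcome="booking_confirmed",
))
```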

The penalties for non-compliance carry genuine teeth: up to €35 million or 7% of global annual turnover for prohibited practices, €15 million or 3% for violations involving high-risk AI systems, and €7.5 million or 1% for providing false information. These aren't hypothetical fines but real financial exposures that force board-level attention.

Yet implementation questions abound. The European Data Protection Board has highlighted that “black-box AI” cannot justify failure to comply with transparency requirements, particularly challenging for agentic AI where actions may derive from intermediate steps or model outputs not directly supervised or even understood by human operators. How organisations demonstrate compliance whilst maintaining competitive advantages in proprietary algorithms remains an open question, one the European Commission's July 2025 voluntary Code of Practice for general-purpose AI developers attempts to address through standardised disclosure templates.

The British Experiment

Across the Channel, the United Kingdom pursues a markedly different strategy. As of 2025, no dedicated AI law exists in force. Instead, the UK maintains a flexible, principles-based approach through existing legal frameworks applied by sectoral regulators. Responsibility for AI oversight falls to bodies like the Information Commissioner's Office (ICO) for data protection, the Financial Conduct Authority (FCA) for financial services, and the Competition and Markets Authority (CMA) for market competition issues.

This sectoral model offers advantages in agility and domain expertise. The ICO published detailed guidance on AI and data protection, operates a Regulatory Sandbox for AI projects, and plans a statutory code of practice for AI and automated decision-making. The FCA integrates AI governance into existing risk management frameworks, expecting Consumer Duty compliance, operational resilience measures, and Senior Manager accountability. The CMA addresses AI as a competition issue, warning that powerful incumbents might restrict market entry through foundation model control whilst introducing new merger thresholds specifically targeting technology sector acquisitions.

Coordination happens through the Digital Regulation Cooperation Forum, a voluntary alliance of regulators with digital economy remits working together on overlapping issues. The DRCF launched an AI and Digital Hub pilot to support innovators facing complex regulatory questions spanning multiple regulators' remits. This collaborative approach aims to prevent regulatory arbitrage whilst maintaining sector-specific expertise.

Yet critics argue this fragmented structure creates uncertainty. Without central legislation, organisations face interpretative challenges across different regulatory bodies. The proposed Artificial Intelligence (Regulation) Bill, reintroduced in the House of Lords in March 2025, would establish a new “AI Authority” and codify five AI principles into binding duties, requiring companies to appoint dedicated “AI Officers.” The UK government has indicated a comprehensive AI Bill could arrive in 2026, drawing lessons from the EU's experience.

For agentic AI specifically, the UK's 2025 AI Opportunities Action Plan and earlier White Paper identified “autonomy risks” as requiring further regulatory attention. Sectoral regulators like the FCA, ICO, and CMA are expected to issue guidance capturing agentic behaviours within their domains. This creates a patchwork where financial services AI agents face different requirements than healthcare or employment screening agents, even when using similar underlying technologies.

The American Reversal

The United States regulatory landscape underwent dramatic shifts within weeks of the Trump administration's January 2025 inauguration. The Biden administration's Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued in October 2023, represented a comprehensive federal approach addressing transparency, bias mitigation, explainability, and privacy. It required federal agencies to appoint Chief AI Officers, mandated safety testing for advanced AI systems, and established guidelines for AI use in critical infrastructure.

Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” issued in the Trump administration's first days, reversed course entirely. The new order eliminated many Biden-era requirements, arguing they would “stifle American innovation and saddle companies with burdensome new regulatory requirements.” The AI Diffusion Rule, issued on 15 January 2025 by the Bureau of Industry and Security, faced particular criticism before its May 2025 implementation deadline. Industry giants including Nvidia and Microsoft argued the rules would result in billions in lost sales, hinder innovation, and ultimately benefit Chinese competitors.

The rescinded AI Diffusion Framework proposed a three-tiered global licensing system for advanced chips and AI model weights. Tier I included the United States and 18 allied countries exempt from licensing. Tier II covered most other parts of the world, licensed through a data centre Validated End-User programme with presumption of approval. Tier III, encompassing China, Russia, and North Korea, faced presumption of denial. The framework created ECCN 4E091 to control AI model weights, previously uncontrolled items, and sought to curtail China's access to advanced chips and computing power through third countries.

This reversal reflects deeper tensions in American AI policy: balancing national security concerns against industry competitiveness, reconciling federal authority with state-level initiatives, and navigating geopolitical competition whilst maintaining technological leadership. The rescission doesn't eliminate AI regulation entirely but shifts it toward voluntary frameworks, industry self-governance, and state-level requirements.

State governments stepped into the breach. Over 1,000 state AI bills were introduced in 2025, creating compliance headaches for businesses operating nationally. California continues as a regulatory frontrunner, with comprehensive requirements spanning employment discrimination protections, transparency mandates, consumer privacy safeguards, and safety measures. Large enterprises and public companies face the most extensive obligations: organisations like OpenAI, Google, Anthropic, and Meta must provide detailed disclosures about training data, implement watermarking and detection capabilities, and report safety incidents to regulatory authorities.

New York City's Automated Employment Decision Tools Law, enforced since 5 July 2023, exemplifies state-level specificity. The law prohibits using automated employment decision tools, including AI, to assess candidates for hiring or promotion in New York City unless an independent auditor completes a bias audit beforehand and candidates who are New York City residents receive notice. Bias audits must include calculations of selection and scoring rates plus impact ratios across sex categories, race and ethnicity categories, and intersectional categories.
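
The underlying arithmetic is straightforward even if the auditing process is not. A minimal sketch, using hypothetical counts and simplified category labels, shows how selection rates and impact ratios relate: each category's selection rate is divided by the rate of the most-selected category.

```python
def selection_rates(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate per category = candidates selected / candidates assessed."""
    return {group: selected[group] / applicants[group] for group in applicants}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio = a category's selection rate / the highest selection rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical counts, purely illustrative
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 30}

rates = selection_rates(selected, applicants)  # {'group_a': 0.30, 'group_b': 0.20}
ratios = impact_ratios(rates)                  # {'group_a': 1.00, 'group_b': ~0.67}
```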

The Equal Employment Opportunity Commission issued technical guidance in May 2023 on measuring adverse impact when AI tools are used for employment selection. Critically, employers bear liability for outside vendors who design or administer algorithmic decision-making tools on their behalf and cannot rely on vendor assessments of disparate impact. This forces companies deploying agentic AI in hiring contexts to conduct thorough vendor due diligence, review assessment reports and historical selection rates, and implement bias-mitigating techniques when audits reveal disparate impacts.

The GDPR Collision

The General Data Protection Regulation, designed in an era before truly autonomous AI, creates particular challenges for agentic systems. Article 22 grants individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. This provision, which Data Protection Authorities interpret as a general prohibition rather than a right individuals must actively invoke, directly impacts agentic AI deployment.

The challenge lies in the “solely” qualifier. European Data Protection Board guidance emphasises that human involvement must be meaningful, not merely supplying data the system uses or rubber-stamping automated decisions. For human review to satisfy Article 22, involvement should come after the automated decision and relate to the actual outcome. If AI merely produces information someone uses alongside other information to make a decision, Article 22 shouldn't apply. But when does an agentic system's recommendation become the decision itself?
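
One way to make that distinction operational is to record, for every automated outcome, whether a human reviewed it afterwards and whether that review could, and did, change the result. The sketch below is a simplified illustration of that bookkeeping; the field names are assumptions, not terms drawn from the GDPR or EDPB guidance.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                       # e.g. "loan_declined"
    reviewed_by: Optional[str] = None  # human reviewer, if any
    review_changed_outcome: bool = False

def apply_human_review(decision: AutomatedDecision, reviewer: str,
                       final_outcome: str) -> AutomatedDecision:
    """Record a review that happens after the automated decision and relates
    to its actual outcome, the kind of involvement the EDPB treats as
    meaningful rather than rubber-stamping."""
    decision.reviewed_by = reviewer
    if final_outcome != decision.outcome:
        decision.review_changed_outcome = True
        decision.outcome = final_outcome
    return decision

def is_solely_automated(decision: AutomatedDecision) -> bool:
    """In this simplified sketch, a decision nobody reviewed afterwards is
    treated as solely automated and so within Article 22's scope."""
    return decision.reviewed_by is None
```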

Agentic AI challenges the traditional data controller and processor dichotomy underlying GDPR. When an AI acts autonomously, who determines the purpose and means of processing? How does one attribute legal responsibility for decisions taken without direct human intervention? These questions lack clear answers, forcing businesses to carefully consider their governance structures and documentation practices.

Data Protection Impact Assessments become not just best practice but legal requirements for agentic AI. Given the novel risks associated with systems acting independently on behalf of users, conducting thorough DPIAs proves both necessary for compliance and valuable for understanding system behaviour. These assessments should identify specific risks created by the agent's autonomy and evaluate how the AI might repurpose data in unexpected ways as it learns and evolves.

Maintaining comprehensive documentation proves critical. For agentic AI systems, this includes detailed data flow maps showing how personal data moves through the system, records of processing activities specific to the AI agent, transparency mechanisms explaining decision-making processes, and evidence of meaningful human oversight where required. The EDPB's recent opinions note that consent becomes particularly challenging for agentic AI because processing scope may evolve over time as the AI learns, users may not reasonably anticipate all potential uses of their data, and traditional consent mechanisms may not effectively cover autonomous agent activities.

The Liability Gap

Perhaps no question proves more vexing than liability attribution when agentic AI causes harm. Traditional legal frameworks struggle with systems that don't simply execute predefined instructions but make decisions based on patterns learned from vast datasets. Their autonomous action creates a liability gap current frameworks cannot adequately address.

The laws of agency and vicarious liability require there first to be a human agent or employee primarily responsible for harm before their employer or another principal can be held responsible. With truly autonomous AI agents, there may be no human “employee” acting at the moment of harm: the AI acts on its own algorithmic decision-making. Courts and commentators have consistently noted that without a human “agent,” vicarious liability fails by definition.

The July 2024 California district court decision in the Workday case offers a potential path forward. The court allowed a case against HR and finance platform Workday to proceed, stating that an employer's use of Workday's AI-powered HR screening algorithm may create direct liability for both the employer and Workday under agency liability theory. By deeming Workday an “agent,” the court created potential for direct liability for AI vendors, not just employers deploying the systems.

This decision's implications for agentic AI prove significant. First, it recognises that employers delegating traditional functions to AI tools cannot escape responsibility through vendor relationships. Second, it acknowledges AI tools playing active roles in decisions rather than merely implementing employer-defined criteria. Third, by establishing vendor liability potential, it creates incentives for AI developers to design systems with greater care for foreseeable risks.

Yet no specific federal law addresses AI liability, let alone agentic AI specifically. American courts apply existing doctrines like tort law, product liability, and negligence. If an autonomous system causes damage, plaintiffs might argue developers or manufacturers were negligent in designing or deploying the system. Negligence requires proof that the developer or user failed to act as a reasonable person would, limiting liability compared to strict liability regimes.

The United Kingdom's approach to autonomous vehicles offers an intriguing model potentially applicable to agentic AI. The UK framework establishes that liability should follow control: as self-driving technology reduces human driver influence over a vehicle, law shifts legal responsibility from users toward developers and manufacturers. This introduces autonomy not just as a technical measure but as a legal determinant of liability. AI agents could be similarly classified, using autonomy levels to define when liability shifts from users to developers.

Despite different regulatory philosophies across jurisdictions, no nation has fully resolved how to align AI's autonomy with existing liability doctrines. The theoretical possibility of granting legal personhood to AI remains an intriguing but unresolved idea. The most promising frameworks recognise that agentic AI requires nuanced approaches, ones that acknowledge the distributed nature of AI development and deployment whilst ensuring clear accountability for harm.

Export Controls and Geopolitical Fragmentation

AI regulation extends beyond consumer protection and liability into national security domains through export controls. The rescinded Biden administration AI Diffusion Framework attempted to create a secure global ecosystem for AI data centres whilst curtailing China's access to advanced chips and computing power. Its rescission reflects broader tensions between technological leadership and alliance management, between protecting strategic advantages and maintaining market access.

The United States and close allies dominate the advanced AI chip supply chain. Given the technological complexity of design and manufacturing, China will remain reliant on these suppliers for years to come. According to recent congressional testimony by Commerce Secretary Howard Lutnick, Huawei will produce only 200,000 AI chips in 2025, a marginal output compared to American production. Yet according to Stanford University benchmarks, American and Chinese model capabilities are fairly evenly matched, with Chinese AI labs functioning as fast followers at worst.

This paradox illustrates the limits of export controls: China continues to produce competitive state-of-the-art models and to dominate AI-based applications like robotics and autonomous vehicles despite the chip controls imposed in recent years. The controls made chip development a matter of national pride and triggered waves of investment into China's domestic AI chip ecosystem. Whether the United States will ever regain that market share, even if the controls are reversed, remains unclear.

The Trump administration argued the Biden-era framework would hinder American innovation and leadership in the AI sector. Industry concerns centred on billions in lost sales, reduced global market share, and acceleration of foreign AI hardware ecosystem growth. The framework sought to turn AI chips into diplomatic tools, extracting geopolitical and technological concessions through export leverage. Its rescission signals prioritising economic competitiveness over strategic containment, at least in the near term.

For companies developing agentic AI, this creates uncertainty. Will future administrations reimpose controls? How should global supply chains be structured to withstand regulatory whiplash? Companies face impossible planning horizons when fundamental policy frameworks reverse every four years.

Cross-Border Chaos

The divergent approaches across jurisdictions create opportunities for regulatory arbitrage and challenges for compliance. When different jurisdictions develop their own AI policies, laws, and regulations, businesses face increased compliance costs from navigating complex regulatory landscapes, market access barriers limiting operational geography, and innovation constraints slowing cross-border collaboration. These challenges prove particularly acute for small and medium-sized enterprises lacking resources to manage complex, jurisdiction-specific requirements.

The transnational nature of AI, where algorithms, data, and systems operate across borders, makes it difficult for individual nations to control cross-border flows and technology transfer. Incompatible national rules create compliance challenges whilst enabling regulatory arbitrage that undermines global governance efforts. For companies, divergent frameworks serve as invitations to shift operations to more permissive environments. For countries pursuing stricter AI rules, this raises stakes of maintaining current approaches against competitive pressure.

Without harmonisation, regulatory arbitrage worsens: firms relocate operations to jurisdictions with lenient regulations to circumvent stricter compliance obligations, undermining the effectiveness of global AI oversight. The European Banking Institute advocates robust, centralised governance to address these risks and regulatory fragmentation, particularly in cross-border financial technology trade, whilst the United States has adopted more decentralised approaches that raise concerns about standardisation and harmonisation.

Yet experts expect strategic fragmentation rather than global convergence. AI regulation proves too entangled with geopolitical competition, economic sovereignty, and industrial policy. Jurisdictions will likely assert regulatory independence where it matters most, such as compute infrastructure or training data, whilst cooperating selectively in areas where alignment yields real economic benefits.

Proposed solutions emphasise multilateral processes making AI rules among jurisdictions interoperable and comparable to minimise regulatory arbitrage risks. Knowledge sharing could be prioritised through standards development, AI sandboxes, large public AI research projects, and regulator-to-regulator exchanges. Regulatory sandboxes foster adaptability by allowing companies to test AI solutions in controlled environments with regulatory oversight, enabling experimentation without immediate compliance failure risks.

Restructuring for Compliance

Organisations deploying agentic AI must fundamentally restructure product development, governance, and transparency practices to comply with evolving requirements. Over 68% of multinational corporations are restructuring AI workflows to meet evolving regulatory standards on explainability and bias mitigation, and 59% of AI systems now sit within internal audit programmes. Governments are pushing stricter compliance benchmarks, whilst global enforcement actions related to unethical AI use have risen by more than 33%.

The number of Chief AI Officer roles has nearly tripled in the past five years according to LinkedIn data, with positions expanding significantly in finance, manufacturing, and retail. Companies including JPMorgan Chase, Walmart, and Siemens employ AI executives to manage automation and predictive analytics efforts. The CAIO serves as the operational and strategic leader for AI initiatives, ensuring technologies are properly selected, executed, and monitored in line with the organisation's vision and goals.

Key responsibilities span strategic leadership, AI governance and risk management, ethical AI management, regulatory compliance, and cultural transformation. CAIOs must establish robust governance frameworks ensuring safe, ethical, and compliant AI development across organisations. They create clear guidelines, accountability measures, and control mechanisms addressing data handling, model validation, and usage. Four major risks, grouped under the acronym FATE, drive this work: Fairness (AI models can perpetuate biases), Accountability (who is responsible when models fail), Transparency (opaque algorithms make conclusions difficult to explain), and Ethics (AI systems can confront ethical dilemmas).

The regulatory framework's emphasis on meaningful human involvement in automated decision-making may require restructuring operational processes that were previously fully automated. For agentic AI, this means implementing escalation protocols, defining autonomy boundaries, creating human oversight mechanisms, and documenting decision-making processes. Organisations must decide whether to centralise AI governance under a single executive or distribute responsibilities across existing roles. Research indicates centralised AI governance provides better risk management and policy consistency, whilst distributed models may offer more agility but can create accountability gaps.

Product development lifecycle changes prove equally significant. The NIST AI Risk Management Framework, whilst voluntary, offers resources to organisations designing, developing, deploying, or using AI systems to help manage risks and promote trustworthy and responsible development. The framework's MAP, MEASURE, and MANAGE functions can be applied in AI system-specific contexts and at specific stages of the AI lifecycle, whilst GOVERN applies to all stages of organisations' AI risk management processes and procedures.

Lifecycle risk management should be embedded into workflows, not added as compliance afterthoughts. Best practices include establishing risk checkpoints at every phase requiring documentation and approval, using structured risk assessment tools like NIST AI RMF or OECD AI Principles, and ensuring data scientists, legal, product, and ethics teams share ownership of risk. AI governance should be built into every process in the AI development and maintenance journey, with AI Impact Assessments and threat modelling conducted at least annually on existing systems and prior to deploying any new AI function.
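
As a rough illustration of what a phase-gated checkpoint might look like in practice, the sketch below blocks progression until every earlier phase has documentation and an approval on record. The phase names and fields are assumptions for illustration, not taken from the NIST framework or OECD principles.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative lifecycle phases; real programmes will define their own.
LIFECYCLE_PHASES = ["design", "data_collection", "training", "evaluation", "deployment"]

@dataclass
class RiskCheckpoint:
    phase: str
    documentation_complete: bool
    approved_by: Optional[str] = None  # risk owner sign-off

def may_advance(checkpoints: dict[str, RiskCheckpoint], next_phase: str) -> bool:
    """Allow progression to the next lifecycle phase only when every earlier
    phase has its documentation and an approval recorded."""
    for phase in LIFECYCLE_PHASES[: LIFECYCLE_PHASES.index(next_phase)]:
        checkpoint = checkpoints.get(phase)
        if (checkpoint is None or not checkpoint.documentation_complete
                or checkpoint.approved_by is None):
            return False
    return True
```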

Transparency Requirements and AI Bills of Materials

Transparency in AI systems has become a cornerstone of proposed regulatory frameworks across jurisdictions. Upcoming mandates require companies to disclose how AI models make decisions, datasets used for training, and potential system limitations. The European Commission's July 2025 voluntary Code of Practice for general-purpose AI developers includes a chapter on transparency obligations and provides template forms for AI developers to share information with downstream providers and regulatory authorities.

The AI Bill of Materials has emerged as a critical transparency tool. Just as Software Bills of Materials and Hardware Bills of Materials brought clarity to software and hardware supply chains, AIBOMs aim to provide transparency into how AI models are built, trained, and deployed. An AIBOM is a structured inventory documenting all components within an AI system, including datasets used to train or fine-tune models, models themselves (open-source or proprietary), software dependencies supporting AI pipelines, and deployment environments where models run.

Additional elements include digital signatures for both the model and the AIBOM to ensure authenticity and integrity; the model developer's name; parent model and base model details; the model architecture and architecture family; the hardware and software used to train or run the model; required software downloads; and dataset names, versions, sources, and licensing information.
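
As an illustration of how these elements might be captured as structured data, the sketch below defines a simplified inventory record. The field names loosely mirror the elements just described and are assumptions for this example; they are not drawn from the SPDX 3.0 or CycloneDX schemas, and every value shown is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    name: str
    version: str
    source: str
    licence: str

@dataclass
class AIBOM:
    """A simplified AI Bill of Materials record (illustrative fields only)."""
    model_name: str
    model_developer: str
    base_model: str
    architecture: str
    software_dependencies: list[str] = field(default_factory=list)
    training_datasets: list[DatasetEntry] = field(default_factory=list)
    deployment_environment: str = ""
    model_signature: str = ""  # digital signature over the model artefact

# Hypothetical example entry
bom = AIBOM(
    model_name="support-agent-v2",
    model_developer="ExampleCorp",
    base_model="open-weights-llm-7b",
    architecture="decoder-only transformer",
    software_dependencies=["pytorch", "langchain"],
    training_datasets=[DatasetEntry("tickets-2024", "1.3", "internal CRM export", "proprietary")],
    deployment_environment="eu-west cloud region",
    model_signature="sha256:placeholder-digest",
)
```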

AIBOMs help organisations demonstrate adherence to evolving frameworks like the EU AI Act, the NIST AI RMF, and Department of Defense AI security directives. Whilst software supply chains face vulnerabilities through third-party libraries, AI systems introduce new risks via external datasets, model weights, and training pipelines. An AIBOM plays a crucial role in AI supply chain security by tracking third-party and pre-trained models, documenting their sources and any modifications.

The OWASP AI Bill of Materials project leads AI security and transparency efforts, organised into ten strategic workstreams focused on critical aspects of AI transparency and security. The Linux Foundation's work on AI-BOM with SPDX 3.0 expands on SBOM concepts to include documentation of algorithms, data collection methods, frameworks and libraries, licensing information, and standard compliance. Industry leaders advance standardisation of AI transparency through efforts like the AIBOM extension to the CycloneDX specification, a widely adopted SBOM format.

For agentic AI specifically, AIBOMs must extend beyond static component listings to capture dynamic behaviours, tool integrations, API dependencies, and decision-making patterns. Traditional documentation practices prove insufficient when systems evolve through learning and interaction. This requires new approaches to transparency balancing competitive concerns about proprietary methods with regulatory requirements for explainability and accountability.

The Path Through Uncertainty

The regulatory landscape for agentic AI remains in flux, characterised by divergent approaches, evolving frameworks, and fundamental questions without clear answers. Organisations deploying these systems face unprecedented compliance challenges spanning multiple jurisdictions, regulatory bodies, and legal domains. The costs of getting it wrong, whether through massive fines, legal liability, or reputational damage, prove substantial.

Yet the absence of settled frameworks also creates opportunities. Companies engaging proactively with regulators, participating in sandboxes, contributing to standards development, and implementing robust governance structures position themselves advantageously as requirements crystallise. Those treating compliance as pure cost rather than strategic investment risk falling behind competitors who embed responsible AI practices into their organisational DNA.

The next several years will prove decisive. Will jurisdictions converge toward interoperable frameworks or fragment further into incompatible regimes? Will liability doctrines evolve to address autonomous systems adequately or will courts struggle with ill-fitting precedents? Will transparency requirements stifle innovation or foster trust enabling broader adoption? The answers depend not just on regulatory choices but on how industry, civil society, and technologists engage with the challenge.

What seems certain is that agentic AI will not remain in regulatory limbo indefinitely. The systems are too powerful, the stakes too high, and the public attention too focused for governments to maintain hands-off approaches. The question is whether the resulting frameworks enable responsible innovation or create bureaucratic moats favouring incumbents over challengers. For organisations building the future of autonomous AI, understanding this evolving landscape isn't optional. It's existential.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
