Consensus Without Consequence: The Collapse of AI Accountability

Everyone agrees that artificial intelligence should be fair, transparent, and accountable. That sentence could have been written in 2018, and it would have been just as true then as it is now. The difference is that in 2018, arriving at consensus on those principles felt like the hard part. In 2026, we know better. The hard part was never agreeing on what AI ethics should look like. The hard part is making anyone actually do it.
A growing body of research confirms what practitioners and regulators have been circling for years: the global AI ethics landscape has converged around a remarkably stable set of principles. Transparency. Fairness. Non-maleficence. Accountability. Privacy. These five values appear in the vast majority of the more than 200 ethics guidelines and governance documents that researchers have catalogued worldwide. A landmark review by Anna Jobin, Marcello Ienca, and Effy Vayena, published through ETH Zurich and later expanded through broader global analysis, found that transparency appeared in 86 per cent of guidelines examined, justice and fairness in 81 per cent, and non-maleficence in 71 per cent. The world, it turns out, has been surprisingly good at articulating what responsible AI ought to involve. The world has been catastrophically bad at enforcing it.
That gap between articulation and enforcement defines the current moment in AI governance. And it is not an abstract policy debate. It is the difference between a hiring algorithm that discriminates against older workers and one that does not. It is the difference between a facial recognition system that operates with impunity and one that faces genuine consequences. It is the difference between a corporate ethics board that exists to absorb criticism and one that has the power to halt a product launch.
The question that matters now is deceptively simple: what does meaningful accountability actually look like in practice? And when enforcement mechanisms fail to materialise in time, who bears the cost?
The Principles Paradox
The proliferation of AI ethics guidelines over the past decade represents one of the most remarkable exercises in global norm-setting since the Universal Declaration of Human Rights. Governments, corporations, academic institutions, and civil society organisations have produced hundreds of frameworks, each articulating some version of the same core commitments. The World Economic Forum has described the challenge as one of “scaling trustworthy AI” by turning ethical principles into tangible practices. The International Labour Organization has reviewed global ethics guidelines specifically for AI in the workplace, finding consistent themes around worker protection and human oversight.
Yet this apparent consensus masks a deeper dysfunction. As research published in the journal Patterns noted, while the most advocated ethical principles show significant convergence, there remains “substantive divergence in how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.” In other words, everyone agrees on the words. Nobody agrees on what the words mean in practice.
This is the principles paradox. The more guidelines that exist, the easier it becomes for organisations to claim alignment with ethical AI while doing very little to change their behaviour. The phenomenon has a name: ethics washing. And in 2025 and 2026, it has become a defining feature of the corporate AI landscape.
The United States Securities and Exchange Commission has flagged “AI washing” as an enforcement priority, scrutinising whether company disclosures about artificial intelligence capabilities match actual practices. The SEC and the Department of Justice have already taken action against companies for exaggerating AI capabilities to attract investment. But the problem extends far beyond securities fraud. When a company publishes a set of AI ethics principles, appoints a chief ethics officer, and then deploys systems that systematically discriminate, the principles themselves become a form of camouflage. They provide the appearance of responsibility without the substance of it, a shield against criticism rather than a genuine constraint on conduct.
The most notorious illustration of this dynamic played out at Google in late 2020 and early 2021. Timnit Gebru, co-lead of Google's Ethical AI team, was fired after the company demanded she retract a research paper examining the environmental costs and bias risks of large language models. Three months later, Margaret Mitchell, the team's founder, was also terminated. Roughly 2,700 Google employees and more than 4,300 academics and civil society supporters signed a letter condemning Gebru's departure. Nine members of the United States Congress sent a letter to Google seeking clarification. The paper that triggered the conflict, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, was subsequently presented at the ACM FAccT conference in March 2021 and has since become one of the most cited works in the field.
The Google episode demonstrated something that has only become clearer with time: internal ethics teams, no matter how credentialed or well-intentioned, cannot function as accountability mechanisms when they exist at the pleasure of the organisations they are meant to constrain. A gamekeeper who serves at the fox's pleasure is no gamekeeper at all.
Deployment at Speed, Governance at a Crawl
The numbers tell a stark story. According to ISACA's 2025 global survey of more than 3,200 business and IT professionals, nearly three out of four European IT and cybersecurity professionals reported that staff were already using generative AI at work, a figure that had risen ten percentage points in a single year. Yet only 31 per cent of organisations had a formal, comprehensive AI policy in place. The gap was not closing. It was widening.
The same survey found that 63 per cent of respondents were extremely or very concerned that generative AI could be weaponised against their organisations, while 71 per cent expected deepfake attacks to become more sophisticated and more widespread. Despite these anxieties, only 18 per cent of organisations were investing in deepfake detection tools. The pattern is consistent: organisations recognise the risks, articulate concern, and then fail to allocate the resources necessary to address them. A separate finding from the same research revealed that 42 per cent of professionals believed they would need to increase their AI-related skills within six months simply to retain their current position, a figure that had risen eight percentage points from the previous year. The workforce, in other words, is being transformed by AI faster than individuals or institutions can adapt.
Globally, the picture is even more fragmented. A separate analysis found that 94 per cent of global companies reported using or piloting some form of AI in IT operations, while only 44 per cent said their security architecture was fully equipped to support secure AI deployment. More than half of organisations surveyed, 57 per cent, acknowledged that AI was advancing more quickly than they could secure it. The phrase “governance gap” has become a staple of policy discourse, but it undersells the scale of the problem. This is not a gap. It is a chasm.
The Partnership on AI, a multi-stakeholder organisation that includes major technology companies, academic institutions, and civil society groups, identified six governance priorities for 2026. These include responsible adoption of agentic AI systems, improved documentation and transparency standards, governance convergence across jurisdictions, and protections for authentic human voice in an era of synthetic content. The priorities are sensible. They are also an implicit admission that none of these foundations are yet in place, despite years of discussion.
Meanwhile, the technology itself continues to accelerate. Agentic AI systems, which can take autonomous actions in the real world rather than simply generating text or images, introduce what the Partnership on AI describes as “non-reversibility of actions, open-ended decision-making pathways, and privacy vulnerabilities from expanded data access.” These are not theoretical risks. They are features of systems already being deployed in customer service, software development, and financial trading. The governance frameworks meant to constrain these systems are, in many cases, still being drafted. The speed of silicon, as one commentator put it, outpaces the speed of statute.
Regulation Arrives, Eventually
The European Union's AI Act represents the most ambitious attempt to date to translate ethical principles into enforceable law. The legislation entered into force on 1 August 2024, with a phased implementation timeline extending through 2027. Prohibitions on AI systems posing unacceptable risk took effect on 2 February 2025. Obligations for general-purpose AI models became applicable on 2 August 2025. The bulk of requirements for high-risk systems take effect on 2 August 2026, when authorities will be able to enforce compliance through administrative fines that, for the most serious violations, reach up to 35 million euros or seven per cent of global annual turnover, whichever is higher.
The EU AI Act adopts a tiered, risk-based approach, classifying AI applications from minimal to unacceptable risk. High-risk systems are subject to strict oversight, including conformity assessments, technical documentation, CE marking, transparency requirements, and post-market monitoring. The European AI Office became operational on 2 August 2025, taking on responsibility for supervising and enforcing the Act alongside Member State authorities.
This is, by any measure, a significant regulatory achievement. But it also illustrates the temporal mismatch that defines AI governance. The Act was first proposed by the European Commission in April 2021. It was adopted in March 2024. Full enforcement does not arrive until August 2026 at the earliest, with some provisions extending to 2027. During that five-year legislative journey, the AI landscape transformed beyond recognition. When the Commission drafted its proposal, ChatGPT did not exist. Nor did the current generation of multimodal models, autonomous agents, or AI-powered code generation tools. The regulation is, by design, chasing a target that moved while lawmakers were still aiming.
The situation in the United States presents a different set of challenges entirely. Rather than pursuing comprehensive federal legislation, the US has relied on a decentralised approach combining agency-specific enforcement, voluntary frameworks, and sector-level regulation. The National Institute of Standards and Technology published its AI Risk Management Framework, with a February 2025 revision adding testable controls for continuous monitoring. The Federal Trade Commission and Department of Justice have used existing consumer protection and anti-discrimination statutes to pursue AI-related enforcement actions.
Then, in December 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which sought to advance what the administration called “a minimally burdensome national policy framework.” The order directed the Attorney General to establish an AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy. It instructed the Secretary of Commerce to evaluate existing state AI legislation and identify laws considered “onerous.” It even tied broadband infrastructure funding to compliance, specifying that states with AI laws identified as problematic would be ineligible for certain federal grants.
The order was, in effect, an attempt to pre-empt the patchwork of state-level regulations that had been emerging across the country. Colorado's SB 205, effective February 2026, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination, implement risk management policies, and conduct impact assessments. New York City's Local Law 144 had already established bias audit requirements for automated employment decision tools. More than a hundred state AI laws were enacted across the United States in 2025 alone.
Governors in California, Colorado, and New York issued statements indicating the executive order would not stop them from enforcing their existing AI statutes. Legal scholars noted that the administration's ability to restrict state regulation without Congressional action was constitutionally questionable. The result is a governance landscape that is not merely fragmented but actively contested, with federal and state authorities pulling in opposing directions while companies navigate overlapping and sometimes contradictory obligations.
When Enforcement Fails, the Vulnerable Pay
The consequences of the enforcement gap do not fall equally. They concentrate, with brutal predictability, on those with the least power to resist.
In employment, the case of Mobley v. Workday, Inc. illustrates the human cost. Five individuals over the age of forty applied for hundreds of jobs through Workday's automated hiring platform and were rejected in nearly every instance without receiving a single interview. The plaintiffs alleged that Workday's AI recommendation system discriminated on the basis of age. In 2024, a court allowed the disparate impact claim to proceed under the Age Discrimination in Employment Act and the Americans with Disabilities Act, holding that Workday bore liability as an agent of the employers using its product. The case remains one of the most significant tests of whether existing anti-discrimination law can reach the companies that build, rather than merely deploy, algorithmic decision-making tools.
In housing, the SafeRent algorithm case exposed how automated tenant screening can systematically disadvantage Black and Hispanic applicants. Plaintiffs demonstrated that SafeRent's scoring system produced discriminatory outcomes, and the court held that the company bore responsibility because its product claimed to “automate human judgement” by making housing recommendations. SafeRent agreed to pay more than two million dollars to settle the litigation in 2024. The settlement was significant as legal precedent, but for the applicants who were denied housing on the basis of an opaque algorithmic score, the damage was already done.
In biometric surveillance, Clearview AI's trajectory encapsulates the enforcement timeline problem. The company scraped billions of photographs from social media platforms without consent and sold facial recognition services to law enforcement agencies worldwide. In September 2024, the Dutch Data Protection Authority fined Clearview 30.5 million euros for constructing what the agency described as an illegal database. In March 2025, a US federal court approved a class action settlement valued at roughly 51.75 million dollars, structured as a 23 per cent equity stake in the company itself, because Clearview had insufficient assets to pay a traditional cash settlement. The settlement structure was unprecedented in biometric privacy litigation, and its adequacy was contested by a bipartisan group of state attorneys general who filed formal objections.
These cases share a common structure. Harm occurs. Years pass. Legal proceedings unfold. Settlements are reached or fines imposed. But the systems that caused the harm often continue operating during the entire adjudication process, and the individuals affected rarely receive compensation proportional to their injury. The enforcement mechanisms exist, technically. They simply do not work fast enough to prevent the damage they are meant to address.
In consumer markets, similar patterns have emerged. Instacart drew widespread criticism after reports revealed the company was using an AI-powered pricing experiment that displayed different grocery prices to different customers for the same items at the same store. The programme, designed to test price sensitivity, was condemned by consumer advocacy groups and policymakers who argued it constituted algorithmic price discrimination without adequate disclosure. The controversy highlighted a recurring blind spot in AI governance: the gap between what is technically possible and what existing consumer protection frameworks are equipped to regulate.
A study from the University of Washington provided stark evidence of the scale of algorithmic bias in employment contexts. Researchers presented three AI models with job applications that were identical in every respect except the name of the applicant. The models preferred resumes with white-associated names in 85 per cent of cases and those with Black-associated names only 9 per cent of the time. A separate study led by researchers at Cedars-Sinai, published in June 2025, found that leading large language models generated less effective treatment recommendations when a patient's race was identified as African American.
These are not edge cases or hypothetical scenarios. They are documented patterns of discriminatory behaviour embedded in systems that millions of people interact with daily. And they persist not because the ethical principles governing AI are inadequate, but because the mechanisms for enforcing those principles remain woefully underdeveloped.
The Audit Illusion
One of the most commonly proposed solutions to the enforcement gap is algorithmic auditing: the idea that independent third parties can evaluate AI systems for bias, accuracy, and compliance with ethical standards, much as financial auditors examine corporate accounts. The concept has gained significant traction in policy circles. New York City's Local Law 144 requires annual bias audits for automated employment decision tools. Colorado's SB 205 mandates impact assessments for high-risk systems. The EU AI Act requires conformity assessments for high-risk AI applications.
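To make that mechanism concrete, the arithmetic at the centre of most bias audits is a comparison of selection rates between demographic groups. The sketch below is purely illustrative: the data, the group labels, and the 0.8 threshold borrowed from the US “four-fifths rule” heuristic are assumptions for the example, not the methodology prescribed by any of these laws.

```python
# Minimal sketch of the selection-rate comparison at the heart of many bias audits.
# The data, group labels, and the 0.8 threshold (the "four-fifths rule" heuristic
# used in US disparate impact analysis) are illustrative assumptions, not a
# compliance tool for any specific statute.
from collections import defaultdict

def impact_ratios(decisions, threshold=0.8):
    """decisions: iterable of (group, selected) pairs; selected is True/False."""
    counts = defaultdict(lambda: [0, 0])          # group -> [selected, total]
    for group, selected in decisions:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1

    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())                    # selection rate of the most-favoured group
    report = {}
    for group, rate in rates.items():
        ratio = rate / best if best else 0.0
        report[group] = {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(ratio, 3),
            "flagged": ratio < threshold,
        }
    return report

# Hypothetical screening outcomes: (demographic group, passed the automated screen?)
sample = ([("group_a", True)] * 62 + [("group_a", False)] * 38
          + [("group_b", True)] * 41 + [("group_b", False)] * 59)

for group, row in impact_ratios(sample).items():
    print(group, row)
```

The calculation itself is trivial; the hard questions, as the critics discussed below point out, are what counts as a meaningful group, what threshold should trigger action, and who gets to see the underlying data at all.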
But the AI Now Institute, in a report titled “Algorithmic Accountability: Moving Beyond Audits,” has mounted a detailed critique of the audit-centred approach. The institute argues that technical evaluations “narrowly position bias as a flaw within an algorithmic system that can be fixed and eliminated,” when in fact algorithmic harms are often structural, reflecting the social contexts in which systems are designed and deployed. Audits, the report contends, “run the risk of entrenching power within the tech industry” and “take focus away from more structural responses.”
The critique has substance. Current algorithmic auditing suffers from several fundamental limitations. There are no universally accepted standards for what constitutes a passing score. Audit costs range from 5,000 to 50,000 dollars depending on system complexity, placing the financial burden disproportionately on smaller organisations while allowing well-resourced technology companies to treat audits as a cost of doing business. Audits evaluate systems at a single point in time, but AI models drift as they encounter new data, meaning a system that passes an audit today may produce discriminatory outcomes next month.
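Part of the reason a snapshot is not enough can be shown in a few lines: the same fairness metric has to be recomputed as new decisions accumulate, because a disparity can surface months after a clean audit. The following toy example uses synthetic monthly batches and an assumed 0.8 alert threshold; it is a sketch of the idea of continuous monitoring, not a production system.

```python
# A point-in-time audit result says nothing about next month's data. This sketch
# (illustrative threshold, synthetic batches) recomputes one fairness metric per
# monitoring window and flags any window where it degrades past a fixed bound.

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def monitor(batches, threshold=0.8):
    """batches: list of dicts, each mapping group name -> list of 0/1 selections."""
    alerts = []
    for month, batch in enumerate(batches, start=1):
        rates = {g: selection_rate(v) for g, v in batch.items()}
        best = max(rates.values())
        worst_ratio = min(r / best for r in rates.values()) if best else 0.0
        if worst_ratio < threshold:
            alerts.append((month, round(worst_ratio, 3)))
    return alerts

# Synthetic monthly batches: the disparity appears only after the model drifts.
history = [
    {"group_a": [1] * 60 + [0] * 40, "group_b": [1] * 55 + [0] * 45},  # month 1: passes
    {"group_a": [1] * 61 + [0] * 39, "group_b": [1] * 40 + [0] * 60},  # month 2: fails
]
print(monitor(history))   # -> [(2, 0.656)]
```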
Perhaps most critically, audits place the primary burden for algorithmic accountability on those with the fewest resources. Community organisations, civil rights groups, and affected individuals must navigate complex technical and legal processes to challenge algorithmic decisions, while the companies deploying those systems retain control over the data, models, and documentation necessary to evaluate their performance. The information asymmetry is profound and, under current frameworks, largely unaddressed.
The Ada Lovelace Institute, the AI Now Institute, and the Open Government Partnership have partnered to examine alternatives to the audit-centred approach, including algorithm registers, impact assessments, and other transparency measures that distribute accountability more broadly. These efforts are promising but nascent, and they face the same temporal challenge that afflicts all AI governance: by the time robust accountability frameworks are established, the systems they are meant to govern will have evolved.
Geopolitical Fractures and the Sovereignty Question
The enforcement gap is not merely a domestic policy challenge. It is a geopolitical one. The February 2025 AI Action Summit in Paris, co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, drew more than 1,000 participants from over 100 countries. Fifty-eight nations signed a joint declaration on inclusive and sustainable artificial intelligence. The United States and the United Kingdom, notably, refused to sign.
France announced a 400 million dollar endowment for a new foundation to support the creation of AI “public goods,” including high-quality datasets and open-source infrastructure. A Coalition for Sustainable AI was launched, backed by France, the United Nations Environment Programme, and the International Telecommunication Union, with support from 11 countries and 37 technology companies. Anthropic CEO Dario Amodei described the summit as a “missed opportunity” for addressing AI safety, reflecting a broader frustration among researchers that international forums produce declarations rather than binding commitments.
The geopolitical dimension becomes even more fraught when considering the position of developing nations. Research from E-International Relations and other academic sources has documented how AI development mirrors historical patterns of colonial resource extraction. Control over data infrastructures, computational resources, and algorithmic systems remains concentrated in a small number of wealthy nations and corporations. Regulatory gaps in many developing countries make the deployment of biased AI systems more likely while preventing communities from taking legal action against discriminatory algorithmic decisions. The environmental costs of AI computation fall disproportionately on these same regions, where data centres proliferate because electricity and land are cheap, exporting the benefits of artificial intelligence while localising its burdens.
The disparity in content moderation illustrates the pattern. Reports have shown that major technology platforms allocate the vast majority of their moderation resources to the Global North, with only a fraction addressing content from other regions. Algorithms deployed without cultural context produce moderation decisions that are at best irrelevant and at worst actively harmful to the communities they affect. When 98 per cent of AI research originates from wealthy institutions, the resulting systems embed assumptions that may be irrelevant or damaging elsewhere.
Some scholars have called for a shift towards what they term “global co-creation,” an approach to AI development that prioritises local participation, data sovereignty, and algorithmic transparency. The concept recognises that meaningful accountability cannot be imposed from outside but must be built through inclusive governance structures that reflect the diverse contexts in which AI systems operate. One hundred and twenty countries representing 85 per cent of humanity, researchers argue, have the collective leverage to insist on these conditions. Whether they will exercise that leverage remains an open question.
Building Accountability That Works
If the current approach to AI governance is inadequate, what would a more effective system look like? The evidence points to several structural requirements that go beyond the familiar call for more principles or better audits.
First, accountability must be anticipatory rather than reactive. The current model waits for harm to occur, then attempts to assign responsibility through litigation or regulatory action. By the time a court rules on an algorithmic discrimination case, the affected individuals may have lost housing, employment, or access to healthcare. Meaningful accountability requires mechanisms that identify and address potential harms before deployment, not after damage has been documented across thousands of decisions.
Second, enforcement must be resourced proportionally to the scale of AI deployment. The ISACA survey finding that only 31 per cent of organisations have comprehensive AI policies is not simply a failure of corporate governance. It reflects a broader reality in which the institutions responsible for oversight, whether regulatory agencies, standards bodies, or civil society organisations, lack the funding, technical expertise, and legal authority to match the pace of industry. The EU AI Office is a start, but its capacity to oversee a technology sector that spans hundreds of thousands of organisations across 27 Member States remains untested.
Third, transparency must extend beyond model documentation to encompass the full chain of AI development and deployment. The Partnership on AI's call for standardised documentation templates and strengthened reporting frameworks is necessary but insufficient. What is needed is a transparency regime that enables affected communities, not just regulators and auditors, to understand how algorithmic decisions are made, what data they rely on, and what recourse is available when those decisions cause harm.
Fourth, the costs of non-compliance must be sufficiently high to alter corporate behaviour. The EU AI Act's fines of up to seven per cent of global annual turnover are significant on paper. Whether they will be enforced consistently, and whether they will prove sufficient to deter violations by companies with revenues in the hundreds of billions, remains to be seen. The history of technology regulation suggests that fines alone are rarely sufficient; structural remedies, including requirements to modify or withdraw harmful systems, are necessary to create genuine accountability.
Fifth, governance frameworks must be designed for iteration, not permanence. The five-year legislative cycle that produced the EU AI Act is incompatible with a technology that transforms every six months. Regulatory approaches must incorporate mechanisms for rapid adaptation, whether through delegated authority, technical standards that can be updated without legislative amendment, or sunset clauses that force periodic reassessment.
None of these requirements are novel. Researchers, civil society organisations, and some regulators have been advocating for them for years. The obstacle is not a lack of ideas but a lack of political will, complicated by the enormous economic interests that benefit from the current arrangement in which deployment runs ahead of governance and the costs of failure are borne by those least equipped to absorb them.
The Cost Ledger
When enforcement mechanisms fail to materialise in time, the costs are distributed with grim predictability. Workers screened out by biased hiring algorithms never know why they were rejected. Tenants denied housing by opaque scoring systems cannot challenge a decision they cannot see. Patients who receive inferior treatment recommendations based on their race are unlikely to discover that an algorithm played a role. Consumers shown different prices for identical goods based on algorithmic profiling have no way to compare their experience against other buyers.
These costs are real but largely invisible, diffused across millions of individual decisions and absorbed by people who lack the resources, information, or institutional support to seek redress. The aggregate effect is a systematic transfer of risk from the organisations that build and deploy AI systems to the individuals and communities that interact with them. That transfer is not an accident. It is the predictable consequence of a governance architecture that prioritises speed of deployment over adequacy of oversight.
The financial scale of the problem is staggering when considered in aggregate. Individual settlements and fines, whether SafeRent's two million dollar payout, Clearview AI's 51.75 million dollar settlement, or the Dutch data authority's 30.5 million euro fine, may appear substantial in isolation. But set against the revenues of the companies deploying these systems and the cumulative harm inflicted on millions of affected individuals, they represent a cost of doing business rather than a meaningful deterrent. The economics of non-compliance remain, for the moment, firmly in favour of deployment first and accountability later.
The question of who bears the cost when accountability fails is, ultimately, a question about power. Those with the resources to influence policy, fund litigation, and shape public discourse are best positioned to protect themselves from algorithmic harm. Those without those resources are not. Until governance frameworks are designed to address that asymmetry directly, rather than assuming that better principles or more audits will suffice, the enforcement gap will persist.
The field of AI ethics has accomplished something genuinely remarkable in building global consensus around core values. That achievement should not be dismissed. But consensus without enforcement is aspiration without consequence. And aspiration without consequence is, in the end, just another way of saying that nobody is responsible.
References and Sources
Jobin, A., Ienca, M., and Vayena, E. “The global landscape of AI ethics guidelines.” Nature Machine Intelligence, 2019.
Corrêa, N. K., et al. “Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance.” Patterns, 2023. Available at: https://www.sciencedirect.com/science/article/pii/S2666389923002416
ISACA. “AI Use Is Outpacing Policy and Governance, ISACA Finds.” Press release, June 2025. Available at: https://www.isaca.org/about-us/newsroom/press-releases/2025/ai-use-is-outpacing-policy-and-governance-isaca-finds
Partnership on AI. “Six AI Governance Priorities for 2026.” 2026. Available at: https://partnershiponai.org/resource/six-ai-governance-priorities/
European Commission. “AI Act: Shaping Europe's Digital Future.” Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
International Labour Organization. “Governing AI in the World of Work: A Review of Global Ethics Guidelines.” Available at: https://www.ilo.org/resource/article/governing-ai-world-work-review-global-ethics-guidelines
World Economic Forum. “Scaling Trustworthy AI: How to Turn Ethical Principles into Tangible Practices.” January 2026. Available at: https://www.weforum.org/stories/2026/01/scaling-trustworthy-ai-into-global-practice/
AI Now Institute. “Algorithmic Accountability: Moving Beyond Audits.” Available at: https://ainowinstitute.org/publications/algorithmic-accountability
Trump, D. “Ensuring a National Policy Framework for Artificial Intelligence.” Executive Order, December 2025. Available at: https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/
MIT Technology Review. “We Read the Paper That Forced Timnit Gebru Out of Google. Here's What It Says.” December 2020. Available at: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Quinn Emanuel Urquhart and Sullivan, LLP. “When Machines Discriminate: The Rise of AI Bias Lawsuits.” Available at: https://www.quinnemanuel.com/the-firm/publications/when-machines-discriminate-the-rise-of-ai-bias-lawsuits/
Clearview AI Class Action Settlement, Northern District of Illinois. Approved March 2025. Available at: https://clearviewclassaction.com/
Dutch Data Protection Authority. Clearview AI fine of EUR 30.5 million, September 2024. Reported by US News and World Report. Available at: https://www.usnews.com/news/business/articles/2024-09-03/clearview-ai-fined-33-7-million-by-dutch-data-protection-watchdog-over-illegal-database-of-faces
AI Action Summit, Paris, February 2025. Available at: https://en.wikipedia.org/wiki/AI_Action_Summit
E-International Relations. “Tech Imperialism Reloaded: AI, Colonial Legacies, and the Global South.” February 2025. Available at: https://www.e-ir.info/2025/02/17/tech-imperialism-reloaded-ai-colonial-legacies-and-the-global-south/
Colorado SB 205 (2024). AI bias audit and risk assessment requirements, effective February 2026.
AIhub. “Top AI Ethics and Policy Issues of 2025 and What to Expect in 2026.” March 2026. Available at: https://aihub.org/2026/03/04/top-ai-ethics-and-policy-issues-of-2025-and-what-to-expect-in-2026/
Crescendo AI. “27 Biggest AI Controversies of 2025-2026.” Available at: https://www.crescendo.ai/blog/ai-controversies
Harvard Journal of Law and Technology. “AI Auditing: First Steps Towards the Effective Regulation of AI.” February 2025. Available at: https://jolt.law.harvard.edu/assets/digestImages/Farley-Lansang-AI-Auditing-publication-2.13.2025.pdf
RealClearPolicy. “America's AI Governance Gap Needs Independent Oversight.” April 2026. Available at: https://www.realclearpolicy.com/articles/2026/04/03/americas_ai_governance_gap_needs_independent_oversight_1174471.html
Cedars-Sinai study on LLM treatment recommendation bias by patient race. Published June 2025. Reported in multiple sources.
Ada Lovelace Institute, AI Now Institute, and Open Government Partnership. “Algorithmic Accountability for the Public Sector.” Available at: https://www.adalovelaceinstitute.org/project/algorithmic-accountability-public-sector/
Infosecurity Magazine. “Two-Thirds of Organizations Failing to Address AI Risks, ISACA Finds.” Available at: https://www.infosecurity-magazine.com/news/failing-address-ai-risks-isaca/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk