The AI Governance Crisis: Principles Everywhere, Protection Nowhere

In November 2021, something remarkable happened. All 193 member states of UNESCO, a body not known for unanimous agreement on much of anything, adopted the first global standard on the ethics of artificial intelligence. The Recommendation on the Ethics of Artificial Intelligence was heralded as a watershed moment. Finally, the international community had come together to establish common values and principles for the responsible development of AI. The document spoke of transparency, accountability, human rights, and dignity. It was, by all accounts, a triumph of multilateral cooperation.

Four years later, the triumph looks rather hollow. In Denmark, algorithmic systems continue to flag ethnic minorities and people with disabilities as potential welfare fraudsters. In the United States, facial recognition technology still misidentifies people of colour at rates that should make any engineer blush. And across the European Union, companies scramble to comply with the AI Act whilst simultaneously lobbying to hollow out its most meaningful provisions. The principles are everywhere. The protections remain elusive.

This is the central paradox of contemporary AI governance: we have never had more ethical frameworks, more principles documents, more international recommendations, and more national strategies. Yet the gap between what these frameworks promise and what they deliver continues to widen. The question is no longer whether we need AI governance. The question is why, despite an abundance of stated commitments, so little has changed for those most vulnerable to algorithmic harm.

The Multiplication of Frameworks Without Accountability

The landscape of AI governance has become remarkably crowded. The OECD AI Principles, first adopted in 2019 and updated in 2024, now count 47 adherents including the European Union. The G7's Hiroshima AI Process has produced its own set of guiding principles. China has issued a dense web of administrative rules on algorithmic recommendation, deep synthesis, and generative AI. The United States has seen more than 1,000 AI-related bills introduced across nearly every state in 2024 and 2025. The European Union's AI Act, which entered into force on 1 August 2024, represents the most comprehensive attempt yet to create binding legal obligations for AI systems.

On paper, this proliferation might seem like progress. More governance frameworks should mean more accountability, more oversight, more protection. In practice, something quite different is happening. The multiplication of principles has created what scholars describe as a “weak regime complex,” a polycentric structure where work is generally siloed and coordination remains elusive. Each new framework adds to a growing cacophony of competing standards, definitions, and enforcement mechanisms that vary wildly across jurisdictions.

The consequences of this fragmentation are not abstract. Companies operating internationally face a patchwork of requirements that creates genuine compliance challenges whilst simultaneously providing convenient excuses for inaction. The EU AI Act defines AI systems one way; Chinese regulations define them another. What counts as a “high-risk” application in Brussels may not trigger any regulatory attention in Beijing or Washington. This jurisdictional complexity does not merely burden businesses. It creates gaps through which harm can flow unchecked.

Consider the fundamental question of what an AI system actually is. The EU AI Act has adopted a definition that required extensive negotiation and remains subject to ongoing interpretation challenges. As one analysis noted, “Defining what counts as an 'AI system' remains challenging and requires multidisciplinary input.” This definitional ambiguity matters because it determines which systems fall within regulatory scope and which escape it entirely. When sophisticated algorithmic decision-making tools can be classified in ways that avoid scrutiny, the protective intent of governance frameworks is undermined from the outset.

The three dominant approaches to AI regulation illustrate this fragmentation. The European Union has opted for a risk-based framework with binding legal obligations, prohibited practices, and substantial penalties. The United States has pursued a sectoral approach, with existing regulators adapting their mandates to address AI within their domains whilst federal legislation remains stalled. China has developed what analysts describe as an “agile and iterative” approach, issuing targeted rules on specific applications rather than comprehensive legislation. Each approach reflects different priorities, different legal traditions, and different relationships between state and industry. The result is a global governance landscape in which compliance with one jurisdiction's requirements may not satisfy another's, and in which the gaps between frameworks create opportunities for harm to proliferate.

The Industry's Hand on the Regulatory Pen

Perhaps nowhere is the gap between stated principles and lived reality more stark than in the relationship between those who develop AI systems and those who regulate them. The technology industry has not been a passive observer of the governance landscape. It has been an active, well-resourced participant in shaping it.

Research from Corporate Europe Observatory found that the technology industry now spends approximately 151 million euros annually on lobbying in Brussels, a rise of more than 50 per cent compared to four years ago. The top spenders include Meta at 10 million euros, and Microsoft and Apple at 7 million euros each. During the final stages of the EU AI Act negotiations, technology companies were given what watchdog organisations described as “privileged and disproportionate access” to high-level European decision-makers. In 2023, fully 86 per cent of meetings on AI held by high-level Commission officials were with industry representatives.

This access has translated into tangible outcomes. Important safeguards on general-purpose AI, including fundamental rights checks, were removed from the AI Act during negotiations. The German and French governments pushed for exemptions that benefited domestic AI startups, with German company Aleph Alpha securing 12 high-level meetings with government representatives, including Chancellor Olaf Scholz, between June and November 2023. France's Mistral AI established a lobbying office in Brussels led by Cédric O, the former French secretary of state for digital transition known to have the ear of President Emmanuel Macron.

The result is a regulatory framework that, whilst representing genuine progress in many areas, has been shaped by the very entities it purports to govern. As one analysis observed, “there are signs of a regulatory arms race where states, private firms and lobbyists compete to set the shape of AI governance often with the aim of either forestalling regulation or privileging large incumbents.”

This dynamic is not unique to Europe. In the United States, efforts to establish federal AI legislation have repeatedly stalled, with industry lobbying playing a significant role. A 2025 budget reconciliation bill would have imposed a ten-year moratorium on the enforcement of state and local AI laws; the provision was stripped out only after the Senate voted 99 to 1 to remove it. Its very inclusion demonstrated the industry's ambition; its removal showed that resistance remains possible, though hardly guaranteed.

The Dismantling of Internal Oversight

The power imbalance between AI developers and those seeking accountability is not merely a matter of lobbying access. It is structurally embedded in how the industry organises itself around ethics. In recent years, major technology companies have systematically dismantled or diminished the internal teams responsible for ensuring their products do not cause harm.

In March 2023, Microsoft laid off its entire AI ethics team whilst simultaneously doubling down on its integration of OpenAI's technology into its products. An employee speaking about the layoffs stated: “The worst thing is we've exposed the business to risk and human beings to risk in doing this.” Amazon eliminated its ethical AI unit at Twitch. Meta disbanded its Responsible Innovation team, reassigning approximately two dozen engineers and ethics researchers to work directly with product teams, effectively dispersing rather than concentrating ethical oversight. Twitter, following Elon Musk's acquisition, eliminated all but one member of its 17-person AI ethics team; that remaining person subsequently resigned.

These cuts occurred against a backdrop of accelerating AI deployment and intensifying public concern about algorithmic harm. The timing was not coincidental. As the Washington Post reported, “The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency.” When efficiency is defined in terms of quarterly returns rather than societal impact, ethics becomes a cost centre to be eliminated rather than a function to be strengthened.

The departure of Timnit Gebru from Google in December 2020 presaged this trend whilst also revealing its deeper dynamics. Gebru, the co-lead of Google's ethical AI team and a widely respected leader in AI ethics research, announced via Twitter that the company had forced her out after she co-authored a paper questioning the ethics of large language models. The paper suggested that, in their rush to build more powerful systems, companies including Google were not adequately considering the biases being built into them or the environmental costs of training increasingly large models.

As Gebru has subsequently observed: “What I've realised is that we can talk about the ethics and fairness of AI all we want, but if our institutions don't allow for this kind of work to take place, then it won't. At the end of the day, this needs to be about institutional and structural change.” Her observation cuts to the heart of the implementation gap. Principles without power are merely words. When those who raise concerns can be dismissed, when ethics teams can be eliminated, when whistleblowers lack protection, the governance frameworks that exist on paper cannot be translated into practice.

Algorithmic Systems and the Destruction of Vulnerable Lives

The human cost of this implementation gap is not theoretical. It has been documented in excruciating detail across multiple jurisdictions where algorithmic systems have been deployed against society's most vulnerable members.

The Dutch childcare benefits scandal stands as perhaps the most devastating example. Between 2005 and 2019, approximately 26,000 parents were wrongfully accused of making fraudulent benefit claims. A “self-learning” algorithm classified benefit claims by risk level, and officials then scrutinised the claims receiving the highest risk labels. As subsequent investigation revealed, claims by parents with dual citizenship were systematically identified as high-risk. Families from ethnic minority backgrounds were 22 times more likely to be investigated than native Dutch citizens. The Dutch state has formally acknowledged that “institutional racism” was part of the problem.
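That 22-fold disparity is exactly the kind of figure a routine audit of selection rates would have surfaced. The sketch below uses entirely illustrative numbers and labels, not the tax authority's actual data, but it shows how little analysis is needed once investigation logs are disaggregated by group:

```python
import pandas as pd

# Hypothetical investigation log: labels and counts are illustrative only,
# chosen so the disparity matches the 22-fold figure reported in the scandal.
log = pd.DataFrame({
    "group":        ["dual nationality"] * 100 + ["single nationality"] * 900,
    "investigated": [True] * 44 + [False] * 56 + [True] * 18 + [False] * 882,
})

# Selection rate per group: the share of claims routed to manual fraud scrutiny.
rates = log.groupby("group")["investigated"].mean()

# Disparity ratio between the two groups: here 0.44 / 0.02 = 22.
disparity = rates["dual nationality"] / rates["single nationality"]
print(rates)
print(disparity)
```

Nothing in that calculation is sophisticated. What was missing was not analytical capacity but the institutional will to run the check and act on the result.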

The consequences for affected families were catastrophic. Parents were forced to repay tens of thousands of euros in benefits they never owed. Many lost their homes, their savings, and their marriages. At least 3,532 children were taken from their families and forced into foster care. There were suicides. On 15 January 2021, Prime Minister Mark Rutte announced the resignation of his government, accepting responsibility for what he described as a fundamental failure of the rule of law. “The rule of law must protect its citizens from an all-powerful government,” Rutte told reporters, “and here that's gone terribly wrong.”

This was not an isolated failure. In Australia, a system known as Robodebt accused 400,000 welfare recipients of misreporting their income, generating automated debt notices based on flawed calculations. In 2019, the Federal Court ruled the programme unlawful, and the government was forced to repay 1.2 billion Australian dollars. Analysis of the system found that it was “especially harmful for populations with a volatile income and numerous previous employers.” When the system's technical shortcuts were combined with a sharp reduction in human oversight, the conditions for a destructive system were in place.
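The core flaw is straightforward to reconstruct in outline: annual income reported to the tax office was averaged evenly across the year and compared against what people had declared fortnight by fortnight, so anyone whose earnings were concentrated in part of the year appeared to have under-reported. The sketch below is a deliberate simplification with illustrative figures, omitting the real system's thresholds and offsets, but it shows how a phantom debt arises for precisely the volatile earners identified in that analysis:

```python
# Simplified illustration of the income-averaging flaw behind Robodebt.
# Figures are invented; the real system involved further thresholds and steps.

FORTNIGHTS = 26
annual_income_from_tax_office = 26_000                 # AUD, all earned in half the year
actual_fortnightly_income = [2_000] * 13 + [0] * 13    # worked six months, then unemployed

# What the person truthfully reported while receiving benefits: zero income.
reported_while_on_benefits = actual_fortnightly_income[13:]

# What the automated match assumed: the annual figure spread evenly across the year.
averaged_fortnightly = annual_income_from_tax_office / FORTNIGHTS   # 1,000 per fortnight

# The system saw a 1,000-dollar "discrepancy" in every benefit fortnight and raised
# a debt, even though the person's declarations were accurate for those weeks.
phantom_discrepancy = sum(averaged_fortnightly - r for r in reported_while_on_benefits)
print(phantom_discrepancy)   # 13,000.0 of apparent under-reporting that never existed
```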

These cases share common characteristics: algorithmic systems deployed against people with limited power to contest decisions, opacity that prevented individuals from understanding why they had been flagged, and institutional cultures that prioritised efficiency over accuracy. As Human Rights Watch has observed, “some of the algorithms that attract the least attention are capable of inflicting the most harm, for example, algorithms that are woven into the fabric of government services and dictate whether people can afford food, housing, and health care.”

The pattern extends beyond welfare systems. In Denmark, data-driven fraud control algorithms risk discriminating against low-income groups, racialised groups, migrants, refugees, ethnic minorities, people with disabilities, and older people. By flagging “unusual” living situations such as multi-occupancy, intergenerational households, and “foreign affiliations” as indicators of higher risk of benefit fraud, the government has employed what critics describe as social scoring, a practice that would be prohibited under the EU's AI Act once its provisions on banned practices take full effect.

Opacity, Remedies, and the Failure of Enforcement

Understanding why governance frameworks fail to prevent such harms requires examining the structural barriers to accountability. AI systems are frequently described as “black boxes,” their decision-making processes obscure even to those who deploy them. The European Network of National Human Rights Institutions has identified this opacity as a fundamental challenge: “The decisions made by machine learning or deep learning processes can be impossible for humans to trace and therefore to audit or explain. The obscurity of AI systems can preclude individuals from recognising if and why their rights were violated and therefore from seeking redress.”

This technical opacity is compounded by legal and institutional barriers. Even when individuals suspect they have been harmed by an algorithmic decision, the pathways to remedy remain unclear. The EU AI Act does not specify applicable deadlines for authorities to act, limitation periods, the right of complainants to be heard, or access to investigation files. These procedural elements are largely left to national law, which varies significantly among member states. The absence of a “one-stop shop” mechanism means operators will have to deal with multiple authorities in different jurisdictions, creating administrative complexity that benefits well-resourced corporations whilst disadvantaging individual complainants.

The enforcement mechanisms that do exist face their own challenges. The EU AI Act grants the AI Office exclusive jurisdiction to enforce provisions relating to general-purpose AI models, but that same office is tasked with developing Union expertise and capabilities in AI. This dual role, one analysis noted, “may pose challenges for the impartiality of the AI Office, as well as for the trust and cooperation of operators.” When the regulator is also charged with promoting the technology it regulates, the potential for conflict of interest is structural rather than incidental.

Penalties for non-compliance exist on paper but remain largely untested. The EU AI Act provides for fines of up to 35 million euros or 7 per cent of worldwide annual turnover, whichever is higher, for the most serious violations. Whether these penalties will be imposed, and whether they will prove sufficient to deter well-capitalised technology companies, remains to be seen. A 2024 Gartner survey found that whilst 80 per cent of large organisations claim to have AI governance initiatives, fewer than half can demonstrate measurable maturity. Most lack a structured way to connect policies with practice. The result is a widening “governance gap” in which technology advances faster than accountability frameworks.
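For the largest technology companies it is the percentage limb of that formula, not the headline 35 million euros, that determines exposure. A back-of-envelope calculation with a purely illustrative turnover figure makes the point:

```python
def max_ai_act_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious AI Act violations: 35 million euros or
    7 per cent of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Illustrative turnover of roughly Big Tech scale (not any company's real accounts):
print(f"{max_ai_act_fine_eur(200e9):,.0f}")   # 14,000,000,000 -> the 7% limb dominates
```

Whether a regulator would ever levy a fine of that order against one of its jurisdiction's most powerful commercial actors is, of course, the open question.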

Exclusion and the Voices Left Out of Governance

The fragmentation of AI governance carries particular implications for the Global South. Fewer than a third of developing countries have national AI strategies, and 118 mostly developing nations remain absent from global AI governance discussions. The OECD's 38 member states are overwhelmingly high-income countries, and the organisation does not provide a forum for negotiation with low- and middle-income countries. UNESCO is more inclusive, with all 193 member states adopting its Recommendation, but inclusion in a recommendation does not translate into influence over how AI systems are actually developed and deployed.

The digital infrastructure necessary to participate meaningfully in the AI economy is itself unevenly distributed. Africa holds less than 1 per cent of global data capacity and would need 2.6 trillion dollars in investment by 2030 to bridge the infrastructure gap. AI is energy-intensive; training a frontier-scale model can consume thousands of megawatt-hours, a burden that fragile power grids in many developing countries cannot support. And developing countries outside China accounted for less than 10 per cent of global AI patents as of 2024.

This exclusion matters because governance frameworks are being written primarily in Washington, Brussels, and Beijing. Priorities get set without participation from those who will implement and use these tools. Conversations about which AI applications matter, whether crop disease detection or automated trading systems, climate early warning or content moderation, happen without Global South governments at the table. As one analysis from Brookings observed, “If global AI governance continues to predominantly exclude the Global South, then economic and developmental disparities between upper-income and lower-income countries will worsen.”

Some initiatives have attempted to address this imbalance. The Partnership for Global Inclusivity on AI, led by the United States and eight prominent AI companies, has committed more than 100 million dollars to enhancing AI capabilities in developing countries. Ghana's ten-year National AI Strategy aims to achieve significant AI penetration in key sectors. The Global Digital Compact, adopted in September 2024, recognises digital connectivity as foundational to development. But these efforts operate against a structural reality in which the companies developing the most powerful AI systems are concentrated in a handful of wealthy nations, and the governance frameworks shaping their deployment are crafted primarily by and for those same nations.

Ethics as Performance, Compliance as Theatre

Perhaps the most troubling aspect of the current governance landscape is the extent to which the proliferation of principles has itself become a form of compliance theatre. When every major technology company has a responsible AI policy, when every government has signed onto at least one international AI ethics framework, when every industry association can point to voluntary commitments, the appearance of accountability can substitute for its substance.

The Securities and Exchange Commission in the United States has begun pursuing charges against companies for “AI washing,” a term describing the practice of overstating AI capabilities and credentials. In autumn 2024, the Federal Trade Commission announced Operation AI Comply, an enforcement sweep targeting companies that allegedly misused “AI hype” to defraud consumers, and the SEC flagged AI washing as a top examination priority for 2025. But these enforcement actions address only the most egregious cases of misrepresentation. They do not reach the more subtle ways in which companies can appear to embrace ethical AI whilst resisting meaningful accountability.

The concept of “ethics washing” has gained increasing recognition as a descriptor for insincere corporate initiatives. As Carnegie Council President Joel Rosenthal has stated: “Ethics washing is a reality in the performative environment in which we live, whether by corporations, politicians, or universities.” In the AI context, ethics washing occurs when companies overstate their capabilities in responsible AI, creating an uneven playing field where genuine efforts are discouraged or overshadowed by exaggerated claims.

This performative dimension helps explain why the proliferation of principles has not translated into proportionate protections. When signing onto an ethical framework carries no enforcement risk, when voluntary commitments can be abandoned when they become inconvenient, when internal ethics teams can be disbanded without consequence, principles function as reputation management rather than genuine constraint. The multiplicity of frameworks may actually facilitate this dynamic by allowing organisations to select the frameworks most amenable to their existing practices whilst claiming compliance with international standards.

Competition, Institutions, and the Barriers to Progress

Scholars of AI governance have identified fundamental barriers that explain why progress remains so difficult. First-order cooperation problems stem from interstate competition; nations view AI as strategically important and are reluctant to accept constraints that might disadvantage their domestic industries. Second-order cooperation problems arise from dysfunctional international institutions that lack the authority or resources to enforce meaningful standards. The weak regime complex that characterises global AI governance has some linkages between institutions, but work is generally siloed and coordination insufficient.

The timelines for implementing governance frameworks compound these challenges. The EU AI Act will not be fully applicable until August 2026, with some provisions delayed until August 2027. As one expert observed, “two years is just about the minimum an organisation needs to prepare for the AI Act, and many will struggle to achieve this.” During these transition periods, AI technology continues to advance. The systems that will be regulated in 2027 may look quite different from those contemplated when the regulations were drafted.

The emergence of agentic AI systems, capable of autonomous decision-making, introduces new risks that existing frameworks were not designed to address. These systems operate with less human oversight, make decisions in ways that may be difficult to predict or explain, and create accountability gaps when things go wrong. The governance frameworks developed for earlier generations of AI may prove inadequate for technologies that evolve faster than regulatory capacity.

Independent Voices and the Fight for Accountability

Despite these structural barriers, individuals and organisations continue to push for meaningful accountability. Joy Buolamwini, who founded the Algorithmic Justice League in 2016, has demonstrated through rigorous research how facial analysis systems fail people of colour. Her “Gender Shades” project at MIT showed that commercial gender classification systems had error rates of less than 1 per cent for lighter-skinned males but as high as 35 per cent for darker-skinned females. Her work prompted IBM and Microsoft to take corrective action, and by 2020, every U.S.-based company her team had audited had stopped selling facial recognition technology to law enforcement. In 2019, she testified before the United States House Committee on Oversight and Reform about the risks of facial recognition technology.
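The methodological point of Gender Shades is simple but consequential: report error rates for each intersectional subgroup separately rather than hiding them inside a single aggregate accuracy figure. A minimal sketch of that disaggregated evaluation, using invented counts whose extremes merely echo the disparities the study reported, looks like this:

```python
import pandas as pd

# Illustrative audit table: subgroup labels follow the Gender Shades framing,
# but the counts below are invented for demonstration, not the study's data.
results = pd.DataFrame({
    "subgroup": ["lighter_male", "lighter_female", "darker_male", "darker_female"],
    "n":        [500, 500, 500, 500],
    "errors":   [4, 35, 60, 170],
})

# Per-subgroup error rate: the figure an aggregate benchmark would average away.
results["error_rate"] = results["errors"] / results["n"]
print(results[["subgroup", "error_rate"]])   # 0.8% for lighter males, 34% for darker females
```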

Safiya Umoja Noble, a professor at UCLA and 2021 MacArthur Foundation Fellow, has documented in her book “Algorithms of Oppression” how search engines reinforce racism and sexism. Her work has established that data discrimination is a real social problem, demonstrating how the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of internet search engines, leads to biased algorithms that privilege whiteness and discriminate against people of colour. She is co-founder of the UCLA Center for Critical Internet Inquiry and received the inaugural NAACP-Archewell Digital Civil Rights Award in 2022.

The AI Now Institute, co-led by Amba Kak, continues to advance policy recommendations addressing concerns with artificial intelligence and concentrated power. In remarks before the UN General Assembly in September 2025, Kak emphasised that “the current scale-at-all-costs trajectory of AI is functioning to further concentrate power within a handful of technology giants” and that “this ultra-concentrated power over AI is increasingly a threat to nations' strategic independence, and to democracy itself.”

These researchers and advocates operate largely outside the corporate structures that dominate AI development. Their independence allows them to raise uncomfortable questions that internal ethics teams might be discouraged from pursuing. But their influence remains constrained by the resource imbalance between civil society organisations and the technology industry.

What Real Accountability Would Require

If the current trajectory of AI governance is insufficient, what might genuine accountability look like? The evidence suggests several necessary conditions.

First, enforcement mechanisms must have real teeth. Penalties that represent a meaningful fraction of corporate revenues, not just headline-grabbing numbers that are rarely imposed, would change the calculus for companies weighing compliance costs against potential fines. The EU AI Act's provisions for fines up to 7 per cent of worldwide turnover represent a step in this direction, but their effectiveness will depend on whether authorities are willing to impose them.

Second, those affected by algorithmic decisions need clear pathways to challenge them. This requires both procedural harmonisation across jurisdictions and resources to support individuals navigating complex regulatory systems. The absence of a one-stop shop in the EU creates barriers that sophisticated corporations can manage but individual complainants cannot.

Third, the voices of those most vulnerable to algorithmic harm must be centred in governance discussions. This means not just including Global South countries in international forums but ensuring that communities affected by welfare algorithms, hiring systems, and predictive policing tools have meaningful input into how those systems are governed.

Fourth, transparency must extend beyond disclosure to comprehensibility. Requiring companies to explain their AI systems is meaningful only if those explanations can be understood by regulators, affected individuals, and the public. The technical complexity of AI systems cannot become a shield against accountability.

Fifth, the concentration of power in AI development must be addressed directly. When a handful of companies control the most advanced AI capabilities, governance frameworks that treat all developers equivalently will fail to address the structural dynamics that generate harm. Antitrust enforcement, public investment in alternatives, and requirements for interoperability could all contribute to a more distributed AI ecosystem.

The Distance Between Rhetoric and Reality

The gap between AI governance principles and their practical implementation is not merely a technical or bureaucratic problem. It reflects deeper questions about who holds power in the digital age and whether democratic societies can exercise meaningful control over technologies that increasingly shape life chances.

The families destroyed by the Dutch childcare benefits scandal were not failed by a lack of principles. The Netherlands was a signatory to human rights conventions, a member of the European Union, a participant in international AI ethics initiatives. What failed them was the translation of those principles into systems that actually protected their rights. The algorithm that flagged them did not consult the UNESCO Recommendation on the Ethics of Artificial Intelligence before classifying their claims as suspicious.

As AI systems become more capable and more pervasive, the stakes of this implementation gap will only increase. Agentic AI systems making autonomous decisions, large language models reshaping information access, algorithmic systems determining who gets housing, employment, healthcare, and welfare, all of these applications amplify both the potential benefits and the potential harms of artificial intelligence. Governance frameworks that exist only on paper will not protect people from systems that operate in the real world.

The proliferation of principles may be necessary, but it is manifestly not sufficient. What is needed is the political will to enforce meaningful accountability, the structural changes that would give affected communities genuine power, and the recognition that governance is not a technical problem to be solved but an ongoing political struggle over who benefits from technological change and who bears its costs.

The researchers who first documented algorithmic bias, the advocates who pushed for stronger regulations, the journalists who exposed scandals like Robodebt and the Dutch benefits affair, all of them understood something that the architects of governance frameworks sometimes miss: accountability is not a principle to be declared. It is a practice to be enforced, contested, and continuously renewed. Until that practice matches the rhetoric, the mirage of AI governance will continue to shimmer on the horizon, always promised, never quite arrived.


References and Sources

  1. UNESCO. “193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence.” UN News, November 2021. https://news.un.org/en/story/2021/11/1106612

  2. European Commission. “AI Act enters into force.” 1 August 2024. https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en

  3. OECD. “OECD updates AI Principles to stay abreast of rapid technological developments.” May 2024. https://www.oecd.org/en/about/news/press-releases/2024/05/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.html

  4. European Digital Strategy. “Governance and enforcement of the AI Act.” https://digital-strategy.ec.europa.eu/en/policies/ai-act-governance-and-enforcement

  5. MIT Sloan Management Review. “Organizations Face Challenges in Timely Compliance With the EU AI Act.” https://sloanreview.mit.edu/article/organizations-face-challenges-in-timely-compliance-with-the-eu-ai-act/

  6. Corporate Europe Observatory. “Don't let corporate lobbying further water down the AI Act.” March 2024. https://corporateeurope.org/en/2024/03/dont-let-corporate-lobbying-further-water-down-ai-act-lobby-watchdogs-warn-meps

  7. Euronews. “Big Tech spending on Brussels lobbying hits record high.” October 2025. https://www.euronews.com/next/2025/10/29/big-tech-spending-on-brussels-lobbying-hits-record-high-report-claims

  8. Washington Post. “Tech companies are axing 'ethical AI' teams just as the tech explodes.” March 2023. https://www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/

  9. Stanford HAI. “Timnit Gebru: Ethical AI Requires Institutional and Structural Change.” https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change

  10. Wikipedia. “Dutch childcare benefits scandal.” https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal

  11. Human Rights Watch. “The Algorithms Too Few People Are Talking About.” January 2024. https://www.hrw.org/news/2024/01/05/algorithms-too-few-people-are-talking-about

  12. MIT News. “Study finds gender and skin-type bias in commercial artificial-intelligence systems.” February 2018. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

  13. NYU Press. “Algorithms of Oppression” by Safiya Umoja Noble. https://nyupress.org/9781479837243/algorithms-of-oppression/

  14. AI Now Institute. “AI Now Co-ED Amba Kak Gives Remarks Before the UN General Assembly on AI Governance.” September 2025. https://ainowinstitute.org/news/announcement/ai-now-co-ed-amba-kak-gives-remarks-before-the-un-general-assembly-on-ai-governance

  15. CSIS. “From Divide to Delivery: How AI Can Serve the Global South.” https://www.csis.org/analysis/divide-delivery-how-ai-can-serve-global-south

  16. Brookings. “AI in the Global South: Opportunities and challenges towards more inclusive governance.” https://www.brookings.edu/articles/ai-in-the-global-south-opportunities-and-challenges-towards-more-inclusive-governance/

  17. Carnegie Council. “Ethics washing.” https://carnegiecouncil.org/explore-engage/key-terms/ethics-washing

  18. Oxford Academic. “Global AI governance: barriers and pathways forward.” International Affairs. https://academic.oup.com/ia/article/100/3/1275/7641064

  19. IAPP. “AI Governance in Practice Report 2024.” https://iapp.org/resources/article/ai-governance-in-practice-report

  20. ENNHRI. “Key human rights challenges of AI.” https://ennhri.org/ai-resource/key-human-rights-challenges/

  21. ProMarket. “The Politics of Fragmentation and Capture in AI Regulation.” July 2025. https://www.promarket.org/2025/07/07/the-politics-of-fragmentation-and-capture-in-ai-regulation/

  22. UNCTAD. “AI's $4.8 trillion future: UN Trade and Development alerts on divides, urges action.” https://unctad.org/news/ais-48-trillion-future-un-trade-and-development-alerts-divides-urges-action

  23. ScienceDirect. “Agile and iterative governance: China's regulatory response to AI.” https://www.sciencedirect.com/science/article/abs/pii/S2212473X25000562

  24. Duke University Sanford School of Public Policy. “Dr. Joy Buolamwini on Algorithmic Bias and AI Justice.” https://sanford.duke.edu/story/dr-joy-buolamwini-algorithmic-bias-and-ai-justice/


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
