Western AI Rules for Everyone: The Hidden Cost to Developing Nations

In July 2024, African Union ministers gathered in Accra, Ghana, for the 45th Ordinary Session of the Executive Council. Their agenda included a document that had been two years in the making: the Continental Artificial Intelligence Strategy. When they endorsed it, something remarkable happened. Africa had, for the first time, articulated a collective vision for artificial intelligence governance that explicitly rejected the one-size-fits-all approach emanating from Brussels and Washington. The strategy called for “adapting AI to African realities,” with systems that “reflect our diversity, languages, culture, history, and geographical contexts.”

Commissioner Amani Abou-Zeid, who leads the African Union's infrastructure and energy portfolio, framed the endorsement as both timely and strategic. The document represented years of expert consultations, technical committee reviews, and ministerial negotiations. It positioned Africa not as a passive recipient of global technology standards but as a continent capable of authoring its own governance vision.

Yet the celebration was tempered by a sobering reality. Even as African nations crafted their own vision, the European Union's AI Act had already entered into force on 1 August 2024, establishing what many expect to become the de facto global standard. Companies doing business with European markets must comply regardless of where they are headquartered. The compliance costs alone, estimated at approximately 52,000 euros annually per high-risk AI model according to a 2021 EU study, represent a significant barrier for technology firms in developing economies. This figure comprises roughly 29,000 euros for internal compliance requirements such as documentation and human oversight, plus 23,000 euros for external auditing costs related to mandatory conformity assessments. The penalties for non-compliance are even more daunting: fines of up to 35 million euros or 7 per cent of annual turnover, whichever is higher, for the most serious violations.
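To make the scale of those penalty provisions concrete, here is a minimal sketch of the fine ceiling described above. The function name and the turnover figures are illustrative assumptions, not drawn from the Act itself:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Ceiling on fines for the most serious violations of the EU AI Act:
    35 million euros or 7 per cent of annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A multinational with 2 billion euros in turnover faces a ceiling of
# 140 million euros; a small firm is still exposed to the 35 million floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
print(f"{max_fine_eur(10_000_000):,.0f}")     # 35,000,000
```

The flat 35 million euro floor means exposure does not scale down with company size, a point that matters for small firms in developing markets.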

This is the new architecture of power in the age of artificial intelligence. And for nations across the Global South, it poses a question that cuts to the heart of sovereignty itself: when wealthy nations establish regulatory frameworks that claim universal applicability while embedding distinctly Western assumptions about privacy, individual autonomy, and acceptable risk, does adoption by developing countries constitute genuine choice or something more coercive?

The Mechanics of Regulatory Hegemony

The phenomenon has a name: the Brussels Effect. Coined by Anu Bradford, the Henry L. Moses Professor of Law and International Organization at Columbia Law School, the term describes the European Union's extraordinary ability to shape global markets through unilateral regulation. Bradford, who also serves as Director of the European Legal Studies Center at Columbia and a senior scholar at the Columbia Business School's Chazen Institute for Global Business, published her foundational research on this topic in 2012. Her 2020 book expanding the concept, The Brussels Effect: How the European Union Rules the World, was recognised by Foreign Affairs as one of that year's most important works, with reviewer Andrew Moravcsik calling it “the single most important book on Europe's influence to appear in a decade.”

Bradford's research demonstrates how EU standards become entrenched in legal frameworks across both developed and developing markets, leading to what she calls a “Europeanization” of global commerce. The Brussels Effect manifests in two forms: de facto, when companies universally follow EU rules to standardise products across markets, and de jure, when formal legislation is passed in other countries aligning with EU law. Both dynamics serve to expand the reach of European regulatory philosophy far beyond the continent's borders.

The mechanism is deceptively simple. The EU represents approximately 450 million consumers with significant purchasing power. Companies seeking access to this market must comply with EU regulations. Rather than maintaining separate product lines for different jurisdictions, multinational corporations typically adopt EU standards globally. It proves more economical to build one product that meets the strictest requirements than to maintain parallel systems for different regulatory environments. Local firms in developing countries that wish to participate in supply chains or partnerships with these multinationals then find themselves adopting the same standards by necessity rather than choice.

The General Data Protection Regulation offers a preview of how this dynamic unfolds. Since its implementation in 2018, GDPR-style data protection laws have proliferated worldwide. Brazil enacted its Lei Geral de Proteção de Dados. India has implemented personal data protection legislation. South Africa, Kenya, and dozens of other nations have followed suit with laws that closely mirror European frameworks. Within two years of GDPR's enactment, major technology companies including Microsoft and Meta (then Facebook) had updated their global services to comply, making European privacy standards the effective baseline for much of the digital world.

The question of whether these adoptions represented genuine policy preferences or structural compulsion remains contested. Supporters point to the genuine harms of unregulated data collection and the value of strong privacy protections. Critics note that the costs and administrative requirements embedded in these frameworks often exceed the capacity of smaller nations and companies to implement, effectively forcing adoption of Brussels-designed solutions rather than enabling indigenous alternatives.

The AI Act appears positioned to follow a similar trajectory. Countries including Canada, Brazil, and South Korea are already developing AI governance frameworks that borrow heavily from the EU's risk-based classification system. Canada's proposed Artificial Intelligence and Data Act, in development since 2022, mirrors Europe's approach. Brazil's AI bill, approved by the Senate in late 2024, classifies systems as excessive, high, or lower risk in direct parallel to the EU model. South Korea's AI Basic Act, passed in December 2024, borrows the EU's language of “risk” and “transparency,” though it stops short of mandating third-party audits. The Atlantic Council has noted that the Act “sets the stage for global AI governance,” while researchers at Brookings observe that its influence extends far beyond formal adoption, shaping how companies worldwide develop and deploy artificial intelligence systems.

The Values Embedded in Code and Compliance

To understand why this matters, one must examine what precisely gets encoded in these regulatory frameworks. The EU AI Act is not simply a neutral set of technical standards. It embodies specific philosophical commitments about the relationship between individuals, technology, and the state.

At its foundation lies an emphasis on individual rights, transparency, and human oversight. These principles emerge from a distinctly Western liberal tradition that prioritises personal autonomy and treats privacy as an individual entitlement rather than a collective concern. The Act's risk classification system divides AI applications into four tiers: unacceptable risk, high risk, limited risk, and minimal risk. This categorisation reflects assumptions shaped by European historical experiences, particularly around surveillance, discrimination, and the protection of fundamental rights as articulated in the EU Charter.

Practices deemed unacceptable and therefore prohibited include AI systems designed for subliminal manipulation, those exploiting vulnerabilities of specific groups, social scoring by public authorities, and certain forms of biometric identification. High-risk applications, subject to extensive compliance requirements, include AI in critical infrastructure, education, employment, law enforcement, and migration management. These categories reflect European priorities: the continent's twentieth-century experiences with totalitarianism and state surveillance have shaped particular sensitivity to government overreach and discriminatory classification systems.
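The tiered logic lends itself to a schematic summary. The sketch below is purely illustrative: it maps a handful of the use cases named above onto the Act's four tiers, using hypothetical names (RiskTier, EXAMPLE_TIERS), whereas actual classification turns on detailed legal criteria rather than a simple lookup:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, as described above."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "extensive compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative mapping of use cases mentioned in the text to tiers.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "AI in employment decisions": RiskTier.HIGH,
    "AI in migration management": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```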

But these categories may not map neatly onto the priorities and experiences of other societies. Research published in AI and Society in 2025, examining perspectives from practitioners in both Global North and Global South contexts, found that “global debates on artificial intelligence ethics and governance remain dominated by high-income, AI-intensive nations, marginalizing perspectives from low- and middle-income countries and minoritized practitioners.” The study documented how power asymmetries shape not only who participates in governance discussions but what counts as legitimate ethical concern in the first place.

Scholars at Chatham House have been more direct. In a 2024 analysis of AI governance and colonialism, researchers argued that “while not all European values are bad per se, the imposition of the values of individualism that accompany Western-developed AI and its regulations may not be suitable in communities that value communal approaches.” The report noted that the regulatory power asymmetry between Europe and Africa “that is partly a historical legacy may come into play again where AI regulation is concerned.”

Consider how different cultural frameworks might approach AI governance. The African concept of Ubuntu, increasingly discussed in technology ethics circles, offers a fundamentally different starting point. Ubuntu, a word meaning “human-ness” or “being human” in the Zulu and Xhosa languages, emphasises that personhood is attained through interpersonal and communal relations rather than individualist, rational, and atomistic endeavours.

As Sabelo Mhlambi, a Fellow at Harvard's Berkman Klein Center for Internet & Society and the Carr Center for Human Rights Policy, has argued, Ubuntu's relational framework suggests that personhood is constituted through interconnection with others rather than through individual rational autonomy. Mhlambi, a computer scientist whose research examines the ethical implications of technology in the developing world, uses this framework to argue that the harms caused by artificial intelligence are in essence violations of Ubuntu's relational model. His work proposes shifting the focus of AI governance from protecting individual rationality to maintaining the relationality between humans.

The implications for AI governance are significant. Where European frameworks emphasise protecting individual users from algorithmic harm, a Ubuntu-informed approach might prioritise how AI systems affect community bonds and collective wellbeing. Where GDPR treats data as individual property requiring consent for use, communitarian perspectives might view certain data as belonging to communities or future generations. These are not merely academic distinctions. They represent fundamentally different visions of what technology governance should accomplish.

The African Commission on Human and Peoples' Rights, in a 2021 Resolution, called upon State Parties to give serious consideration to African “values, norms and ethics” in the formulation of AI governance frameworks, explicitly identifying Ubuntu and communitarian ethos as components of such indigenous values. The UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states in November 2021, includes what scholars have termed an “Ubuntu paragraph” acknowledging these alternative frameworks. But acknowledgment is not the same as incorporation into binding regulatory standards.

The Infrastructure of Dependency

The challenge facing developing nations extends beyond philosophical differences. The material requirements of AI governance create their own forms of dependency.

Consider the compliance infrastructure that the EU AI Act demands. High-risk AI systems must undergo conformity assessments, maintain extensive documentation, implement human oversight mechanisms, and submit to regulatory review. Providers must establish risk management systems, maintain detailed technical documentation, keep comprehensive logs of system operation, and ensure accuracy, robustness, and cybersecurity. They must register in an EU-wide database and submit to post-market monitoring requirements. The European Commission's own impact assessment estimated that compliance would add approximately 17 per cent overhead to AI development costs. For well-resourced technology companies in California or London, these requirements represent a manageable expense. For startups in Nairobi or Mumbai, they may prove prohibitive.
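The asymmetry is easy to quantify in rough terms. A minimal sketch, assuming the fixed annual compliance figure cited earlier; both development budgets are invented for illustration:

```python
FIXED_COMPLIANCE_EUR = 52_000  # per high-risk model per year (2021 EU estimate)

# Hypothetical annual AI development budgets, for illustration only.
firms = {"London scale-up": 20_000_000, "Nairobi startup": 250_000}

for name, budget in firms.items():
    share = FIXED_COMPLIANCE_EUR / budget
    print(f"{name}: compliance consumes {share:.1%} of the annual budget")

# London scale-up: compliance consumes 0.3% of the annual budget
# Nairobi startup: compliance consumes 20.8% of the annual budget
```

The same flat cost that disappears into a large firm's overhead can swallow a fifth of a small firm's entire budget.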

The numbers tell a stark story of global AI inequality. According to analysis from the Tony Blair Institute for Global Change, developing countries account for less than 10 per cent of global AI patents as of 2024. The projected 15.7 trillion dollar contribution of AI to the global economy by 2030, a figure widely cited from PwC analysis, is expected to flow disproportionately to nations that already dominate the technology sector. Without sufficient capacity to participate in AI development and governance, many Global South countries may find themselves relegated to the role of rule-takers rather than rule-makers.

Infrastructure gaps compound the challenge. India, despite generating roughly one-fifth of the world's data according to estimates from the Center for Strategic and International Studies, holds only about 3 per cent of global data centre capacity. The nation is, in the language of one CSIS analysis, “data rich but infrastructure poor.” Sub-Saharan Africa faces even more severe constraints. Only one quarter of the population has access to reliable internet, and a 29 per cent gender gap exists in mobile phone usage.

The energy requirements of AI infrastructure often exceed what fragile power grids can support. The International Energy Agency estimates that global data centre electricity consumption reached 415 terawatt-hours in 2024, approximately 1.5 per cent of worldwide electricity demand, with this figure expected to triple by 2035. To put that in perspective, the total energy consumption of households in sub-Saharan Africa is expected to reach between 430 and 500 terawatt-hours by 2030. Training a single frontier-scale AI model can consume thousands of megawatt-hours, a burden many power grids in developing nations simply cannot support.
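A back-of-envelope comparison of the figures just cited makes the imbalance vivid (all values are approximate, and the tripling projection is applied naively):

```python
# Figures cited above: IEA estimate for 2024, the projected tripling by
# 2035, and estimated sub-Saharan African household demand around 2030.
datacentres_2024_twh = 415
datacentres_2035_twh = datacentres_2024_twh * 3   # "expected to triple"
ssa_households_twh = (430, 500)                   # projected range

low, high = ssa_households_twh
print(f"Data centres by 2035: {datacentres_2035_twh} TWh, roughly "
      f"{datacentres_2035_twh / high:.1f} to {datacentres_2035_twh / low:.1f} "
      "times the demand of every household in sub-Saharan Africa")
```

Even the 2024 figure, before any tripling, already approaches the projected household demand of the entire region.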

Investment is beginning to flow. AWS opened a cloud region in Cape Town in 2020, adding approximately 673 million dollars to South Africa's GDP according to company estimates. Google launched a Johannesburg cloud region in early 2024. Microsoft and Abu Dhabi-based G42 are investing 1 billion dollars in a geothermal-powered data campus in Kenya. Yet these investments remain concentrated in a handful of countries, leaving most of the continent dependent on foreign infrastructure.

Against this backdrop, the option to develop indigenous AI governance frameworks becomes not merely a regulatory choice but a question of resource allocation. Should developing nations invest limited technical and bureaucratic capacity in implementing frameworks designed in Brussels? Or should they pursue alternative approaches better suited to local conditions, knowing that divergence from EU standards may limit access to global markets and investment?

Historical Echoes and Structural Patterns

For scholars of development and international political economy, these dynamics have a familiar ring. The parallels to previous episodes of regulatory imposition are striking, if imperfect.

The TRIPS Agreement, concluded as part of the Uruguay Round of GATT negotiations in the early 1990s, offers a particularly instructive comparison. That agreement required all World Trade Organisation members to implement minimum standards for intellectual property protection, standards that largely reflected the interests of pharmaceutical and technology companies in wealthy nations. The Electronic Frontier Foundation has documented how campaigns of unilateral economic pressure under Section 301 of the US Trade Act played a role in defeating alternative policy positions favoured by developing countries including Brazil, India, and Caribbean Basin states.

Developing countries secured transition periods and promises of technical assistance, but the fundamental architecture of the agreement reflected power asymmetries that critics described as neo-colonial. The United Nations Conference on Trade and Development documented that implementing TRIPS required “significant improvements, adaptation and enlargement of legal, administrative and particularly enforcement frameworks, as well as human resource development.” The Doha Declaration of 2001, which clarified that TRIPS should not prevent states from addressing public health crises through compulsory licensing and other mechanisms, came only after intense developing country advocacy and a global campaign around access to medicines for HIV/AIDS.

Research from the Dharmashastra National Law University's Student Law Journal argues that “the adoption of AI laws by countries in the Global South perpetuates the idea of continuing colonial legacies. Such regulatory models adopted from the Global North are not reflective of the existing needs of native societies.” The analysis noted that while African states have not been formally coerced into adopting EU regulations, they may nonetheless choose to comply to access European markets, “in much the same way as some African states have already adopted European cyber governance standards.”

A 2024 analysis of decolonised AI governance in Sub-Saharan Africa, available through the National Institutes of Health's PubMed Central archive, found that “the call for decolonial ethics arises from long-standing patterns of extractive practices and power consolidation of decision-making authority between the Global North and Global South.” The researchers documented how “the infrastructures and data economies underpinning AI often replicate earlier colonial patterns of resource and labor extraction, where regions in the Global South provide data, annotation work, and computational resources while deriving limited benefit.”

Abeba Birhane, an Ethiopian-born cognitive scientist now at Trinity College Dublin and a Senior Fellow in Trustworthy AI at the Mozilla Foundation, has developed the concept of “algorithmic colonisation” to describe how Western technology companies' expansion into developing markets shares characteristics with historical colonialism. Her research, which earned her recognition as one of TIME's 100 most influential people in AI for 2023, documents how “traditional colonialism has been driven by political and government forces; algorithmic colonialism, on the other hand, is driven by corporate profits. While the former used brute force domination, colonialism in the age of AI takes the form of 'state-of-the-art algorithms' and 'AI solutions' to social problems.”

Competing Visions and Emerging Alternatives

Yet the story is not simply one of imposition and acquiescence. Across the Global South, alternative approaches to AI governance are taking shape, demonstrating that multiple regulatory paradigms are possible.

India offers perhaps the most developed alternative model. The India AI Governance Guidelines, developed under the IndiaAI Mission and released for public consultation in 2025, explicitly reject the need for comprehensive AI-specific legislation at this stage. Instead, they advocate a “techno-legal model” in which law and technology co-evolve, allowing compliance to be “verifiable by design rather than enforced ex post.” The guidelines note that existing laws on information technology, data protection, consumer protection, and statutory civil and criminal codes can address many AI-related risks. Rather than creating an entirely new regulatory apparatus, the framework proposes building on India's existing digital public infrastructure.

The approach reflects India's distinctive position. The nation hosts the world's largest digital identity system, Aadhaar, which has enrolled over 1.3 billion residents. It operates the biggest digital payments system by volume through the Unified Payments Interface. According to the Stanford Artificial Intelligence Index Report 2025, India ranks second globally in AI skill penetration from 2015 to 2024. Rather than importing the regulatory architecture of the EU, Indian policymakers are building on existing digital public infrastructure to create governance frameworks suited to local conditions. The framework establishes an AI Governance Group for overall policy formulation and coordination across agencies, while sector-specific regulators like the Reserve Bank of India handle domain-specific rules.

The Indian framework explicitly positions itself as an alternative model for the Global South. Through the G20 Digital Economy Working Group, India has proposed extending its digital public infrastructure model into an international partnership, a logic that could be applied to AI governance as well. India's leadership of the Global Partnership on AI, which culminated in the New Delhi Summit of December 2023, demonstrated that developing nations can shape global discussions when they participate from positions of technical and institutional strength.

Singapore has pursued yet another approach, prioritising innovation through voluntary frameworks rather than prescriptive mandates. Singapore's National Artificial Intelligence Strategy 2.0, launched in December 2023, commits over 1 billion Singapore dollars over five years to advance AI capabilities. The Model AI Governance Framework for Generative AI, developed in consultation with over 70 global organisations including Microsoft, OpenAI, and Google, establishes nine dimensions for responsible AI deployment without imposing mandatory compliance requirements.

This flexibility has enabled Singapore to position itself as a governance innovation hub. In February 2025, Singapore's Infocomm Media Development Authority and the AI Verify Foundation launched the Global AI Assurance Pilot to codify emerging norms for technical testing. In late 2024, Singapore conducted the world's first multilingual AI safety red-teaming exercise focused on the Asia-Pacific, bringing together over 350 participants from nine countries to test large language models for cultural bias. Singapore is also working with Rwanda to develop a Digital Forum of Small States AI Governance Playbook, recognising that smaller nations face unique challenges in AI governance.

China, meanwhile, has developed its own comprehensive governance ecosystem that operates entirely outside the EU framework. The AI Safety Governance Framework, released in September 2024 by China's National Technical Committee 260 on Cybersecurity, takes a fundamentally different approach to risk classification. Rather than dividing AI systems into risk levels, it categorises the types of risks themselves, distinguishing between inherent risks from the technology and risks posed by its application. Beijing's approach combines tiered supervision, security assessments, regulatory sandboxes, and app-store enforcement.

These divergent approaches matter because they demonstrate that multiple regulatory paradigms are possible. The question is whether developing nations without China's market power or India's technical capacity will have the space to pursue alternatives, or whether market pressures and institutional constraints will channel them toward EU-style frameworks regardless of local preferences.

The Institutional Preconditions for Genuine Choice

What would it take for developing countries to exercise meaningful sovereignty over AI governance? The preconditions are formidable but not impossible.

First, and most fundamentally, developing nations require technical capacity. This means not only the engineering expertise to develop AI systems but the regulatory expertise to evaluate their risks and benefits. Currently, the knowledge needed to assess AI systems is concentrated overwhelmingly in wealthy nations. Building this capacity requires sustained investment in education, research institutions, and regulatory bodies, investments that compete with other urgent development priorities including healthcare, infrastructure, and climate adaptation.

The African Union's Continental AI Strategy recognises this challenge. Its implementation timeline extends from 2025 to 2030, with the first phase focused on “establishing governance structures, creating national AI strategies, and mobilizing resources.” UNESCO has provided technical and financial support for the strategy's development and implementation planning. Yet even with this assistance, the strategy faces significant obstacles. Analysis of the first 18 months of implementation reveals a stark geographic skew, with 83 per cent of funding concentrated in four countries: Kenya, Nigeria, South Africa, and Egypt.

Total tech funding for Africa reached 2.21 billion dollars in 2024, down 22 per cent from the previous year according to industry tracking. Of this, AI-specific startups received approximately 400 to 500 million dollars. These figures, while growing, remain a fraction of the investment flowing to AI development in North America, Europe, and China. Local initiatives are emerging: Johannesburg-based Lelapa AI launched InkubaLM in September 2024, a small language model covering five African languages: Swahili, Hausa, Yoruba, isiZulu, and isiXhosa. With only 0.4 billion parameters, it performs comparably to much larger models, demonstrating that efficient, locally relevant AI development is possible.

Second, developing nations need platforms for collective action. Individual countries lack the market power to resist regulatory convergence toward EU standards, but regional blocs potentially offer countervailing force. The African Union, ASEAN, and South American regional organisations could theoretically develop common frameworks that provide alternatives to Brussels-designed governance.

Some movement in this direction is visible. ASEAN countries have been developing AI guidelines that, while borrowing elements from the EU approach, also reflect regional priorities around national development and ecosystem building. Southeast Asian nations have generally adopted a wait-and-see approach toward global regulatory trends, observing international developments before crafting their own frameworks. The African Union's strategy explicitly calls for unified national approaches among member states and encourages cross-border data sharing to support AI development. Yet these regional initiatives remain in early stages, lacking the enforcement mechanisms and market leverage that give EU regulations their global reach.

Third, and perhaps most controversially, developing nations may need to resist the framing of alternative regulatory approaches as “races to the bottom” or “regulatory arbitrage.” The discourse surrounding AI governance often assumes that weaker regulation necessarily means exploitation and harm. This framing can delegitimise genuine attempts to develop governance frameworks suited to different conditions and priorities.

There is a legitimate debate about whether communitarian approaches to data governance, or more permissive frameworks for AI experimentation, or different balances between innovation and precaution, represent valid alternative visions or merely excuses for corporate exploitation. But foreclosing this debate by treating EU standards as the benchmark of responsible governance effectively denies developing nations the agency to make their own assessments.

The Question of Epistemology

At the deepest level, the challenge facing the Global South is epistemological. Whose knowledge counts in defining what responsible AI looks like?

Current governance frameworks draw primarily on Western philosophical traditions, Western academic research, and Western institutional expertise. The major AI ethics guidelines, the prominent research institutions, the influential think tanks and policy organisations: all are concentrated overwhelmingly in North America and Western Europe. When developing countries adopt frameworks designed in these contexts, they are not simply accepting regulatory requirements. They are accepting particular ways of understanding technology, society, and the relationship between them.

The concept of Ubuntu challenges the assumption that ethical frameworks should centre on individual rights and protections. As scholars in Ethics and Information Technology have argued, “under the African ethics of Ubuntu, for an individual to fully become a person, her positive relations with others are fundamental. Personhood is attained through interpersonal and communal relations, rather than individualist, rational and atomistic endeavours.” This stands in stark contrast with Western philosophy, where individual autonomy, rationality, and prudence are considered crucial for personhood.

Governance in liberal democracies of the Global North focuses primarily on protecting autonomy within the individual private sphere. Ubuntu-informed governance would take a different starting point, focusing on how systems affect relational bonds and collective flourishing. The implications extend beyond abstract ethics to practical questions of AI design, deployment, and oversight.

Similar challenges come from other philosophical traditions. Indigenous knowledge systems, religious frameworks, and non-Western philosophical schools offer distinct perspectives on questions of agency, responsibility, and collective action that current AI governance frameworks largely ignore. Safiya Umoja Noble, the David O. Sears Presidential Endowed Chair of Social Sciences at UCLA and a 2021 MacArthur Fellow, has documented how search algorithms and AI systems embed particular cultural assumptions that disadvantage marginalised communities. Her research challenges the idea that technology platforms offer neutral playing fields.

The Distributed AI Research Institute, founded by Timnit Gebru in 2021 with 3.7 million dollars in foundation funding from the Ford Foundation, MacArthur Foundation, Kapor Center, and Open Society Foundations, represents one effort to create space for alternative perspectives. DAIR prioritises work that benefits Black people in Africa and the diaspora, documents the effects of AI on marginalised groups, and operates explicitly outside the influence of major technology companies. One of the institute's initial projects analyses satellite imagery of townships in South Africa using AI to better understand legacies of apartheid.

The question is whether global AI governance can genuinely pluralise or whether structural pressures will continue to centre Western perspectives while marginalising alternatives. The experience of previous regulatory regimes, from intellectual property to data protection, suggests that dominant frameworks tend to reproduce themselves even as they claim universal applicability.

The Stakes of the Present Moment

The decisions made in the next few years will shape global AI governance for decades. The EU AI Act implementation timeline extends through 2027, with major provisions taking effect incrementally. Prohibited AI practices became applicable in February 2025. Governance rules for general-purpose AI models took effect in August 2025. Rules for high-risk AI systems have an extended transition period until August 2027. The African Union's strategy runs to 2030. India's guidelines are just beginning their implementation journey. These overlapping timelines create a critical window in which the architecture of global AI governance will solidify.

For developing nations, the stakes extend beyond technology policy. The question of whether they can exercise genuine sovereignty over AI governance is ultimately a question about the structure of the global order itself. If the answer is no, if structural pressures channel developing countries toward Western regulatory frameworks regardless of local preferences, then the promise of a multipolar world in which diverse societies chart their own paths will have proven hollow in the very domain most likely to shape the coming century.

The alternative is not isolation or rejection of global standards. It is the creation of governance architectures that genuinely accommodate plurality, that treat different societies' preferences as legitimate rather than deviant, and that build capacity for developing nations to participate as authors rather than merely adopters of global norms. The Global Partnership on AI, now hosting 44 member countries across six continents, represents one forum where such pluralism might develop. The partnership explicitly aims to welcome developing and emerging economies committed to responsible AI principles.

Whether such alternatives can emerge remains uncertain. The forces favouring convergence toward EU-style frameworks are powerful: market pressures from companies standardising on EU-compliant products, institutional constraints from international organisations dominated by wealthy nations, capacity asymmetries that make it easier to adopt existing frameworks than develop new ones, and the sheer momentum of existing regulatory trajectories. But the growing articulation of alternative visions, from the African Union's Continental Strategy to India's techno-legal model to academic frameworks grounded in Ubuntu and other non-Western traditions, suggests that the debate is far from settled.

The Global South's response to Western AI governance frameworks will not be uniform. Some nations will embrace EU standards as pathways to global market access and signals of regulatory credibility. Others will resist, developing indigenous approaches better suited to local conditions and philosophical traditions. Most will pursue hybrid strategies, adopting elements of Western frameworks while attempting to preserve space for alternative approaches.

What is certain is that the framing of these choices matters. If developing nations are seen as simply choosing between responsible regulation and regulatory arbitrage, the outcome is predetermined. If, instead, they are recognised as legitimate participants in a global conversation about how societies should govern artificial intelligence, the possibilities expand. The architecture of AI governance can either reproduce historical patterns of dependency or open space for genuine pluralism. The choices made now will determine which future emerges.


References and Sources

African Union. “Continental Artificial Intelligence Strategy.” African Union, August 2024. https://au.int/en/documents/20240809/continental-artificial-intelligence-strategy

African Union. “African Ministers Adopt Landmark Continental Artificial Intelligence Strategy.” African Union Press Release, June 2024. https://au.int/en/pressreleases/20240617/african-ministers-adopt-landmark-continental-artificial-intelligence-strategy

Birhane, Abeba. “Algorithmic Colonization of Africa.” Oxford Academic, 2020. https://academic.oup.com/book/46567/chapter/408130272

Bradford, Anu. “The Brussels Effect: How the European Union Rules the World.” Oxford University Press, 2020. Columbia Law School Faculty Profile: https://www.law.columbia.edu/faculty/anu-bradford

Brookings Institution. “The EU AI Act will have global impact, but a limited Brussels Effect.” Brookings, 2024. https://www.brookings.edu/articles/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/

Centre for European Policy Studies. “Clarifying the costs for the EU's AI Act.” CEPS, 2024. https://www.ceps.eu/clarifying-the-costs-for-the-eus-ai-act/

Chatham House. “Artificial intelligence and the challenge for global governance: Resisting colonialism.” Chatham House, 2024. https://www.chathamhouse.org/2024/06/artificial-intelligence-and-challenge-global-governance/06-resisting-colonialism-why-ai

CSIS. “From Divide to Delivery: How AI Can Serve the Global South.” Center for Strategic and International Studies, 2025. https://www.csis.org/analysis/divide-delivery-how-ai-can-serve-global-south

Dharmashastra National Law University. “Challenging the Coloniality in Global AI Regulation Frameworks.” Student Law Journal, 2024. https://dnluslj.in/challenging-the-coloniality-in-global-ai-regulation-frameworks/

European Commission. “AI Act: Shaping Europe's Digital Future.” European Commission, 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Government of India. “India AI Governance Guidelines.” Ministry of Electronics and IT, 2025. https://indiaai.gov.in/article/india-ai-governance-guidelines-empowering-ethical-and-responsible-ai

Harvard Kennedy School. “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance.” Carr Center for Human Rights Policy, 2020. https://carrcenter.hks.harvard.edu/publications/rationality-relationality-ubuntu-ethical-and-human-rights-framework-artificial

IAPP. “Global AI Governance Law and Policy: Singapore.” International Association of Privacy Professionals, 2025. https://iapp.org/resources/article/global-ai-governance-singapore

IMDA Singapore. “Model AI Governance Framework 2024.” Infocomm Media Development Authority, 2024. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2024/public-consult-model-ai-governance-framework-genai

Mhlambi, Sabelo. Harvard Berkman Klein Center Profile. https://cyber.harvard.edu/people/sabelo-mhlambi

Noble, Safiya Umoja. UCLA Faculty Profile. https://seis.ucla.edu/faculty/safiya-umoja-noble/

OECD. “Global Partnership on Artificial Intelligence.” OECD, 2024. https://www.oecd.org/en/about/programmes/global-partnership-on-artificial-intelligence.html

PMC. “Decolonizing global AI governance: assessment of the state of decolonized AI governance in Sub-Saharan Africa.” National Institutes of Health, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11303018/

PMC. “The role of the African value of Ubuntu in global AI inclusion discourse.” National Institutes of Health, 2022. https://pmc.ncbi.nlm.nih.gov/articles/PMC9023883/

Springer. “Ethics of AI in Africa: Interrogating the role of Ubuntu and AI governance initiatives.” Ethics and Information Technology, 2025. https://link.springer.com/article/10.1007/s10676-025-09834-5

Springer. “Understanding AI and power: situated perspectives from Global North and South practitioners.” AI and Society, 2025. https://link.springer.com/article/10.1007/s00146-025-02731-x

Stanford University Institute for Human-Centered Artificial Intelligence. “Artificial Intelligence Index Report 2025.” Stanford HAI, 2025. https://hai.stanford.edu/

Tony Blair Institute for Global Change. “How Leaders in the Global South Can Devise AI Regulation That Enables Innovation.” Institute for Global Change, 2024. https://institute.global/insights/tech-and-digitalisation/how-leaders-in-the-global-south-can-devise-ai-regulation-that-enables-innovation

UNCTAD. “The TRIPS Agreement.” United Nations Conference on Trade and Development. https://unctad.org/system/files/official-document/ite1_en.pdf

UNESCO. “Recommendation on the Ethics of Artificial Intelligence.” UNESCO, 2021. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

Washington Post. “Timnit Gebru launches DAIR, her new AI ethics research institute.” Washington Post, December 2021. https://www.washingtonpost.com/technology/2021/12/02/timnit-gebru-dair/

White & Case. “AI Watch: Global regulatory tracker – China.” White & Case LLP, 2024. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china


Tim Green
UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
