Sovereignty Costs Trillions: The True Price of AI Independence

When the Biden administration unveiled sweeping export controls on advanced AI chips in October 2022, targeting China's access to cutting-edge semiconductors, it triggered a chain reaction that continues to reshape the global technology landscape. These restrictions, subsequently expanded in October 2023 and December 2024, represent far more than trade policy. They constitute a fundamental reorganisation of the technological substrate upon which artificial intelligence depends, forcing nations, corporations, and startups to reconsider everything from supply chain relationships to the very architecture of sovereign computing.
The December 2024 controls marked a particularly aggressive escalation, adding 140 companies to the Entity List and, for the first time, imposing country-wide restrictions on high-bandwidth memory (HBM) exports to China. The Bureau of Industry and Security strengthened these controls by restricting 24 types of semiconductor manufacturing equipment and three types of software tools. In January 2025, the Department of Commerce introduced the AI Diffusion Framework and the Foundry Due Diligence Rule, establishing a three-tier system that divides the world into technological haves, have-somes, and have-nots based on their relationship with Washington.
The implications ripple far beyond US-China tensions. For startups in India, Brazil, and across the developing world, these controls create unexpected bottlenecks. For governments pursuing digital sovereignty, they force uncomfortable calculations about the true cost of technological independence. For cloud providers, they open new markets whilst simultaneously complicating existing operations. The result is a global AI ecosystem increasingly defined not by open collaboration, but by geopolitical alignment and strategic autonomy.
The Three-Tier World
The AI Diffusion Framework establishes a hierarchical structure that would have seemed absurdly dystopian just a decade ago, yet now represents the operational reality for anyone working with advanced computing. Tier one consists of 18 nations receiving essentially unrestricted access to US chips: the Five Eyes intelligence partnership (Australia, Canada, New Zealand, the United Kingdom, and the United States), major manufacturing and design partners (Japan, the Netherlands, South Korea, and Taiwan), and close NATO allies. These nations maintain unfettered access to cutting-edge processors like NVIDIA's H100 and the forthcoming Blackwell architecture.
Tier two encompasses most of the world's nations, facing caps on computing power that hover around 50,000 advanced AI chips through 2027, though this limit can double if countries reach specific agreements with the United States. For nations with serious AI ambitions but outside the inner circle, these restrictions create a fundamental strategic challenge. A country like India, building its first commercial chip fabrication facilities and targeting a 110 billion dollar semiconductor market by 2030, finds itself constrained by external controls even as it invests billions in domestic capabilities.
Tier three effectively includes China and Russia, facing the most severe restrictions. These controls extend beyond chips themselves to encompass semiconductor manufacturing equipment, electronic design automation (EDA) software, and even HBM, the specialised memory crucial for training large AI models. The Trump administration has since modified aspects of this framework, replacing blanket restrictions with targeted bans on specific chips like NVIDIA's H20 and AMD's MI308, but the fundamental structure of tiered access remains.
According to US Commerce Secretary Howard Lutnick's congressional testimony, Huawei will produce only 200,000 AI chips in 2025, a figure that seems almost quaint compared to the millions of advanced processors flowing to tier-one nations. Yet this scarcity has sparked innovation. Chinese firms like Alibaba and DeepSeek have produced large language models scoring highly on established benchmarks despite hardware limitations, demonstrating how constraint can drive architectural creativity.
For countries caught between tiers, the calculus becomes complex. Access to 50,000 H100-equivalent chips represents substantial computing power, on the order of 200 exaflops of peak AI performance at FP8 precision using NVIDIA's sparsity-assisted figures. But it pales compared to the unlimited access tier-one nations enjoy. This disparity creates strategic pressure to either align more closely with Washington or pursue expensive alternatives.
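The arithmetic behind that aggregate figure is simple enough to sketch. The calculation below assumes NVIDIA's published sparsity-assisted FP8 peak of roughly 4 petaflops per H100, a best-case number; sustained real-world throughput is considerably lower.

```python
# Back-of-envelope aggregate compute for a tier-two chip allocation.
# Assumes NVIDIA's headline FP8 figure of ~3.96 petaflops per H100
# (sparsity-assisted peak); dense FP8 throughput is roughly half that.

CHIP_CAP = 50_000              # tier-two cap through 2027
FP8_PFLOPS_PER_H100 = 3.96     # sparsity-assisted peak, in petaflops

total_exaflops = CHIP_CAP * FP8_PFLOPS_PER_H100 / 1_000
print(f"Aggregate FP8 compute: {total_exaflops:.0f} exaflops")  # ≈ 198
```

Substantial in absolute terms, yet a hard ceiling: tier-one buyers face no such cap and can scale with demand.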
The True Cost of Technological Sovereignty
When nations speak of “sovereign AI,” they typically mean systems trained on domestic data, hosted in nationally controlled data centres, and ideally running on domestically developed hardware. The rhetorical appeal is obvious: complete control over the technological stack, from silicon to software. The practical reality proves far more complicated and expensive than political speeches suggest.
France's recent announcement of €109 billion in private AI investment illustrates both the ambition and the challenge. Even with this massive commitment, French AI infrastructure will inevitably rely heavily on NVIDIA chips and US hyperscalers. True sovereignty would require control over the entire vertical stack, from semiconductor design and fabrication through data centres and energy infrastructure. No single nation outside the United States currently possesses this complete chain, and even America depends on Taiwan for advanced chip manufacturing.
The numbers tell a sobering story. By 2030, data centres worldwide will require 6.7 trillion dollars in investment to meet demand for compute power, with 5.2 trillion dollars specifically for AI infrastructure. NVIDIA CEO Jensen Huang estimates that between three and four trillion dollars will flow into AI infrastructure by decade's end. For individual nations pursuing sovereignty, even fractional investments of this scale strain budgets and require decades to bear fruit.
Consider India's semiconductor journey. The government has approved ten semiconductor projects with total investment of 1.6 trillion rupees (18.2 billion dollars). The India AI Mission provides over 34,000 GPUs to startups and researchers at subsidised rates. The nation inaugurated its first centres for advanced 3-nanometer chip design in May 2025. Yet challenges remain daunting. Initial setup costs for fabless units run at least one billion dollars, with results taking four to five years. R&D and manufacturing costs for 5-nanometer chips approach 540 million dollars. A modern semiconductor fabrication facility spans the size of 14 to 28 football fields and continuously draws on the order of 100 megawatts of power, about as much as a small city.
Japan's Rapidus initiative demonstrates the scale of commitment required for semiconductor revival. The government has proposed over 10 trillion yen in funding over seven years for semiconductors and AI. Rapidus aims to develop mass production for leading-edge 2-nanometer chips, with state financial support reaching 920 billion yen (approximately 6.23 billion dollars) so far. The company plans to begin mass production in 2027, targeting 15 trillion yen in sales by 2030.
These investments reflect a harsh truth: localisation costs far exceed initial projections. Preliminary estimates suggest tariffs could raise component costs anywhere from 10 to 30 per cent, depending on classification and origin. Moreover, localisation creates fragmentation, potentially reducing economies of scale and slowing innovation. Where the global semiconductor industry once optimised for efficiency through specialisation, geopolitical pressures now drive redundancy and regional duplication.
Domestic Chip Development
China's response to US export controls provides the most illuminating case study in forced technological self-sufficiency. Cut off from NVIDIA's most advanced offerings, Chinese semiconductor startups and tech giants have launched an aggressive push to develop domestic alternatives. The results demonstrate both genuine technical progress and the stubborn persistence of fundamental gaps.
Huawei's Ascend series leads China's domestic efforts. The Ascend 910C, manufactured using SMIC's 7-nanometer N+2 process, reportedly offers 800 teraflops at FP16 precision with 128 gigabytes of HBM3 memory and up to 3.2 terabytes per second memory bandwidth. However, real-world performance tells a more nuanced story. Research from DeepSeek suggests the 910C delivers approximately 60 per cent of the H100's inference performance, though in some scenarios it reportedly matches or exceeds NVIDIA's B20 model.
Manufacturing remains a critical bottleneck. In September 2024, the Ascend 910C's yield sat at just 20 per cent. Huawei has since doubled this to 40 per cent, aiming for the 60 per cent industry standard. The company plans to produce 100,000 Ascend 910C chips and 300,000 Ascend 910B chips in 2025, accounting for over 75 per cent of China's total AI chip production. Chinese tech giants including Baidu and ByteDance have adopted the 910C, powering models like DeepSeek R1.
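Yield figures like these translate directly into wafer demand. A rough sketch, treating yield as the fraction of fabricated dies that function; the production target comes from the reported 2025 plans, while the framing as dies fabricated per good chip is illustrative rather than Huawei's actual wafer accounting.

```python
# To ship N good chips at yield y, roughly N / y candidate dies
# must be fabricated. Doubling yield halves the silicon required
# per shipped chip, which is why the 20% -> 40% -> 60% progression
# matters as much as raw capacity.

def candidates_needed(good_chips: int, yield_rate: float) -> int:
    """Dies that must be fabricated to net the target good-chip count."""
    return round(good_chips / yield_rate)

target = 100_000  # reported Ascend 910C production plan for 2025
for y in (0.20, 0.40, 0.60):
    print(f"yield {y:.0%}: {candidates_needed(target, y):,} dies fabricated")
```

At 20 per cent yield the target demands half a million fabricated dies; at the 60 per cent industry standard, a third of that.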
Beyond Huawei, Chinese semiconductor startups including Cambricon, Moore Threads, and Biren race to establish viable alternatives. Cambricon launched its 7-nanometer Siyuan 590 chip in 2024, modelled after NVIDIA's A100, and turned profitable for the first time. Alibaba is testing a new AI chip manufactured entirely in China, shifting from earlier generations fabricated by Taiwan Semiconductor Manufacturing Company (TSMC). Yet Chinese tech firms often prefer not to use Huawei's chips for training their most advanced AI models, recognising the performance gap.
European efforts follow a different trajectory, emphasising strategic autonomy within the Western alliance rather than complete independence. SiPearl, a Franco-German company, grew out of the European Processor Initiative and designs high-performance, low-power microprocessors for European exascale supercomputers. The company's flagship Rhea1 processor features 80 Arm Neoverse V1 cores and over 61 billion transistors, and the firm recently secured €130 million in Series A funding. British firm Graphcore, maker of Intelligence Processing Units for AI workloads, formed strategic partnerships with SiPearl before being acquired by SoftBank Group in July 2024 for around 500 million dollars.
The EU's €43 billion Chips Act aims to boost semiconductor manufacturing across the bloc, though critics note that funding appears focused on established players rather than startups. This reflects a broader challenge: building competitive chip design and fabrication capabilities requires not just capital, but accumulated expertise, established supplier relationships, and years of iterative development.
AMD's MI300 series illustrates the challenges even well-resourced competitors face against NVIDIA's dominance. AMD's AI chip revenue reached 461 million dollars in 2023 and is projected to hit 2.1 billion dollars in 2024. The MI300X outclasses NVIDIA's H100 in memory capacity and matches or exceeds its performance for inference on large language models. Major customers including Microsoft, Meta, and Oracle have placed substantial orders. Yet NVIDIA retains a staggering 98 per cent market share in data centre GPUs, sustained not primarily through hardware superiority but via its CUDA programming ecosystem. Whilst AMD hardware increasingly competes on technical merits, its software requires significant configuration compared to CUDA's out-of-the-box functionality.
Cloud Partnerships
For most nations and organisations, complete technological sovereignty remains economically and technically unattainable in any reasonable timeframe. Cloud partnerships emerge as the pragmatic alternative, offering access to cutting-edge capabilities whilst preserving some degree of local control and regulatory compliance.
The Middle East provides particularly striking examples of this model. Saudi Arabia's 100 billion dollar Transcendence AI Initiative, backed by the Public Investment Fund, includes a 5.3 billion dollar commitment from Amazon Web Services to develop new data centres. In May 2025, Google Cloud and the Kingdom's PIF announced advancement of a ten billion dollar partnership to build and operate a global AI hub in Saudi Arabia. The UAE's Khazna Data Centres recently unveiled a 100-megawatt AI facility in Ajman. Abu Dhabi's G42 has expanded its cloud and computing infrastructure to handle petaflops of computing power.
These partnerships reflect a careful balancing act. Gulf states emphasise data localisation, requiring that data generated within their borders be stored and processed locally. This satisfies sovereignty concerns whilst leveraging the expertise and capital of American hyperscalers. The region offers compelling economic advantages: electricity tariffs in Saudi Arabia and the UAE range from 5 to 6 cents per kilowatt-hour, well below the US average of 9 to 15 cents. PwC expects AI to contribute 96 billion dollars to the UAE economy by 2030 (13.6 per cent of GDP) and 135.2 billion dollars to Saudi Arabia (12.4 per cent of GDP).
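Those tariff differences compound quickly at AI-training scale. A rough sketch of the electricity bill for a month-long training run, where the 50-megawatt facility draw and the representative tariffs are illustrative assumptions rather than figures from any specific deployment:

```python
# Electricity cost of a hypothetical month-long training run at
# Gulf tariffs (~5-6 c/kWh) versus the US range (9-15 c/kWh).

CLUSTER_MW = 50        # assumed total facility draw, megawatts
HOURS = 30 * 24        # one month of continuous training

def run_cost_usd(tariff_cents_per_kwh: float) -> float:
    """Energy cost in dollars for the full run at a given tariff."""
    kwh = CLUSTER_MW * 1_000 * HOURS
    return kwh * tariff_cents_per_kwh / 100

for region, tariff in [("Gulf (5.5 c/kWh)", 5.5), ("US (12 c/kWh)", 12.0)]:
    print(f"{region}: ${run_cost_usd(tariff):,.0f}")
```

Under these assumptions the same run costs roughly 2 million dollars in the Gulf against over 4 million at mid-range US tariffs, a gap that recurs with every training cycle.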
Microsoft's approach to sovereign cloud illustrates how hyperscalers adapt to this demand. The company partners with national clouds such as Bleu in France and Delos Cloud in Germany, where customers can access Microsoft 365 and Azure features in standalone, independently operated environments. AWS established an independent European governance structure for the AWS European Sovereign Cloud, including a dedicated Security Operations Centre and a parent company managed by EU citizens subject to local legal requirements.
Canada's Sovereign AI Compute Strategy demonstrates how governments can leverage cloud partnerships whilst maintaining strategic oversight. The government is investing up to 700 million dollars to support the AI ecosystem through increased domestic compute capacity, making strategic investments in both public and commercial infrastructure.
Yet cloud partnerships carry their own constraints and vulnerabilities. The US government's control over advanced chip exports means it retains indirect influence over global cloud infrastructure, regardless of where data centres physically reside. Moreover, hyperscalers can choose which markets receive priority access to scarce GPU capacity, effectively rationing computational sovereignty. During periods of tight supply, tier-one nations and favoured partners receive allocations first, whilst others queue.
Supply Chain Reshaping
The global semiconductor supply chain once epitomised efficiency through specialisation. American companies designed chips. Dutch firm ASML manufactured the extreme ultraviolet lithography machines required for cutting-edge production. Taiwan's TSMC fabricated the designs into physical silicon. This distributed model optimised for cost and capability, but created concentrated dependencies that geopolitical tensions now expose as vulnerabilities.
TSMC's dominance illustrates both the efficiency and the fragility of this model. The company holds 67.6 per cent market share in foundry services as of Q1 2025. The HPC segment, dominated by AI accelerators, accounted for 59 per cent of TSMC's total wafer revenue in Q1 2025, up from 43 per cent in 2023. TSMC's management projects that revenue from AI accelerators will double year-over-year in 2025 and grow at approximately 50 per cent compound annual growth rate through 2029. The company produces about 90 per cent of the world's most advanced chips.
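Compounded, TSMC's own guidance implies striking growth. A quick sketch, normalising 2024 AI-accelerator revenue to 1 (a placeholder base, not a reported figure) and applying the stated trajectory of doubling in 2025 followed by roughly 50 per cent annual growth through 2029:

```python
# Growth implied by TSMC guidance for AI-accelerator revenue:
# doubling year-over-year in 2025, then ~50% CAGR through 2029.
# The base value is normalised, not an actual revenue figure.

rev = 1.0          # 2024 revenue, normalised
rev *= 2           # 2025: doubles year-over-year
for year in range(2026, 2030):
    rev *= 1.5     # ~50% compound annual growth thereafter
print(f"Implied 2029 AI-accelerator revenue vs 2024: {rev:.1f}x")
```

Roughly a tenfold increase in five years, which explains why every fab-building nation wants a share of this segment.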
This concentration creates strategic exposure for any nation dependent on cutting-edge semiconductors. A natural disaster, political upheaval, or military conflict affecting Taiwan could paralyse global AI development overnight. Consequently, the United States, European Union, Japan, and others invest heavily in domestic fabrication capacity, even where economic logic might not support such duplication.
Samsung and Intel compete with TSMC but trail significantly. Samsung held just 9.3 per cent market share in Q3 2024, whilst Intel did not rank in the top ten. Both companies face challenges with yield rates and process efficiency at leading-edge nodes. Samsung's 2-nanometer process is expected to begin mass production in 2025, but concerns persist about competitiveness. Intel pursues an aggressive roadmap centred on its 18A process, which it promises will rival TSMC's 2-nanometer node if delivered on schedule in 2025.
The reshaping extends beyond fabrication to the entire value chain. Japan has committed ten trillion yen (65 billion dollars) by 2030 to revitalise its semiconductor and AI industries. South Korea fortifies technological autonomy and expands manufacturing capacity. These efforts signify a broader trend toward reshoring and diversification, building more resilient but less efficient localised supply chains.
The United States tightened controls on EDA software, the specialised tools engineers use to design semiconductors. Companies like Synopsys and Cadence, which dominate this market, face restrictions on supporting certain foreign customers. This creates pressure for nations to develop domestic EDA capabilities, despite the enormous technical complexity and cost involved.
The long-term implication points toward a “technological iron curtain” dividing global AI capabilities. Experts predict continued emphasis on diversification and “friend-shoring,” where nations preferentially trade with political allies. The globally integrated, efficiency-driven semiconductor model gives way to one characterised by strategic autonomy, resilience, national security, and regional competition.
This transition imposes substantial costs. Goldman Sachs estimates that building semiconductor fabrication capacity in the United States costs 30 to 50 per cent more than equivalent facilities in Asia. These additional costs ultimately flow through to companies and consumers, creating a “sovereignty tax” on computational resources.
Innovation Under Constraint
For startups, chip restrictions create a wildly uneven playing field that has little to do with the quality of their technology or teams. A startup in Singapore working on novel AI architectures faces fundamentally different constraints than an identical company in San Francisco, despite potentially superior talent or ideas. This geographical lottery increasingly determines who can compete in compute-intensive AI applications.
Small AI companies lacking the cash flow to stockpile chips must settle for less powerful processors not under US export controls. Heavy upfront investments in cutting-edge hardware deter many startups from entering the large language model race. Chinese tech companies Baidu, ByteDance, Tencent, and Alibaba collectively ordered around 100,000 units of NVIDIA's A800 processors before restrictions tightened, costing as much as four billion dollars. Few startups command resources at this scale.
The impact falls unevenly across the startup ecosystem. Companies focused on inference rather than training can often succeed with less advanced hardware. Those developing AI applications in domains like healthcare or finance maintain more flexibility. But startups pursuing frontier AI research or training large multimodal models find themselves effectively excluded from competition unless they reside in tier-one nations or secure access through well-connected partners.
Domestic AI chip startups in the United States and Europe could theoretically benefit as governments prioritise local suppliers. However, reality proves more complicated. Entrenched players like NVIDIA possess not just superior chips but comprehensive software stacks, developer ecosystems, and established customer relationships. New entrants struggle to overcome these network effects, even with governmental support.
Chinese chip startups face particularly acute challenges. Many struggle with high R&D costs, a small customer base of mostly state-owned enterprises, US blacklisting, and limited chip fabrication capacity. Whilst government support provides some cushion, it cannot fully compensate for restricted access to cutting-edge manufacturing and materials.
Cloud-based startups adopt various strategies to navigate these constraints. Some design architectures optimised for whatever hardware they can access, embracing constraint as a design parameter. Others pursue hybrid approaches, using less advanced chips for most workloads whilst reserving limited access to cutting-edge processors for critical training runs. A few relocate or establish subsidiaries in tier-one nations.
The talent dimension compounds these challenges. AI researchers and engineers increasingly gravitate toward organisations and locations offering access to frontier compute resources. A startup limited to previous-generation hardware struggles to attract top talent, even if offering competitive compensation. This creates a feedback loop where computational access constraints translate into talent constraints, further widening gaps.
Creativity Born from Necessity
Faced with restrictions, organisations develop creative approaches to maximise capabilities within constraints. Some of these workarounds involve genuine technical innovation; others occupy legal and regulatory grey areas.
Chip hoarding emerged as an immediate response to export controls. Companies in restricted nations rushed to stockpile advanced processors before tightening restrictions could take effect. Some estimates suggest Chinese entities accumulated sufficient NVIDIA A100 and H100 chips to sustain development for months or years, buying time for domestic alternatives to mature.
Downgraded chip variants represent another workaround category. NVIDIA developed the A800 and later the H20 specifically for the Chinese market, designs that technically comply with US export restrictions by reducing chip-to-chip communication speeds whilst preserving most computational capability. The Trump administration eventually banned these variants, but not before significant quantities shipped. AMD pursued similar strategies with modified versions of its MI series chips.
Algorithmic efficiency gains offer a more sustainable approach. DeepSeek and other Chinese AI labs have demonstrated that clever training techniques and model architectures can partially compensate for hardware limitations. Techniques like mixed-precision training, efficient attention mechanisms, and knowledge distillation extract more capability from available compute. Whilst these methods cannot fully bridge the hardware gap, they narrow it sufficiently to enable competitive models in some domains.
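Knowledge distillation illustrates how such efficiency techniques extract more capability per FLOP. A minimal sketch with toy logits, not drawn from any real model: a small student network is trained to match a large teacher's temperature-softened output distribution, following the classic formulation of the distillation loss.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    scaled = [z / T for z in logits]
    m = max(scaled)                        # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student outputs,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)         # soft teacher targets
    q = softmax(student_logits, T)
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return kl * T * T

teacher = [8.0, 2.0, 1.0]                  # toy logits for one example
student = [5.0, 3.0, 2.0]
print(f"distillation loss: {distillation_loss(student, teacher):.4f}")
```

Minimising this loss lets a compact model inherit much of a larger model's behaviour, trading a one-off teacher-training cost for far cheaper inference, exactly the trade hardware-constrained labs need.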
Cloud access through intermediaries creates another workaround path. Researchers in restricted nations can potentially access advanced compute through partnerships with organisations in tier-one or tier-two countries, research collaborations with universities offering GPU clusters, or commercial cloud services with loose verification. Whilst US regulators increasingly scrutinise such arrangements, enforcement remains imperfect.
Some nations pursue specialisation strategies, focusing efforts on AI domains where hardware constraints matter less. Inference-optimised chips, which need less raw computational power than training accelerators, offer one avenue. Edge AI applications, deployed on devices rather than data centres, represent another.
Collaborative approaches also emerge. Smaller nations pool resources through regional initiatives, sharing expensive infrastructure that no single country could justify independently. The European High Performance Computing Joint Undertaking exemplifies this model, coordinating supercomputing investments across EU member states.
Grey-market chip transactions inevitably occur despite restrictions. Semiconductors are small, valuable, and difficult to track once they enter commercial channels. The United States and allies work to close these loopholes through expanded end-use controls and enhanced due diligence requirements for distributors, but perfect enforcement remains elusive.
The Energy Equation
Chip access restrictions dominate headlines, but energy increasingly emerges as an equally critical constraint on AI sovereignty. Data centres now consume 1 to 1.5 per cent of global electricity, and AI workloads are particularly power-hungry. A cluster of 50,000 NVIDIA H100 GPUs draws roughly 35 megawatts for the chips alone at their 700-watt rated power, and substantially more once servers, networking, and cooling are included. Larger installations planned by hyperscalers can exceed 1,000 megawatts, equivalent to a small nuclear power plant.
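The underlying arithmetic is straightforward. The sketch below uses the H100 SXM's published 700-watt rating; the power usage effectiveness (PUE) factor for cooling and distribution overhead is an assumed, illustrative value.

```python
# Power draw of a 50,000-GPU H100 cluster: GPUs alone, then the
# whole facility once overhead is applied via an assumed PUE.

GPUS = 50_000
TDP_W = 700      # H100 SXM thermal design power, watts
PUE = 1.3        # assumed facility overhead (cooling, power losses)

gpu_mw = GPUS * TDP_W / 1e6
facility_mw = gpu_mw * PUE
print(f"GPUs alone: {gpu_mw:.0f} MW; whole facility: {facility_mw:.0f} MW")
```

Even before counting CPUs, storage, and networking, such a cluster demands grid capacity on the scale of a mid-sized town, which is why energy endowment now shapes where AI infrastructure gets built.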
Nations pursuing AI sovereignty must secure not just chips and technical expertise, but sustained access to massive amounts of electrical power, ideally from reliable, low-cost sources. This constraint particularly affects developing nations, where electrical grids may lack capacity for large data centres even if chips were freely available.
The Middle East's competitive advantage in AI infrastructure stems partly from electricity economics. Tariffs of 5 to 6 cents per kilowatt-hour in Saudi Arabia and the UAE make energy-intensive AI training more economically viable. Nordic countries leverage similar advantages through hydroelectric power, whilst Iceland attracts data centres with geothermal energy. These geographical factors create a new form of computational comparative advantage based on energy endowment.
Cooling represents another energy-related challenge. High-performance chips generate tremendous heat, requiring sophisticated cooling systems that themselves consume significant power. Liquid cooling technologies improve efficiency compared to traditional air cooling, but add complexity and cost.
Sustainability concerns increasingly intersect with AI sovereignty strategies. European data centre operators face pressure to use renewable energy and minimise environmental impact, adding costs that competitors in less regulated markets avoid. Some nations view this as a competitive disadvantage; others frame it as an opportunity to develop more efficient, sustainable AI infrastructure.
The energy bottleneck also limits how quickly nations can scale AI capabilities, even if chip restrictions were lifted tomorrow. Building sufficient electrical generation and transmission capacity takes years and requires massive capital investment. This temporal constraint means that even optimistic scenarios for domestic chip production or relaxed export controls wouldn't immediately enable AI sovereignty.
Permanent Bifurcation or Temporary Turbulence?
The ultimate question facing policymakers, businesses, and technologists is whether current trends toward fragmentation represent a permanent restructuring of the global AI ecosystem or a turbulent transition that will eventually stabilise. The answer likely depends on factors ranging from geopolitical developments to technological breakthroughs that could reshape underlying assumptions.
Pessimistic scenarios envision deepening bifurcation, with separate technology stacks developing in US-aligned and China-aligned spheres. Different AI architectures optimised for different available hardware. Incompatible standards and protocols limiting cross-border collaboration. Duplicated research efforts and slower overall progress as the global AI community fractures along geopolitical lines.
Optimistic scenarios imagine that current restrictions prove temporary, relaxing once US policymakers judge that sufficient lead time or alternative safeguards protect national security interests. In this view, the economic costs of fragmentation and the difficulties of enforcement eventually prompt policy recalibration. Global standards bodies and industry consortia negotiate frameworks allowing more open collaboration whilst addressing legitimate security concerns.
The reality will likely fall between these extremes, varying by domain and region. Some AI applications, particularly those with national security implications, will remain tightly controlled and fragmented. Others may see gradual relaxation as risks become better understood. Tier-two nations might gain expanded access as diplomatic relationships evolve and verification mechanisms improve.
Technological wild cards could reshape the entire landscape. Quantum computing might eventually offer computational advantages that bypass current chip architectures entirely. Neuromorphic computing, brain-inspired architectures fundamentally different from current GPUs, could emerge from research labs. Radically more efficient AI algorithms might reduce raw computational requirements, lessening hardware constraint significance.
Economic pressures will also play a role. The costs of maintaining separate supply chains and duplicating infrastructure may eventually exceed what nations and companies are willing to pay. Alternatively, AI capabilities might prove so economically and strategically valuable that no cost seems too high, justifying continued fragmentation.
The startup ecosystem will adapt, as it always does, but potentially with lasting structural changes. We may see the emergence of “AI havens,” locations offering optimal combinations of chip access, energy costs, talent pools, and regulatory environments. The distribution of AI innovation might become more geographically concentrated than even today's Silicon Valley-centric model, or more fragmented into distinct regional hubs.
For individual organisations and nations, the strategic imperative remains clear: reduce dependencies where possible, build capabilities where feasible, and cultivate relationships that provide resilience against supply disruption. Whether that means investing in domestic chip design, securing multi-source supply agreements, partnering with hyperscalers, or developing algorithmic efficiencies depends on specific circumstances and risk tolerances.
The semiconductor industry has weathered geopolitical disruption before and emerged resilient, if transformed. The current upheaval may prove similar, though the stakes are arguably higher given AI's increasingly central role across economic sectors and national security. What seems certain is that the coming years will determine not just who leads in AI capabilities, but the very structure of global technological competition for decades to come.
The silicon schism is real, and it is deepening. How we navigate this divide will shape the trajectory of artificial intelligence and its impact on human civilisation. The choices made today by governments restricting chip exports, companies designing sovereign infrastructure, and startups seeking computational resources will echo through the remainder of this century. Understanding these dynamics isn't merely an academic exercise. It's essential preparation for a future where computational sovereignty rivals traditional forms of power, and access to silicon increasingly determines access to opportunity.
Sources and References
Congressional Research Service. “U.S. Export Controls and China: Advanced Semiconductors.” Congress.gov, 2024. https://www.congress.gov/crs-product/R48642
AI Frontiers. “How US Export Controls Have (and Haven't) Curbed Chinese AI.” 2024. https://ai-frontiers.org/articles/us-chip-export-controls-china-ai
Center for Strategic and International Studies. “Where the Chips Fall: U.S. Export Controls Under the Biden Administration from 2022 to 2024.” 2024. https://www.csis.org/analysis/where-chips-fall-us-export-controls-under-biden-administration-2022-2024
Center for Strategic and International Studies. “Understanding the Biden Administration's Updated Export Controls.” 2024. https://www.csis.org/analysis/understanding-biden-administrations-updated-export-controls
Hawkins, Zoe Jay, Vili Lehdonvirta, and Boxi Wu. “AI Compute Sovereignty: Infrastructure Control Across Territories, Cloud Providers, and Accelerators.” SSRN, 2025. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5312977
Bain & Company. “Sovereign Tech, Fragmented World: Technology Report 2025.” 2025. https://www.bain.com/insights/sovereign-tech-fragmented-world-technology-report-2025/
Carnegie Endowment for International Peace. “With Its Latest Rule, the U.S. Tries to Govern AI's Global Spread.” January 2025. https://carnegieendowment.org/emissary/2025/01/ai-new-rule-chips-exports-diffusion-framework
Rest of World. “China chip startups race to replace Nvidia amid U.S. export bans.” 2025. https://restofworld.org/2025/china-chip-startups-nvidia-us-export/
CNBC. “China seeks a homegrown alternative to Nvidia.” September 2024. https://www.cnbc.com/2024/09/17/chinese-companies-aiming-to-compete-with-nvidia-on-ai-chips.html
Tom's Hardware. “DeepSeek research suggests Huawei's Ascend 910C delivers 60% of Nvidia H100 inference performance.” 2025. https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-research-suggests-huaweis-ascend-910c-delivers-60-percent-nvidia-h100-inference-performance
Digitimes. “Huawei Ascend 910C reportedly hits 40% yield, turns profitable.” February 2025. https://www.digitimes.com/news/a20250225PD224/huawei-ascend-ai-chip-yield-rate.html
McKinsey & Company. “The cost of compute: A $7 trillion race to scale data centers.” 2024. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers
Government of Canada. “Canadian Sovereign AI Compute Strategy.” 2025. https://ised-isde.canada.ca/site/ised/en/canadian-sovereign-ai-compute-strategy
PwC. “Unlocking the data centre opportunity in the Middle East.” 2024. https://www.pwc.com/m1/en/media-centre/articles/unlocking-the-data-centre-opportunity-in-the-middle-east.html
Bloomberg. “Race for AI Supremacy in Middle East Is Measured in Data Centers.” April 2024. https://www.bloomberg.com/news/articles/2024-04-11/race-for-ai-supremacy-in-middle-east-is-measured-in-data-centers
Government of Japan. “Japan's Pursuit of a Game-Changing Technology and Ecosystem for Semiconductors.” March 2024. https://www.japan.go.jp/kizuna/2024/03/technology_for_semiconductors.html
Digitimes. “Japan doubles down on semiconductor subsidies, Rapidus poised for more support.” November 2024. https://www.digitimes.com/news/a20241129PD213/rapidus-government-funding-subsidies-2024-japan.html
CNBC. “India is betting $18 billion to build a chip powerhouse.” September 2025. https://www.cnbc.com/2025/09/23/india-is-betting-18-billion-to-build-a-chip-powerhouse-heres-what-it-means.html
PatentPC. “Samsung vs. TSMC vs. Intel: Who's Winning the Foundry Market?” 2025. https://patentpc.com/blog/samsung-vs-tsmc-vs-intel-whos-winning-the-foundry-market-latest-numbers
Klover.ai. “TSMC AI Fabricating Dominance: Chip Manufacturing Leadership in AI Era.” 2025. https://www.klover.ai/tsmc-ai-fabricating-dominance-chip-manufacturing-leadership-ai-era/
AIMultiple Research. “Top 20+ AI Chip Makers: NVIDIA & Its Competitors.” 2025. https://research.aimultiple.com/ai-chip-makers/
PatentPC. “The AI Chip Market Explosion: Key Stats on Nvidia, AMD, and Intel's AI Dominance.” 2024. https://patentpc.com/blog/the-ai-chip-market-explosion-key-stats-on-nvidia-amd-and-intels-ai-dominance
Microsoft Azure Blog. “Microsoft strengthens sovereign cloud capabilities with new services.” 2024. https://azure.microsoft.com/en-us/blog/microsoft-strengthens-sovereign-cloud-capabilities-with-new-services/
HPC Wire. “Graphcore and SiPearl Form Strategic Partnership to Combine AI and HPC.” June 2021. https://www.hpcwire.com/off-the-wire/graphcore-and-sipearl-form-strategic-partnership-to-combine-ai-and-hpc/
Tech Funding News. “SiPearl scoops €130M: Can Europe's chip champion challenge Nvidia?” 2024. https://techfundingnews.com/sipearl-european-chip-challenge-nvidia/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk