The Paradox of Progress: The Oligarchic Reality of AI

Every morning, millions of people open ChatGPT, fire up Midjourney, or ask their phone's AI assistant a question. For many, artificial intelligence has become as ubiquitous as electricity, a utility that simply works when you need it. The barriers to entry seem lower than ever. A teenager in Mumbai can fine-tune an open-source language model on a laptop. A three-person startup in Berlin can build a sophisticated AI application in weeks using APIs and no-code tools. Across the globe, small businesses are deploying chatbots, generating marketing copy, and automating workflows with tools that cost less than a Netflix subscription.
This is the democratic face of AI, and it is real.
Yet beneath this accessible surface lies a different reality, one of unprecedented concentration and control. While AI tools have proliferated, the infrastructure that powers them remains firmly in the hands of a tiny number of technology giants. In 2025, just four companies are expected to spend more than 320 billion dollars on AI infrastructure. Amazon, Microsoft, Google, and Meta are engaged in a capital spending spree that dwarfs previous technology buildouts, constructing data centres the size of small towns and hoarding graphics processing units like digital gold. Over the next three years, hyperscalers are projected to invest 1.4 trillion dollars in the computational backbone of artificial intelligence.
This creates a profound tension at the heart of the AI revolution. The tools are becoming more democratic, but the means of production are becoming more oligarchic. A small shopkeeper in Lagos can use AI to manage inventory, but only if that AI runs on servers owned by Amazon Web Services. A researcher in Bangladesh can access cutting-edge models, but only through APIs controlled by companies in Silicon Valley. The paradox is stark: we are building a supposedly open and innovative future on a foundation owned by a handful of corporations.
This dynamic raises urgent questions about innovation, competition, and equity. Can genuine innovation flourish when the fundamental infrastructure is controlled by so few? Will competition survive in markets where new entrants must effectively rent their existence from potential competitors? And perhaps most critically, how can we ensure equitable access to AI's benefits when the digital divide means billions lack even basic internet connectivity, let alone access to the vast computational resources that frontier AI requires?
The answers matter enormously. AI is not merely another technology sector; it is increasingly the substrate upon which the global economy operates. From healthcare diagnostics to financial services, from education to agriculture, AI is being woven into the fabric of modern life. The question of who controls its infrastructure is therefore not a narrow technical concern but a fundamental question about power, opportunity, and the shape of our collective future.
The Oligarchic Infrastructure
The numbers are staggering. Amazon plans to spend approximately 100 billion dollars in 2025, mostly on AI infrastructure for Amazon Web Services. Microsoft has allocated 80 billion dollars for its fiscal year. Google parent company Alphabet is targeting 75 billion dollars. Meta, having dramatically increased its guidance, will spend between 60 and 65 billion dollars. Even Tesla is investing 5 billion dollars in AI-related capital expenditures, primarily for its Cortex training cluster in Texas.
These figures represent more than mere financial muscle. They reflect a fundamental truth about modern AI: it is extraordinarily resource-intensive. Training a state-of-the-art foundation model requires thousands of high-end GPUs running for months, consuming enormous amounts of electricity and generating tremendous heat. Inference, the process of actually using these models to generate outputs, also demands substantial computational resources when operating at scale. The latest data centres being constructed are measured not in megawatts but in gigawatts of power capacity.
Meta's new facility in Louisiana, dubbed Hyperion, will span 2,250 acres and draw up to 5 gigawatts of power. To put this in perspective, that is enough electricity to supply several million homes. The company has struck deals with local nuclear power plants to handle the energy load. This is not unusual. Across the United States and Europe, AI companies are partnering with utilities, reviving retired nuclear facilities, and deploying alternative power solutions to meet their enormous energy demands. Elon Musk's xAI, for instance, operates its Memphis, Tennessee data centre using dozens of gas-powered turbines whilst awaiting grid connection.
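The gigawatt figures are abstract enough that a back-of-envelope check helps. The sketch below converts Hyperion's reported 5 gigawatts into household equivalents; the average household draw is an assumed round number, not a sourced statistic:

```python
# Back-of-envelope conversion of data-centre power capacity into
# household equivalents. The household figure is an assumption
# (~1.2 kW average continuous draw for a US home), used only to
# give the 5 GW number a human scale.

FACILITY_POWER_W = 5e9       # Hyperion's reported capacity: 5 gigawatts
AVG_HOUSEHOLD_W = 1.2e3      # assumed average continuous draw per home

homes_equivalent = FACILITY_POWER_W / AVG_HOUSEHOLD_W
print(f"{homes_equivalent:,.0f} homes")  # on the order of 4 million homes
```

Even allowing generous error bars on the household assumption, a single facility of this class draws as much power as a major metropolitan area's housing stock.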
The scale of this buildout cannot be overstated. OpenAI, SoftBank, and Oracle have announced Stargate, a 500 billion dollar project to construct AI infrastructure over several years. France has pledged 112 billion dollars in AI-related private sector spending, representing Europe's determination to remain competitive. These are not incremental investments; they represent a fundamental restructuring of digital infrastructure comparable to the buildout of electricity grids or telecommunications networks in previous centuries.
At the centre of this infrastructure lies a crucial bottleneck: graphics processing units. Nvidia, which dominates the market for AI-optimised chips, has become one of the world's most valuable companies precisely because its GPUs are essential for training and running large models. The company's H100 chips, along with the export-compliant H800 variants built for the Chinese market, are so sought-after that waiting lists stretch for months, and companies are willing to pay premiums to secure allocation. Nvidia has responded by not merely selling chips but by investing directly in AI companies, creating circular dependencies where it trades GPUs for equity stakes. In September 2025, Nvidia announced a commitment to invest up to 100 billion dollars in OpenAI progressively as infrastructure is deployed, with investments structured around the buildout of 10 gigawatts of computing capacity and paid substantially through GPU allocation.
This hardware concentration creates multiple layers of dependency. Cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud act as aggregators, purchasing vast quantities of GPUs and then reselling access to that computational capacity. AI companies like OpenAI, Anthropic, and others rent this infrastructure, training their models on hardware they do not own. Application developers then access these models through APIs, building their products on top of this multi-layered stack. At each level, a small number of companies control access to the layer below.
Geographic concentration compounds these dynamics. The vast majority of AI infrastructure investment is occurring in wealthy countries with existing digital infrastructure, stable power grids, and proximity to capital. The United States leads, followed by Western Europe and parts of East Asia. Meanwhile, entire continents remain largely absent from this infrastructure buildout. Africa, despite representing nearly a fifth of the world's population, accounts for a minute fraction of global AI computational capacity. According to recent studies, only 5 per cent of Africa's AI talent has access to adequate compute resources, and just 1 per cent has access to on-premise facilities.
The cloud providers themselves acknowledge this concentration. When Amazon CEO Andy Jassy describes the 100 billion dollar investment plan as a 'once-in-a-lifetime type of business opportunity', he is speaking to shareholders about capturing and controlling a fundamental layer of the digital economy. When Microsoft President Brad Smith notes that over half of the company's 80 billion dollar AI spending will occur in the United States, he is making a statement about geographic power as much as technological capacity.
This infrastructure oligarchy is further reinforced by network effects and economies of scale. The more resources a company can deploy, the more customers it can attract, generating revenue that funds further infrastructure investment. The largest players can negotiate better terms with hardware manufacturers, secure priority access to scarce components, and achieve cost efficiencies that smaller operators cannot match. The result is a self-reinforcing cycle where the infrastructure-rich get richer, and new entrants face increasingly insurmountable barriers.
The Democratic Surface
Yet the story does not end with concentrated infrastructure. On the surface, AI has never been more accessible. The same companies pouring billions into data centres are also making powerful tools available to anyone with an internet connection and a credit card. OpenAI's ChatGPT can be accessed for free in a web browser. Google's Gemini is integrated into its widely used search engine and productivity tools. Microsoft's Copilot is woven into Word, Excel, and Teams, bringing AI capabilities to hundreds of millions of office workers worldwide.
More significantly, the cost of using AI has plummeted. In 2023, running inference on large language models cost companies significant sums per query. By 2025, those costs have dropped by orders of magnitude. Some estimates suggest that inference costs have fallen by 90 per cent or more in just two years, making it economically viable to integrate AI into products and services that previously could not justify the expense. This dramatic cost reduction has opened AI to small businesses and individual developers who previously could not afford access.
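The compounding arithmetic behind such declines is worth making explicit. A minimal sketch with illustrative numbers; the starting price and annual decline rate below are assumptions for demonstration, not any vendor's actual pricing:

```python
# Illustrative only: the starting price and decline rate are assumed,
# not actual vendor pricing. The point is how steady percentage
# declines compound into order-of-magnitude cost reductions.

start_cost_per_m_tokens = 30.00   # assumed 2023 price per million tokens, USD
annual_decline = 0.70             # assumed 70 per cent drop per year

cost = start_cost_per_m_tokens
for year in (2024, 2025):
    cost *= (1 - annual_decline)
    print(f"{year}: ${cost:.2f} per million tokens")

# Two consecutive 70% declines compound to a 91% total reduction.
total_drop = 1 - (1 - annual_decline) ** 2
print(f"total decline: {total_drop:.0%}")
```

Under these assumed figures, a workload that cost 30 dollars per million tokens in 2023 costs under 3 dollars two years later, which is why applications that were uneconomic at launch pricing become routine shortly afterwards.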
The open-source movement has emerged as a particularly powerful democratising force. Models like Meta's LLaMA series, Mistral AI's offerings, and most dramatically, China's DeepSeek, have challenged the assumption that the best AI models must be proprietary. DeepSeek R1, released in early 2025, shocked the industry by demonstrating that a model trained for approximately 5.6 million dollars using stripped-down Nvidia H800 chips could achieve performance competitive with models that cost hundreds of millions to develop. The company made its model weights available for free, allowing anyone to download, modify, and use the model without royalty payments.
This represented a profound shift. For years, the conventional wisdom held that state-of-the-art AI required massive capital expenditure that only the wealthiest companies could afford. DeepSeek demonstrated that clever architecture and efficient training techniques could dramatically reduce these costs. The release sent shockwaves through financial markets, briefly wiping a trillion dollars off American technology stocks as investors questioned whether expensive proprietary models would remain commercially viable if open alternatives achieved parity.
Open-source models have created an alternative ecosystem. Platforms like Hugging Face have become hubs where developers share models, datasets, and tools, creating a collaborative environment that accelerates innovation. A developer in Kenya can download a model, fine-tune it on local data, and deploy it to address specific regional needs, all without seeking permission or paying licensing fees. Students can experiment with cutting-edge technology on consumer-grade hardware, learning skills that were previously accessible only to employees of major technology companies.
The API economy has further lowered barriers. Rather than training models from scratch, developers can access sophisticated AI capabilities through simple programming interfaces. A small startup can integrate natural language processing, image recognition, or code generation into its product by making API calls to services offered by larger companies. This allows teams of a few people to build applications that would have required entire research divisions a few years ago.
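To show how thin this integration layer can be, the sketch below assembles a request for a generic chat-completions-style endpoint. The URL, model name, and credential are placeholders for illustration, not a specific vendor's API; a real deployment would send the request with an HTTP client and a valid key:

```python
import json
from urllib import request

# Hypothetical endpoint and model name, for illustration only.
API_URL = "https://api.example.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "example-model") -> request.Request:
    """Assemble a chat-completions-style HTTP request without sending it."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
        },
        method="POST",
    )

req = build_chat_request("Summarise this customer complaint in one sentence.")
print(req.full_url, req.get_method())
```

A handful of lines like these, pointed at a provider's endpoint, is the entire technical surface many startups build on, which is exactly why the availability and pricing of the layer beneath matters so much.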
No-code and low-code platforms have extended this accessibility even further. Tools such as Bubble and Replit allow people with minimal programming experience to create functional AI applications through visual interfaces and natural language instructions. According to Gartner, by 2025 an estimated 70 per cent of new enterprise applications will be developed using low-code or no-code platforms, up from less than 25 per cent in 2023. This democratisation means founders can test ideas quickly without assembling large development teams.
Small and medium enterprises have embraced these accessible tools. A 2024 McKinsey report found that AI adoption among businesses increased by 25 per cent over the previous three years, with 40 per cent of small businesses reporting some level of AI use. These companies are not training frontier models; they are deploying chatbots for customer service, using AI to generate marketing content, automating data analysis, and optimising operations. For them, AI is not about research breakthroughs but about practical tools that improve efficiency and reduce costs.
Educational institutions have also benefited from increased accessibility. Universities in developing countries can now access and study state-of-the-art models that previously would have been beyond their reach. Online courses teach AI skills to millions of students who might never have had access to formal computer science education. Initiatives like those at historically black colleges and universities in the United States provide hands-on training with AI tools, helping to diversify a field that has historically been dominated by graduates of elite institutions.
This accessible surface layer is real and meaningful. It has enabled innovation, created opportunities, and genuinely democratised certain aspects of AI. But it would be a mistake to confuse access to tools with control over infrastructure. The person using ChatGPT does not own the servers that run it. The startup building on OpenAI's API cannot operate if that API becomes unavailable or unaffordable. The developer fine-tuning LLaMA still depends on cloud computing resources to deploy at scale. The democratic layer exists, but it rests on an oligarchic foundation.
Innovation Under Constraint
The relationship between accessible tools and concentrated infrastructure creates a complex landscape for innovation. On one hand, the proliferation of open models and accessible APIs has undeniably spurred creativity and entrepreneurship. On the other, the fundamental dependencies on big tech create structural constraints that shape what innovation is possible and who captures its value.
Consider the position of AI startups. A company like Anthropic, which develops Claude, has raised billions in funding and employs world-class researchers. Yet it remains deeply dependent on infrastructure it does not control. The company has received 8 billion dollars in investment from Amazon, which also provides the cloud computing resources on which Anthropic trains its models. This creates an intimate relationship that is simultaneously collaborative and potentially constraining. Amazon benefits from association with cutting-edge AI research. Anthropic gains access to computational resources it could not easily replicate. But this partnership also ties Anthropic's fate to Amazon's strategic priorities.
Similar dynamics play out across the industry. OpenAI's relationship with Microsoft, which has invested 13 billion dollars and provides substantial Azure computing capacity, exemplifies this interdependence. While Microsoft does not own OpenAI, it has exclusive access to certain capabilities, significant influence over the company's direction, and strong financial incentives aligned with OpenAI's success. The startup maintains technical independence but operates within a web of dependencies that constrain its strategic options.
These partnerships are not inherently problematic. They enable companies to access resources they could not otherwise afford, allowing them to focus on research and product development rather than infrastructure management. The issue is the asymmetry of power. When a startup's ability to operate depends on continued access to a partner's infrastructure, that partner wields considerable leverage. Pricing changes, capacity limitations, or strategic shifts by the infrastructure provider can fundamentally alter the startup's viability.
The venture capital landscape reflects and reinforces these dynamics. In 2025, a handful of well-funded startups captured 62 per cent of AI investment. OpenAI, valued at 300 billion dollars despite being unprofitable, represents an extreme example of capital concentration. The expectation among investors seems to be that AI markets will consolidate, with a few winners capturing enormous value. This creates pressure for startups to grow rapidly, which often means deeper integration with big tech infrastructure providers.
Yet innovation continues to emerge from unexpected places, often specifically in response to the constraints imposed by infrastructure concentration. The DeepSeek breakthrough exemplifies this. Facing restrictions on access to the most advanced American chips due to export controls, Chinese researchers developed training techniques that achieved competitive results with less powerful hardware. The constraints forced innovation, producing methods that may ultimately benefit the entire field by demonstrating more efficient paths to capable models.
Open-source development has similarly thrived partly as a reaction to proprietary control. When Meta released LLaMA, it was motivated partly by the belief that open models would drive adoption and create ecosystems around Meta's tools, but also by the recognition that the company needed to compete with OpenAI's dominance. The open-source community seized on this opportunity, rapidly creating a flourishing ecosystem of fine-tuned models, tools, and applications. Within months of LLaMA's release, developers had created Vicuna, an open chat assistant claiming 90 per cent of ChatGPT's quality.
This dynamic benefits innovation in some ways. The rapid iteration enabled by open source means that any advancement by proprietary models quickly gets replicated and improved by the community. Features that OpenAI releases often appear in open models within weeks. This competitive pressure keeps the entire field moving forward and prevents any single company from building an insurmountable lead based purely on model capabilities.
However, this same dynamic creates challenges for companies trying to build sustainable businesses. If core capabilities are quickly replicated by free open-source alternatives, where does competitive advantage lie? Companies are increasingly finding that advantage not in model performance alone but in their ability to deploy at scale, integrate AI into larger product ecosystems, and leverage proprietary data. These advantages correlate strongly with infrastructure ownership and existing market positions.
Smaller companies navigate this landscape through various strategies. Some focus on vertical specialisation, building models or applications for specific industries where domain expertise matters more than raw scale. A legal tech startup might fine-tune open models on case law and legal documents, creating value through specialisation rather than general capability. Healthcare AI companies integrate models with clinical data and workflows, adding value through integration rather than fundamental research.
Others pursue partnership strategies, positioning themselves as essential complements to big tech offerings rather than competitors. A company providing model evaluation tools or fine-tuning services becomes valuable to multiple large players, reducing dependence on any single one. Some startups explicitly design their technology to be cloud-agnostic, ensuring they can switch infrastructure providers if needed, though this often comes with added complexity and reduced ability to leverage platform-specific optimisations.
The most successful companies in this environment often combine multiple approaches. They utilise open-source models to reduce dependence on proprietary APIs, maintain relationships with multiple cloud providers to avoid lock-in, build defensible vertical expertise, and move quickly to capture emerging niches before larger companies can respond. This requires sophisticated strategy and often more capital than would be needed in a less concentrated market structure.
Innovation continues, but it is increasingly channelled into areas where the infrastructure bottleneck matters less or where new entrants can leverage open resources to compete. This may be positive in some respects, encouraging efficiency and broad-based creativity. But it also means that certain types of innovation, particularly pushing the boundaries of what frontier models can achieve, remain largely the province of companies with the deepest pockets and most extensive infrastructure.
The Competition Question
The concentration of AI infrastructure and the complex dependencies it creates have not escaped the attention of competition authorities. Antitrust regulators in the United States, Europe, and beyond are grappling with how to apply traditional competition frameworks to a technology landscape that often defies conventional categories.
In the United States, both the Federal Trade Commission and the Department of Justice Antitrust Division have launched investigations into AI market dynamics. The FTC has scrutinised partnerships between big tech companies and AI startups, questioning whether these arrangements amount to de facto acquisitions that circumvent merger review processes. When Microsoft invests heavily in OpenAI and becomes its exclusive cloud provider, is that meaningfully different from an outright acquisition in terms of competitive effects?
The DOJ has focused on algorithmic pricing and the potential for AI tools to facilitate tacit collusion. In August 2025, Assistant Attorney General Gail Slater warned that the DOJ's algorithmic pricing probes would increase as AI adoption grows. The concern is that if multiple companies use AI tools trained on similar data or provided by the same vendor, their pricing might become implicitly coordinated without explicit agreement, raising prices for consumers.
Europe has taken a more comprehensive approach. The European Union's Digital Markets Act, which came into force in 2024, designates certain large platforms as 'gatekeepers' subject to ex ante regulations. The European Commission has indicated openness to expanding this framework to cover AI-specific concerns. Preliminary investigations have examined whether Google's agreements to preinstall its Gemini Nano model on Samsung devices constitute anticompetitive exclusivity arrangements that foreclose rivals.
The United Kingdom's Competition and Markets Authority conducted extensive studies on AI market structure, identifying potential chokepoints in the supply chain. Their analysis focused on control over computational resources, training data, and distribution channels, finding that a small number of companies occupy critical positions across multiple layers of the AI stack. The CMA has suggested that intervention may be necessary to prevent these chokepoints from stifling competition.
These regulatory efforts face significant challenges. AI markets are evolving so rapidly that traditional antitrust analysis struggles to keep pace. Merger guidelines written for industrial-era acquisitions may not adequately capture the competitive dynamics of the AI stack. When Microsoft pays to embed OpenAI capabilities into its products, the effects ripple through multiple markets in ways that are difficult to predict or model using standard economic frameworks.
The political environment adds further complexity. In early 2025, the Trump administration rescinded the Biden-era executive order on AI, which had emphasised competition concerns alongside safety and security issues. The new administration's approach prioritised removing regulatory barriers to AI innovation, with competition taking a less prominent role. However, both Republican and Democratic antitrust officials have expressed concern about big tech dominance, suggesting that bipartisan scrutiny will continue even if specific approaches differ.
Regulators face difficult trade-offs. Heavy-handed intervention risks stifling innovation and potentially ceding competitive advantage to countries with less restrictive policies. But a hands-off approach risks allowing market structures to ossify in ways that permanently entrench a few dominant players. The challenge is particularly acute because the companies under scrutiny are also American champions in a global technology race with significant geopolitical implications.
There are also genuine questions about whether traditional antitrust concerns fully apply. The rapid replication of innovations by open-source alternatives suggests that no single company can maintain a lasting moat based on model capabilities alone. The dramatic cost reductions in inference undermine theories that scale economies will lead to natural monopolies. The fact that DeepSeek produced a competitive model for a fraction of what industry leaders spend challenges assumptions about insurmountable barriers to entry.
Yet other evidence suggests that competition concerns are legitimate. The concentration of venture capital in a few well-funded startups, the critical importance of distribution channels controlled by platform holders, and the vertical integration of big tech companies across the AI stack all point to structural advantages that go beyond mere technical capability. When Apple integrates OpenAI's ChatGPT into iOS, it shapes the competitive landscape for every other AI assistant in ways that model quality alone cannot overcome.
Antitrust authorities must also contend with the global nature of AI competition. Aggressive enforcement in one jurisdiction might disadvantage domestic companies without producing corresponding benefits if competitors in other countries face no similar constraints. The strategic rivalry between the United States and China over AI leadership adds layers of complexity that transcend traditional competition policy.
The emergence of open-source models has been championed by some as a solution to competition concerns, providing an alternative to concentrated proprietary control. But regulators have been sceptical that open models fully address the underlying issues. If the infrastructure to run these models at scale remains concentrated, and if distribution channels are controlled by the same companies, then open-source weights may democratise innovation without fundamentally altering market power dynamics.
Potential regulatory responses range from mandating interoperability and data portability to restricting certain types of vertical integration or exclusive partnerships. Some have proposed requiring big tech companies to provide access to their infrastructure on fair and reasonable terms, treating cloud computing resources as essential facilities. Others advocate for transparency requirements, compelling companies to disclose details about data usage, training methods, and commercial relationships.
The path forward remains uncertain. Competition authorities are learning as markets evolve, developing expertise and frameworks in real time. The decisions made in the next few years will likely shape AI market structures for decades, with profound implications for innovation, consumer welfare, and the distribution of economic power.
The Global Equity Gap
While debates about competition and innovation play out primarily in wealthy nations, the starkest dimension of AI infrastructure concentration may be its global inequity. The digital divide, already a significant barrier to economic participation, threatens to become an unbridgeable chasm in the AI era.
The statistics are sobering. According to the International Telecommunication Union, approximately 2.6 billion people, representing 32 per cent of the world's population, remain offline in 2024. The disparity between wealthy and poor nations is dramatic: 93 per cent of people in high-income countries have internet access, compared with just 27 per cent in low-income countries. Urban populations are far more connected than rural ones, with 83 per cent of urban dwellers online globally compared with 48 per cent in rural areas.
Access to the internet is merely the first step. Meaningful participation in the AI economy requires reliable high-speed connectivity, which is even less evenly distributed. Beyond connectivity lies the question of computational resources. Running even modest AI applications requires more bandwidth and processing power than basic web browsing. Training models, even small ones, demands resources that are entirely out of reach for individuals and institutions in most of the world.
The geographic concentration of AI infrastructure means that entire regions are effectively excluded from the most transformative aspects of the technology. Africa, home to nearly 1.4 billion people, has virtually no AI data centre infrastructure. Latin America similarly lacks the computational resources being deployed at scale in North America, Europe, and East Asia. This creates dependencies that echo colonial patterns, with developing regions forced to rely on infrastructure owned and controlled by companies and countries thousands of miles away.
The implications extend beyond infrastructure to data and models themselves. Most large language models are trained predominantly on English-language text, with some representation of other widely spoken European and Asian languages. Thousands of languages spoken by hundreds of millions of people are barely represented. This linguistic bias means that AI tools work far better for English speakers than for someone speaking Swahili, Quechua, or any of countless other languages. Voice AI, image recognition trained on Western faces, and models that embed cultural assumptions from wealthy countries all reinforce existing inequalities.
The talent gap compounds these challenges. Training to become an AI researcher or engineer typically requires access to higher education, expensive computing resources, and immersion in communities where cutting-edge techniques are discussed and shared. Universities in developing countries often lack the infrastructure to provide this training. Ambitious students may study abroad, but this creates brain drain, as graduates often remain in wealthier countries where opportunities and resources are more abundant.
Some efforts are underway to address these disparities. Regional initiatives in Africa, such as the Regional Innovation Lab in Benin, are working to develop AI capabilities in African languages and contexts. The lab is partnering with governments in Benin, Senegal, and Côte d'Ivoire to create voice AI in the Fon language, demonstrating that linguistic inclusion is technically feasible when resources and will align. Similarly, projects in Kenya and other African nations are deploying AI for healthcare, agriculture, and financial inclusion, showing the technology's potential to address local challenges.
However, these initiatives operate at a tiny fraction of the scale of investments in wealthy countries. France's 112 billion dollar commitment to AI infrastructure dwarfs total investment in computational infrastructure across the entire African continent. The Africa Green Compute Coalition, designed to address AI equity challenges, represents promising intent but requires far more substantial investment to materially change the landscape.
International organisations have recognised the urgency of bridging the AI divide. The United Nations Trade and Development's Technology and Innovation Report 2025 warns that while AI can be a powerful tool for progress, it is not inherently inclusive. The report calls for investments in digital infrastructure, capability building, and AI governance frameworks that prioritise equity. The World Bank estimates that 418 billion dollars would be needed to connect all individuals worldwide through digital infrastructure, providing a sense of the investment required merely to establish basic connectivity, let alone advanced AI capabilities.
The G20, under South Africa's presidency, has established an AI Task Force focused on ensuring that the AI equity gap does not become the new digital divide. The emphasis is on shifting from centralised global policies to local approaches that foster sovereignty and capability in developing countries. This includes supporting private sector growth, enabling startups, and building local compute infrastructure rather than perpetuating dependency on foreign-owned resources.
There are also concerns about whose values and priorities get embedded in AI systems. When models are developed primarily by researchers in wealthy countries, trained on data reflecting the interests and perspectives of those societies, they risk perpetuating biases and blind spots. A healthcare diagnostic tool trained on populations in the United States may not accurately assess patients in Southeast Asia. An agricultural planning system optimised for industrial farming in Europe may provide poor guidance for smallholder farmers in sub-Saharan Africa.
The consequences of this inequity are profound. AI is increasingly being integrated into critical systems for education, healthcare, finance, and public services. If entire populations lack access to these capabilities, or if the AI systems available to them are second-rate or inappropriate for their contexts, the gap in human welfare and economic opportunity will widen dramatically. The potential for AI to exacerbate rather than reduce global inequality is substantial and pressing.
Addressing this challenge requires more than technical fixes. It demands investment in infrastructure, education, and capacity building in underserved regions. It requires ensuring that AI development is genuinely global, with researchers, entrepreneurs, and users from diverse contexts shaping the technology's trajectory. It means crafting international frameworks that promote equitable access to both AI capabilities and the infrastructure that enables them, rather than allowing current patterns of concentration to harden into permanent structures of digital hierarchy.
Towards an Uncertain Future
The tension between accessible AI tools and concentrated infrastructure is not a temporary phenomenon that market forces will automatically resolve. It reflects fundamental dynamics of capital, technology, and power that are likely to persist and evolve in complex ways. The choices made now, by companies, policymakers, and users, will shape whether AI becomes a broadly shared resource or a mechanism for entrenching existing inequalities.
Several possible futures present themselves. In one scenario, the current pattern intensifies. A small number of technology giants continue to dominate infrastructure, extending their control through strategic investments, partnerships, and vertical integration. Their market power allows them to extract rents from every layer of the AI stack, capturing the majority of value created by AI applications. Startups and developers build on this infrastructure because they have no alternative, and regulators struggle to apply antitrust frameworks designed for different industries to this new technological reality. Innovation continues but flows primarily through channels controlled by the incumbents. Global inequities persist, with developing countries remaining dependent on infrastructure owned and operated by wealthy nations and their corporations.
In another scenario, open-source models and decentralised infrastructure challenge this concentration. Advances in efficiency reduce the computational requirements for capable models, lowering barriers to entry. New architectures enable training on distributed networks of consumer-grade hardware, undermining the economies of scale that currently favour massive centralised data centres. Regulatory interventions mandate interoperability and prevent exclusionary practices, ensuring that control over infrastructure does not translate to control over markets. International cooperation funds infrastructure development in underserved regions, and genuine AI capabilities become globally distributed. Innovation flourishes across a diverse ecosystem of contributors, and the benefits of AI are more equitably shared.
A third possibility involves fragmentation. Geopolitical rivalries lead to separate AI ecosystems in different regions, with limited interoperability. The United States, China, Europe, and perhaps other blocs develop distinct technical standards, governance frameworks, and infrastructure. Competition between these ecosystems drives innovation but also creates inefficiencies and limits the benefits of global collaboration. Smaller countries and regions must choose which ecosystem to align with, effectively ceding digital sovereignty to whichever bloc they select.
Most likely, elements of all these scenarios will coexist. The technology landscape may exhibit concentrated control in some areas while remaining competitive or even decentralised in others. Different regions and domains may evolve along different trajectories. The outcome will depend on myriad decisions, large and small, by actors ranging from corporate executives to regulators to individual developers.
What seems clear is that the democratic accessibility of AI tools is necessary but insufficient to ensure equitable outcomes. As long as the underlying infrastructure remains concentrated, the power asymmetries will persist, shaping who benefits from AI and who remains dependent on the decisions of a few large organisations. The open-source movement has demonstrated that alternatives are possible, but sustaining and scaling these alternatives requires resources and collective action.
Policy will play a crucial role. Competition authorities must develop frameworks that address the realities of AI markets without stifling the innovation that makes them dynamic. This may require new approaches to merger review, particularly for deals involving critical infrastructure or distribution channels. It may necessitate mandating certain forms of interoperability or data portability. It certainly demands greater technical expertise within regulatory agencies to keep pace with rapidly evolving technology.
International cooperation is equally critical. The AI divide cannot be bridged by any single country or organisation. It requires coordinated investment in infrastructure, education, and research capacity across the developing world. It demands governance frameworks that include voices from all regions, not merely the wealthy countries where most AI companies are based. It calls for data-sharing arrangements that enable the creation of models and systems appropriate for diverse contexts and languages.
The technology community itself must grapple with these questions. The impulse to innovate rapidly and capture market share is natural and often productive. But engineers, researchers, and entrepreneurs also have agency in choosing what to build and how to share it. The decisions by DeepSeek to release its model openly, by Meta to make LLaMA available, and by countless developers to contribute to open-source projects all demonstrate that alternatives to pure proprietary control exist and can thrive.
Ultimately, the question is not whether AI tools will be accessible, but whether that accessibility will be accompanied by genuine agency and opportunity. A world where billions can use AI applications built by a handful of companies is very different from a world where billions can build with AI, shape its development, and share in its benefits. The difference between these futures is not primarily technical. It is about power, resources, and the choices we collectively make about how transformative technologies should be governed and distributed.
The paradox of progress thus presents both a warning and an opportunity. The warning is that technological capability does not automatically translate to equitable outcomes. Without deliberate effort, AI could become yet another mechanism through which existing advantages compound and existing inequalities deepen. The opportunity is that we can choose otherwise. By insisting on openness, investing in distributed capabilities, crafting thoughtful policy, and demanding accountability from those who control critical infrastructure, it is possible to shape an AI future that is genuinely transformative and broadly beneficial.
The infrastructure is being built now. The market structures are crystallising. The dependencies are being established. This is the moment when trajectories are set. What we build today will constrain and enable what becomes possible tomorrow. The democratic promise of AI is real, but realising it requires more than accessible tools. It demands confronting the oligarchic reality of concentrated infrastructure and choosing, consciously and collectively, to build something better.
References and Sources
This article draws upon extensive research from multiple authoritative sources including:
- CNBC: Tech megacaps plan to spend more than $300 billion in 2025 as AI race intensifies (February 2025)
- Yahoo Finance: Big Tech set to invest $325 billion this year as hefty AI bills come under scrutiny (February 2025)
- Empirix Partners: The Trillion Dollar Horizon: Inside 2025's Already Historic AI Infrastructure Investments (February 2025)
- TrendForce: AI Infrastructure 2025: Cloud Giants & Enterprise Playbook (July 2025)
- Goldman Sachs Global Investment Research: Infrastructure spending projections
- McKinsey & Company: AI adoption reports (2024)
- Gartner: Technology adoption forecasts (2023-2025)
- International Telecommunication Union: Global connectivity statistics (2024)
- World Bank: Digital infrastructure investment estimates
- United Nations Trade and Development: Technology and Innovation Report 2025
- CCIA: Intense Competition Across the AI Stack (March 2025)
- CSET Georgetown: Promoting AI Innovation Through Competition (May 2025)
- World Economic Forum: Digital divide and AI governance initiatives
- MDPI Applied Sciences: The Democratization of Artificial Intelligence (September 2024)
- Various technology company earnings calls and investor presentations (Q4 2024, Q1 2025)
***

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk