Infrastructure Is Destiny: How Orbital Compute Reshapes the Global AI Race

The question used to be simple: who has the best algorithm? For a decade, the artificial intelligence race rewarded clever code. Researchers at university labs and scrappy startups could publish a paper, train a model on rented cloud compute, and genuinely compete with the biggest players on the planet. That era is ending. The new race belongs to whoever controls the physical stack, from the launchpad to the server rack to the orbital relay station beaming data back to Earth.

In February 2026, SpaceX absorbed xAI in a deal valued at $1.25 trillion, according to Bloomberg. The transaction, structured as a share exchange, merged rocket manufacturing, satellite broadband, and frontier AI development under a single corporate umbrella. Elon Musk described the result as “the most ambitious, vertically-integrated innovation engine on (and off) Earth.” Days later, SpaceX filed with the Federal Communications Commission for authorisation to launch up to one million satellites as part of what it called an “orbital data centre.” The filing proposed satellites operating between 500 and 2,000 kilometres in altitude, functioning as distributed processing nodes optimised for large-scale AI inference.

This is not incremental progress. It is a structural break. And it raises a question that the entire technology industry will spend the next decade answering: does the future of artificial intelligence belong to whoever writes the smartest code, or to whoever controls the infrastructure on which all code must run?

The Stack Nobody Else Owns

To understand why the SpaceX-xAI combination matters, you need to see the full vertical stack it now commands. At the bottom sits rocket manufacturing and launch services. SpaceX launched more than 2,500 Starlink satellites in 2025 alone and, according to Reuters, generated an estimated $8 billion in profit on $15 billion to $16 billion of revenue for the year, on track to exceed its own $15.5 billion projection. No other entity on Earth can put hardware into orbit at remotely comparable cost or cadence.

One layer up sits the satellite constellation itself. More than 9,500 Starlink satellites have been launched to date, with roughly 8,000 functioning. The network already provides broadband connectivity across six continents. Next-generation Starlink V3 satellites, slated for deployment beginning in 2026 aboard Starship, will deliver more than 20 times the capacity of current V2 satellites. Each V3 satellite will support terabit-class bandwidth and connect to the broader constellation via laser mesh links capable of up to one terabit per second. Current Starlink satellites already carry three lasers operating at up to 200 gigabits per second, forming a mesh network that routes data across the constellation without touching the ground. This means the network can move information between continents at the speed of light through vacuum, which is roughly 47 per cent faster than light travels through fibre optic cables.
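
That 47 per cent figure follows directly from the refractive index of silica glass. A minimal sketch, assuming an index of roughly 1.47 and an illustrative 10,800-kilometre intercontinental path (neither figure appears in the filing), shows the arithmetic; real routes add switching, routing, and hop overheads that this ignores.

```python
# Rough comparison of signal propagation time through vacuum vs optical fibre.
# Assumptions (not from the article): fibre refractive index ~1.47, an
# illustrative 10,800 km great-circle path, and straight-line propagation.

C_VACUUM_KM_S = 299_792.458          # speed of light in vacuum, km/s
FIBRE_REFRACTIVE_INDEX = 1.47        # typical value for silica fibre (assumed)
C_FIBRE_KM_S = C_VACUUM_KM_S / FIBRE_REFRACTIVE_INDEX

def one_way_delay_ms(distance_km: float, speed_km_s: float) -> float:
    """Propagation delay in milliseconds, ignoring switching and routing."""
    return distance_km / speed_km_s * 1000

distance_km = 10_800                 # illustrative intercontinental path
vacuum_ms = one_way_delay_ms(distance_km, C_VACUUM_KM_S)
fibre_ms = one_way_delay_ms(distance_km, C_FIBRE_KM_S)

print(f"Vacuum (laser mesh): {vacuum_ms:.1f} ms one way")
print(f"Fibre:               {fibre_ms:.1f} ms one way")
print(f"Vacuum is ~{(C_VACUUM_KM_S / C_FIBRE_KM_S - 1) * 100:.0f}% faster")
```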

Then comes the AI layer. Before the merger, xAI had already built Colossus, widely regarded as the world's largest AI supercomputer. Located in a repurposed Electrolux factory in Memphis, Tennessee, Colossus went from conception to 100,000 Nvidia H100 GPUs in just 122 days, going live on 22 July 2024; Nvidia CEO Jensen Huang noted that projects of this scale typically take around four years. The facility then doubled to 200,000 GPUs in another 92 days. As of mid-2025, Colossus comprises 150,000 H100 GPUs, 50,000 H200 GPUs, and 30,000 GB200 GPUs, with stated plans to expand beyond one million GPUs. The system uses Nvidia Spectrum-X Ethernet networking and, per Nvidia, achieves 95 per cent data throughput with no application latency degradation or packet loss. It draws up to 250 megawatts from the grid, supplemented by a 150-megawatt Megapack battery system, with an expansion target of 1.2 gigawatts.
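
A back-of-the-envelope check, using assumed per-accelerator power draws and an assumed overhead multiplier (none of which come from xAI or Nvidia), suggests why the 250-megawatt grid connection and the 1.2-gigawatt expansion target are the binding constraints:

```python
# Back-of-the-envelope power estimate for the Colossus GPU fleet described above.
# The per-device wattages and the overhead multiplier are assumptions for
# illustration, not figures from xAI or Nvidia.

fleet = {
    "H100":  (150_000, 700),    # (count, assumed watts per GPU)
    "H200":  (50_000, 700),
    "GB200": (30_000, 1_200),   # assumed per-GPU share of a GB200 system
}

OVERHEAD = 1.4   # assumed multiplier for CPUs, networking, cooling

gpu_load_mw = sum(count * watts for count, watts in fleet.values()) / 1e6
total_mw = gpu_load_mw * OVERHEAD

print(f"GPUs in fleet: {sum(c for c, _ in fleet.values()):,}")
print(f"Estimated GPU load:      {gpu_load_mw:,.0f} MW")
print(f"With assumed overheads:  {total_mw:,.0f} MW "
      f"(vs 250 MW grid draw plus 150 MW battery cited above)")
```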

Finally, the communications layer ties everything together. Starlink already provides the backbone for global data relay, and the proposed orbital data centre satellites would connect to Starlink via high-bandwidth optical links before routing down to ground stations. The result is a closed loop: SpaceX builds the rockets, launches the satellites, operates the network, trains the AI models, and serves the inference requests, all without depending on a single external supplier for any critical link in the chain.

Jensen Huang, speaking at the World Economic Forum in Davos in January 2026, described AI as a “five-layer cake” comprising energy, chips, infrastructure, AI models, and applications. He called the current moment “the largest infrastructure build-out in human history” and estimated that the next five years would present a $3 trillion to $4 trillion AI infrastructure opportunity. The SpaceX-xAI merger represents perhaps the most aggressive attempt by any single entity to own every layer of that cake simultaneously.

Why the Grid Cannot Keep Up

The rationale for moving AI infrastructure into orbit begins with a terrestrial crisis. The primary constraint on AI expansion is no longer capital or algorithmic talent. It is electricity.

According to the International Energy Agency, global electricity consumption by data centres is projected to more than double by 2030, reaching approximately 945 terawatt hours, with AI workloads as the primary driver. In the United States specifically, the Energy Information Administration projects total electricity consumption will reach record levels in both 2025 and 2026, rising from about 4,110 billion kilowatt hours in 2024 to more than 4,260 billion kilowatt hours in 2026. Data centres already consume more than 4 per cent of the country's total electricity supply.
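
Restated as growth rates, the cited projections look like this; note that the 2024 global data-centre baseline is only implied by the phrase “more than double,” so the sketch below treats it as an upper bound rather than a reported figure.

```python
# Restating the cited electricity projections as implied growth rates.
# The 2024 global data-centre baseline is inferred from "more than double",
# not given directly in the text.

global_2030_twh = 945
implied_2024_max_twh = global_2030_twh / 2   # "more than double" => baseline below this

us_2024_bkwh = 4_110     # total US consumption, billion kWh
us_2026_bkwh = 4_260

us_growth = (us_2026_bkwh / us_2024_bkwh - 1) * 100
annualised = ((us_2026_bkwh / us_2024_bkwh) ** 0.5 - 1) * 100

print(f"Implied 2024 data-centre baseline: under ~{implied_2024_max_twh:.0f} TWh")
print(f"US total consumption growth 2024-2026: {us_growth:.1f}% "
      f"(~{annualised:.1f}% per year)")
```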

The numbers at the facility level are staggering. The Stargate project, a $500 billion AI infrastructure joint venture announced by President Donald Trump in January 2025 involving OpenAI, SoftBank, and Oracle, has already brought its flagship site in Abilene, Texas online. That single campus houses hundreds of thousands of Nvidia GB200 GPUs and pulls roughly 900 megawatts of power. Meta is developing a one-gigawatt “Prometheus” cluster and has plans for a five-gigawatt “Hyperion” facility. A single AI-related task can consume up to 1,000 times more electricity than a traditional web search, which explains why a handful of AI facilities can destabilise a regional power supply in ways that hundreds of conventional data centres never could.

The grid simply cannot keep pace. A survey found that 72 per cent of data centre industry respondents consider power and grid capacity to be “very or extremely challenging.” Power constraints are extending data centre construction timelines by 24 to 72 months. In the PJM regional grid serving 65 million people across the eastern United States, capacity market clearing prices for the 2026 to 2027 delivery year surged to $329.17 per megawatt-day, more than ten times the $28.92 per megawatt-day price just two years earlier. Regional grids in many cases cannot accommodate large-scale data centres without transmission and distribution upgrades that require five to ten years of planning, permitting, and construction.
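
The scale of that price move is easy to check from the two clearing prices quoted:

```python
# The PJM capacity-price jump cited above, expressed as a multiple.
old_price = 28.92     # $/MW-day, roughly two years earlier
new_price = 329.17    # $/MW-day, 2026-2027 delivery year

print(f"Increase: {new_price / old_price:.1f}x "
      f"({new_price - old_price:,.2f} $/MW-day added)")
```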

This is the opening that orbital infrastructure exploits. In space, continuous access to solar energy eliminates dependence on terrestrial power grids. Waste heat can be radiated directly into space, removing one of the most expensive and water-intensive requirements of ground-based data centres. A typical terrestrial data centre uses 300,000 gallons of water daily for cooling, with the largest facilities consuming 5 million gallons a day, equivalent to the demands of a town of 50,000 residents. And because orbital platforms sit above national borders, they bypass the community resistance and permitting bottlenecks that have slowed terrestrial expansion to a crawl.

Musk has stated that deploying one million tonnes of satellites per year could add approximately 100 gigawatts of AI computing capacity, with the potential to scale to one terawatt annually. “My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space,” he wrote. Whether that timeline proves accurate or wildly optimistic, the strategic logic is clear: if you cannot plug into the grid fast enough, you go above it.
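
Taken at face value, those targets imply a specific compute-per-mass ratio. The sketch below is nothing more than a ratio of the quoted numbers, not an engineering estimate:

```python
# Implied compute-per-mass figures from Musk's stated targets.
# These are simple ratios of the cited numbers, not engineering estimates.

tonnes_per_year = 1_000_000
capacity_added_gw = 100

kw_per_tonne = capacity_added_gw * 1e6 / tonnes_per_year   # GW -> kW
print(f"Implied power density: {kw_per_tonne:.0f} kW of compute per tonne launched")

# Scaling the same ratio to the stated one-terawatt-per-year ambition:
tw_target_gw = 1_000
tonnes_for_tw = tw_target_gw / capacity_added_gw * tonnes_per_year
print(f"Mass to orbit for 1 TW/year at that ratio: "
      f"{tonnes_for_tw / 1e6:.0f} million tonnes")
```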

The Terrestrial Rivals and Their Structural Gaps

No competitor currently matches this vertical integration, though several are trying to close the gap through different strategies.

Amazon represents the most credible challenger, combining Project Kuiper (rebranded as Amazon Leo in November 2025) with AWS cloud infrastructure. Amazon has invested over $10 billion in launch contracts alone and plans a constellation of 3,236 LEO satellites across three orbital shells. As of early 2026, the company has launched more than 200 production satellites, with its first Ariane 6 mission in February 2026 deploying 32 satellites in a single flight. However, Amazon faces an FCC deadline to deploy 1,618 satellites by July 2026, a requirement it is unlikely to meet at its current launch cadence; in January 2026, Amazon filed for a regulatory waiver to extend the deadline. The total capital expenditure for the first-generation system is estimated at between $16.5 billion and $20 billion, significantly exceeding initial guidance.

The structural gap is illuminating. Amazon must purchase launches from external providers, including, remarkably, SpaceX's own Falcon 9 rockets. It does not manufacture its own launch vehicles. Blue Origin, the Jeff Bezos-founded rocket company, has yet to achieve the launch cadence necessary to serve as Kuiper's primary deployer. And while AWS provides formidable cloud infrastructure on the ground, with plans for more than 300 ground stations to interface with the Leo constellation, Amazon has not announced plans for orbital compute capabilities comparable to SpaceX's vision. The result is a competitor that owns significant pieces of the stack but not the complete vertical chain.

The European Union is pursuing sovereignty through IRIS², its Infrastructure for Resilience, Interconnectivity and Security by Satellite programme. Awarded to the SpaceRISE consortium of SES, Eutelsat, and Hispasat in October 2024, IRIS² carries a budget of 10.6 billion euros, including 6.5 billion euros from public funding and over 4 billion euros from industry. The system plans approximately 290 satellites across LEO and MEO orbits. But the first launch is not envisioned until 2029, with full operational capacity expected in 2030. The programme's geopolitical urgency sharpened after the February 2025 suspension of United States military aid to Ukraine, which raised questions about continued Starlink availability and underscored Europe's dependency on American infrastructure. By the time the European constellation reaches operational status, SpaceX may have tens of thousands of additional satellites in orbit.

China presents a different kind of challenge, one driven by state coordination rather than corporate integration. The Guowang constellation aims for 13,000 satellites, with plans to launch 310 in 2026, 900 in 2027, and 3,600 annually beginning in 2028. The Qianfan constellation, backed by the Shanghai municipal government and developed by Shanghai SpaceCom Satellite Technology, targets 15,000 satellites by 2030. Most significantly for the AI infrastructure question, China launched the “Three-Body Computing Constellation” in May 2025 via a Long March-2D rocket, sending 12 satellites into orbit as a first batch. Developed by the China Aerospace Science and Industry Corporation in partnership with Zhejiang Lab, each satellite carries an 8-billion-parameter AI model capable of 744 tera operations per second. Collectively, the initial 12 satellites achieved 5 peta operations per second, equivalent to a top-tier supercomputer. The satellites demonstrated the ability to classify astronomical phenomena and terrestrial infrastructure with 94 per cent accuracy without ground intervention, and by processing data in space they reduce downlink data volume by a factor of 1,000 for specific tasks. Plans call for scaling to 2,800 satellites delivering exa-scale compute power by 2030.

China's approach demonstrates that the orbital AI concept is not unique to SpaceX. But China lacks a single vertically integrated entity controlling the entire stack. Its satellite programmes are distributed across state-owned enterprises, private companies, and municipal governments. The coordination overhead of this distributed model may prove a disadvantage against a single entity that can make decisions at the speed of a corporate hierarchy rather than a bureaucratic one.

The Data Feedback Loop

Vertical integration does not merely reduce costs. It creates a compounding advantage through data feedback loops that terrestrial-only competitors cannot replicate.

Consider what happens when the same entity operates both the satellite constellation and the AI models. Starlink generates vast quantities of real-time data about atmospheric conditions, signal propagation, orbital debris patterns, and network traffic flows across the entire globe. That data feeds directly into xAI's models, which can optimise satellite operations, predict hardware failures, and improve routing algorithms. The improved operations generate better data, which produces better models, which further improve operations. This is the flywheel effect that has powered platform monopolies in the internet age, now extended to orbital infrastructure.

The Harvard Business Review noted in November 2025 that businesses across industries are using real-time satellite data to gain competitive advantage, with the number of active satellites tripling in five years and projected to reach 60,000 by 2030. Modern satellites equipped with AI and edge computing have become “smart tools for predictive logistics, environmental monitoring, and fast disaster response.” Yet only 18 per cent of surveyed executives expect to scale these tools soon, held back by the perception that space technology is too complex for daily business. A vertically integrated provider that can package satellite data, AI analysis, and connectivity into a single service removes that complexity barrier entirely.

The implications for training data are equally significant. An entity with global satellite coverage has access to a continuously updated stream of Earth observation data that no terrestrial competitor can match. Remote sensing, weather patterns, maritime tracking, agricultural monitoring, urban development, and infrastructure change detection all become training inputs. When the AI models trained on this data are then used to optimise the satellite constellation that gathered it, the loop closes in a way that generates structural advantages compounding over time.

The Algorithmic Counterargument

Against this infrastructure-first thesis stands a powerful rejoinder: DeepSeek.

In January 2025, the Chinese AI lab released its R1 reasoning model, achieving performance competitive with OpenAI's o1 on mathematical and coding benchmarks. The claimed training cost was approximately $5.6 million using just 2,000 GPUs over 55 days, perhaps 5 per cent of what OpenAI spent on comparable capability. DeepSeek's architectural innovations, including Multi-Head Latent Attention and its proprietary Mixture of Experts approach, demonstrated that clever engineering could substitute for brute-force compute to a remarkable degree. One year later, DeepSeek R1 remained the most liked open-source model on Hugging Face.
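
The reported figures translate into a strikingly low implied cost per GPU-hour, which is worth making explicit. The arithmetic below simply restates the cited numbers; it says nothing about how DeepSeek accounted for hardware, staff, or failed runs.

```python
# The DeepSeek figures cited above, restated as GPU-hours and an implied rate.
gpus = 2_000
days = 55
reported_cost_usd = 5_600_000

gpu_hours = gpus * days * 24
cost_per_gpu_hour = reported_cost_usd / gpu_hours

print(f"Total GPU-hours: {gpu_hours:,}")                          # 2,640,000
print(f"Implied cost:    ${cost_per_gpu_hour:.2f} per GPU-hour")  # ~$2.12
```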

This matters because it challenges the assumption that infrastructure alone determines capability. If a relatively small team with constrained hardware access can produce frontier-quality models, then perhaps the vertically integrated orbital stack is an expensive solution to a problem that algorithmic efficiency will solve more cheaply. The RAND Corporation noted that DeepSeek's success “calls into question” the assumption that Washington enjoys a decisive advantage due to massive compute budgets.

But the counterargument has limits. As the Center for Strategic and International Studies noted, while DeepSeek lowered AI entry barriers, it “has not achieved a disruptive expansion of capability boundaries nor altered the trajectory of AI development.” Its innovations represent refinements of existing techniques rather than fundamental breakthroughs. And critically, DeepSeek's efficiency gains have not reduced aggregate demand for compute. Global investment in AI infrastructure continues to accelerate, with Big Tech capital expenditure crossing $300 billion in 2025 alone, including $100 billion from Amazon, $80 billion from Microsoft, and substantial commitments from Alphabet and Meta.

The Jevons Paradox looms large. As AI becomes cheaper to run per unit, it proliferates into more applications, driving total demand higher. Google reported that over a 12-month period, the energy footprint of its median Gemini Apps text prompt fell by a factor of 33 while delivering higher-quality responses. Yet Google's total electricity consumption still rose 27 per cent year over year. Efficiency gains are real, but they are being overwhelmed by the velocity of adoption. McKinsey forecasts $6.7 trillion in global capital spending on data centre infrastructure through 2030.
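
The Gemini figures make the dynamic concrete. If per-prompt energy falls by a factor of 33 while total consumption still rises 27 per cent, the implied growth in prompt volume is the product of the two; the sketch below treats the entire rise as prompt-driven, which overstates the effect but shows its direction.

```python
# Illustrating the Jevons dynamic implied by the Gemini figures above.
# Simplifying assumption: treat the 27% rise in total electricity use as if it
# were entirely attributable to prompt volume.

efficiency_gain = 33       # per-prompt energy fell by this factor
total_growth = 1.27        # total consumption still rose 27%

implied_volume_growth = efficiency_gain * total_growth
print(f"Implied prompt-volume growth: ~{implied_volume_growth:.0f}x")   # ~42x
```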

Research published on ResearchGate in 2026 argues explicitly that “infrastructure architecture itself, distinct from algorithmic innovation, constitutes a significant lever” for AI capability. The OECD's November 2025 report on competition in AI infrastructure identified “high concentration and barriers to entry” at every level of the AI supply chain, with “very high capital requirements” and “substantial economies of scale” creating structural advantages for incumbents. The report warned that vertical relationships where cloud providers also develop and deploy AI models could “make it hard for independent model developers to compete.”

The evidence suggests not an either-or dynamic but a hierarchy: algorithmic innovation remains necessary, yet infrastructure control increasingly determines who can deploy those algorithms at scale, who can iterate fastest, and who can serve the billions of inference requests that define commercial AI success.

Infrastructure as Geopolitical Lever

The implications extend far beyond corporate competition. As the Atlantic Council noted in its assessment of how AI will shape geopolitics in 2026, national policymakers are seeking to “impose greater control over critical digital infrastructure” including compute power, cloud storage, and microchips. The push to control this infrastructure is evolving into what analysts call a “battle of the AI stacks.”

An entity that controls orbital infrastructure operates from a position of extraordinary geopolitical leverage. Satellites do not require host-country permission to overfly territory. They can provide connectivity and compute to any point on the globe, bypassing national firewalls, regulatory regimes, and infrastructure deficits. A vertically integrated space-AI platform could, in theory, offer AI services to any government or enterprise on Earth without depending on any terrestrial intermediary.

This is precisely why Europe is investing 10.6 billion euros in IRIS² and why China is racing to deploy its own constellations. The fear is not merely commercial disadvantage but strategic dependency. If the world's most capable AI inference runs on orbital infrastructure controlled by a single American corporation, then every nation without comparable capability becomes a customer rather than a sovereign actor in the AI age. The scarcity of satellite frequency and orbital resources, governed by a “first come, first served” principle at the International Telecommunication Union, adds urgency to the deployment race.

The OECD's 2025 competition report flags the cross-border implications directly: “enforcement actions, merger reviews, and policy interventions in one jurisdiction can have global implications.” The organisation recommends that competition authorities consider “ex ante measures, such as interoperability requirements” to address the risk of abuse of dominance in AI infrastructure markets.

Huang's Davos framing is instructive here. He urged every country to “build your own AI, take advantage of your fundamental natural resource, which is your language and culture; develop your AI, continue to refine it, and have your national intelligence part of your ecosystem.” But this advice assumes access to the underlying infrastructure stack. For nations that lack domestic launch capability, satellite manufacturing, and hyperscale compute, “building your own AI” means renting someone else's stack. And the landlord's terms are not always negotiable.

The Skeptics and the Technical Realities

None of this means orbital AI infrastructure is inevitable or imminent. The technical challenges remain formidable.

Kimberly Siversen Burke, director of government affairs for Quilty Space, told Via Satellite that orbital data centres “remain speculative” as a near-term revenue driver, citing “unproven economics, aging chips, latency, and limited use cases like defence, remote sensing, and sovereign compute.” She noted that linking SpaceX to AI infrastructure demand gives the company “valuation scaffolding” but cautioned that the economics remain unproven. A constellation of one million satellites with five-year operational lives would require replacing 200,000 satellites annually just to maintain capacity, roughly 550 per day. Radiation hardening, thermal management in vacuum conditions, and limited repair capabilities all represent unsolved engineering problems at scale.
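
The replacement arithmetic behind that figure is simple and unforgiving: the steady-state rate scales linearly with fleet size and inversely with satellite lifetime.

```python
# Steady-state replacement rate for a constellation, as cited above.
# rate = fleet size / operational lifetime, assuming continuous replenishment.

def replacement_rate(fleet_size: int, lifetime_years: float) -> tuple[float, float]:
    per_year = fleet_size / lifetime_years
    per_day = per_year / 365
    return per_year, per_day

per_year, per_day = replacement_rate(fleet_size=1_000_000, lifetime_years=5)
print(f"{per_year:,.0f} satellites per year, ~{per_day:.0f} per day")  # 200,000 / ~548
```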

The financial picture is also sobering. xAI was reportedly burning approximately $1 billion per month prior to the merger. SpaceX's $8 billion annual profit provides a significant cushion, but orbital data centres represent capital expenditure on a scale that would strain even the most profitable company on Earth. The planned SpaceX IPO, potentially raising up to $50 billion at a valuation as high as $1.5 trillion according to the Financial Times, would provide additional capital, but investors will demand evidence that orbital compute can generate returns within a reasonable time horizon.

There is also the question of latency. Orbital infrastructure at 500 to 2,000 kilometres altitude introduces signal propagation delays that make it unsuitable for applications requiring single-digit millisecond response times. Terrestrial data centres will remain essential for latency-sensitive workloads like autonomous vehicles, high-frequency trading, and real-time robotics. Orbital compute is better suited to batch processing, model training, and inference tasks where slightly higher latency is acceptable.
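
The floor on that latency is set by physics. A minimal sketch for a satellite directly overhead at the altitudes in the filing (real paths add slant range, inter-satellite hops, and processing time, so these are lower bounds):

```python
# Minimum one-way propagation delay to a satellite directly overhead,
# at the altitude range in the proposed filing.

C_KM_S = 299_792.458   # speed of light, km/s

def min_one_way_delay_ms(altitude_km: float) -> float:
    return altitude_km / C_KM_S * 1000

for altitude in (500, 2_000):
    up = min_one_way_delay_ms(altitude)
    print(f"{altitude} km altitude: {up:.1f} ms up, "
          f"{2 * up:.1f} ms minimum user-to-satellite round trip")
```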

Former Google CEO Eric Schmidt appears to be hedging this bet from a different angle. In March 2025, he took over as CEO of Relativity Space, a rocket startup with $2.9 billion in orders and a heavy-lift Terran R vehicle capable of carrying up to 33.5 metric tonnes to low Earth orbit, scheduled for its first launch at the end of 2026. Schmidt subsequently confirmed that his acquisition was connected to plans for orbital data centres, following congressional testimony in April 2025 where he described the “rapidly escalating energy demands of AI systems and the looming strain they are expected to place on national power infrastructure.” His approach differs from Musk's in scale and speed, but the strategic logic is identical: if terrestrial constraints are throttling AI growth, space offers an alternative path.

Consolidation on the Ground Mirrors Ambition in Orbit

The vertical integration thesis is not confined to space. On the ground, the satellite industry is consolidating rapidly. In July 2025, SES completed its $3.1 billion acquisition of Intelsat, creating a combined fleet of approximately 90 geostationary satellites and nearly 30 medium Earth orbit satellites. The FCC approved the merger partly because the combined entity would “more aggressively compete against Starlink and other LEO providers.” SES projects synergies with a total net present value of 2.4 billion euros.

This deal followed a wave of satellite industry consolidation that included Viasat's acquisition of Inmarsat and Eutelsat's acquisition of OneWeb. The FCC's order encapsulated the competitive pressures: with terrestrial fibre networks and streaming services reducing demand for satellite content distribution, legacy operators are being squeezed simultaneously by faster, higher-capacity LEO constellations. Consolidation is the survival strategy.

The satellite communication market was valued at $23.1 billion in 2024 and is growing at 12.3 per cent annually. The AI-specific segment is growing even faster, with the AI in satellite internet market projected to expand from $2.52 billion in 2025 to $8.91 billion by 2030, driven by a compound annual growth rate of 29 per cent. The pattern is consistent: companies are combining manufacturing control, AI-driven network optimisation, and cross-sector service delivery because the market rewards integration over specialisation.
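
The quoted growth rate is consistent with the start and end figures; a quick check:

```python
# Checking the quoted CAGR against the 2025 and 2030 market-size figures.
start, end, years = 2.52, 8.91, 5           # $bn, $bn, elapsed years
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr * 100:.0f}%")   # ~29%
```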

From Algorithm Wars to Infrastructure Empires

The shift from algorithmic competition to infrastructure control represents something more fundamental than a change in business strategy. It represents a change in what determines power in the AI age.

For most of the past decade, the AI field operated on a relatively democratic premise. Breakthrough papers were published openly. Pre-trained models were shared on platforms like Hugging Face. Cloud compute could be rented by the hour. A brilliant researcher with a laptop and a credit card could, in principle, contribute to the frontier. DeepSeek's January 2025 release of R1 as an open-source model demonstrates that this democratic impulse remains alive.

But the infrastructure layer is not democratic. You cannot rent a rocket. You cannot subscribe to an orbital data centre. You cannot share a satellite constellation on GitHub. The physical assets required for vertically integrated space-AI infrastructure cost tens of billions of dollars, take years to deploy, and depend on regulatory approvals that only a handful of entities have the political influence to secure.

The Deloitte 2026 tech trends report frames this as “the AI infrastructure reckoning,” noting that the anticipated transition from compute expansion toward efficiency-focused orchestration results from a convergence of technological, economic, and organisational drivers. Capital constraints have reduced appetite for expansion without demonstrated returns, and organisations observing 50 to 70 per cent GPU underutilisation recognise that expansion compounds inefficiency. But orchestration still requires instruments to orchestrate. And the instruments, in this case orbital satellites, launch vehicles, terrestrial data centres, and global communication networks, are concentrating in fewer and fewer hands.

The Council on Foreign Relations, assessing how 2026 could decide the future of artificial intelligence, observed that “diffusion could be even more important than cutting-edge innovation” but acknowledged it is “harder to measure.” This distinction matters: innovation creates capability, but diffusion, the spread of that capability through infrastructure, determines who benefits from it. An entity that controls both the innovation layer and the diffusion layer holds a position that purely algorithmic competitors simply cannot match.

Whether this concentration proves beneficial or dangerous depends entirely on governance structures that do not yet exist. The regulatory frameworks designed for terrestrial telecommunications and antitrust were not built for entities that simultaneously manufacture rockets, operate global satellite networks, develop frontier AI models, and plan orbital data centres. The OECD has recommended that competition authorities “assess whether existing powers are sufficient to address potential abuses of dominance.” The answer, almost certainly, is that they are not.

The question that opened this article, whether the future of AI belongs to the best algorithm or the best infrastructure, is not quite right. The real question is whether we are comfortable with a world where the two become indistinguishable, where the algorithm and the infrastructure that runs it merge into a single system controlled by a single entity, and where the physics of rocket launches and orbital mechanics become as important to AI capability as the mathematics of gradient descent. That world is no longer hypothetical. It is being built, one satellite at a time, at a cadence of roughly 550 per day.


References and Sources

  1. Bloomberg, “Musk's SpaceX Combines With xAI at $1.25 Trillion Valuation,” 2 February 2026. https://www.bloomberg.com/news/articles/2026-02-02/elon-musk-s-spacex-said-to-combine-with-xai-ahead-of-mega-ipo

  2. CNBC, “Musk's xAI, SpaceX combo is the biggest merger of all time, valued at $1.25 trillion,” 3 February 2026. https://www.cnbc.com/2026/02/03/musk-xai-spacex-biggest-merger-ever.html

  3. CNBC, “Elon Musk's SpaceX acquiring AI startup xAI ahead of potential IPO,” 2 February 2026. https://www.cnbc.com/2026/02/02/elon-musk-spacex-xai-ipo.html

  4. TechCrunch, “Elon Musk's SpaceX officially acquires Elon Musk's xAI, with plan to build data centres in space,” 2 February 2026. https://techcrunch.com/2026/02/02/elon-musk-spacex-acquires-xai-data-centers-space-merger/

  5. Tom's Hardware, “SpaceX acquires xAI in a bid to make orbiting data centres a reality,” February 2026. https://www.tomshardware.com/tech-industry/artificial-intelligence/spacex-acquires-xai-in-a-bid-to-make-orbiting-data-centers-a-reality-musk-plans-to-launch-a-million-tons-of-satellites-annually-targets-1tw-year-of-space-based-compute-capacity

  6. Via Satellite, “SpaceX Acquires xAI to Pursue Orbital Data Center Constellation,” 2 February 2026. https://www.satellitetoday.com/connectivity/2026/02/02/spacex-files-for-orbital-data-center-satellites-amid-xai-merger-reports/

  7. Data Center Dynamics, “SpaceX files for million satellite orbital AI data centre megaconstellation,” February 2026. https://www.datacenterdynamics.com/en/news/spacex-files-for-million-satellite-orbital-ai-data-center-megaconstellation/

  8. NVIDIA Newsroom, “NVIDIA Ethernet Networking Accelerates World's Largest AI Supercomputer, Built by xAI.” https://nvidianews.nvidia.com/news/spectrum-x-ethernet-networking-xai-colossus

  9. HPCwire, “Colossus AI Hits 200,000 GPUs as Musk Ramps Up AI Ambitions,” 13 May 2025. https://www.hpcwire.com/2025/05/13/colossus-ai-hits-200000-gpus-as-musk-ramps-up-ai-ambitions/

  10. Data Center Frontier, “The Colossus Supercomputer: Elon Musk's Drive Toward Data Center AI Technology.” https://www.datacenterfrontier.com/machine-learning/article/55244139/the-colossus-ai-supercomputer-elon-musks-drive-toward-data-center-ai-technology-domination

  11. International Energy Agency, “Energy demand from AI,” 2025. https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai

  12. OpenAI, “Announcing The Stargate Project,” January 2025. https://openai.com/index/announcing-the-stargate-project/

  13. CNBC, “Trump announces AI infrastructure investment backed by Oracle, OpenAI and SoftBank,” 21 January 2025. https://www.cnbc.com/2025/01/21/trump-ai-openai-oracle-softbank.html

  14. About Amazon, “First heavy-lift launch grows constellation to 200+ satellites.” https://www.aboutamazon.com/news/innovation-at-amazon/project-kuiper-satellite-rocket-launch-progress-updates

  15. European Commission, “IRIS squared: Secure Connectivity.” https://defence-industry-space.ec.europa.eu/eu-space/iris2-secure-connectivity_en

  16. ESA, “ESA confirms kick-start of IRIS squared with European Commission and SpaceRISE.” https://connectivity.esa.int/archives/news/esa-confirms-kickstart-iris%C2%B2-european-commission-and-spacerise

  17. China.org.cn, “China demonstrates AI computing power in outer space with satellite network breakthrough,” 13 February 2026. http://www.china.org.cn/2026-02/13/content_118333643.shtml

  18. SatNews, “China Completes In-Orbit Testing of 'Three-Body' AI Computing Constellation,” 16 February 2026. https://news.satnews.com/2026/02/16/china-completes-in-orbit-testing-of-three-body-ai-computing-constellation/

  19. Orbital Today, “China Launches AI-Driven Satellite Constellation to Transform Space Computing,” 15 February 2026. https://orbitaltoday.com/2026/02/15/china-launches-ai-driven-satellite-constellation-to-transform-space-computing/

  20. CSIS, “DeepSeek's Latest Breakthrough Is Redefining AI Race.” https://www.csis.org/analysis/deepseeks-latest-breakthrough-redefining-ai-race

  21. RAND Corporation, “DeepSeek's Lesson: America Needs Smarter Export Controls,” February 2025. https://www.rand.org/pubs/commentary/2025/02/deepseeks-lesson-america-needs-smarter-export-controls.html

  22. OECD, “Competition in Artificial Intelligence Infrastructure,” November 2025. https://www.oecd.org/en/publications/2025/11/competition-in-artificial-intelligence-infrastructure_69319aee.html

  23. NVIDIA Blog, “'Largest Infrastructure Buildout in Human History': Jensen Huang on AI's 'Five-Layer Cake' at Davos,” January 2026. https://blogs.nvidia.com/blog/davos-wef-blackrock-ceo-larry-fink-jensen-huang/

  24. World Economic Forum, “Davos 2026: Nvidia CEO Jensen Huang on the future of AI,” January 2026. https://www.weforum.org/stories/2026/01/nvidia-ceo-jensen-huang-on-the-future-of-ai/

  25. Deloitte, “The AI infrastructure reckoning: Optimising compute strategy in the age of inference economics,” 2026. https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2026/ai-infrastructure-compute-strategy.html

  26. Atlantic Council, “Eight ways AI will shape geopolitics in 2026.” https://www.atlanticcouncil.org/dispatches/eight-ways-ai-will-shape-geopolitics-in-2026/

  27. Council on Foreign Relations, “How 2026 Could Decide the Future of Artificial Intelligence.” https://www.cfr.org/articles/how-2026-could-decide-future-artificial-intelligence

  28. SES, “SES Completes Acquisition of Intelsat, Creating Global Multi-Orbit Connectivity Powerhouse,” 17 July 2025. https://www.ses.com/press-release/ses-completes-acquisition-intelsat-creating-global-multi-orbit-connectivity

  29. SpaceNews, “Relativity names Eric Schmidt as CEO as it updates Terran R development,” March 2025. https://spacenews.com/relativity-names-eric-schmidt-as-ceo-as-it-updates-terran-r-development/

  30. TechCrunch, “Eric Schmidt joins Relativity Space as CEO,” 10 March 2025. https://techcrunch.com/2025/03/10/eric-schmidt-joins-relativity-space-as-ceo/

  31. Space Insider, “Eric Schmidt's Quiet Play May be Launching AI Infrastructure Into Space Through Relativity,” 5 May 2025. https://spaceinsider.tech/2025/05/05/eric-schmidts-quiet-play-may-be-launching-ai-infrastructure-into-space-through-relativity/

  32. ResearchGate, “AI Infrastructure Evolution: From Compute Expansion to Efficient Orchestration in 2026.” https://www.researchgate.net/publication/398878635_AI_Infrastructure_Evolution_From_Compute_Expansion_to_Efficient_Orchestration_in_2026

  33. Harvard Business Review, “Turning Real-Time Satellite Data into a Competitive Advantage,” November 2025. https://hbr.org/2025/11/turning-real-time-satellite-data-into-a-competitive-advantage

  34. GlobeNewswire, “Artificial Intelligence (AI) in Satellite Internet Research Report 2026: $8.91 Bn Market Opportunities,” 29 January 2026. https://www.globenewswire.com/news-release/2026/01/29/3228392/0/en/p.html

  35. Futurum Group, “SpaceX Acquires xAI: Rockets, Starlink, and AI Under One Roof.” https://futurumgroup.com/insights/spacex-acquires-xai-rockets-starlink-and-ai-under-one-roof/

  36. CircleID, “Chinese LEO Satellite Internet Update: Guowang, Qianfan, and Honghu-3.” https://circleid.com/posts/chinese-leo-satellite-internet-update-guowang-qianfan-and-honghu-3

  37. SpaceNews, “SES to acquire Intelsat for $3.1 billion.” https://spacenews.com/ses-to-acquire-intelsat-for-3-1-billion/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
