The New Power Brokers: How Political Capital Is Reshaping AI

The convergence of political influence and artificial intelligence development has accelerated beyond traditional lobbying into something more fundamental: a restructuring of how advanced technology is governed, funded, and deployed. When venture capitalist Marc Andreessen described the aftermath of Donald Trump's 2024 election victory as feeling “like a boot off the throat,” he wasn't simply celebrating regulatory relief. He was marking the moment when years of strategic political investment by Silicon Valley's AI elite began yielding tangible returns in the form of favourable policy, lucrative government contracts, and unprecedented influence over the regulatory frameworks that will govern humanity's most consequential technology.
What makes this moment distinctive is not merely that wealthy technologists have cultivated political relationships. Such arrangements have existed throughout the history of American capitalism, from the railroad barons of the nineteenth century to the telecommunications giants of the twentieth. Rather, the novelty lies in the concentration of influence around a technology whose development trajectory will fundamentally reshape economic structures, labour markets, information environments, and potentially the nature of intelligence itself. The stakes of AI governance extend far beyond ordinary industrial policy into questions about human autonomy, economic organisation, and the distribution of power in democratic societies.
The pattern emerging from the intersection of political capital and AI development reveals far more than opportunistic lobbying or routine industry influence. Instead, a systematic reshaping of competitive dynamics is underway, where proximity to political power increasingly determines which companies gain access to essential infrastructure, energy resources, and the regulatory latitude necessary to deploy frontier AI systems at scale. This transformation raises profound questions about whether AI governance will emerge from democratic deliberation or from backroom negotiations between political allies and tech oligarchs whose financial interests and ideological commitments have become deeply intertwined with governmental decision-making.
Financial Infrastructure of Political Influence
The scale of direct political investment by AI-adjacent figures in the 2024 election cycle represents an inflection point in Silicon Valley's relationship with formal political power. Elon Musk contributed more than $270 million to political groups supporting Donald Trump and Republican candidates, including approximately $75 million to his own America PAC, making him the largest single donor in the election according to analysis by the Washington Post and The Register. This investment secured Musk not merely access but authority: leadership of the Department of Government Efficiency (DOGE), a position from which he wields influence over the regulatory environment facing his AI startup xAI alongside his other ventures.
The DOGE role creates extraordinary conflicts of interest. Richard Schoenstein, vice chair of litigation practice at law firm Tarter Krinsky & Drogin, characterised Musk's dual role as businessman and Trump advisor as a “dangerous combination.” Venture capitalist Reid Hoffman wrote in the Financial Times that Musk's direct ownership in xAI creates a “serious conflict of interest in terms of setting federal AI policies for all US companies.” These concerns materialised rapidly as xAI secured governmental contracts whilst Musk simultaneously held authority over efficiency initiatives affecting the entire technology sector.
Peter Thiel, co-founder of Palantir Technologies, took a different approach. Despite having donated a record $15 million to JD Vance's 2022 Ohio Senate race, Thiel announced he would not donate to any 2024 presidential campaigns, though he confirmed he would vote for Trump. Yet Thiel's influence manifests through networks rather than direct contributions. More than a dozen individuals with ties to Thiel's companies secured positions in the Trump administration, including Vice President JD Vance himself, whom Thiel introduced to Trump in 2021. Bloomberg documented how Clark Minor (who worked at Palantir for nearly 13 years) became Chief Information Officer at the Department of Health and Human Services (which holds contracts with Palantir), whilst Jim O'Neill (who described Thiel as his “patron”) was named acting director of the Centers for Disease Control and Prevention.
Marc Andreessen and Ben Horowitz, co-founders of Andreessen Horowitz (a16z), made their first presidential campaign donations in 2024, supporting Trump. Their firm donated $25 million to crypto-focused super PACs and backed “Leading The Future,” a super PAC reportedly armed with more than $100 million to ensure pro-AI electoral victories in the 2026 midterm elections, according to Gizmodo. The PAC's founding backers include OpenAI president Greg Brockman, Palantir co-founder Joe Lonsdale, and AI search company Perplexity, creating a formidable coalition dedicated to opposing state-level AI regulation.
In podcast episodes following Trump's victory, Andreessen and Horowitz articulated fears that regulatory approaches to cryptocurrency might establish precedents for AI governance. Given a16z's substantial investments across AI companies, they viewed heading off binding regulatory frameworks as existential to their portfolio's value. David Sacks (a billionaire venture capitalist) secured appointment as both the White House's crypto and AI czar, giving the venture capital community direct representation in policy formation.
The return on these investments became visible almost immediately. Within months of Trump's inauguration, Palantir's stock had surged more than 200% from its level the day before the election. The company has secured more than $113 million in federal contracts since Trump took office, in addition to an $800 million Pentagon deal, according to NPR. Michael McGrath, former chief executive of i2 (a data analytics firm competing with Palantir), observed that “having political connections and inroads with Peter Thiel and Elon Musk certainly helps them. It makes deals come faster without a lot of negotiation and pressure.”
For xAI, Musk's AI venture valued at $80 billion following its merger with X, political proximity translated into direct government integration. In September 2025, xAI signed an agreement with the General Services Administration enabling federal agencies to access its Grok AI chatbot through March 2027 at $0.42 per agency for 18 months, as reported by Newsweek. The arrangement raises significant questions about competitive procurement processes and whether governmental adoption of xAI products reflects technical merit or political favour.
The interconnected nature of these investments creates mutually reinforcing relationships. Musk's political capital benefits not only xAI but also Tesla (whose autonomous driving systems depend on AI), SpaceX (whose contracts with NASA and the Defense Department run into the billions of dollars), and Neuralink (whose brain-computer interfaces require regulatory approval). Similarly, Thiel's network encompasses Palantir, Anduril Industries, and numerous portfolio companies through Founders Fund, all positioned to benefit from favourable governmental relationships. This concentration means that political influence flows not merely to individual companies but to entire portfolios of interconnected ventures controlled by a small number of individuals.
The Regulatory Arbitrage Strategy
Political investment by AI companies cannot be understood solely as seeking favour. Rather, it represents a systematic strategy to reshape the regulatory landscape itself. The Trump administration's swift repeal of President Biden's October 2023 Executive Order on AI demonstrates how regulatory frameworks can be dismantled as rapidly as they're constructed when political winds shift.
Biden's executive order had established structured oversight including mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols, and requirements for advanced AI developers to submit safety results to the federal government. Trump's January 20, 2025 Executive Order 14148 rescinded these provisions entirely, replacing them with a framework “centred on deregulation and the promotion of AI innovation as a means of maintaining US global dominance,” as characterised by the American Psychological Association.
Trump's December 11, 2025 executive order explicitly pre-empts state-level AI regulation, attempting to establish a “single national framework” that prevents states from enforcing their own AI rules. White House crypto and AI czar David Sacks justified this federal intervention by arguing it would prevent a “patchwork of state regulations” that could impede innovation. Silicon Valley leaders like OpenAI CEO Sam Altman had consistently advocated for precisely this outcome, as CNN and NPR reported, despite legal questions about whether such federal pre-emption exceeds executive authority.
The lobbying infrastructure supporting this transformation expanded dramatically in 2024. OpenAI increased its federal lobbying expenditure nearly sevenfold, spending $1.76 million in 2024 compared to just $260,000 in 2023, according to MIT Technology Review. The company hired Chris Lehane (a political strategist from the Clinton White House who later helped Airbnb and Coinbase) as head of global affairs. Across the AI sector, OpenAI, Anthropic, and Cohere spent a combined $2.71 million on federal lobbying in 2024. Meta led all tech companies with more than $24 million in lobbying expenditure.
Research by the RAND Corporation identified four primary channels through which AI companies attempt to influence policy: agenda-setting (advancing anti-regulation narratives), advocacy activities targeting legislators, influence in academia and research, and information management. Of seventeen experts interviewed, fifteen cited agenda-setting as the key mechanism. Congressional staffers told researchers that companies publicly strike cooperative tones on regulation whilst privately lobbying for “very permissive or voluntary regulations,” with one staffer noting: “Anytime you want to make a tech company do something mandatory, they're gonna push back on it.”
The asymmetry between public and private positions proves particularly significant. Companies frequently endorse broad principles of AI safety and responsibility in congressional testimony and public statements whilst simultaneously funding organisations that oppose specific regulatory proposals. This two-track strategy allows firms to cultivate reputations as responsible actors concerned with safety whilst effectively blocking measures that would impose binding constraints on their operations. The result is a regulatory environment shaped more by industry preferences than by independent assessment of public interests or technological risks.
Technical Differentiation as Political Strategy
The competition between frontier AI companies encompasses not merely model capabilities but fundamentally divergent approaches to alignment, safety, and transparency. These technical distinctions have become deeply politicised, with companies strategically positioning their approaches to appeal to different political constituencies and regulatory philosophies.
OpenAI's trajectory exemplifies this dynamic. Founded as a nonprofit research laboratory, the company restructured into a “capped profit” entity in 2019 to attract capital for compute-intensive model development. Microsoft's $10 billion investment in 2023 cemented OpenAI's position as the commercial leader in generative AI, but also marked its transformation from safety-focused research organisation to growth-oriented technology company. When Jan Leike (responsible for alignment and safety) and Ilya Sutskever (co-founder and former Chief Scientist) both departed in 2024 amid concerns that the company prioritised speed over safeguards, the departures signalled a fundamental shift. Leike's public statement upon leaving noted that “safety culture and processes have taken a backseat to shiny products” at OpenAI.
Anthropic, founded in 2021 by former OpenAI employees including Dario and Daniela Amodei, explicitly positioned itself as the safety-conscious alternative. Structured as a public benefit corporation with a Long-Term Benefit Trust designed to represent public interest, Anthropic developed “Constitutional AI” methods for aligning models with written ethical principles. The company secured $13 billion in funding at a $183 billion valuation in September 2025, driven substantially by enterprise customers seeking models with robust safety frameworks.
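For readers curious what that method involves mechanically, Anthropic's published research describes a loop in which a model drafts a response, critiques the draft against each written principle, and revises accordingly, with the revised outputs then used as training data. The sketch below illustrates that loop in outline only; `generate` is a hypothetical stand-in for any model call, not a real Anthropic API.

```python
# Illustrative outline of the critique-and-revision loop behind
# Constitutional AI, as described in Anthropic's published research.
# `generate` is a hypothetical placeholder, not a real API.

CONSTITUTION = [
    "Choose the response least likely to cause harm.",
    "Choose the response that is most honest and transparent.",
]

def generate(prompt: str) -> str:
    """Hypothetical model call; a real system would query an LLM here."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle: {principle}\n"
            f"Response: {response}"
        )
        response = generate(
            f"Revise the response to address this critique.\n"
            f"Critique: {critique}\nOriginal: {response}"
        )
    return response  # revised outputs become supervised training data

print(constitutional_revision("Explain how vaccines work."))
```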
Joint safety evaluations conducted in summer 2025, in which OpenAI and Anthropic tested each other's models, revealed substantive differences reflecting divergent training philosophies. According to findings published by both companies, Claude models produced fewer hallucinations but exhibited higher refusal rates. OpenAI's o3 and o4-mini models attempted answers more frequently, yielding more correct completions alongside more hallucinated responses. On jailbreaking, OpenAI's reasoning models proved more resistant to creative attacks than Claude systems.
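The trade-off the two companies documented can be made concrete with a little arithmetic. The sketch below uses invented numbers, not either company's actual evaluation data, to show how refusal rate and hallucination rate pull against each other when a model is tuned to be more cautious or more eager.

```python
# Minimal sketch of the refusal/hallucination trade-off described above.
# All records are invented for illustration.

from dataclasses import dataclass

@dataclass
class EvalRecord:
    refused: bool   # model declined to answer
    correct: bool   # answer matched ground truth (False if refused)

def summarise(records: list[EvalRecord]) -> dict[str, float]:
    n = len(records)
    attempted = [r for r in records if not r.refused]
    wrong = [r for r in attempted if not r.correct]
    return {
        "refusal_rate": (n - len(attempted)) / n,
        "accuracy_when_attempting": sum(r.correct for r in attempted) / max(len(attempted), 1),
        "hallucination_rate": len(wrong) / n,  # wrong answers as a share of all prompts
    }

# A cautious model refuses more and hallucinates less; an eager one
# attempts more answers, gaining correct completions and errors alike.
cautious = [EvalRecord(True, False)] * 40 + [EvalRecord(False, True)] * 55 + [EvalRecord(False, False)] * 5
eager = [EvalRecord(True, False)] * 10 + [EvalRecord(False, True)] * 70 + [EvalRecord(False, False)] * 20

print(summarise(cautious))  # high refusal (0.40), low hallucination (0.05)
print(summarise(eager))     # low refusal (0.10), higher hallucination (0.20)
```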
These technical differences map onto political positioning. Anthropic's emphasis on safety appeals to constituencies concerned about AI risks, potentially positioning the company favourably should regulatory frameworks eventually mandate safety demonstrations. OpenAI's “iterative deployment” philosophy, emphasising learning from real-world engagement rather than laboratory testing, aligns with the deregulatory stance dominant in the current political environment.
Meta adopted a radically different strategy through its Llama series of open-source models, making frontier-adjacent capabilities freely available. Yet as research published in “The Economics of AI Foundation Models” notes, openness strategies are “rational, profit-maximising responses to a firm's specific competitive position” rather than philosophical commitments. By releasing models openly, Meta reduces the competitive advantage of OpenAI's proprietary systems whilst positioning itself as the infrastructure provider for a broader ecosystem of AI applications. The strategy simultaneously serves commercial objectives and cultivates political support from constituencies favouring open development.
xAI represents the most explicitly political technical positioning, with Elon Musk characterising competing models as censorious and politically biased, positioning Grok as the free-speech alternative. This framing transforms technical choices about content moderation and safety filters into cultural battleground issues, appealing to constituencies sceptical of mainstream technology companies whilst deflecting concerns about safety by casting them as ideological censorship. The strategy proves remarkably effective at generating engagement and political support even as Grok's actual capabilities relative to competitors remain contested.
Google's DeepMind represents yet another positioning, emphasising scientific research credentials and long-term safety research alongside commercial deployment. The company's integration of AI capabilities across its product ecosystem (Search, Gmail, Workspace, Cloud) creates dependencies that transcend individual model comparisons, effectively bundling AI advancement with existing platform dominance. This approach faces less political scrutiny than pure-play AI companies despite Google's enormous market power, partly because AI represents one component of a diversified technology portfolio rather than the company's singular focus.
Infrastructure Politics and the Energy-Compute Nexus
Perhaps nowhere does the intersection of political capital and AI development manifest more concretely than in infrastructure policy. Training and deploying frontier AI models requires unprecedented computational resources, which in turn demand enormous energy supplies. The Bipartisan Policy Center projects that by 2030, 25% of new domestic energy demand will derive from data centres, driven substantially by AI workloads. Current power-generating capacity proves insufficient; in major data centre regions, tech companies report that utilities are unable to provide electrical service for new facilities or are rationing power until new transmission infrastructure is completed.
In September 2024, Sam Altman joined leaders from Nvidia, Anthropic, and Google in visiting the White House to pitch the Biden administration on subsidising energy infrastructure as essential to US competitiveness in AI. Altman proposed constructing multiple five-gigawatt data centres, each consuming electricity equivalent to New York City's entire demand, according to CNBC. The pitch framed energy subsidisation as a national security imperative rather than corporate welfare.
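The scale of that proposal is easier to grasp with rough arithmetic. The figures below are our own back-of-envelope assumptions (a 90% utilisation rate and an approximate public estimate of around 50 TWh for New York City's annual consumption), not numbers from the CNBC report.

```python
# Back-of-envelope check on the "equivalent to New York City" framing.
# Utilisation and the NYC figure are assumptions for illustration.

GIGAWATTS = 5                 # proposed capacity per site
HOURS_PER_YEAR = 24 * 365
UTILISATION = 0.9             # assumed near-constant AI workload

annual_twh = GIGAWATTS * HOURS_PER_YEAR * UTILISATION / 1000
NYC_ANNUAL_TWH = 50           # rough public estimate of NYC consumption

print(f"One 5 GW site: ~{annual_twh:.0f} TWh/year")              # ~39 TWh
print(f"Ratio to NYC demand: ~{annual_twh / NYC_ANNUAL_TWH:.1f}x")  # ~0.8x
```

On those assumptions a single five-gigawatt site lands in the same order of magnitude as the city's entire annual consumption, which is what makes the framing plausible.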
The Trump administration has proven even more amenable to this framing. The Department of Energy identified 16 potential sites on DOE lands “uniquely positioned for rapid data centre construction” and released a Request for Information on possible use of federal lands for AI infrastructure. DOE announced the creation of an “AI data centre engagement team” to leverage programmes including loans, grants, tax credits, and technical assistance. Executive Order 14179 explicitly directs the Commerce Department to launch financial support initiatives for data centres requiring 100+ megawatts of new energy generation.
Federal permitting reform has been reoriented specifically toward AI data centres. Trump's executive order accelerates federal permitting by streamlining environmental reviews, expanding FAST-41 coverage, and promoting use of federal and contaminated lands for data centres. These provisions directly benefit companies with the political connections to navigate federal processes and the capital to invest in massive infrastructure, effectively creating higher barriers for smaller competitors whilst appearing to promote development broadly.
The Institute for Progress proposed establishing “Special Compute Zones” where the federal government would coordinate construction of AI clusters exceeding five gigawatts through strategic partnerships with top AI labs, with government financing next-generation power plants. This proposal, which explicitly envisions government picking winners, represents an extreme version of the public-private convergence already underway.
The environmental implications of this infrastructure expansion remain largely absent from political discourse despite their significance. Data centres already consume approximately 1-1.5% of global electricity, with AI workloads driving rapid growth. The water requirements for cooling these facilities place additional strain on local resources, particularly in regions already experiencing water stress. Yet political debates about AI infrastructure focus almost exclusively on competitiveness and national security, treating environmental costs as externalities to be absorbed rather than factors to be weighed against purported benefits. This framing serves the interests of companies seeking infrastructure subsidies whilst obscuring the distributional consequences of AI development.
Governance Capture and the Concentration of AI Power
The systematic pattern of political investment, regulatory influence, and infrastructure access produces a form of governance that operates parallel to democratic institutions whilst claiming to serve national interests. Quinn Slobodian, professor of international history at Boston University, characterised the current ties between industry and government as “unprecedented in the modern era.”
Palantir Technologies exemplifies how companies can become simultaneously government contractor, policy influencer, and infrastructure provider in ways that blur distinctions between public and private power. Founded with early backing from In-Q-Tel (the CIA's venture arm), Palantir built its business on government contracts with agencies including the FBI, NSA, and Immigration and Customs Enforcement. ICE alone has spent more than $200 million on Palantir contracts. The Department of Defense awarded Palantir billion-dollar contracts for battlefield intelligence and AI-driven analysis.
Palantir's Gotham platform, marketed as an “operating system for global decision making,” enables governments to integrate disparate data sources with AI-driven analysis predicting patterns and movements. The fundamental concern lies not in the capabilities but in their opacity: because Gotham is proprietary, neither the public nor elected officials can examine how its algorithms weigh data or why they highlight certain connections. Yet the conclusions generated can produce life-altering consequences (inclusion on deportation lists, identification as security risks), with mistakes or biases scaling rapidly across many people.
The revolving door between Palantir and government agencies intensified following Trump's 2024 victory. The company secured a contract with the Federal Housing Finance Agency in May 2025 to establish an “AI-powered Crime Detection Unit” at Fannie Mae. In December 2024, Palantir joined with Anduril Industries (backed by Thiel's Founders Fund) to form a consortium including SpaceX, OpenAI, Scale AI, and Saronic Technologies challenging traditional defence contractors.
This consortium model represents a new form of political-industrial complex. Rather than established defence contractors cultivating relationships with the Pentagon over decades, a network of ideologically aligned technology companies led by politically connected founders now positions itself as the future of American defence and intelligence. These companies share investors, board members, and political patrons in a densely connected graph where business relationships and political allegiances reinforce each other.
The effective altruism movement's influence on AI governance represents another dimension of this capture. According to Politico reporting, an anonymous biosecurity researcher described EA-linked funders as “an epic infiltration” of policy circles, with a small army of effective altruism adherents descending on the nation's capital and dominating how the White House, Congress, and think tanks approach the technology. EA-affiliated organisations drafted key policy proposals including the federal Responsible Advanced Artificial Intelligence Act and California's Senate Bill 1047, both emphasising long-term existential risks over near-term harms like bias, privacy violations, and labour displacement. Critics note that focusing on existential risk allows companies to position themselves as responsible actors concerned with humanity's future whilst continuing rapid commercialisation with minimal accountability for current impacts.
The Geopolitical Framing and Its Discontents
Nearly every justification for deregulation, infrastructure subsidisation, and concentrated AI development invokes competition with China. This framing proves rhetorically powerful because it positions commercial interests as national security imperatives, casting regulatory caution as geopolitical liability. Chris Lehane (OpenAI's head of global affairs) explicitly deployed this strategy, arguing that “if the US doesn't lead the way in AI, an autocratic nation like China will.”
The China framing contains elements of truth alongside strategic distortion. China has invested heavily in AI, with projections exceeding 10 trillion yuan ($1.4 trillion) in technology investment by 2030. Yet US private sector AI investment vastly exceeds Chinese private investment; in 2024, US private AI investment reached approximately $109.1 billion (nearly twelve times China's $9.3 billion), according to research comparing the US-China AI gap. Five US companies alone (Meta, Alphabet, Microsoft, Amazon, Oracle) are expected to spend more than $450 billion in aggregate AI-specific capital expenditures in 2026.
The competitive framing serves primarily to discipline domestic regulatory debates. By casting AI governance as zero-sum geopolitical competition, industry advocates reframe democratic oversight as strategic vulnerability. This rhetorical move positions anyone advocating for stronger AI regulation as inadvertently serving Chinese interests by handicapping American companies. The logic mirrors earlier arguments against environmental regulation, labour standards, or financial oversight.
Recent policy developments complicate this narrative. President Trump's December 8, 2025 announcement that the US would allow Nvidia to sell powerful H200 chips to China seemingly contradicts years of export controls designed to prevent Chinese AI advancement, suggesting the relationship between AI policy and geopolitical strategy remains contested even within administrations ostensibly committed to technological rivalry.
Alternative Governance Models and Democratic Deficits
The concentration of AI governance authority in politically connected companies operating with minimal oversight represents one potential future, but not an inevitable one. The European Union's AI Act establishes comprehensive regulation with classification systems, conformity assessments, and enforcement mechanisms, despite intense lobbying by OpenAI and other companies. Time magazine reported that OpenAI successfully lobbied to remove language suggesting general-purpose AI systems should be considered inherently high risk, demonstrating that even relatively assertive regulatory frameworks remain vulnerable to industry influence.
Research institutions focused on AI safety independent of major labs provide another potential check. The Center for AI Safety published research on “circuit breakers” preventing dangerous AI behaviours (requiring 20,000 attempts to jailbreak protected models) and developed the Weapons of Mass Destruction Proxy (WMDP) benchmark measuring hazardous knowledge in biosecurity, cybersecurity, and chemical security.
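One way to give that 20,000-attempts figure some intuition is a simple probabilistic reading, which is our own illustration rather than the Center for AI Safety's published methodology: if attempts succeed independently, the figure implies a per-attempt success probability of roughly one in 20,000.

```python
# Illustrative reading of the "20,000 attempts" robustness statistic:
# model jailbreak attempts as independent trials with success probability p,
# so the expected number of attempts until the first success is 1/p.
# This framing is an assumption for intuition, not CAIS's methodology.

attempts_until_success = 20_000
p = 1 / attempts_until_success

print(f"Implied per-attempt success probability: {p:.6f}")  # 0.000050

# Probability that 1,000 consecutive attacks all fail against the model:
survival = (1 - p) ** 1_000
print(f"Chance of withstanding 1,000 attacks: {survival:.1%}")  # ~95.1%
```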
The fundamental democratic deficit lies in the absence of mechanisms through which publics meaningfully shape AI development priorities, safety standards, or deployment conditions. The technologies reshaping labour markets, information environments, and social relationships emerge from companies accountable primarily to investors and increasingly to political patrons rather than to citizens affected by their choices. When governance occurs through private negotiations between tech oligarchs and political allies, the public's role reduces to retrospectively experiencing consequences of decisions made elsewhere.
Whilst industry influence on regulation has long existed, the current configuration involves direct insertion of industry leaders into governmental decision-making (Musk leading DOGE), governmental adoption of industry products without competitive procurement (xAI's Grok agreement), and systematic dismantling of nascent oversight frameworks replaced by industry-designed alternatives. This represents not merely regulatory capture but governance convergence, where distinctions between regulator and regulated dissolve.
Reshaping Competitive Dynamics Beyond Markets
The intertwining of political capital, financial investment, and AI infrastructure around particular companies fundamentally alters competitive dynamics in ways extending far beyond traditional market competition. In conventional markets, companies compete primarily on product quality, pricing, and customer service. In the emerging AI landscape, competitive advantage increasingly derives from political proximity, with winners determined partly by whose technologies receive governmental adoption, whose infrastructure needs receive subsidisation, and whose regulatory preferences become policy.
This creates what economists term “political rent-seeking” as a core competitive strategy. Palantir's stock surge following Trump's election reflects not sudden technical breakthroughs but investor recognition that political alignment translates into contract access. xAI's rapid governmental integration reflects not superior capabilities relative to competitors but Musk's position in the administration.
For newer entrants and smaller competitors, these dynamics raise formidable barriers. If regulatory frameworks favour incumbents, if infrastructure subsidies flow to connected players, and if government procurement privileges politically aligned firms, then competitive dynamics reward political investment over technical innovation.
The international implications prove equally significant. If American AI governance emerges from negotiations between tech oligarchs and political patrons rather than democratic deliberation, it undermines claims that the US model represents values-aligned technology versus authoritarian Chinese alternatives. Countries observing US AI politics may rationally conclude that American “leadership” means subordinating their own governance preferences to the commercial interests of US-based companies with privileged access to American political power.
The consolidation of AI infrastructure around politically connected companies also concentrates future capabilities in ways that may prove difficult to reverse. If a handful of companies control the computational resources, energy infrastructure, and governmental relationships necessary for frontier AI development, then path dependencies develop where these companies' early advantages compound over time. Alternative approaches to AI development, safety, or governance become increasingly difficult to pursue as the resource advantages of incumbents grow.
Reconfiguring the Politics of Technological Power
The selective investment patterns of political figures and networks in specific AI companies signal a broader transformation in how technological development intersects with political power. Several factors converge to enable this reconfiguration. First, the immense capital requirements for frontier AI development concentrate power among firms with access to patient capital. Second, the geopolitical framing of AI competition creates permission structures for policies that would otherwise face greater political resistance. Third, the technical complexity of AI systems creates information asymmetries where companies possess far greater understanding of capabilities and risks than regulators.
Fourth, and perhaps most significantly, the effective absence of organised constituencies advocating for alternative AI governance approaches leaves the field to industry and its allies. Labour organisations remain fractured in responses to AI-driven automation, civil liberties groups focus on specific applications rather than systemic governance, and academic researchers often depend on industry funding or access. This creates a political vacuum where industry preferences face minimal organised opposition.
The question facing democratic societies extends beyond whether particular companies or technologies prevail. Rather, it concerns whether publics retain meaningful agency over technologies reshaping economic structures, information environments, and social relations. The current trajectory suggests a future where AI governance emerges from negotiations among political and economic elites with deeply intertwined interests, whilst publics experience consequences of decisions made without their meaningful participation.
Breaking this trajectory requires not merely better regulation but reconstructing the relationships between technological development, political power, and democratic authority. This demands new institutional forms enabling public participation in shaping AI priorities, funding mechanisms for AI research independent of commercial imperatives, and political constituencies capable of challenging the presumption that corporate interests align with public goods. Whether such reconstruction proves possible in an era of concentrated wealth and political influence remains democracy's defining question as artificial intelligence becomes infrastructure.
The coalescence of political capital around specific AI companies represents a test case for whether democratic governance can reassert authority over technological development or whether politics has become merely another domain where economic power translates into control. The outcome of this contest will determine not merely which companies dominate AI markets, but whether the development of humanity's most powerful technologies occurs through democratic deliberation or oligarchic negotiation.
References & Sources
- How Elon Musk's $130 million investment in Trump's victory could reap a huge payoff
- Musk-linked contributions to Trump total $270M
- Elon Musk's budget-cutter role creates dangerous conflicts of interest
- Elon Musk's Grok AI Inks Deal With Trump Admin
- Trump's victory will benefit Elon Musk and xAI
- AI companies upped their federal lobbying spend in 2024
- OpenAI has upped its lobbying efforts nearly seven-fold
- How Palantir is mapping the nation's data
- How Palantir is rising in the Trump era
- Palantir and Accenture Federal Services Join Forces
- OpenAI and Anthropic alignment evaluation exercise
- Findings from Pilot Anthropic-OpenAI Alignment Exercise
- AI Governance through Markets
- Peter Thiel's network infiltrating Trump's White House
- Peter Thiel's Allies in Trump's Government
- OpenAI is under pressure as Google, Anthropic gain ground
- From OpenAI to Anthropic: who's leading on AI governance?
- Frontier Model Competition Is Wide Open
- OpenAI Grapples with Losses, Competition, Internal Turmoil
- Trump rescinds Biden executive order in AI regulatory overhaul
- Trump signs executive order to block state AI regulations
- Trump AI executive order may not be legal
- Trump signs order for single national AI regulation standard
- Strategic Federal Actions to Strengthen AI and Energy Infrastructure
- DOE Announces Actions to Enhance AI Leadership
- Build AI in America: Anthropic Energy Report
- Compute in America: A Policy Playbook
- How OpenAI CEO Sam Altman's lobbying power tamed Washington
- OpenAI Lobbied EU to Water Down AI Regulation
- Marc Andreessen-Backed Super-PAC Fighting State AI Regulations
- Andreessen Horowitz founders plan to donate to pro-Trump PACs
- Crypto Won Big in 2024. AI Is Angling to Do the Same in 2026
- AI and policy leaders debate web of effective altruism
- US-China AI Gap: 2025 Analysis
- AI Industrial Policy: US and China Geoeconomic Battlefield
- Managing Industry Influence in US AI Policy
- Center for AI Safety 2024 Year in Review

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk