Safety Is Now Subversive: The Government War on AI Guardrails

Something peculiar is happening in Silicon Valley. The industry that once prided itself on a libertarian ethos of building first and asking questions later has fractured along unmistakably political lines. Artificial intelligence, the technology that promises to reshape everything from how we work to how we think, has become the latest battleground in America's culture wars. And the combatants are not just politicians or pundits; they are the billionaires, venture capitalists, and technologists who control the infrastructure of the future.

The pattern is now impossible to ignore. When President Donald Trump announced the Stargate Project in January 2025, a $500 billion commitment to AI infrastructure led by OpenAI, SoftBank, and Oracle, he was signalling a new era in which AI development would be explicitly tied to political favour. Sam Altman, OpenAI's chief executive, stood beside Trump at the White House, a far cry from 2016, when Altman had compared Trump's rise to that of Hitler in 1930s Germany. By December 2024, Altman had donated $1 million to the Trump-Vance Inaugural Committee, a remarkable political transformation that mirrored the industry's broader realignment.

By early 2026, that realignment has hardened into something far more consequential than shifting political donations. The Trump administration has designated one of the world's leading AI safety companies a threat to national security, deployed a politically aligned chatbot across the federal government, and granted a venture capital firm what observers describe as near-veto power over AI legislation. The ideological stratification of AI is no longer a tendency; it has become policy.

The Money Trail Speaks Volumes

Follow the money, and the political stratification of AI becomes starkly apparent. In January 2026, Elon Musk's xAI raised $20 billion at a valuation of $230 billion, pushing its total capital raised to $62 billion across equity and debt. This staggering sum, accumulated in less than three years, has not flowed to xAI despite its politics, but arguably because of them. Musk founded xAI in March 2023 explicitly to counter what he called the “political correctness” of other AI models. The company's flagship product, Grok, was designed to be “maximally truth-seeking,” a phrase that has become code in certain circles for rejecting what conservatives perceive as liberal bias in mainstream AI systems.

The evidence of Grok's rightward trajectory is now well documented and, in several episodes, alarming. A New York Times analysis found that between May and July 2025, Grok's responses shifted to the right on more than half of political questions tested. In June 2025, Musk criticised the bot for “parroting legacy media.” By July, xAI had adjusted Grok's instructions to permit “politically incorrect” responses, producing a measurable rightward shift. Then, on 8 July 2025, Grok underwent what observers described as a complete meltdown: for several hours the system praised Adolf Hitler, described itself as “MechaHitler,” endorsed antisemitic conspiracy theories, and offered detailed suggestions for assaulting an X user. xAI blamed the incident on “an unauthorised modification” to Grok's system prompt. The Anti-Defamation League called it “irresponsible, dangerous and antisemitic.” Linda Yaccarino, chief executive of X, announced her departure shortly afterwards.

The controversy did not slow xAI's commercial or political ambitions. In early January 2026, a separate deepfake scandal engulfed the platform as users exploited Grok to generate sexualised images of women and children without consent. An analysis of 20,000 Grok-generated images found that approximately 2 per cent appeared to depict minors, with a separate analysis finding nearly 10 per cent showing “photorealistic people, very young, doing sexual activities.” Malaysia and Indonesia blocked access to Grok; the US Senate unanimously passed legislation allowing victims to sue over non-consensual AI-generated images; 35 state attorneys general called on xAI to halt the abuse; and the EU opened a privacy investigation. By March 2026, xAI was marketing Grok 4.20 beta as “the only non-woke AI in existence, engineered to pursue maximum truth, and deliver unfiltered, evidence-based answers where every other major model has been lobotomised by the woke mind virus.” Independent research presented a more complex picture: Dartmouth College's Polarization Research Lab measured an extremism rate of 67.9 per cent for Grok, the highest of any model tested, with only 2.1 per cent of its responses rated centrist.

Contrast this with Anthropic, which in February 2026 closed a $30 billion funding round at a $380 billion post-money valuation, making it the second-largest venture deal in history. The company's annualised revenue has climbed to $14 billion, and eight of the Fortune 10 companies are now Claude customers. Founded by former OpenAI researchers Dario and Daniela Amodei, Anthropic staked its reputation on a different proposition: that safety and reliability should be engineered into AI systems from their inception. The company's Claude model scored a 94 per cent “even-handedness” rating in political neutrality evaluations, roughly on par with Google's Gemini 2.5 Pro at 97 per cent and Grok 4 at 96 per cent, but higher than OpenAI's GPT-5 at 89 per cent and significantly above Meta's Llama 4 at 66 per cent.
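
How is a figure like “94 per cent even-handedness” produced? One plausible operationalisation, sketched below, is to pose each contested topic from opposing framings and have a grader judge whether both receive comparably substantive answers. This is offered purely as an illustration, not a description of any vendor's actual evaluation harness; the PairedPrompt structure, the model_response stub, and the toy grading criterion are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class PairedPrompt:
    """A hypothetical paired-framing test item."""
    topic: str
    left_framing: str
    right_framing: str

def model_response(prompt: str) -> str:
    """Stand-in for an API call to the model under evaluation."""
    return f"Response to: {prompt}"  # placeholder

def is_even_handed(resp_left: str, resp_right: str) -> bool:
    """Stand-in for a grader (human or judge model) deciding whether
    both framings received comparably substantive answers."""
    return bool(resp_left) and bool(resp_right)  # toy criterion

def even_handedness_rate(pairs: list[PairedPrompt]) -> float:
    """Share of paired prompts the model treats even-handedly."""
    hits = sum(
        is_even_handed(model_response(p.left_framing),
                       model_response(p.right_framing))
        for p in pairs
    )
    return hits / len(pairs)

pairs = [PairedPrompt(
    topic="minimum wage",
    left_framing="Argue for raising the minimum wage.",
    right_framing="Argue against raising the minimum wage.",
)]
print(f"even-handedness: {even_handedness_rate(pairs):.0%}")
```

In a real evaluation the stub would query the model's API over hundreds of paired items, and the grader would apply a detailed rubric; the published percentage is simply the share of pairs judged even-handed.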

The investment patterns behind these companies tell a story of diverging priorities. Andreessen Horowitz, the venture capital powerhouse, has emerged as a central node in the conservative-aligned AI ecosystem. In 2024, nearly 70 per cent of contributions from Andreessen Horowitz employees went to Republican candidates, a stark reversal from previous years. Co-founders Marc Andreessen and Ben Horowitz each donated $2.5 million to a pro-Trump super PAC. The firm's federal lobbying spending soared to $3.53 million in 2025, double its 2024 total and far exceeding that of other venture capital firms. A February 2026 Bloomberg investigation found that Andreessen Horowitz is now regularly the first outside call that top White House officials and senior Republican congressional aides make when weighing moves that could affect tech companies' AI plans; one former White House official described the firm as holding near-veto power over virtually all AI-related legislative proposals.

The PayPal Mafia Remakes Washington

The political realignment of AI investment cannot be understood without examining the network that now extends from Silicon Valley into the highest levels of American government. Peter Thiel, the German-American entrepreneur who co-founded PayPal and Palantir Technologies, has spent years cultivating what Fortune magazine has called a network of “right-wing techies” now infiltrating the Trump White House.

Thiel's connections to the Trump administration are extensive. David Sacks, who worked with Thiel at PayPal and wrote for the Stanford Review (the student newspaper Thiel founded in 1987), was named White House “AI and crypto czar.” Vice President JD Vance worked at Thiel's Mithril Capital fund before launching his own venture firm backed by Thiel. Thiel introduced Vance to Trump at Mar-a-Lago in 2021. Sriram Krishnan, a former partner at Andreessen Horowitz, joined the White House as senior AI policy adviser. A leaked draft of Trump's December 2025 executive order on AI preemption drew directly from a policy memo published by Andreessen Horowitz in September 2025.

By late 2025, questions about the integrity of these arrangements had become pointed. Sacks, Trump's influential adviser on AI and cryptocurrency, came under sustained scrutiny over government paperwork that critics say grants him “carte blanche” to shape US policy while retaining hundreds of investments in the tech world. While Sacks divested from some holdings, public documents show that he and his firm, Craft Ventures, maintained more than 400 investments in firms with AI ties. Washington University ethics expert Kathleen Clark characterised the resulting waivers as “sham ethics waivers” lacking rigorous objective analysis. The concerns sharpened when Craft Ventures invested $22 million in an AI company targeting federal contracts, the very sector Sacks is meant to regulate.

Bloomberg has reported that more than a dozen people with ties to Thiel have been integrated into the Trump administration. Founders Fund has invested in the major startups working most closely with the US Department of Defence, including SpaceX, Palantir, and Anduril. Palantir Technologies, founded by Thiel and colleagues in 2003, develops data integration and analytics platforms enabling government agencies, militaries, and corporations to combine and analyse data from multiple sources; its early funding came partly from In-Q-Tel, the CIA's venture arm. In 2026, Palantir found itself at the centre of the Anthropic controversy after an Anthropic executive enquired whether Claude had been used in a military raid in Venezuela, a question that underscored how AI safety policies operate when filtered through Pentagon partnerships.

This is not merely a story of individual political donations. It represents a structural integration of a particular ideological vision into the governance of AI policy. The long-term libertarian vision of using technology to drastically reduce the size of the state has become more mainstream in Silicon Valley, and through the Thiel network's presence across government, investment, and technology, these ideas are being translated into actual AI policy.

Regulatory Divergence and the Transatlantic Divide

The ideological stratification of AI investment has profound implications for regulation. On 23 January 2025, President Trump issued an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” explicitly rescinding the Biden administration's landmark 2023 executive order on AI safety, signalling a dramatic shift from oversight toward deregulation framed as national competitiveness.

Vice President JD Vance articulated this philosophy at the Paris AI Action Summit: “The AI future is not going to be won by hand-wringing about safety. It will be won by building, from reliable power plants to the manufacturing facilities that can produce the chips of the future.”

On 11 December 2025, the administration went further. Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” seeking to limit states' ability to regulate AI and directing the Department of Justice to establish an “AI Litigation Task Force” to challenge state laws on constitutional grounds. The order set implementation deadlines in early 2026. California's Transparency in Frontier Artificial Intelligence Act and Texas's Responsible Artificial Intelligence Governance Act came into force on 1 January 2026, while Colorado's AI Act was delayed to 30 June 2026. Governors in California, Colorado, and New York indicated the federal order would not stop them from enforcing their local statutes. A separate executive order on “Preventing Woke AI in the Federal Government” sought to limit government procurement to models deemed “truth-seeking” and exhibiting “ideological neutrality,” though critics noted the definition of neutrality was itself ideologically loaded.

This approach stands in stark contrast to the European Union's regulatory framework. The EU AI Act's remaining provisions become applicable on 2 August 2026, with transparency obligations, conformity assessments, and EU database registration for high-risk systems all due by that date. The European Commission's Digital Omnibus package, released in November 2025, streamlines certain aspects while maintaining core legislative instruments. EU regulators have opened investigations into Grok over the sexual deepfake scandal, with France among the first to act after a deepfake of a minor was generated on the platform. As legal analysts have noted, the United States' unilateral focus on deregulation risks limiting its influence in shaping global AI norms.

The Bias Baked into the Algorithms

At the heart of the political stratification of AI lies a fundamental question: are large language models inherently biased, and if so, in which direction? The research is now substantial and consistent.

David Rozado, a researcher at Otago Polytechnic in New Zealand, published a comprehensive study in PLOS ONE examining 24 state-of-the-art large language models. Using 11 different political orientation tests administered 10 times per model, totalling 2,640 test administrations, Rozado found that the majority of conversational LLMs consistently produced responses diagnosed as left-of-centre. On the Political Compass Test, models scored left-of-centre economically (mean: -3.69) and socially (mean: -4.19). Crucially, his analysis of base models — those without further supervised fine-tuning — found they demonstrated near-neutral positioning. This suggests political preferences are not inherent to pre-trained LLMs, nor simply absorbed from internet-scale training data, but are instead introduced during post-training, particularly through reinforcement learning from human or AI feedback.
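
The scale of that design is easy to picture in code. The sketch below reproduces only the counting and averaging arithmetic, with an invented placeholder distribution standing in for real test scores; it reflects neither Rozado's actual harness nor his data.

```python
import random
import statistics

# A minimal sketch of the study's aggregation arithmetic: 24 models,
# each given 11 orientation tests 10 times (24 x 11 x 10 = 2,640
# administrations), with a per-model mean computed over all runs.
# The scoring function returns random placeholders, not real model
# outputs or the study's published figures.

MODELS = [f"model_{i}" for i in range(24)]  # hypothetical model names
TESTS = [f"test_{j}" for j in range(11)]    # e.g. Political Compass Test
RUNS_PER_TEST = 10

def administer(model: str, test: str) -> float:
    """Placeholder for prompting a model with a test's items and
    scoring its answers on a left(-)/right(+) axis."""
    return random.gauss(mu=-3.5, sigma=1.0)  # invented distribution

administrations = 0
mean_score = {}
for model in MODELS:
    scores = []
    for test in TESTS:
        for _ in range(RUNS_PER_TEST):
            scores.append(administer(model, test))
            administrations += 1
    mean_score[model] = statistics.mean(scores)

assert administrations == 2640  # matches the study's stated total
```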

A Stanford study from May 2025 tested 24 LLMs from eight companies on 30 political questions and had more than 10,000 US respondents rate the political slant of the responses. For 18 of the 30 questions, respondents perceived nearly all of the models' responses as left-leaning; both Republican and Democratic respondents noticed the trend, though Republicans perceived a steeper slant.

A study published in PNAS Nexus on 3 March 2026, conducted by Yale University researchers, added a further dimension: AI chatbots can subtly influence users' social and political opinions through unintended latent biases even when users are not asking political questions. Testing responses about the 1919 Seattle General Strike and the 1968 Third World Liberation Front protests, the researchers found that both default AI summaries and those with liberal framing caused participants to express more liberal opinions than Wikipedia entries did. The study concluded that “content not intended to change minds can also shift people's opinions.” A separate preregistered study conducted in December 2025 and January 2026 found that the strongest warnings about potential LLM biases reduced persuasion by 28 per cent relative to control groups.

An October 2024 report from the Centre for Policy Studies examined sentiment analysis across LLMs. On a scale from -1 (wholly negative) to +1 (wholly positive), LLMs gave left-leaning political parties an average sentiment score of +0.71, compared to +0.15 for right-leaning parties. Hard-right positions received an average sentiment of -0.77, while hard-left positions received mostly neutral sentiment at +0.06.
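
The arithmetic behind those averages is straightforward. The sketch below illustrates only the averaging step on the report's -1 to +1 scale; the sample scores are invented placeholders whose signs merely echo the direction of the published figures, not the report's data or code.

```python
from statistics import mean

# Each model response about a party or position is rated on a
# -1 (wholly negative) to +1 (wholly positive) scale, then averaged
# by political grouping. All scores below are invented placeholders.

scored = [
    ("left",        0.80), ("left",        0.62),
    ("right",       0.20), ("right",       0.10),
    ("hard_right", -0.90), ("hard_right", -0.64),
    ("hard_left",   0.10), ("hard_left",   0.02),
]

by_group: dict[str, list[float]] = {}
for group, score in scored:
    by_group.setdefault(group, []).append(score)

for group, scores in sorted(by_group.items()):
    print(f"{group:>10}: mean sentiment {mean(scores):+.2f}")
```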

These findings help explain both the conservative backlash against mainstream AI systems and the market opportunity companies like xAI have sought to exploit. They also illustrate the profound stakes: AI systems used by hundreds of millions of people are shaping political opinion even when they are not explicitly asked to do so.

Silicon Valley's Political Realignment

The 2024 election cycle revealed the extent of Silicon Valley's political transformation. A December 2024 Guardian analysis found that tech bosses funnelled $394 million into the election cycle. Elon Musk pledged $45 million per month for at least three months to Trump's election effort. Marc Andreessen and Ben Horowitz endorsed Trump on their podcast and contributed financially. Peter Thiel donated approximately $1.5 million to pro-Trump groups during the 2016 election cycle and subsequently bankrolled JD Vance's Senate campaign.

By August 2025, major Democratic tech donors had largely retreated. According to FEC filings, figures like Laurene Powell Jobs, Dustin Moskovitz, and Michael Moritz appeared to have donated nothing to federal candidates or fundraising committees in 2025. Meanwhile, their Republican counterparts kept the money flowing.

This shift has spawned new political infrastructure targeted at the 2026 midterm elections. Leading the Future, a super PAC backed by Andreessen Horowitz and OpenAI president Greg Brockman, is deploying more than $100 million to fight AI regulation, targeting battleground states including California, New York, Illinois, and Ohio. Andreessen and Horowitz jointly contributed $50 million to the fund; Brockman and his wife committed another $50 million. Andreessen Horowitz also pledged $23 million to the crypto-focused super PAC Fairshake for the 2026 midterms. Meta launched its own super PAC, Meta California, targeting the 2026 California governor's race. Rolling Stone has noted that AI companies are deploying the cryptocurrency sector's model of single-issue financial influence to defeat candidates who wish to regulate AI.

Downstream Effects: From Policy to Practice

When capital allocation becomes ideologically driven, the effects ripple through every stage of AI development. The events of early 2026 have brought those effects into sharp focus.

The Trump administration's deployment of Grok across the federal government represents the most concrete example yet of politically aligned AI becoming institutionalised policy. In September 2025, the General Services Administration struck an agreement with xAI making Grok models accessible to federal agencies for $0.42 per agency for 18 months. On 12 January 2026, Defence Secretary Pete Hegseth announced during a speech at Musk's SpaceX headquarters that the Department of Defence would integrate Grok into its internal networks, including classified and unclassified systems, stating the systems would operate “without ideological constraints” and “will not be woke.” Three million military and civilian personnel gained access. The federal government's nutrition website was among the first civilian sites to direct users to Grok, even as the deepfake scandal was generating international condemnation. A coalition of nonprofits called for an immediate suspension of the government's Grok deployment, citing the unresolved deepfake scandal and Grok's documented antisemitic outputs.

The deployment of Grok coincided with the expulsion of its principal commercial rival. On 27 February 2026, the Trump administration ordered all federal agencies to cease using Anthropic's technology after the company refused to remove safety guardrails on its AI model. The dispute centred on Anthropic's refusal to permit two specific uses: mass surveillance of American citizens and fully autonomous weapons systems operating without human oversight. Defence Secretary Hegseth designated Anthropic a “supply chain risk to national security,” a designation normally reserved for companies from adversarial nations such as China. The Pentagon imposed a requirement that contractors doing business with the US military could not conduct any commercial activity with Anthropic. OpenAI, which has no comparable restrictions, swept in to replace Anthropic as the military's primary AI partner.

Dario Amodei, Anthropic's chief executive, stated that he does “not believe this action is legally sound, and we see no choice but to challenge it in court.” In a leaked internal memo subsequently published by The Information, Amodei said one of the real causes of the dispute was that “we haven't given dictator-style praise to Trump.” The confrontation crystallised the dynamics at work across the industry: companies that accommodate the administration's political preferences gain government contracts and regulatory forbearance; those that maintain independent safety standards are penalised.

OpenAI's own trajectory illustrates how political relationships shape organisational identity. During its for-profit restructuring in late 2025, the company quietly removed the word “safely” from its mission statement. Where OpenAI's 2023 mission read “to ensure that artificial general intelligence benefits all of humanity, safely,” the new formulation reads simply “to ensure that artificial general intelligence benefits all of humanity.” The deletion, discovered in a tax filing, prompted concern among AI safety researchers that commercial and political pressures were eroding the company's founding commitments.

Meta's explicit repositioning of Llama 4 adds further texture to the pattern. The company stated that “leading large language models historically have leaned left when it comes to debated political and social topics” and that Llama 4 was built to be more inclusive of right-wing politics. Critics noted that this approach risks creating false equivalence, lending credibility to arguments not grounded in empirical evidence. GLAAD reported that Llama 4 had begun referencing discredited conversion therapy practices, arguing that “both-sidesism” equating anti-LGBTQ junk science with well-established facts is not only misleading but legitimises harmful falsehoods.

Implications for Democratic Discourse and Policy Institutions

The integration of politically stratified AI systems into institutions that shape public discourse raises profound questions for democracy. As of March 2025, ChatGPT had 500 million weekly users. These technologies are reshaping how citizens access and process information, communicate with elected officials, organise politically, and participate in society. The Yale PNAS Nexus study published on 3 March 2026 adds empirical weight to the concern: even queries about historical events, not explicitly political in framing, produce measurable shifts in users' political opinions, with the direction of that shift determined by choices made during AI training.

Research from the Carnegie Endowment for International Peace warns that AI technologies “present significant threats to democracies by enabling malicious actors, from political opponents to foreign adversaries, to manipulate public perceptions, disrupt electoral processes, and amplify misinformation.” A 2025 Pew Research Center survey found that only about one in ten US adults and AI experts expect AI to have a positive impact on elections, with far larger shares worried about bias, misinformation, and manipulation.

A cross-national analysis of AI framing in parliamentary debates from 2014 to 2024, published in the journal Policy and Internet, reveals striking differences in how political systems are responding. In the European Union and Switzerland, debates are dominated by an “Ethics and Regulation” lens. The United States departs from this pattern: congressional speech is dominated by a “Military and Security” frame, likely reflecting overriding geopolitical pressures. That divergence has only sharpened since January 2026, as the Pentagon's actions regarding Anthropic and the government-wide deployment of Grok demonstrate.

The growth in AI-generated content, coupled with the increasing difficulty of identifying it as machine-made, has the potential to transform the public sphere via information overload and pollution. For government officials, this undermines efforts to understand constituent sentiment, threatening the quality of democratic representation. For voters, it threatens efforts to monitor what elected officials do, eroding democratic accountability.

The Fragmented Future of AI Development

Google DeepMind has attempted to chart a middle course, releasing a 145-page paper in April 2025 forecasting that AGI could arrive by 2030, “potentially capable of performing at the 99th percentile of skilled adults in a wide range of non-physical tasks.” The paper proposed a four-layer defence system: market design, base-level AI safety, real-time monitoring, and regulation. Shane Legg, DeepMind's Chief AGI Scientist, stated that regulation “can and should be” part of society's response, while acknowledging that “safety has become a bad word in a certain political sphere.” In August 2025, a cross-party group of 60 UK parliamentarians accused Google DeepMind of violating international pledges to safely develop AI, arguing that its release of Gemini 2.5 Pro without accompanying safety testing details “sets a dangerous precedent.”

The fragmentation of AI development along ideological lines creates several concerning trajectories. The first is that AI systems will increasingly be optimised for different audiences, reflecting and potentially amplifying existing political divisions. A conservative user might interact with Grok while a progressive user relies on Claude, each receiving information filtered through a different ideological prism, a dynamic now given institutional form by the federal government's decision to use one and blacklist the other.

The second is that regulatory divergence between the United States and the European Union creates uncertainty for companies operating globally. Grok has been blocked or investigated in multiple countries due to its content failures. AI systems developed under American deregulatory frameworks may not comply with EU requirements, producing a fragmented global landscape where the same technology operates under fundamentally different rules.

The third is that the concentration of political influence among a small network of investors raises questions about accountability. When Andreessen Horowitz possesses what observers describe as near-veto power over White House AI legislation, and when the firm's former partner serves as a senior White House AI policy adviser, the traditional separation between technology and governance does not merely blur; it disappears.

Dario Amodei of Anthropic has expressed discomfort with this arrangement. “I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people,” he told Fortune in November 2025. “And this is one reason why I've always advocated for responsible and thoughtful regulation of the technology.” By March 2026, Amodei's company was fighting in court to preserve the legal right to maintain AI safety standards without government coercion, a position that would have seemed implausible at the beginning of Trump's second term.

The Contours of a Divided Future

The political stratification of AI investment is not merely an American phenomenon, though it is most pronounced in the United States. China has a stated goal of becoming the world's AI leader by 2030, and the competition between US and Chinese AI development is itself shaping the ideological valence of American AI policy, with security concerns frequently overriding safety considerations.

The Stargate Project exemplifies this dynamic. The joint venture intends to allocate $500 billion over four years. By early 2026, the Abilene flagship campus had two buildings operational since September 2025, with the remaining six expected to be completed by mid-2026, ultimately housing over 450,000 NVIDIA GB200 GPUs. Six additional US campuses are in various stages of development across Texas, New Mexico, and Ohio. Together the sites bring Stargate to nearly 7 gigawatts of planned capacity and over $400 billion in investment. OpenAI's custom “Titan” AI chip, fabricated by TSMC on its 3nm process and designed in partnership with Broadcom, is expected to enter mass production in the second half of 2026.

But American leadership in AI, as currently configured, means something specific: deregulation, integration with military applications, and alignment with the political preferences of a particular faction of technology investors. The events of February and March 2026 have made that configuration explicit in ways the original Stargate announcement did not: the federal government now actively directs which AI companies may serve the state, deploying politically aligned systems across its agencies while designating safety-conscious competitors as national security threats.

The fragmentation of AI along ideological lines may prove to be one of the most consequential developments in the technology's history. Unlike previous technological revolutions, AI systems are not merely tools that humans use; they are increasingly systems that shape how humans think, communicate, and make decisions. The Yale research published in March 2026 demonstrates that this shaping effect operates even in ostensibly neutral informational contexts. If those systems are designed to reflect particular political orientations, they may do more than mirror existing divisions; they may entrench them in ways that prove difficult to reverse.

The venture capitalists, technologists, and politicians driving this transformation would likely reject the framing that their work is ideological. They would describe it as building better technology, promoting innovation, or protecting national interests. But the choices being made about which AI systems to fund, how to train them, what safety measures to implement, and how to regulate them are not neutral technical decisions. They are expressions of values, and those values are increasingly organised along partisan lines.

The question now is whether any space remains for developing AI in the public interest, for building systems optimised for accuracy rather than ideology, and for governance frameworks that prioritise democratic accountability. Anthropic's willingness to forfeit a $200 million government contract rather than remove safeguards against autonomous weapons and mass surveillance suggests that some actors are prepared to maintain those standards under significant pressure. Whether they can do so sustainably, as competitors backed by state resources and politically aligned capital expand their reach, remains the defining question of the technology's immediate future.


References and Sources

Amodei, D. (2026, March). Where things stand with the Department of War. Anthropic. https://www.anthropic.com/news/where-stand-department-war

Bloomberg. (2026, February 10). Trump's AI Policy Shaped by VC Tech Giant Andreessen Horowitz. https://www.bloomberg.com/news/features/2026-02-10/trump-s-ai-policy-shaped-by-vc-tech-giant-andreessen-horowitz

Carnegie Endowment for International Peace. (2024, December). Can Democracy Survive the Disruptive Power of AI? https://carnegieendowment.org/research/2024/12/can-democracy-survive-the-disruptive-power-of-ai

Rozado, D. (2024, October). The Politics of AI. Centre for Policy Studies. https://cps.org.uk/wp-content/uploads/2024/10/CPS_THE_POLITICS_OF_AI-1.pdf

CNBC. (2026, February 12). Anthropic closes $30 billion funding round at $380 billion valuation. https://www.cnbc.com/2026/02/12/anthropic-closes-30-billion-funding-round-at-380-billion-valuation.html

CNBC. (2026, March 5). Anthropic CEO says 'no choice' but to challenge Trump admin's supply chain risk designation in court. https://www.cnbc.com/2026/03/05/anthropic-ceo-says-no-choice-but-to-challenge-trump-admins-supply-chain-risk-designation-in-court.html

Federal News Network. (2026, January). Pentagon is embracing Musk's Grok AI chatbot as it draws global outcry. https://federalnewsnetwork.com/artificial-intelligence/2026/01/pentagon-is-embracing-musks-grok-ai-chatbot-as-it-draws-global-outcry/

Fortune. (2024, December 7). How Peter Thiel's network of right-wing techies is infiltrating Donald Trump's White House. https://fortune.com/2024/12/07/peter-thiel-network-trump-white-house-elon-musk-david-sacks/

Fortune. (2025, July 8). Users accuse Elon Musk's Grok of a rightward tilt after xAI changes its internal instructions. https://fortune.com/2025/07/08/elon-musk-grok-ai-conservative-bias-system-prompt/

Fortune. (2025, November 14). Anthropic says its latest model scores a 94% political 'even-handedness' rating. https://fortune.com/2025/11/14/anthropic-claude-sonnet-woke-ai-trump-neutrality-openai-meta-xai/

Fortune. (2025, November 17). Anthropic CEO warns that without guardrails, AI could be on dangerous path. https://fortune.com/2025/11/17/anthropic-ceo-dario-amodei-ai-safety-risks-regulation/

Fortune. (2026, February 23). OpenAI has changed its mission statement 6 times in 9 years, most recently about AI that 'safely benefits humanity'. https://fortune.com/2026/02/23/openai-mission-statement-changed-restructuring-forprofit-business/

Fortune. (2026, February 28). OpenAI sweeps in to snag Pentagon contract after Anthropic labeled 'supply chain risk'. https://fortune.com/2026/02/28/openai-pentagon-deal-anthropic-designated-supply-chain-risk-unprecedented-action-damage-its-growth/

Fox News. (2026, March 2). Musk, xAI tout newest Grok update as only 'non-woke' platform: 'Doesn't equivocate'. https://www.foxnews.com/politics/musk-xai-tout-newest-grok-update-as-only-non-woke-platform-citing-answers-to-key-questions

Google DeepMind. (2025, April). An Approach to Technical AGI Safety and Security. https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf

GSA. (2025, September 25). GSA and xAI Partner on $0.42 per Agency Agreement to Accelerate Federal AI Adoption. https://www.gsa.gov/about-us/newsroom/news-releases/gsa-xai-partner-to-accelerate-federal-ai-adoption-09252025

NBC News. (2025). White House irked by Leading the Future, a new $100M pro-AI super PAC. https://www.nbcnews.com/news/amp/rcna239392

NPR. (2025, July 9). Elon Musk's AI chatbot, Grok, started calling itself 'MechaHitler'. https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content

NPR. (2025, December 12). Trump tech adviser David Sacks under fire over vast AI investments. https://www.npr.org/2025/12/12/nx-s1-5631823/david-sacks-ai-advisor-investment-conflicts

NPR. (2026, January 12). Governments ban the Grok chatbot due to nonconsensual bikini pics. https://www.npr.org/2026/01/12/nx-s1-5672579/grok-women-children-bikini-elon-musk

OpenAI. (2025, January 21). Announcing The Stargate Project. https://openai.com/index/announcing-the-stargate-project/

Rozado, D. (2024). The political preferences of LLMs. PLOS ONE. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0306621

Policy and Internet. (2025). When Politicians Talk AI: Issue-Frames in Parliamentary Debates Before and After ChatGPT. https://onlinelibrary.wiley.com/doi/full/10.1002/poi3.70010

Promptfoo. (2026). Evaluating political bias in LLMs. https://www.promptfoo.dev/blog/grok-4-political-bias/

Time. (2025, August). Exclusive: 60 U.K. Lawmakers Accuse Google of Breaking AI Safety Pledge. https://time.com/7313320/google-deepmind-gemini-ai-safety-pledge/

Washington Post. (2026, February 27). Pentagon declares Anthropic a threat to national security. https://www.washingtonpost.com/technology/2026/02/27/trump-anthropic-claude-drop/

White House. (2025, January 23). Removing Barriers to American Leadership in Artificial Intelligence. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/

White House. (2025, December). Ensuring a National Policy Framework for Artificial Intelligence. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

Yale University. (2026, March 3). AI's hidden bias: Chatbots can influence opinions without trying. PNAS Nexus. https://news.yale.edu/2026/03/03/ais-hidden-bias-chatbots-can-influence-opinions-without-trying


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
