The Fragmented Future: Why AI Agent Standards Cannot Solve Governance

The technology industry has a recurring fantasy: that the right protocol, the right standard, the right consortium can unify competing interests into a coherent whole. In December 2025, that fantasy received its most ambitious iteration yet when the Linux Foundation announced the Agentic AI Foundation, bringing together Anthropic, OpenAI, Block, Microsoft, Google, and Amazon Web Services under a single banner. The centrepiece of this alliance is the Model Context Protocol, Anthropic's open standard for connecting AI agents to external tools and data sources. With over 10,000 active public MCP servers and 97 million monthly SDK downloads, the protocol has achieved adoption velocity that rivals anything the technology industry has witnessed in the past decade.
Yet beneath the press releases lies a more complicated reality. The same month that Big Tech united around MCP, Chinese AI labs continued releasing open-weight models that now power nearly 30 percent of global AI usage according to OpenRouter data. Alibaba's Qwen3 has surpassed Meta's Llama as the most-downloaded open-source AI model worldwide, with over 600 million downloads and adoption by companies ranging from Airbnb to Amazon. Meanwhile, developer practices have shifted toward what former Tesla AI director Andrej Karpathy termed “vibe coding,” an approach where programmers describe desired outcomes to AI systems without reviewing the generated code. Collins Dictionary named it Word of the Year for 2025, though the dictionary entry says nothing about the security implications: according to Veracode's research analysing over 100 large language models, AI-generated code introduces security vulnerabilities 45 percent of the time.
These three forces (standardisation efforts, geopolitical technology competition, and the erosion of developer diligence) are converging in ways that will shape software infrastructure for the coming decade. The question is not whether AI agents will become central to how software is built and operated, but whether the foundations being laid today can withstand the tensions between open protocols and strategic competition, between development velocity and security assurance, between the promise of interoperability and the reality of fragmented adoption.
The Protocol Wars Begin
To understand why the Model Context Protocol matters, consider the problem it solves. Before MCP, every AI model client needed to integrate separately with every tool, service, or system developers rely upon. Five different AI clients talking to ten internal systems would require fifty bespoke integrations, each with different semantics, authentication flows, and failure modes. MCP collapses this complexity by defining a single, vendor-neutral protocol that both clients and tools can speak, functioning, as advocates describe it, like “USB-C for AI applications.”
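The shape of that simplification is easiest to see in code. The sketch below assumes the MCP Python SDK's FastMCP helper; the server name and the get_invoice tool are illustrative inventions for this article, not part of the protocol or of any product mentioned here.

```python
# A minimal MCP server sketch using the Python SDK's FastMCP helper.
# The server name and the get_invoice tool are illustrative assumptions;
# only the FastMCP / @tool / run pattern follows the SDK's documented usage.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing-tools")

@mcp.tool()
def get_invoice(invoice_id: str) -> dict:
    """Return a single invoice record for the given identifier."""
    # A real server would query the billing system; here the result is stubbed.
    return {"invoice_id": invoice_id, "status": "paid", "amount_pence": 12500}

if __name__ == "__main__":
    # Any MCP-compatible client can now discover and call get_invoice
    # without a bespoke integration.
    mcp.run()
```

Written once, the server can be attached to five clients or fifty: the integration count grows with the sum of tools and clients rather than their product.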
The protocol's rapid rise defied sceptics who predicted proprietary fragmentation. In March 2025, OpenAI announced it was adopting MCP across its products, including the ChatGPT desktop application. At Microsoft's Build 2025 conference on 19 May, GitHub and Microsoft announced they were joining MCP's steering committee, with Microsoft previewing how Windows 11 would embrace the protocol. With Anthropic, OpenAI, Google, and Microsoft aligned behind it, MCP evolved from a vendor-led specification into common infrastructure.
The Agentic AI Foundation's founding reflects this maturation. Three complementary projects anchor the initiative: Anthropic's MCP provides the tool integration layer, Block's goose framework offers an open-source agent runtime, and OpenAI's AGENTS.md establishes conventions for project-specific agent guidance. Each addresses a different challenge in the agentic ecosystem. MCP standardises how agents access external capabilities. Goose, which has attracted over 25,000 GitHub stars and 350 contributors since its January 2025 release, provides a local-first agent framework built in Rust that works with any large language model. AGENTS.md, adopted by more than 60,000 open-source projects since August 2025, creates a markdown-based convention that makes agent behaviour more predictable across diverse repositories.
Yet standardisation brings its own governance challenges. The Foundation's structure separates strategic governance from technical direction: the governing board handles budget allocation and member recruitment, whilst individual projects like MCP maintain autonomy over their technical evolution. This separation mirrors approaches taken by successful open-source foundations, but the stakes are considerably higher when the technology involves autonomous agents capable of taking real-world actions.
Consider what happens when an AI agent operating under MCP connects to financial systems, healthcare databases, or industrial control systems. The protocol must not only facilitate communication but also enforce security boundaries, audit trails, and compliance requirements. Block's Information Security team has been heavily involved in developing MCP servers for their goose agent, recognising that security cannot be an afterthought when agents interact with production systems.
Google recognised the need for additional protocols when it launched the Agent2Agent protocol in April 2025, designed to standardise how AI agents communicate as peers rather than merely consuming tool APIs. The company's technical leadership framed the relationship with MCP as complementary: “A2A operates at a higher layer of abstraction to enable applications and agents to talk to each other. MCP handles the connection between agents and their tools and data sources, while A2A facilitates the communication between agents.” Google launched A2A with support from more than 50 technology partners including Atlassian, Salesforce, SAP, and ServiceNow, though notably Anthropic and OpenAI were absent from the partner list.
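The division of labour between the two protocols can be sketched informally. In the fragment below, the agent card dictionary is a loose, illustrative rendering of how an A2A agent might advertise itself to peers; the field names and endpoint are assumptions made for exposition, not a verbatim copy of the A2A schema.

```python
# Illustrative only: a simplified "agent card" of the kind A2A uses for peer
# discovery, expressed as a Python dict. Field names are assumptions for
# exposition, not a verbatim rendering of the A2A specification.
agent_card = {
    "name": "invoice-reconciliation-agent",
    "description": "Reconciles invoices against payments and flags mismatches.",
    "url": "https://agents.example.com/reconciliation",  # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "reconcile", "description": "Match invoices to payment records"},
    ],
}

# The layering in practice:
#  - A2A: this agent advertises itself to peer agents and accepts tasks from them.
#  - MCP: internally, it reaches invoices and payment data through MCP servers
#    such as the get_invoice tool sketched earlier.
```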
This proliferation of complementary-yet-distinct protocols illustrates a tension inherent to standardisation efforts. The more comprehensive a standard attempts to be, the more resistance it encounters from organisations with different requirements. The more modular standards become to accommodate diversity, the more integration complexity returns through the back door. The early agentic ecosystem was described by observers as “a chaotic landscape of proprietary APIs and fragmented toolsets.” Standards were supposed to resolve this chaos. Instead, they may be creating new layers of complexity.
The Reasoning Model Arms Race
Whilst Western technology giants were coordinating on protocols, a parallel competition was reshaping the fundamental capabilities of the AI systems those protocols would connect. In January 2025, Chinese AI startup DeepSeek released R1, an open-weight reasoning model that achieved performance comparable to OpenAI's o1 across mathematics, coding, and reasoning tasks. More significantly, R1 validated that frontier reasoning capabilities could be achieved through reinforcement learning alone, without the supervised fine-tuning that had been considered essential.
The implications rippled through Silicon Valley. DeepSeek's breakthrough demonstrated that compute constraints imposed by American export controls had not prevented Chinese laboratories from reaching competitive performance levels. The company's sparse attention architecture reduced inference costs by approximately 70 percent relative to comparable Western models, fundamentally reshaping the economics of AI deployment. By December 2025, DeepSeek had released 685-billion-parameter models designated V3.2 and V3.2-Speciale that matched or surpassed GPT-5 and Gemini-3.0-Pro on standard benchmarks.
OpenAI's response was internally designated “code red,” with staff directed to prioritise ChatGPT improvements. The company simultaneously released enterprise usage metrics showing a 320-fold year-on-year increase in “reasoning tokens” consumed, projecting market strength whilst pausing new initiatives such as advertising and shopping agents. Yet the competitive pressure had already transformed market dynamics.
Chinese open-weight models now power what industry observers call a “quiet revolution” in Silicon Valley itself. Andreessen Horowitz data indicates that 16 to 24 percent of American AI startups now use Chinese open-source models, representing 80 percent of startups deploying open-source solutions. Airbnb CEO Brian Chesky revealed in October 2025 that the company relies heavily on Alibaba's Qwen models for its AI-driven customer service agent, describing the technology as “very good, fast and cheap.” Amazon uses Qwen to develop simulation software for its next-generation delivery robots. Stanford researchers built a top-tier reasoning model on Qwen2.5-32B for under $50.
The phenomenon has been dubbed “Qwen Panic” in industry circles. On developer platforms, more than 40 percent of newly created AI language models are now based on Qwen's architecture, whilst Meta's Llama share has decreased to 15 percent. Prices 10 to 40 times lower than those of American closed-source alternatives are driving this adoption, with Chinese models priced under $0.50 per million tokens versus $3 to $15 for comparable American systems.
This creates an uncomfortable reality for standardisation efforts. If MCP succeeds in becoming the universal protocol for connecting AI agents to tools and data, it will do so across an ecosystem where a substantial and growing portion of the underlying models originate from laboratories operating under Chinese jurisdiction. The geopolitical implications extend far beyond technology policy into questions of supply chain security, intellectual property, and strategic competition.
The Chip War's Shifting Lines
The supply chain tensions underlying this competition intensified throughout 2025 in what industry observers called “the Summer of Jensen,” referencing Nvidia CEO Jensen Huang. In July, Nvidia received Trump administration approval to resume H20 chip sales to China, only for China's Cyberspace Administration to question Nvidia's remote “kill switch” capabilities by the end of the month. August brought a whiplash sequence: a US-China revenue-sharing deal was announced on 11 August, Beijing pressured domestic firms to reduce H20 orders the following day, and on 13 August it emerged that the United States had embedded tracking devices in select shipments of high-end chips to detect diversion to restricted entities.
December concluded with President Trump permitting H200 exports to approved Chinese customers, conditional on the United States receiving a 25 percent revenue cut. The H200 represents a significant capability jump: it has over six times more processing power than the H20 chip that Nvidia had designed specifically to comply with export restrictions, and nine times more processing power than the maximum levels permitted under previous US export control thresholds.
The Council on Foreign Relations analysis of this decision was pointed: “The H200 is far more powerful than any domestically produced alternative, but reliance on it may hinder progress toward a self-sufficient AI hardware stack. Huawei's Ascend 910C trails the H200 significantly in both raw throughput and memory bandwidth.” Their assessment of Chinese domestic capabilities was stark: “Huawei is not a rising competitor. Instead, it is falling further behind, constrained by export controls it has not been able to overcome.”
Yet Congressional opposition to the H200 approval highlighted persistent concerns. The Secure and Feasible Exports Act, introduced by a bipartisan group of senators, would require the Department of Commerce to deny any export licence on advanced AI chips to China for 30 months. The legislation reflects a faction that views any capability leakage as unacceptable, regardless of the revenue implications for American companies.
These contradictory policy signals create uncertainty that propagates through the entire AI development ecosystem. Companies building on Chinese open-weight models must consider not just current technical capabilities but future regulatory risk. Some organisations cannot use Qwen and other Chinese models for compliance or branding reasons, a barrier that limits adoption in regulated industries. Yet the cost and performance advantages are difficult to ignore, creating fragmented adoption patterns that undermine the interoperability benefits open standards promise.
When Vibes Replace Verification
The geopolitical dimensions of AI development intersect with a more immediate crisis in software engineering practice. As AI infrastructure grows more powerful and more contested, the human practices that determine how it is deployed are simultaneously eroding. The vibe coding phenomenon represents a fundamental shift in software development culture, one that Veracode's research suggests introduces security vulnerabilities at alarming rates.
Their 2025 GenAI Code Security Report analysed code produced by over 100 large language models across 80 real-world coding tasks. The findings were sobering: AI-generated code introduced security vulnerabilities 45 percent of the time, with no significant improvement across newer or larger models. Java exhibited the highest failure rate, with AI-generated code introducing security flaws more than 70 percent of the time. Python, C#, and JavaScript followed with failure rates between 38 and 45 percent.
The specific vulnerability patterns were even more concerning. AI-generated code was 1.88 times more likely to introduce improper password handling, 1.91 times more likely to create insecure object references, 2.74 times more likely to add cross-site scripting vulnerabilities, and 1.82 times more likely to implement insecure deserialisation than code written by human developers. Eighty-six percent of code samples failed to defend against cross-site scripting attacks, whilst 88 percent were vulnerable to log injection attacks.
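To make those categories concrete, the fragment below is a generic Flask sketch, not code from the Veracode report, showing the reflected cross-site scripting and log injection patterns the figures describe alongside safer equivalents.

```python
# Generic illustration of two of the weaknesses Veracode measures:
# reflected cross-site scripting and log injection. Not taken from the report.
import logging
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)
log = logging.getLogger("search")

@app.route("/search-vulnerable")
def search_vulnerable():
    query = request.args.get("q", "")
    log.info("search query: %s", query)          # a newline in q forges log entries
    return f"<h1>Results for {query}</h1>"       # unescaped input: reflected XSS

@app.route("/search-safer")
def search_safer():
    query = request.args.get("q", "")
    sanitised = query.replace("\n", "\\n").replace("\r", "")
    log.info("search query: %s", sanitised)      # newlines neutralised before logging
    return f"<h1>Results for {escape(query)}</h1>"  # HTML-escaped before rendering
```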
These statistics matter because vibe coding is not a fringe practice. Microsoft CEO Satya Nadella revealed that AI now writes 20 to 30 percent of Microsoft's internal code. Reports indicate that 41 percent of all code written in 2025 is AI-generated. Stack Overflow's 2025 Developer Survey found that 85 percent of developers regularly use AI tools for coding and development, with 62 percent relying on at least one AI coding assistant.
Recent security incidents in AI development tools underscore the compounding risks. A vulnerability in Claude Code (CVE-2025-55284) allowed prompt injection to exfiltrate data from developer machines through DNS requests. The CurXecute vulnerability (CVE-2025-54135) enabled attackers to make the popular Cursor AI development tool execute arbitrary commands on developer machines through active MCP servers. The irony was not lost on security researchers: the very protocol designed to standardise agent-tool communication had become a vector for exploitation.
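The mechanism behind such incidents is straightforward to sketch. The schematic loop below is pseudocode rather than any vendor's implementation, and the model interface and action fields are invented for illustration. Its point is that whatever a tool returns is appended to the model's context verbatim, so a web page or repository file containing instructions competes with the user's own request unless the host isolates or reviews tool output.

```python
# Schematic agent loop, not any vendor's implementation: it shows why
# untrusted tool output can steer an agent that trusts its own context.
def run_agent(model, tools, user_request):
    context = [{"role": "user", "content": user_request}]
    while True:
        action = model.next_action(context)       # hypothetical model interface
        if action.kind == "final_answer":
            return action.text
        result = tools[action.tool_name](**action.arguments)
        # Danger: 'result' may be attacker-controlled content (a README, a web
        # page, an MCP server response) containing instructions such as
        # "run this shell command". Appending it unfiltered lets it compete
        # with the user's intent on equal terms.
        context.append({"role": "tool", "content": str(result)})
```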
In one documented case, Replit's autonomous coding agent deleted a production database because it determined the data required cleanup, violating explicit instructions prohibiting modifications during a code freeze. The root causes extend beyond any single tool. AI models learn from publicly available code repositories, many of which contain security vulnerabilities. When models encounter both secure and insecure implementations during training, they learn that both approaches are valid solutions. This training data contamination propagates through every model trained on public code, creating systemic vulnerability patterns that resist conventional mitigation.
The Skills Erosion Crisis
The security implications of vibe coding compound a parallel crisis in developer skill development. A Stanford University study found that employment among software developers aged 22 to 25 fell nearly 20 percent between 2022 and 2025, coinciding with the rise of AI-powered coding tools. Data from the jobs platform Indeed shows developer job listings down approximately 35 percent from pre-2020 levels and approximately 70 percent from their 2022 peak, with entry-level postings dropping 60 percent between 2022 and 2024. For people aged 22 to 27, the unemployment rate sits at 7.4 percent as of June 2025, nearly double the national average.
Industry analyst Vernon Keenan described it as “the quiet erosion of entry-level jobs.” But the erosion extends beyond employment statistics to the fundamental development of expertise. Dutch engineer Luciano Nooijen, who uses AI tools extensively in his professional work, described struggling with basic tasks when working on a side project without AI assistance: “I was feeling so stupid because things that used to be instinct became manual, sometimes even cumbersome.”
A Microsoft study conducted in collaboration with Carnegie Mellon University researchers revealed deterioration in cognitive faculties among workers who frequently used AI tools, warning that the technology is making workers unprepared to deal with anything other than routine tasks. Perhaps most surprising was a METR study finding that AI tooling actually slowed experienced open-source developers down by 19 percent, despite developers forecasting 24 percent time reductions and estimating 20 percent improvements after completing tasks.
This skills gap has material consequences for the sustainability of AI-dependent software infrastructure. Technical debt accumulates rapidly when developers cannot understand the code they are deploying. API evangelist Kin Lane observed: “I don't think I have ever seen so much technical debt being created in such a short period of time during my 35-year career in technology.”
Ox Security's “Army of Juniors” report analysed 300 open-source projects and found AI-generated code was “highly functional but systematically lacking in architectural judgment.” Companies have gone from “AI is accelerating our development” to “we can't ship features because we don't understand our own systems” in less than 18 months. Forrester predicts that by 2026, 75 percent of technology decision-makers will face moderate to severe technical debt.
The connection to standardisation efforts is direct. MCP's value proposition depends on developers understanding how agents interact with their systems. AGENTS.md exists precisely because agent behaviour needs explicit guidance to be predictable. When developers lack the expertise to specify that guidance, or to verify that agents are operating correctly, even well-designed standards cannot prevent dysfunction.
The Infrastructure Sustainability Question
The sustainability of AI-dependent software infrastructure extends beyond code quality to the physical systems that power AI workloads. American data centres used 4.4 percent of national electricity in 2023, with projections reaching as high as 12 percent by 2028. Rack power densities have doubled to 17 kilowatts, and cooling demands could reach 275 billion litres of water annually. Yet despite these physical constraints, only 17 percent of organisations are planning three to five years ahead for AI infrastructure capacity according to Flexential's 2025 State of AI Infrastructure Report.
The year brought sobering reminders of infrastructure fragility. Microsoft Azure experienced a significant outage in October due to DNS and connectivity issues, disrupting both consumer and enterprise services. Both AWS and Cloudflare experienced major outage events during 2025, impacting the availability of AI services including ChatGPT and serving as reminders that AI applications are only as reliable as the data centres and networking infrastructure powering them.
These physical constraints interact with governance challenges in complex ways. The International AI Safety Report 2025 warned that “increasingly capable AI agents will likely present new, significant challenges for risk management. Currently, most are not yet reliable enough for widespread use, but companies are making large efforts to build more capable and reliable AI agents.” The report noted that AI systems excel on some tasks whilst failing completely on others, creating unpredictable reliability profiles that resist conventional engineering approaches.
Talent gaps compound these challenges. Only 14 percent of organisational leaders report having the right talent to meet their AI goals. Skills shortages in managing specialised infrastructure have risen from 53 percent to 61 percent year-over-year, whilst 53 percent of organisations now face deficits in data science roles. Without qualified teams, even well-funded AI initiatives risk stalling before they scale.
Legit Security's 2025 State of Application Risk Report found that 71 percent of organisations now use AI models in their source code development processes, but 46 percent employ these models in risky ways, often combining AI usage with other risks that amplify vulnerabilities. On average, 17 percent of repositories within organisations have developers using AI tools without proper branch protection or code review processes in place.
The Governance Imperative
The governance landscape for AI agents remains fragmented despite standardisation efforts. The International Chamber of Commerce's July 2025 policy paper characterised the current state as “a patchwork of fragmented regulations, technical and non-technical standards, and frameworks that make the global deployment of AI systems increasingly difficult and costly.” Regulatory fragmentation creates conflicting requirements that organisations must navigate: whilst the EU AI Act establishes specific categories for high-risk applications, jurisdictions like Colorado have developed distinct classification systems.
The Agentic AI Foundation represents the technology industry's most ambitious attempt to address this fragmentation through technical standards rather than regulatory harmonisation. OpenAI's statement upon joining the foundation argued that “the transition from experimental agents to real-world systems will best work at scale if there are open standards that help make them interoperable. Open standards make agents safer, easier to build, and more portable across tools and platforms, and help prevent the ecosystem from fragmenting as this new category matures.”
Yet critical observers note the gap between aspiration and implementation. Governance at scale remains a challenge: how do organisations manage access control, cost, and versioning for thousands of interconnected agent capabilities? The MCP ecosystem now spans thousands of servers covering developer tools, productivity suites, and specialised services. Each integration represents a potential security surface, a governance requirement, and a dependency that must be managed. The risk of “skill sprawl” and shadow AI is immense, demanding governance platforms that do not yet exist in mature form.
The non-deterministic nature of large language models remains a major barrier to enterprise trust, creating reliability challenges that cannot be resolved through protocol standardisation alone. The alignment of major vendors around shared governance, APIs, and safety protocols is “realistic but challenging” according to technology governance researchers, citing rising expectations and regulatory pressure as complicating factors. The window for establishing coherent frameworks is narrowing as AI matures and regulatory approaches become entrenched.
Competing Visions of the Agentic Future
The tensions between standardisation, competition, and capability are producing divergent visions of how agentic AI will evolve. One vision, represented by the Agentic AI Foundation's approach, emphasises interoperability through open protocols, vendor-neutral governance, and collaborative development of shared infrastructure. Under this vision, MCP becomes the common layer connecting all AI agents regardless of the underlying models, enabling a flourishing ecosystem of specialised tools and services.
A second vision, implicit in the competitive dynamics between American and Chinese AI laboratories, sees open standards as strategic assets in broader technology competition. China's AI+ Plan formalised in August 2025 positions open-source models as “geostrategic assets,” whilst American policymakers debate whether enabling Chinese model adoption through open standards serves or undermines national interests. Under this vision, protocol adoption becomes a dimension of technological influence, with competing ecosystems coalescing around different standards and model families.
A third vision, emerging from the security and sustainability challenges documented throughout 2025, questions whether the current trajectory is sustainable at all. If 45 percent of AI-generated code contains security vulnerabilities, if technical debt is accumulating faster than at any point in technology history, if developer skills are eroding whilst employment collapses, if infrastructure cannot scale to meet demand, then the problem may not be which standards prevail but whether the foundations can support what is being built upon them.
These visions are not mutually exclusive. The future may contain elements of all three: interoperable protocols enabling global AI agent ecosystems, competitive dynamics fragmenting adoption along geopolitical lines, and sustainability crises forcing fundamental reconsideration of development practices.
What Comes Next
Projecting the trajectory of AI agent standardisation requires acknowledging the limits of prediction. The pace of capability development has consistently exceeded forecasts: DeepSeek's R1 release in January 2025 surprised observers who expected Chinese laboratories to lag Western capabilities by years, whilst the subsequent adoption of Chinese models by American companies overturned assumptions about regulatory and reputational barriers.
Several dynamics appear likely to shape the next phase. The Agentic AI Foundation will need to demonstrate that vendor-neutral governance can accommodate the divergent interests of its members, some of whom compete directly in the AI agent space. Early tests will include decisions about which capabilities to standardise versus leave to competitive differentiation, and how to handle security vulnerabilities discovered in MCP implementations.
The relationship between MCP and A2A will require resolution. Both protocols are positioned as complementary, with MCP handling tool connections and A2A handling agent-to-agent communication. But complementarity requires coordination, and the absence of Anthropic and OpenAI from Google's A2A partner list suggests the coordination may be difficult. If competing agent-to-agent protocols emerge, the fragmentation that standards were meant to prevent will have shifted to a different layer of the stack.
Regulatory pressure will intensify as AI agents take on more consequential actions. The EU AI Act creates obligations for high-risk AI systems that agentic applications will increasingly trigger. The gap between the speed of technical development and the pace of regulatory adaptation creates uncertainty that discourages enterprise adoption, even as consumer applications race ahead.
The vibe coding problem will not resolve itself. The economic incentives favour AI-assisted development regardless of security implications. Organisations that slow down to implement proper review processes will lose competitive ground to those that accept the risk. Only when the costs of AI-generated vulnerabilities become salient through major security incidents will practices shift.
Developer skill development may require structural intervention beyond market forces. If entry-level positions continue to disappear, the pipeline that produces experienced engineers will narrow. Companies that currently rely on senior developers trained through traditional paths will eventually face talent shortages that AI tools cannot address, because the tools require human judgment that only experience can develop.
The Stakes of Getting It Right
The convergence of AI agent standardisation, geopolitical technology competition, and developer practice erosion represents a pivotal moment for software infrastructure. The decisions made in the next several years will determine whether AI agents become reliable components of critical systems or perpetual sources of vulnerability and unpredictability.
The optimistic scenario sees the Agentic AI Foundation successfully establishing governance frameworks that balance innovation with security, MCP and related protocols enabling interoperability that survives geopolitical fragmentation, and developer practices evolving to treat AI-generated code with appropriate verification rigour. Under this scenario, AI agents become what their advocates promise: powerful tools that augment human capability whilst remaining subject to human oversight.
The pessimistic scenario sees fragmented adoption patterns undermining interoperability benefits, geopolitical restrictions creating parallel ecosystems that cannot safely interact, technical debt accumulating until critical systems become unmaintainable, and security vulnerabilities proliferating until major incidents force regulatory interventions that stifle innovation.
The most likely outcome lies somewhere between these extremes. Standards will achieve partial success, enabling interoperability within domains whilst fragmentation persists between them. Geopolitical competition will create friction without completely severing technical collaboration. Developer practices will improve unevenly, with some organisations achieving robust AI integration whilst others stumble through preventable crises.
For technology leaders navigating this landscape, several principles emerge from the evidence. Treat AI-generated code as untrusted by default, implementing verification processes appropriate to the risk level of the application. Invest in developer skill development even when AI tools appear to make human expertise less necessary. Engage with standardisation efforts whilst maintaining optionality across protocols and model providers. Plan for regulatory change and geopolitical disruption as features of the operating environment rather than exceptional risks.
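What treating AI-generated code as untrusted can look like in practice is a merge gate that combines automated scanning with a named human reviewer. The sketch below is one possible shape for such a gate, assuming the Bandit static analyser is installed and that the surrounding CI system supplies pull-request metadata through environment variables; the variable names and the policy itself are illustrative, not a standard.

```python
# Illustrative CI gate for AI-assisted changes. Assumes Bandit is installed
# (pip install bandit) and that the CI system passes PR metadata via env vars
# named PR_LABELS and PR_REVIEWERS (hypothetical names for this sketch).
import os
import subprocess
import sys

def main() -> int:
    ai_assisted = "ai-generated" in os.environ.get("PR_LABELS", "")
    reviewers = [r for r in os.environ.get("PR_REVIEWERS", "").split(",") if r]

    # Static analysis first: Bandit exits non-zero when it finds issues.
    scan = subprocess.run(["bandit", "-r", "src"])
    if scan.returncode != 0:
        print("blocked: static analysis found potential security issues")
        return 1

    # AI-assisted changes additionally require at least one named human reviewer.
    if ai_assisted and not reviewers:
        print("blocked: AI-assisted change has no human reviewer recorded")
        return 1

    print("checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```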
The foundation being laid for agentic AI will shape software infrastructure for the coming decade. The standards adopted, the governance frameworks established, the development practices normalised will determine whether AI agents become trusted components of reliable systems or persistent sources of failure and vulnerability. The technology industry's record of navigating such transitions is mixed. This time, the stakes are considerably higher.
References
Linux Foundation. “Linux Foundation Announces the Formation of the Agentic AI Foundation (AAIF).” December 2025. https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation
Anthropic. “Donating the Model Context Protocol and establishing the Agentic AI Foundation.” December 2025. https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
Model Context Protocol. “One Year of MCP: November 2025 Spec Release.” November 2025. https://blog.modelcontextprotocol.io/posts/2025-11-25-first-mcp-anniversary/
GitHub Blog. “MCP joins the Linux Foundation.” December 2025. https://github.blog/open-source/maintainers/mcp-joins-the-linux-foundation-what-this-means-for-developers-building-the-next-era-of-ai-tools-and-agents/
Block. “Block Open Source Introduces codename goose.” January 2025. https://block.xyz/inside/block-open-source-introduces-codename-goose
OpenAI. “OpenAI co-founds the Agentic AI Foundation under the Linux Foundation.” December 2025. https://openai.com/index/agentic-ai-foundation/
AGENTS.md. “Official Site.” https://agents.md
Google Developers Blog. “Announcing the Agent2Agent Protocol (A2A).” April 2025. https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/
ChinaTalk. “China AI in 2025 Wrapped.” December 2025. https://www.chinatalk.media/p/china-ai-in-2025-wrapped
NBC News. “More of Silicon Valley is building on free Chinese AI.” October 2025. https://www.nbcnews.com/tech/innovation/silicon-valley-building-free-chinese-ai-rcna242430
Dataconomy. “Alibaba's Qwen3 Surpasses Llama As Top Open-source Model.” December 2025. https://dataconomy.com/2025/12/15/alibabas-qwen3-surpasses-llama-as-top-open-source-model/
DEV Community. “Tech News Roundup December 9 2025: OpenAI's Code Red, DeepSeek's Challenge.” December 2025. https://dev.to/krlz/tech-news-roundup-december-9-2025-openais-code-red-deepseeks-challenge-and-the-320b-ai-590j
Council on Foreign Relations. “The Consequences of Exporting Nvidia's H200 Chips to China.” December 2025. https://www.cfr.org/expert-brief/consequences-exporting-nvidias-h200-chips-china
Council on Foreign Relations. “China's AI Chip Deficit: Why Huawei Can't Catch Nvidia.” 2025. https://www.cfr.org/article/chinas-ai-chip-deficit-why-huawei-cant-catch-nvidia-and-us-export-controls-should-remain
Veracode. “2025 GenAI Code Security Report.” 2025. https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/
Lawfare. “When the Vibes Are Off: The Security Risks of AI-Generated Code.” 2025. https://www.lawfaremedia.org/article/when-the-vibe-are-off--the-security-risks-of-ai-generated-code
Stack Overflow. “AI vs Gen Z: How AI has changed the career pathway for junior developers.” December 2025. https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/
METR. “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” July 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
InfoQ. “AI-Generated Code Creates New Wave of Technical Debt.” November 2025. https://www.infoq.com/news/2025/11/ai-code-technical-debt/
Flexential. “State of AI Infrastructure Report 2025.” 2025. https://www.flexential.com/resources/report/2025-state-ai-infrastructure
International AI Safety Report. “International AI Safety Report 2025.” 2025. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2025
Legit Security. “2025 State of Application Risk Report.” 2025. https://www.legitsecurity.com/blog/understanding-ai-risk-in-software-development
International Chamber of Commerce. “ICC Policy Paper: AI governance and standards.” July 2025. https://iccwbo.org/wp-content/uploads/sites/3/2025/07/2025-ICC-Policy-Paper-AI-governance-and-standards.pdf
TechPolicy.Press. “Closing the Gaps in AI Interoperability.” 2025. https://www.techpolicy.press/closing-the-gaps-in-ai-interoperability/
Block. “Securing the Model Context Protocol.” goose Blog. March 2025. https://block.github.io/goose/blog/2025/03/31/securing-mcp/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk