War Games, Neural Networks: How MIT Is Rethinking Military Leadership

In December 2025, MIT announced a programme that would have seemed implausible even a decade earlier: a two-year master's degree designed to teach naval officers the fundamentals of artificial intelligence, machine learning, and autonomous systems. The programme, designated 2N6, pairs the university's Department of Mechanical Engineering with its Department of Electrical Engineering and Computer Science, awarding graduates both a Master of Science in mechanical engineering and an AI certificate from the MIT Schwarzman College of Computing. It is, in essence, a bet that the future of naval warfare will be shaped not by those who build the biggest ships, but by those who best understand the algorithms directing them.

The timing is no coincidence. In January 2026, the Department of Defense released its Artificial Intelligence Acceleration Strategy, declaring its intention to become an “AI-first” organisation. Under Secretary of Defense for Research and Engineering Emil Michael had already pruned the Pentagon's list of critical technology areas from fourteen to six, placing applied artificial intelligence at the very top. And at U.S. Indo-Pacific Command, where the prospect of conflict with a technologically sophisticated adversary concentrates minds with particular intensity, its commander, Admiral Samuel Paparo, had been arguing for months that future wars would be won not by superior firepower alone, but by whoever could “see, understand, decide and act faster.” The question was no longer whether the military needed AI-literate officers, but how quickly it could produce them.

The origins of 2N6 trace back to a campus visit by Paparo himself. The admiral toured MIT's existing AI research facilities and immediately recognised a gap. The university had run the 2N Naval Construction and Engineering programme since 1901, training generations of officers in ship design and acquisition; the programme was due to mark its 125th anniversary in 2026. But the world had changed. The defining technologies of 21st-century naval power were no longer hull forms and propulsion systems alone; they were neural networks, reinforcement learning, and autonomous underwater vehicles. Paparo envisioned an applied AI programme modelled on the existing 2N infrastructure, and within months, 2N6 began taking shape.

Commander Christopher MacLean, MIT associate professor of the practice of naval construction and engineering in the Department of Mechanical Engineering, has been central to the programme's development. MacLean, himself a graduate of the 2N programme whose thesis focused on the fracture and plasticity characterisation of DH-36 Navy steel, explained that Paparo “was given an overview of some of the cutting-edge work and research that MIT has done and is doing in the field of AI” and “made the connection, envisioning an applied AI program similar to 2N.” In describing the programme's scope, MacLean was emphatic about breadth: “AI is a force multiplier that can be used for data processing, decision support, unmanned and autonomous systems, cyber defence, logistics and supply chains, energy management, and many other fields.” This is not a programme narrowly focused on weapons systems or battlefield robots; it treats artificial intelligence as a pervasive capability touching every aspect of naval operations.

Dan Huttenlocher, the inaugural Dean of the MIT Schwarzman College of Computing, lent institutional weight to the announcement. “I'm honoured that the college can contribute to and support such a vital program that will equip our nation's naval officers with the technical expertise they need,” Huttenlocher stated. His involvement signals the seriousness of MIT's commitment: Huttenlocher, who was the founding dean of Cornell Tech and co-authored “The Age of AI: And Our Human Future” with Henry Kissinger and Eric Schmidt, brings both academic credibility and a deep engagement with the societal implications of artificial intelligence.

A Curriculum Built for the Contested Spectrum

The 2N6 curriculum reflects a deliberate attempt to balance theoretical depth with operational relevance, structured to satisfy the U.S. Navy's sub-specialty code for Applied Artificial Intelligence. Students begin with a “Summer Camp” of foundational courses covering linear algebra and optimisation, introductory programming, discrete mathematics and proofs, algorithms and data structures, and software fundamentals. These are not optional polish; they are prerequisites designed to ensure that officers arriving from operational billets, where they may have spent years commanding ships or submarines rather than writing code, have the mathematical and computational fluency to engage with what follows.

The core of the programme divides into several tracks. The probability, inference, and machine learning sequence includes courses in stochastic dynamical systems, introduction to probability, introduction to inference, and both introductory and advanced machine learning. These build toward specialised AI topics: advances in computer vision, topics in multi-agent learning, quantitative methods for natural language processing, optimisation methods, and a course titled “AI, Decision Making and Society.” That final course is significant. It signals that 2N6 does not treat artificial intelligence as a purely technical problem but as one embedded in social, political, and ethical contexts that military leaders must navigate with the same rigour they apply to technical challenges.

The naval applications track offers four areas of concentration, each designed to connect AI theory to operational reality. In autonomy, students study unmanned marine vehicle autonomy, sensing and communications, manoeuvring and control of surface and underwater vehicles, and principles of autonomy and decision making. In design and manufacturing, the focus turns to AI and machine learning for design, principles of naval ship design, and manufacturing processes and systems. A games and strategy track covers reinforcement learning combined with game theory and wargaming, preparing officers for the adversarial dynamics of actual conflict. And an innovation track provides team-based interdisciplinary collaboration, simulating the cross-functional problem-solving that AI deployment demands in practice.

Themis Sapsis, the William I. Koch Professor in mechanical engineering and Director of the Center for Ocean Engineering at MIT, has described the programme as “specifically designed to train naval officers on the fundamentals and applications of AI, but also involve them in research that has direct impact to the Navy.” Sapsis, who holds a diploma in naval architecture and marine engineering from the Technical University of Athens and a PhD in mechanical and ocean engineering from MIT, brings direct domain expertise to the programme. His own research spans nonlinear dynamical systems, probabilistic modelling, and data-driven methods, with applications ranging from predicting catastrophic sea waves to calculating extreme loads on warships. His work has been recognised with awards from the Office of Naval Research, the Army Research Office, and the Air Force Office of Scientific Research. “2N6 can model a new paradigm for advanced AI education focused more broadly on supporting national security,” Sapsis has emphasised, positioning the programme not merely as a naval initiative but as a potential template for defence AI education writ large.

John Hart, Head of MIT's Department of Mechanical Engineering, framed the programme in generational terms: “With the 2N6 program, we're proud to be at the helm of such an important charge in training the next generation of leaders for the Navy.” Asu Ozdaglar, Deputy Dean of the Schwarzman College of Computing, similarly described the partnership as “an important collaboration with the U.S. Navy” that reflects the college's broader mission to bring computing expertise to consequential domains.

The Technical Competencies That Matter

The specific competencies the programme prioritises reveal much about where the U.S. Navy believes its AI gaps are most acute. Autonomous systems sit at the top of the list, and for good reason. Admiral Paparo has been explicit about wanting large numbers of low-cost, long-endurance unmanned sensor platforms, including drones, robot ships, and autonomous underwater vehicles, to maintain persistent surveillance across the Indo-Pacific. With Chinese wargames growing ever larger and more realistic, Paparo has argued that traditional intelligence “indications and warning” can no longer reliably distinguish between exercises and actual invasion preparations. His proposed solution: surveillance drones feeding AI analysis to detect anomalies and patterns more quickly and accurately than human analysts could manage alone.
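The underlying technique is unsupervised anomaly detection: learn what routine activity looks like, then flag departures from it for human review. A minimal sketch of the idea, using scikit-learn's IsolationForest on invented vessel-track features (speed, heading change, group size are stand-ins; nothing here reflects actual INDOPACOM tooling):

```python
# Illustrative sketch only: flagging anomalous vessel behaviour with an
# unsupervised model. Feature names and all data are invented stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "routine exercise" tracks: [speed_knots, heading_change_deg, group_size]
routine = rng.normal(loc=[12.0, 5.0, 4.0], scale=[3.0, 2.0, 1.5], size=(1000, 3))

# A handful of tracks that break the historical pattern
unusual = rng.normal(loc=[25.0, 40.0, 12.0], scale=[2.0, 5.0, 2.0], size=(10, 3))
tracks = np.vstack([routine, unusual])

# Fit on historical behaviour; flag observations outside the learned envelope
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(routine)

flags = model.predict(tracks)   # +1 = consistent with history, -1 = anomalous
print(f"Flagged {np.sum(flags == -1)} of {len(tracks)} tracks for analyst review")
```

The design choice worth noting is in the last line: the model's output is a queue for analysts, not a trigger for action.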

“We never send a human being to do something that a machine can do,” Paparo has stated. “We never lose human agency over offensive power.” The tension between those two principles captures the central challenge of military autonomy: expanding the envelope of machine capability whilst maintaining meaningful human control. Graduates of 2N6 will be expected to design and manage systems that operate in this tension, understanding both the engineering of autonomy and the doctrinal requirements for human oversight.

Cyber defence represents another critical domain. The ability to protect AI systems themselves from adversarial manipulation, data poisoning, and model exploitation is becoming as important as the AI capabilities those systems provide. An AI-enabled fleet that can be fooled by adversarial inputs or compromised through supply chain attacks on its training data becomes a liability rather than an advantage. The curriculum's emphasis on algorithms, data structures, and software fundamentals is not merely academic preparation; it provides the conceptual toolkit for understanding how AI systems can be attacked and defended. MIT Lincoln Laboratory's Embedded and Open Systems Group has been developing AI research environments specifically to evaluate promising embedded AI technologies and their impact on critical defence missions, from advanced multimodal navigation to synthetic aperture radar object detection.
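One element of that toolkit can be made concrete with the canonical evasion technique: a gradient-guided perturbation that is small in input space but large in effect. The toy linear model and numbers below are invented; real attacks apply the same fast-gradient-sign idea to deep networks:

```python
# Minimal sketch of a fast-gradient-sign (FGSM-style) evasion attack on a toy
# linear classifier. Weights, inputs, and the threat framing are all invented.
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" classifier (fixed toy weights) scoring an input as threat / no threat
w = np.array([1.5, -2.0, 0.8])
b = 0.1

x = np.array([0.4, 0.9, -0.2])   # a benign input the model classifies correctly
y = 0.0                          # true label: not a threat

p_clean = sigmoid(w @ x + b)     # ~0.22, correctly below the 0.5 boundary

# For this model, the cross-entropy loss gradient w.r.t. the input is (p - y) * w
grad_x = (p_clean - y) * w

# FGSM step: nudge every feature in the direction that hurts the model most
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)   # ~0.71, now across the decision boundary
print(f"clean score: {p_clean:.3f}  adversarial score: {p_adv:.3f}")
```

Defences such as adversarial training and input sanitisation exist, but understanding the attack at this level of detail is precisely the fluency the curriculum's algorithms and software courses are meant to build.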

Decision intelligence, the application of AI to command-and-control processes, constitutes perhaps the most consequential area. At U.S. Indo-Pacific Command, AI is already being pursued to accelerate the decision cycle and provide predictive analysis for logistics. Colonel Jared Voneida, INDOPACOM's C4 Operations Division chief, has noted that AI is being pursued to speed up the decision cycle across every warfighting function. The concept of “decision superiority,” which Paparo has defined as understanding “who is making the best decisions, who is best able to see, understand, decide, act, learn and assess,” depends on officers who can critically evaluate AI-generated recommendations rather than simply accepting them. This requires not just technical literacy but a sophisticated understanding of where AI excels, where it fails, and how to design human-machine teaming arrangements that exploit strengths whilst compensating for weaknesses.
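A common engineering pattern for such human-machine teaming is a confidence- and consequence-gated router: the model proposes, but defined classes of decision always escalate to a human. The sketch below is hypothetical (the thresholds, types, and scenarios are invented), but it captures the doctrinal principle in code:

```python
# Hypothetical human-machine teaming pattern: the model recommends, while
# low-confidence or high-consequence calls route to a human decision-maker.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float   # model's self-reported confidence, 0..1
    lethal: bool        # does the action involve offensive power?

def route(rec: Recommendation, confidence_floor: float = 0.9) -> str:
    """Decide whether a recommendation may execute without a human in the loop."""
    if rec.lethal:
        # Paparo's principle: never lose human agency over offensive power.
        return "HUMAN_REQUIRED"
    if rec.confidence < confidence_floor:
        return "HUMAN_REVIEW"
    return "AUTO_APPROVED"

for rec in [
    Recommendation("reroute resupply convoy", confidence=0.97, lethal=False),
    Recommendation("reposition sensor drone", confidence=0.71, lethal=False),
    Recommendation("engage surface contact", confidence=0.99, lethal=True),
]:
    print(f"{route(rec):15s} {rec.action}")
```

Note that the lethal-action branch ignores confidence entirely: no score, however high, buys autonomy over offensive power.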

Machine learning for manufacturing and design rounds out the technical portfolio. Naval shipbuilding remains an enormously complex industrial undertaking, and AI-driven design optimisation, predictive maintenance, and manufacturing process control offer significant potential for reducing costs and timelines. MIT Lincoln Laboratory has already demonstrated systems like COVAS (Human-Machine Collaborative Optimisation via Apprenticeship Scheduling), which uses machine learning to provide real-time ship defence scheduling solutions by learning from human experts. Its developers describe COVAS as the first algorithm to provide such real-time solutions, and they plan to mature the technology before proposing it as a Future Naval Capability to the Office of Naval Research. Maintenance operations across INDOPACOM are also being transformed through AI-enabled predictive systems that analyse sensor data from shipboard systems and aircraft components to identify potential failures before they become critical. Graduates of 2N6 would be expected to evaluate, integrate, and manage such systems across the fleet.
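In its simplest form, predictive maintenance reduces to trend estimation over sensor telemetry: fit a degradation curve, extrapolate to a failure threshold, and schedule the part swap before the crossing. A toy sketch with invented vibration data and an invented condemnation threshold:

```python
# Illustrative predictive-maintenance sketch: estimate remaining useful life
# by extrapolating a fitted wear trend. All sensor values are synthetic.
import numpy as np

rng = np.random.default_rng(7)

hours = np.arange(0, 500, 10.0)                                    # operating hours
vibration = 2.0 + 0.004 * hours + rng.normal(0, 0.1, hours.size)   # slow wear + noise

FAILURE_THRESHOLD = 4.5   # vibration level at which the bearing is condemned

# Least-squares linear fit to the degradation trend
slope, intercept = np.polyfit(hours, vibration, 1)

# Hours at which the fitted trend crosses the threshold, minus hours already run
remaining = (FAILURE_THRESHOLD - intercept) / slope - hours[-1]
print(f"Estimated remaining useful life: {remaining:.0f} operating hours")
```

Production systems replace the linear fit with learned models over many sensor channels, but the operational logic, predict the crossing and act before it, is the same.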

Ethics, Governance, and the Responsible AI Question

Perhaps the most consequential element of the 2N6 curriculum is one that might easily be overlooked: the mandatory inclusion of coursework in the social and ethical responsibilities of computing. This is not a token addition. The MIT Schwarzman College of Computing operates SERC (Social and Ethical Responsibilities of Computing), a cross-cutting initiative led by associate deans Nikos Trichakis and Brian Hedden. SERC develops peer-reviewed case studies, active learning projects, and pedagogical materials addressing privacy and surveillance, inequality and justice, autonomous systems and robotics, ethical computing practice, and law and policy. Its materials are based on original research, published through open-access licensing, and designed for integration across MIT's computing curriculum. Naval officers in 2N6 will encounter these frameworks not as a separate ethics module bolted onto a technical degree, but as an integral dimension of their AI education.

This integration matters because the Department of Defense has its own ethical framework that graduates will be expected to operationalise. The DoD adopted five principles for the ethical development of AI capabilities: responsible, equitable, traceable, reliable, and governable. The Responsible AI Strategy and Implementation Pathway translates these principles into concrete requirements, promoting human-machine teaming rather than fully autonomous systems and requiring that AI technologies be integrated in a lawful, ethical, and accountable manner. The DoD's Responsible AI Toolkit builds on the Defense Innovation Unit's guidelines, NIST's AI Risk Management Framework, and IEEE 7000-2021, establishing standards for operationalising ethical principles throughout the technology lifecycle. The Defense Innovation Unit launched its strategic initiative in March 2020 specifically to implement ethical principles into commercial prototyping and acquisition programmes, ensuring alignment through a process designed to be reliable, replicable, and scalable.

The question of traceability deserves particular attention. Traceability, in the DoD's formulation, means the ability to track and document all data and decisions of an AI tool, including how it was trained and how it processes information. For officers deploying AI in operational contexts, this creates obligations that are simultaneously technical (implementing logging, auditing, and explainability mechanisms) and organisational (ensuring that chains of command can meaningfully review AI-informed decisions). The programme's emphasis on algorithms, inference, and decision-making provides the technical foundation for understanding traceability, whilst the ethics coursework provides the normative framework for why it matters.
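In code, a traceability mechanism can be surprisingly mundane. The hypothetical audit-record sketch below (field names and the hashing scheme are invented, not a DoD specification) shows the minimum linkage the principle demands: which model version, which inputs, which output, and which accountable human:

```python
# Hypothetical traceability record: every AI-informed decision is logged with
# enough context for later review. Field names are illustrative, not doctrine.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, model_version: str,
                 inputs: dict, output: str, operator: str) -> dict:
    """Build an append-only audit entry linking inputs, model, and decision."""
    input_blob = json.dumps(inputs, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,        # which weights produced this output
        "input_hash": hashlib.sha256(input_blob).hexdigest(),  # tamper-evident
        "output": output,
        "reviewing_officer": operator,         # the accountable human in the chain
    }

entry = audit_record(
    model_id="track-classifier",
    model_version="2026.01.3",
    inputs={"track_id": 4417, "speed_knots": 28.4, "heading": 310},
    output="classified: fast surface combatant",
    operator="LT J. Rivera",
)
print(json.dumps(entry, indent=2))
```

The hard part is not the record itself but the organisational machinery around it: retention, review authority, and the explainability tooling that lets a chain of command interrogate why the model produced the output it logged.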

Yet genuine tensions remain. The DoD's ethical principles exist alongside a policy environment that has shifted significantly. President Biden's Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI, issued in October 2023, established foundational requirements for AI safety across federal agencies. That order was revoked in January 2025, and the subsequent AI Action Plan focuses less on safe development and more on acceleration. The DoD's own ethical principles remain formally in place, but the broader political context creates ambiguity about how rigorously they will be enforced. As Paparo himself has put it: “we need robust, ethical AI systems that enhance decision-making while fiercely preserving human oversight of critical operations.” Officers trained at MIT will enter a system where stated principles and operational incentives may not always align, making their ability to navigate ethical complexity all the more important.

The Dual-Use Dilemma

The technologies that 2N6 graduates will master are, almost without exception, dual-use. The same computer vision algorithms that identify military targets can diagnose medical conditions. The same natural language processing techniques that analyse intercepted communications can power consumer chatbots. The same reinforcement learning methods that optimise military logistics can manage commercial supply chains. This fundamental characteristic of AI technology, that its military and civilian applications are often indistinguishable at the algorithmic level, creates governance challenges that no single curriculum can resolve.

Research indexed in PubMed Central has documented what scholars term the “double-distinguishability problem” of AI: not only is AI software with potential military applications likely to reside in both military and civilian networks, but even within the military domain, distinguishing between platforms that integrate AI and those that do not is extremely difficult. This complicates arms control, export regulation, and confidence-building measures. The degree of transparency required to build international confidence or ensure compliance with agreements may itself produce security vulnerabilities, discouraging cooperation.

The inherent opacity of many advanced machine learning systems compounds the problem. Despite strong performance in testing environments, the underlying reasoning of deep neural networks remains largely opaque. This “black box” quality compromises the human oversight required to uphold legal and ethical standards in military operations, particularly when AI decisions are made in milliseconds. Legal regimes must clarify fault attribution, determining whether responsibility falls on the commanding officer, the system developer, the algorithm designer, or the deploying state. What constitutes “meaningful human control” remains ambiguous and case-dependent, with a recent analysis noting that a human can technically interact with an autonomous system without having any substantive moral, legal, or operational oversight.

The United Nations Office for Disarmament Affairs convened the Military AI, Peace and Security Dialogues in 2025, where participants emphasised retaining human judgement and control over decisions on the use of force. They cautioned that legal determinations should not be coded into opaque systems, and that decision-making support tools should enable, not replace, legality and ethical reasoning. The U.S. State Department's Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy established broader norms, requiring that military AI use comply with international humanitarian law, that accountability be maintained through a responsible human chain of command, and that states take proactive steps to minimise unintended bias.

For MIT's 2N6 graduates, this dual-use reality means that their technical skills will be applicable across domains, but their ethical and governance training will need to be specifically calibrated for military contexts where the consequences of error are measured in lives rather than revenue. The programme's integration of game theory, wargaming, and reinforcement learning acknowledges that military AI operates in adversarial environments where rational actors are actively trying to exploit, deceive, or defeat the systems being deployed.

The Global AI Arms Race in Uniform

MIT's 2N6 programme does not exist in a vacuum. It is one move in an accelerating international competition to build AI-literate military forces, and the landscape of that competition reveals starkly different approaches to the same underlying challenge.

China represents the most direct competitive pressure. The People's Liberation Army views AI as leading to the next revolution in military affairs and expects to field a range of “algorithmic warfare” and “network-centric warfare” capabilities by 2030. Georgetown University's Center for Security and Emerging Technology has identified 370 Chinese institutions whose researchers have published papers related to general artificial intelligence. The PLA's approach relies heavily on military-civil fusion, integrating universities and commercial technology companies directly into defence research and development. A majority of suppliers for AI-related PLA procurement contracts are now civilian companies and universities rather than traditional state-owned defence enterprises.

Chinese researchers at institutions linked to the PLA's Academy of Military Science used Meta's open-source Llama 2 13B model to build “ChatBIT,” a military-focused AI tool fine-tuned and “optimised for dialogue and question-answering tasks in the military field.” The PLA rapidly adopted DeepSeek's generative AI models in early 2025, likely deploying them for intelligence purposes. The Pentagon's 2024 China report noted that “China's commercial and academic AI sectors made progress on large language models and LLM-based reasoning models, which has narrowed the performance gap between China's models and the U.S. models currently leading the field.” China's emerging 15th Five-Year Plan framework is expected to institutionalise military-civil fusion as the primary pathway for achieving what Chinese strategists call an “intelligentised” PLA by 2035.

Russia has pursued a different trajectory, constrained by sanctions and a smaller technology sector. The National Strategy for the Development of Artificial Intelligence, signed by President Putin in 2019, set targets of training 15,500 AI specialists by 2030 and allocated 26.49 billion rubles to AI development from 2025 to 2027. Russia aims to automate 30 percent of its military equipment and has begun integrating AI into systems like the ZALA Lancet drone swarm, which reportedly allows drones to exchange information and divide tasks autonomously. However, senior Russian military experts, including Vladimir Prikhvatilov of the Academy of Military Science, have acknowledged that Russia has “virtually no chances to catch up with the Chinese or the Americans” in military AI. The war in Ukraine has both accelerated urgency and exposed the gap between Russia's AI rhetoric and its actual capabilities, with international sanctions further constraining access to advanced computing hardware.

The United Kingdom offers a more direct parallel to MIT's approach. The UK Ministry of Defence published its Defence Artificial Intelligence Strategy describing an “ambitious, safe, responsible” approach to military AI. The Alan Turing Institute, as a strategic partner of the Defence Science and Technology Laboratory (Dstl), conducts defence-relevant AI research and has published frameworks for AI assurance in military contexts, including a commander's guide for uncrewed systems and recommendations for iteratively identifying, documenting, and communicating risks. A January 2025 Defence Committee report called on the Ministry of Defence to “transform itself into an 'AI-native' organisation” whilst acknowledging that the sector remained under-developed. Sub-committee chair Emma Lewell-Buck emphasised the need to make AI “a greater part of military education” and to facilitate movement between civilian and defence AI sectors, a recommendation that echoes precisely the gap MIT's 2N6 programme is designed to fill.

Israel has arguably moved furthest in operational deployment. The IDF established the Artificial Intelligence and Big Data Research Centre, created a new AI Division within its C4I and Cyber Defence Directorate following lessons from the Israel-Hamas War, and in January 2025, the Israeli Ministry of Defence established the AI and Autonomy Administration. Eyal Zamir, the Ministry's director general, emphasised that this was the first new administration established within the Ministry in over two decades. Approximately 750 military reservists were enrolled in AI training programmes organised by Israel's Innovation Authority and the Ministry of Defence in January 2026, reflecting a recognition that AI literacy cannot be confined to active-duty specialists. The IDF's model of recruiting talented high school graduates into elite technology units like Unit 8200, training them intensively through programmes like the 36-month Havatzalot Programme at Hebrew University, and then cycling them into the civilian technology sector creates a distinctive pipeline that no other nation has fully replicated.

Reshaping the Defence Workforce

The emergence of programmes like 2N6 points toward a fundamental recomposition of what militaries expect from their officer corps. The traditional career path, in which technical specialists remained in engineering billets whilst operational commanders focused on tactics and leadership, is giving way to a model that demands hybrid competency. Officers who will command AI-enabled forces need enough technical understanding to evaluate what their systems can and cannot do, enough ethical grounding to make responsible deployment decisions, and enough strategic vision to understand how AI reshapes the character of conflict.

The Naval Postgraduate School in Monterey, California, announced its own accelerated one-year Master of Science in Artificial Intelligence in late 2025, set to commence in July 2026. The programme comprises 21 courses, requires residency in Monterey, and is open to active-duty military officers, DoD civilian employees, and allied officers with computer science backgrounds. An NPS AI initiative launched in early 2025 established three lines of effort: AI education, problem-solving, and technology infrastructure, with industry partners including NVIDIA supporting cutting-edge education and applied research. Meanwhile, NPS also offers a distance-learning AI certificate comprising four courses, designed for military professionals without technical backgrounds, recognising that even non-specialist officers need baseline AI literacy.

Emil Michael declared that “the Department of War must become an 'AI-First' organisation,” and the January 2026 AI Acceleration Strategy codified this vision through four broad aims: incentivising internal experimentation with AI models, eliminating bureaucratic obstacles, focusing military investment on asymmetric advantages, and initiating Pace-Setting Projects. Cameron Stanley, previously chief of the DoD Algorithmic Warfare Cross Functional Team (formerly known as Project Maven) and a former national security transformation lead for Amazon Web Services, was appointed to lead the Applied Artificial Intelligence critical technology area.

These developments suggest a future in which AI literacy becomes a prerequisite for advancement rather than a specialist qualification. Just as nuclear propulsion reshaped the U.S. Navy's officer corps in the 1950s and 1960s, creating a cadre of nuclear-trained officers led by Admiral Hyman Rickover whose influence extended far beyond the engineering department, AI may create a similar dynamic. Officers who understand machine learning, autonomous systems, and decision intelligence will increasingly populate senior leadership positions, bringing with them assumptions, methodologies, and risk tolerances shaped by their technical training.

The implications extend well beyond the United States. As the UK Defence Committee recognised, military AI development requires not just technical infrastructure but a transformed workforce. The challenge is particularly acute for smaller nations that cannot replicate MIT's resources or the NPS's scale. International partnerships, joint training programmes, and standardised AI competency frameworks may emerge as mechanisms for distributing AI literacy across allied military forces. The 2N6 programme already anticipates this: whilst the first cohort will comprise only U.S. Navy officers, plans exist to expand to other military branches, allied officers, and civilian participants. The U.S. State Department's Political Declaration provides one potential foundation for allied cooperation, establishing shared expectations around accountability, human oversight, bias minimisation, and senior official involvement in AI deployment decisions.

The Academic-Military Compact

MIT's decision to launch 2N6 also illuminates the evolving relationship between universities and defence establishments. This is not new territory for MIT. The university founded Lincoln Laboratory in 1951, which has since developed advanced technologies for national security across domains including air and missile defence, undersea systems, embedded AI, and cyber security. Lincoln Laboratory hosts the annual RAAINS (Recent Advances in AI for National Security) Workshop, showcasing state-of-the-art national security AI applications, and the ANCHOR (Advancing Naval Capabilities through Holistic Opportunity and Research) Technology Workshop, which provides an open forum for discussing requirements of U.S. Naval Special Warfare Command. The Schwarzman College of Computing, established with a one-billion-dollar commitment, explicitly aims to address the opportunities and challenges of pervasive computing and the rise of AI across all fields of study.

Yet the partnership is not without tension. Huttenlocher's co-authorship of “The Age of AI” reflects the kind of broad civilisational thinking about artificial intelligence that academic freedom enables. The college's SERC initiative explicitly addresses privacy and surveillance, inequality and justice, and autonomous systems, topics that inevitably create friction when applied to military contexts. Academic freedom, open publication, and ethical inquiry sit uncomfortably alongside classification requirements, operational security, and institutional loyalty. How MIT navigates these tensions within 2N6 will offer a template, or a cautionary tale, for other universities considering similar partnerships.

The broader trend is unmistakable. Universities globally are recognising that AI for national security represents both a significant funding stream and a consequential research domain. The question is whether academic institutions can engage with military applications whilst maintaining the independence and ethical rigour that give their contributions value. If 2N6 becomes merely a credential-minting operation, it will fail both MIT and the Navy. If it genuinely produces officers capable of critical, ethical, technically informed thinking about AI in military contexts, it could influence how democracies approach the integration of artificial intelligence into their most consequential institutions.

What Comes Next

The 2N6 programme will run as a pilot for at least two years. Its success will ultimately be measured not by the grades its graduates earn but by whether they can bridge the gap between what AI can do in a laboratory and what it should do in the field.

Admiral Paparo's vision of decision superiority, of forces that can see, understand, decide, and act faster than any adversary, depends on officers who are not merely consumers of AI capability but informed, critical, and ethically grounded practitioners. MIT's 2N6 programme represents the most ambitious academic attempt to produce such officers. Whether it succeeds will depend on factors far beyond the curriculum: on institutional support within the Navy, on career incentives that reward AI competency, on the political will to enforce ethical principles even when they slow deployment, and on the willingness of military culture to embrace a fundamentally different kind of expertise.

The 2N programme celebrates its 125th year at MIT in 2026. If 2N6 proves its worth, the university may find itself at the centre of military education for another century, this time training officers not to design ships, but to think alongside the machines that will increasingly operate them.

References and Sources

  1. MIT News. “New MIT program to train military leaders for the AI age.” 12 December 2025. https://news.mit.edu/2025/applied-ai-program-train-military-leaders-ai-age-1212

  2. MIT 2N6 Programme. “Curriculum.” https://2n6.mit.edu/curriculum/

  3. MIT Lincoln Laboratory. “Artificial intelligence system helps Navy select the best tactics for ship defense.” https://www.ll.mit.edu/news/artificial-intelligence-system-helps-navy-select-best-tactics-ship-defense

  4. MIT Schwarzman College of Computing. “Social and Ethical Responsibilities of Computing (SERC).” https://computing.mit.edu/cross-cutting/social-and-ethical-responsibilities-of-computing/

  5. U.S. Department of Defense. “DOD Adopts 5 Principles of Artificial Intelligence Ethics.” https://www.war.gov/News/News-Stories/article/article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/

  6. U.S. Department of Defense. “Responsible AI Strategy and Implementation Pathway.” October 2024. https://media.defense.gov/2024/Oct/26/2003571790/-1/-1/0/2024-06-RAI-STRATEGY-IMPLEMENTATION-PATHWAY.PDF

  7. Defense Innovation Unit. “Responsible AI Guidelines.” https://www.diu.mil/responsible-ai-guidelines

  8. U.S. Department of State. “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” https://2021-2025.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/

  9. Breaking Defense. “'Constant stare': US Pacific commander wants AI to tell Chinese military exercises from invasion.” February 2024. https://breakingdefense.com/2024/02/constant-stare-us-pacific-commander-wants-ai-to-tell-chinese-military-exercises-from-invasion/

  10. AFCEA International. “AI Will Affect Every Warfighting Function in Indo-Pacific Command.” https://www.afcea.org/signal-media/ai-will-affect-every-warfighting-function-indo-pacific-command

  11. DefenseScoop. “Naval Postgraduate School offering new accelerated master's degree program in AI.” 22 December 2025. https://defensescoop.com/2025/12/22/nps-ai-masters-degree-program-naval-postgraduate-school/

  12. Breaking Defense. “From lasers to logistics: Pentagon CTO announces top six tech priorities.” November 2025. https://breakingdefense.com/2025/11/from-lasers-to-logistics-pentagon-cto-announces-top-six-tech-priorities/

  13. DefenseScoop. “Pentagon names 6 appointees to lead the CTO's top technology efforts.” January 2026. https://defensescoop.com/2026/01/30/dod-cto-critical-technology-areas-emil-michael-cta-appointees/

  14. Georgetown CSET. “China's Military AI Wish List.” https://cset.georgetown.edu/publication/chinas-military-ai-wish-list/

  15. Recorded Future. “China's PLA Leverages Generative AI for Military Intelligence.” https://www.recordedfuture.com/research/artificial-eyes-generative-ai-chinas-military-intelligence

  16. DefenseScoop. “New Pentagon report on China's military notes Beijing's progress on LLMs.” 26 December 2025. https://defensescoop.com/2025/12/26/dod-report-china-military-and-security-developments-prc-ai-llm/

  17. CNBC. “Chinese researchers develop AI model for military use on the back of Meta's Llama.” 1 November 2024. https://www.cnbc.com/2024/11/01/chinese-researchers-build-ai-model-for-military-use-on-back-of-metas-llama.html

  18. The Diplomat. “How China's Coming 15th Five-Year Plan Will Reshape Military Innovation.” October 2025. https://thediplomat.com/2025/10/how-chinas-coming-15th-five-year-plan-will-reshape-military-innovation/

  19. Jamestown Foundation. “Russia Capitalizes on Development of Artificial Intelligence in Its Military Strategy.” https://jamestown.org/russia-capitalizes-on-development-of-artificial-intelligence-in-its-military-strategy/

  20. UK Government. “Defence Artificial Intelligence Strategy.” https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy/

  21. UK Parliament Defence Committee. “Developing AI capacity and expertise in UK defence.” January 2025. https://committees.parliament.uk/publications/46217/documents/231330/default/

  22. Defense News. “Israel creates hub to hasten military AI, autonomy research.” 2 January 2025. https://www.defensenews.com/global/mideast-africa/2025/01/02/israel-creates-hub-to-hasten-military-ai-autonomy-research/

  23. United Nations Office for Disarmament Affairs. “Key Takeaways of The Military AI, Peace and Security Dialogues 2025.” https://disarmament.unoda.org/en/updates/key-takeaways-military-ai-peace-security-dialogues-2025

  24. PMC/PubMed Central. “Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D.” https://pmc.ncbi.nlm.nih.gov/articles/PMC8904348/

  25. Nextgov/FCW. “DOD's AI acceleration strategy.” February 2026. https://www.nextgov.com/ideas/2026/02/dods-ai-acceleration-strategy/411135/


Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
