AI That Understands Physics: Why We Cannot Understand It Back

In November 2025, Yann LeCun walked into Mark Zuckerberg's office and told his boss he was leaving. After twelve years building Meta's AI research operation into one of the most respected in the world, the Turing Award winner had decided that the entire industry was heading in the wrong direction. Four months later, his new venture, Advanced Machine Intelligence Labs, announced the largest seed round in European startup history: $1.03 billion to build AI systems that do not merely predict the next word in a sentence, but understand how physical reality actually works.

The money is staggering. The ambition is larger. And the question it raises is one that should unsettle anyone paying attention: if we succeed in building machines that can model the physical world with superhuman fidelity, will we have any idea what those machines actually know?

Welcome to the age of world models, where the gap between what AI understands and what we understand about AI threatens to become the defining tension of the next decade.

A Turing Winner's Trillion-Dollar Heresy

LeCun has never been shy about his contrarian streak. Even whilst serving as Meta's chief AI scientist, he publicly and repeatedly argued that the industry's obsession with large language models was fundamentally misguided. “Scaling them up will not allow us to reach AGI,” he has said, a position that put him at odds with the prevailing orthodoxy at OpenAI, at Google, and, increasingly, within his own company. His departure, first confirmed in a December 2025 LinkedIn post, was not merely a career move. It was a declaration of intellectual war.

AMI Labs, headquartered in Paris with additional offices in New York, Montreal, and Singapore, is built around a deceptively simple thesis: real intelligence does not begin in language. It begins in the world. The company's technical foundation is LeCun's Joint Embedding Predictive Architecture, or JEPA, a framework he first proposed in a 2022 position paper titled “A Path Towards Autonomous Machine Intelligence.” Where large language models like ChatGPT, Claude, and Gemini learn by predicting the next token in a sequence of text, JEPA learns by predicting abstract representations of sensory data. It does not try to reconstruct every pixel or predict every word. Instead, it learns to capture the structural, meaningful patterns that govern how environments behave and change over time.

The distinction matters enormously. LeCun has used the example of video prediction to illustrate the point: trying to forecast every pixel of a future video frame is computationally ruinous, because the world is full of chaotic, unpredictable details like flickering leaves, shifting shadows, and textured surfaces. A generative model wastes enormous capacity modelling this noise. JEPA sidesteps the problem entirely by operating in an abstract embedding space, focusing on the low-entropy, structural aspects of a scene rather than its surface-level chaos.
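
A back-of-envelope comparison makes the asymmetry concrete. The sketch below is illustrative only: the frame resolution and embedding width are assumptions, not figures from any JEPA paper.

```python
# Rough size of the prediction target in pixel space versus embedding space.
# Resolution and embedding width are illustrative assumptions.
pixels_per_frame = 3 * 1080 * 1920   # RGB values in one HD frame: 6,220,800
embedding_dim = 1024                 # an assumed abstract scene representation

print(pixels_per_frame // embedding_dim)  # roughly 6,000x fewer values to predict
# The deeper saving is in entropy, not count: most pixel values are
# irreducible noise, whereas the embedding keeps only predictable structure.
```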

The $1.03 billion seed round, which values AMI at $3.5 billion pre-money, drew an extraordinary roster of backers. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Additional investors include NVIDIA, Temasek, Samsung, Toyota Ventures, and Bpifrance, alongside individuals such as Jeff Bezos, Mark Cuban, and Eric Schmidt. LeCun initially sought approximately 500 million euros, according to a leaked pitch deck reported by Sifted. Demand far exceeded that figure.

Day-to-day operations are led by Alexandre LeBrun, the French entrepreneur who previously founded and ran Nabla, a medical AI startup. The leadership roster also includes Saining Xie, formerly of Google DeepMind, as chief science officer; Pascale Fung as chief research and innovation officer; Michael Rabbat as VP of world models; and Laurent Solly, Meta's former VP for Europe, as chief operating officer. LeCun himself serves as executive chairman whilst maintaining his professorship at New York University.

LeBrun has been candid about the timeline. “AMI Labs is a very ambitious project, because it starts with fundamental research,” he has said. “It's not your typical applied AI startup that can release a product in three months.” Within three to five years, LeCun has stated, the goal is to produce “fairly universal intelligent systems” capable of deployment across virtually any domain requiring machine intelligence. The initial commercial targets include healthcare, robotics, wearables, and industrial automation.

What World Models Actually Are (and Why They Change Everything)

To grasp why a billion dollars is flowing into world models, you need to understand what they are and why the current generation of AI systems falls short. A world model, in its simplest formulation, is an AI system designed to understand and predict how the physical world works. Gravity, motion, cause and effect, spatial relationships, object permanence: these are the kinds of knowledge that a world model attempts to internalise, not through explicit programming, but through learning from vast quantities of sensory data.

This is not an entirely new idea. The concept of internal models of reality has deep roots in cognitive science, where researchers have long argued that human intelligence depends on our brain's ability to simulate possible futures before we act. When you reach for a glass of water, you do not consciously calculate trajectories and grip forces. Your brain runs a rapid internal simulation, predicting what will happen and adjusting on the fly. World models attempt to give machines a similar capability.

Google DeepMind CEO Demis Hassabis, the 2024 Nobel laureate in Chemistry, has articulated the problem with current approaches in characteristically vivid terms. At the India AI Impact Summit in February 2026, he described today's AI systems as possessing “jagged intelligence,” explaining: “Today's systems can get gold medals in the International Maths Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths if you pose the question in a certain way. A true general intelligence system shouldn't have that kind of jaggedness.” Large language models, Hassabis has argued, are ultimately sophisticated probability predictors. They do not genuinely understand the physical laws of the real world.

Fei-Fei Li, the Stanford professor often described as the “godmother of AI” for her foundational work on ImageNet, has put it even more bluntly. LLMs, she has said, are like “wordsmiths in the dark,” possessing elaborate linguistic ability but lacking spatial intelligence and physical experience. Her own company, World Labs, released its Marble world model in November 2025, capable of generating entire 3D worlds from a text prompt, image, video, or rough layout. World Labs is now reportedly in discussions at a $5 billion valuation after raising $230 million in funding.

The broader landscape is moving rapidly. Google DeepMind launched Genie 3, the first real-time interactive world model capable of generating navigable 3D environments at 24 frames per second, maintaining strict object permanence and consistent physics without a separate memory module. NVIDIA's Cosmos platform, announced at CES 2025 and trained on 9,000 trillion tokens drawn from 20 million hours of real-world data, has surpassed 2 million downloads. Waymo has built its autonomous vehicle world model on top of Genie 3, using it to train self-driving cars in simulated environments. Reports indicate that OpenAI triggered a “code red” response to Genie 3's capabilities, accelerating efforts to add spatial understanding to GPT-5.

Over $1.3 billion in funding flowed into world model startups in early 2026 alone. This is not a niche research interest. It is rapidly becoming the central front in the race towards more capable AI.

The Architecture of Understanding

AMI Labs' approach differs from its competitors in important ways. Where World Labs focuses on generating photorealistic 3D environments and DeepMind's Genie 3 emphasises interactive simulation, JEPA is fundamentally about learning representations rather than generating outputs.

The architecture works through a deceptively elegant mechanism. JEPA takes a pair of related inputs, such as consecutive video frames or adjacent image patches, and encodes each into an abstract representation using separate encoder networks. A predictor module then attempts to forecast the representation of the “target” input from the representation of the “context” input. Crucially, this prediction happens entirely in abstract embedding space, never at the level of raw pixels or tokens.

This creates what amounts to a learned physics engine. The system develops an internal model of how things relate to one another and how they change over time, without being burdened by the task of reconstructing surface-level details. An optional latent variable, often denoted as z, allows the model to account for inherent uncertainty, representing different hypothetical scenarios for aspects of the target that the context alone cannot determine.
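
To make the mechanism concrete, here is a minimal JEPA-style training step sketched in PyTorch. Everything in it is an assumption for illustration: the encoder and predictor architectures, the embedding and latent sizes, and the detached target branch (a common anti-collapse choice in the published JEPA variants) are stand-ins, not AMI Labs' actual design.

```python
# Minimal JEPA-style sketch: predict the target's embedding from the
# context's embedding, with the loss computed entirely in embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, Z_DIM = 256, 16

class Encoder(nn.Module):
    """Maps a raw input (e.g. an image patch) to an abstract embedding."""
    def __init__(self, in_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, EMB_DIM),
        )

    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Forecasts the target embedding from the context embedding, conditioned
    on a latent z representing what the context alone cannot determine."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM + Z_DIM, 512), nn.ReLU(),
            nn.Linear(512, EMB_DIM),
        )

    def forward(self, ctx_emb, z):
        return self.net(torch.cat([ctx_emb, z], dim=-1))

context_encoder, target_encoder, predictor = Encoder(), Encoder(), Predictor()

context = torch.randn(8, 3, 32, 32)  # e.g. visible patches, or frame t
target = torch.randn(8, 3, 32, 32)   # e.g. masked patches, or frame t+1
z = torch.randn(8, Z_DIM)            # latent accounting for uncertainty

s_ctx = context_encoder(context)
with torch.no_grad():                # target branch detached: a common choice
    s_tgt = target_encoder(target)   # to stop both encoders collapsing to a
                                     # constant that makes prediction trivial

loss = F.mse_loss(predictor(s_ctx, z), s_tgt)  # no pixels are reconstructed
loss.backward()
```

Note the design choice the sketch encodes: the loss never touches raw pixels, so the encoders are free to discard exactly the chaotic surface detail that generative models are forced to model.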

Several variants already exist. I-JEPA learns by predicting representations of image regions from other regions, developing abstract understanding of visual scenes without explicit labels. V-JEPA extends this to video, predicting missing or masked parts of video sequences in representation space, pre-trained entirely with unlabelled data. VL-JEPA adds vision-language capability, predicting continuous embeddings of target texts rather than generating tokens autoregressively, achieving stronger performance with 50 per cent fewer trainable parameters.
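
For a sense of how the image variant frames the task, here is a hedged sketch of I-JEPA-style masking, showing which parts of an input play “context” and which play “target” before the prediction step above. The patch count, patch size, and masking ratio are assumptions.

```python
# I-JEPA-style partitioning of an image into context and target patches.
# Patch count, patch size, and masking ratio are illustrative assumptions.
import torch

patches = torch.randn(64, 3, 32, 32)  # an image split into 64 patches
mask = torch.rand(64) < 0.25          # hide roughly a quarter of them

context_patches = patches[~mask]      # visible patches feed the context encoder
target_patches = patches[mask]        # masked patches feed the target encoder
# The predictor then forecasts the target patches' embeddings from the
# context embeddings; at no point are the hidden pixels reconstructed.
```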

The promise is tantalising. An AI system built on JEPA principles could, in theory, develop the kind of intuitive physical understanding that enables a child to predict that pushing a table will move the book sitting on it. It could reason about cause and effect, plan actions in the physical world, and adapt to novel situations without the brittleness that characterises current systems.

But there is a catch. And it is a significant one.

The Understanding Gap Widens

Here is the paradox at the heart of the world models revolution: the better these systems become at understanding physical reality, the harder they become for us to understand. We are constructing machines designed to build rich internal representations of how the world works, and we have strikingly little ability to inspect, interpret, or verify what those representations actually contain.

This is not a new problem, but world models threaten to make it dramatically worse. The interpretability challenges that plague current large language models are already formidable. Mechanistic interpretability, the effort to reverse-engineer neural networks into human-understandable components, has been recognised by MIT Technology Review as a “breakthrough technology for 2026.” Yet the field remains at what researchers describe as a critical inflection point, with genuine progress coexisting alongside fundamental barriers.

The core difficulty is what researchers call superposition. Because there are more features that a neural network needs to represent than there are dimensions available to represent them, the network compresses information in ways that produce polysemantic neurons, individual units that contribute to multiple, semantically distinct features. Understanding what a network “knows” requires disentangling this compressed representation, and the dominant tool for doing so, sparse autoencoders, faces serious unsolved problems. Reconstruction error remains stubbornly high, with 10 to 40 per cent performance degradation. Features split and absorb in unpredictable ways. And the results depend heavily on the specific dataset used.
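
A minimal sparse autoencoder makes the approach, and its fragility, easier to see. The sketch below is illustrative: the activation width, dictionary expansion factor, and L1 coefficient are assumptions, not any lab's published configuration.

```python
# Minimal sparse autoencoder for disentangling superposed features:
# an overcomplete dictionary with an L1 penalty encouraging sparse codes.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, act_dim=768, dict_size=768 * 8, l1_coeff=1e-3):
        super().__init__()
        self.encoder = nn.Linear(act_dim, dict_size)  # overcomplete dictionary
        self.decoder = nn.Linear(dict_size, act_dim)
        self.l1_coeff = l1_coeff

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # sparse feature codes
        reconstruction = self.decoder(features)
        recon_loss = (reconstruction - activations).pow(2).mean()
        sparsity = self.l1_coeff * features.abs().sum(dim=-1).mean()
        return features, recon_loss + sparsity

sae = SparseAutoencoder()
acts = torch.randn(32, 768)   # stand-in for activations captured from a model
features, loss = sae(acts)    # each active feature is a candidate "concept"
loss.backward()
```

The tension lives in the two loss terms: the sparsity penalty trades directly against reconstruction fidelity, which is one reason reconstruction error remains stubbornly high.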

Anthropic, the AI safety company, has made mechanistic interpretability a central focus, extracting interpretable features from its Claude 3 Sonnet model using sparse autoencoders and publishing results showing features related to deception, sycophancy, bias, and dangerous content. Their attribution graphs, released in March 2025, can successfully trace computational paths for roughly 25 per cent of prompts. For the remaining 75 per cent, the computational pathways remain opaque.

A 2025 paper published at the International Conference on Learning Representations proved that many circuit-finding queries in neural networks are NP-hard, fixed-parameter intractable, and inapproximable under standard computational assumptions. In plain language: for many of the questions we most urgently need to answer about what neural networks are doing, there may be no efficient algorithm that can provide the answer.

Now consider what happens when you move from language models to world models. JEPA operates in abstract embedding spaces that are, by design, removed from human-interpretable inputs and outputs. A language model at least traffics in words, which we can read. A world model's internal representations are abstract mathematical objects encoding relationships between physical phenomena. The interpretability challenge is not merely scaled up. It is qualitatively different.

The field is split on how to respond. Anthropic has set the ambitious goal of being able to “reliably detect most AI model problems by 2027.” Google DeepMind, meanwhile, has pivoted away from sparse autoencoders towards what it calls “pragmatic interpretability,” an acknowledgement that full mechanistic understanding of frontier models may be neither achievable nor necessary. Corti, a Danish AI company, has developed GIM (Gradient Interaction Modifications), a gradient-based method that has topped the Hugging Face Mechanistic Interpretability Benchmark, offering improved accuracy for identifying which components in a model are responsible for specific behaviours. But even these advances represent incremental progress against an exponentially growing challenge.

When Physics Engines Dream

The practical implications of AI systems that can simulate physical reality extend far beyond academic curiosity. Consider the domains AMI Labs is targeting: healthcare, robotics, wearables, and industrial automation. In each of these fields, the consequences of AI misunderstanding the physical world range from costly to catastrophic.

AMI Labs has already established a partnership with Nabla, the healthtech company LeBrun previously founded, providing a direct conduit to the healthcare sector. In medicine, the hallucinations that plague large language models are not merely embarrassing; they can be lethal. A world model that genuinely understands human physiology, drug interactions, and disease progression could revolutionise clinical decision-making. But the opacity of that understanding creates a novel kind of risk: a system that is right for reasons nobody can articulate, or wrong for reasons nobody can detect.

In robotics, world models promise to solve one of the field's most persistent bottlenecks. Training robots in the physical world is slow, expensive, and dangerous. World models enable training in simulation, where a robot can experience millions of scenarios in hours rather than years. NVIDIA's Cosmos platform already allows autonomous vehicle and robotics developers to synthesise rare, dangerous edge-case conditions that would be prohibitively risky to create in reality. But the fidelity of the simulation depends entirely on the accuracy of the world model, and verifying that accuracy requires understanding what the model has learned, which brings us back to the interpretability gap.

The autonomous vehicle industry illustrates the stakes with particular clarity. Waymo's decision to build its world model on Google DeepMind's Genie 3 represents a bet that AI-generated simulations can adequately capture the chaotic complexity of real-world driving. The potential benefits are enormous: safer vehicles, faster development cycles, dramatically reduced testing costs. The potential risks are equally significant. If the world model contains subtle errors in its understanding of physics (the way light refracts in rain, the friction coefficient of wet roads, the behaviour of pedestrians at unmarked crossings) those errors will be systematically baked into every vehicle trained on the simulation.

Governing What We Cannot See

The regulatory landscape is struggling to keep pace with these developments. The European Union's AI Act, the world's most comprehensive legal framework for artificial intelligence, entered into force in August 2024 and will be fully applicable by August 2026. Its risk-based classification system imposes graduated obligations based on potential harm, with penalties reaching up to 35 million euros or 7 per cent of global annual turnover for the most serious violations.

But the AI Act was designed primarily with current AI systems in mind. Its requirements for high-risk systems, including documented risk management, robust data governance, detailed technical documentation, automatic logging, human oversight, and safeguards for accuracy and robustness, assume a level of inspectability that world models may not provide. How do you document the risk management of a system whose internal representations of physical reality are abstract mathematical objects that resist human interpretation? How do you ensure “human oversight” of a physics simulation running in an embedding space that no human can directly perceive?

The European Council, on 13 March 2026, agreed a position to streamline rules on artificial intelligence, whilst the Commission's Digital Omnibus package, submitted in November 2025, proposed adjusting the timeline for high-risk system obligations. But these adjustments are largely procedural. The fundamental question of how to regulate AI systems whose internal workings are opaque to their creators remains unaddressed.

At the broader international level, the AI Impact Summit 2026 in New Delhi produced a Leaders' Declaration recognising that “AI's promise is best realised only when its benefits are shared by humanity.” The International Institute for Management Development's AI Safety Clock, which began at 29 minutes to midnight in September 2024, now stands at 18 minutes to midnight as of March 2026, reflecting growing expert concern about the pace of AI development relative to safety measures.

In the United States, the NIST AI Risk Management Framework and ISO/IEC 42001 provide voluntary guidelines, but nothing approaching the binding force of the EU's approach. China's own regulatory framework focuses on algorithmic transparency and content generation, but similarly lacks specific provisions for world models. The result is a patchwork of rules designed for yesterday's AI, applied to tomorrow's.

Voices From Both Sides of the Divide

The debate over world models and their implications has produced sharp divisions amongst the people who understand these systems best.

LeCun himself has been consistently dismissive of existential risk concerns. He has called discussion of AI-driven existential catastrophe “premature,” “preposterous,” and “complete B.S.,” arguing that superintelligent machines will have no inherent desire for self-preservation and that AI can be made safe through continuous, iterative refinement. His position is that the path to safety runs through open science and open source, not through restriction and secrecy. Staying true to this philosophy, AMI Labs has committed to publishing its research and releasing substantial code as open source. “We will also make a lot of code open source,” LeBrun has confirmed.

Geoffrey Hinton, who shared the 2018 Turing Award with LeCun and Yoshua Bengio for their contributions to deep learning, occupies the opposite pole. The researcher often described as the “Godfather of AI” has warned that advanced AI will become “much smarter than us” and render controls ineffective. At the Ai4 conference in 2025, Hinton proposed a “mother AI” concept to safeguard against potential AI takeover scenarios. Their public disagreements have become one of the defining intellectual conflicts in the field.

The broader expert community is similarly divided. Roman Yampolskiy, a computer scientist at the University of Louisville known for his work on AI safety, estimates a 99 per cent chance of an AI-caused existential catastrophe. LeCun places that probability at effectively zero. A survey of AI experts published in early 2025 found that many researchers, while highly skilled in machine learning, have limited exposure to core AI safety concepts, and that those least familiar with safety research are also the least concerned about catastrophic risk.

AGI timeline estimates vary wildly. Elon Musk has predicted AGI by 2026. Dario Amodei, CEO of Anthropic, has suggested 2026 or 2027. NVIDIA CEO Jensen Huang places the date at 2029. LeCun himself has argued it will take several more decades for machines to exceed human intelligence. Gary Marcus, the cognitive scientist and persistent AI sceptic, has suggested the timeline could be 10 or even 100 years.

What is notable about the world models debate is that it cuts across these existing fault lines. You do not need to believe in imminent superintelligence to be concerned about the understanding gap. A world model does not need to be superintelligent to be dangerous if it is deployed in high-stakes domains whilst remaining fundamentally opaque. The risk is not necessarily that AI becomes too smart. It is that AI becomes smart enough to matter in ways we cannot verify.

Reading the Black Box, Through a Glass Darkly

The technical community has not been idle in the face of these challenges. New architectures and methods are emerging that offer at least partial responses to the interpretability crisis.

Kolmogorov-Arnold Networks, or KANs, represent a fundamentally different neural network architecture, one that decomposes higher-dimensional functions into sums and compositions of one-dimensional functions, increasing interpretability and allowing scientists to identify important features, reveal modular structures, and discover symbolic formulae in scientific data. However, their interpretability diminishes as network size increases, presenting a familiar scalability challenge: the very systems we most need to understand are the ones that resist understanding most stubbornly.
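
The mathematical result behind KANs helps explain the interpretability claim. The Kolmogorov-Arnold representation theorem guarantees that any continuous function on a bounded n-dimensional domain can be written using only sums and one-dimensional functions:

```latex
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

KANs make the one-dimensional functions \Phi_q and \phi_{q,p} learnable, typically as splines, so each edge of the network carries a curve a human can plot and inspect, and in favourable cases read off as a symbolic formula.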

The collaborative paper published in January 2025 by 29 researchers across 18 organisations established the field's consensus open problems for mechanistic interpretability. Core concepts like “feature” still lack rigorous mathematical definitions. Computational complexity results prove that many interpretability queries are intractable. And practical methods continue to underperform simple baselines on safety-relevant tasks.

There is also the question of whether full interpretability is even the right goal. Some researchers argue for a more pragmatic approach: rather than trying to understand everything a model knows, develop reliable methods for detecting when a model is likely to fail. This is the philosophy behind DeepMind's pivot to pragmatic interpretability and behind Hassabis's proposed “Einstein test” for AGI, which asks whether an AI system trained on all human knowledge up to 1911 could independently discover general relativity. If it cannot, Hassabis argues, it remains “a very sophisticated pattern matcher” regardless of its other capabilities.

LeCun, characteristically, sees the problem differently. He has argued that the architecture itself is the solution: by designing systems that learn structured, abstract representations rather than opaque statistical correlations, world models could ultimately be more interpretable than language models, not less. JEPA's operation in abstract embedding space is, in his view, a feature rather than a bug, because those embeddings encode the meaningful structural relationships that humans also rely on to understand the world, even if the format is different.

This is an optimistic reading. Whether it proves correct will depend on research that has not yet been conducted, using methods that have not yet been invented, applied to systems that have not yet been built. In the meantime, the money is flowing, the labs are hiring, and the world models are being trained.

Europe's Unlikely Gambit

There is a geopolitical dimension to this story that deserves attention. LeCun has stated that there “is certainly a huge demand from the industry and governments for a credible frontier AI company that is neither Chinese nor American.” AMI Labs, with its Paris headquarters and European seed record, is positioning itself to fill that void.

The timing is deliberate. The EU's AI Continent Action Plan, published in April 2025, aims to make Europe a global leader in AI whilst safeguarding democratic values. France's state investment bank Bpifrance is amongst AMI's backers. The company's open research commitment aligns with European regulatory philosophy, which emphasises transparency and accountability in ways that closed American labs like OpenAI and Anthropic have been criticised for resisting.

But Europe's track record in turning fundamental research into commercially dominant technology is, to put it diplomatically, mixed. AMI Labs' $1.03 billion seed round is enormous, but it pales beside the tens of billions flowing into American and Chinese AI labs. LeBrun has acknowledged the challenge, noting that AMI will prioritise quality over quantity in building its team across its four global locations. The question is whether a commitment to open science and European values can coexist with the scale of resources needed to compete at the frontier.

The largest seed round ever, raised by the American firm Thinking Machines Lab in June 2025 at $2 billion, provides a sobering comparison: AMI's record-setting European round is only the second-largest worldwide. The world models race is global, and capital alone will not determine the winner. But capital certainly helps.

Sleepwalking With Eyes Open

So, are we sleepwalking into a future where AI understands the world better than we do, without us understanding the AI? The honest answer is: we might be, but not in the way the question implies.

The framing of “sleepwalking” suggests unawareness, but the striking thing about the current moment is how many people are aware of the problem and how few solutions are available. The researchers building world models know that interpretability is an unsolved challenge. The regulators drafting AI governance frameworks know that their rules were designed for a different generation of technology. The investors writing billion-dollar cheques know that the commercial applications are years away and the fundamental research questions remain open.

The danger is not ignorance. It is a collective decision to proceed despite uncertainty, driven by competitive pressure, scientific ambition, and the genuine potential of these systems to solve real problems. When LeCun talks about world models revolutionising healthcare by eliminating the hallucinations that make LLMs dangerous in clinical settings, he is not wrong about the potential. When Hassabis describes the need for AI that can reason about physics rather than merely predicting word probabilities, he is identifying a real limitation of current systems. When Fei-Fei Li argues for spatial intelligence as the next frontier, she is pointing towards capabilities that could transform robotics, manufacturing, and scientific discovery.

But potential is not proof. And the understanding gap, the asymmetry between AI's growing capacity to model reality and our limited capacity to model the AI, is real and widening. Every billion dollars invested in making world models more capable should, in principle, be matched by investment in making them more transparent. The evidence suggests that ratio is nowhere close to balanced.

The world models era is not something that is coming. It is here. AMI Labs' billion-dollar bet, backed by some of the most sophisticated investors and researchers on the planet, is one data point amongst many. The question is not whether machines will learn to simulate physical reality. It is whether we will develop the tools to understand what they have learned before the consequences of not understanding become irreversible.

LeCun has said that within three to five years, AMI aims to produce “fairly universal intelligent systems.” The AI Safety Clock stands at 18 minutes to midnight. And the gap between what AI can model and what humans can comprehend about those models grows wider with every training run.

We are not sleepwalking. We are walking with our eyes open, into a future whose shape we can see but whose details remain, for now, profoundly and perhaps permanently, beyond our ability to fully perceive.

References

  1. TechCrunch, “Yann LeCun's AMI Labs raises $1.03B to build world models,” 9 March 2026. https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/

  2. TechCrunch, “Who's behind AMI Labs, Yann LeCun's 'world model' startup,” 23 January 2026. https://techcrunch.com/2026/01/23/whos-behind-ami-labs-yann-lecuns-world-model-startup/

  3. MIT Technology Review, “Yann LeCun's new venture is a contrarian bet against large language models,” 22 January 2026. https://www.technologyreview.com/2026/01/22/1131661/yann-lecuns-new-venture-ami-labs/

  4. Sifted, “Yann LeCun's AMI Labs raises $1bn in Europe's biggest seed round,” March 2026. https://sifted.eu/articles/yann-lecun-ami-labs-meta-funding-round-nvidia

  5. Crunchbase News, “Turing Winner LeCun's New 'World Model' AI Lab Raises $1B In Europe's Largest Seed Round Ever,” March 2026. https://news.crunchbase.com/venture/world-model-ai-lab-ami-raises-europes-largest-seed-round/

  6. TechCrunch, “Yann LeCun confirms his new 'world model' startup, reportedly seeks $5B+ valuation,” 19 December 2025. https://techcrunch.com/2025/12/19/yann-lecun-confirms-his-new-world-model-startup-reportedly-seeks-5b-valuation/

  7. Meta AI Blog, “V-JEPA: The next step toward advanced machine intelligence,” 2024. https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/

  8. Meta AI Blog, “I-JEPA: The first AI model based on Yann LeCun's vision for more human-like AI,” 2023. https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/

  9. Introl, “World Models Race 2026: How LeCun, DeepMind, and others compete,” 2026. https://introl.com/blog/world-models-race-agi-2026

  10. News9live, “India AI Impact Summit 2026: DeepMind CEO Demis Hassabis says current AI still 'Jagged' and learning,” February 2026. https://www.news9live.com/technology/artificial-intelligence/india-ai-summit-2026-deepmind-hassabis-ai-jagged-learning-2932470

  11. Storyboard18, “Demis Hassabis says AGI not here yet, calls current AI 'jagged intelligence,'” 2026. https://www.storyboard18.com/brand-makers/google-deepmind-ceo-says-agi-not-here-yet-calls-current-ai-jagged-intelligence-90028.htm

  12. European Commission, “AI Act: Shaping Europe's digital future,” 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  13. European Council, “Council agrees position to streamline rules on Artificial Intelligence,” 13 March 2026. https://www.consilium.europa.eu/en/press/press-releases/2026/03/13/council-agrees-position-to-streamline-rules-on-artificial-intelligence/

  14. TIME, “Meta's AI Chief Yann LeCun on AGI, Open-Source, and AI Risk,” 2024. https://time.com/6694432/yann-lecun-meta-ai-interview/

  15. WebProNews, “Yann LeCun and Geoffrey Hinton Clash on AI Safety in 2025,” 2025. https://www.webpronews.com/yann-lecun-and-geoffrey-hinton-clash-on-ai-safety-in-2025/

  16. arXiv, “Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts,” February 2025. https://arxiv.org/html/2502.14870v1

  17. Transformer Circuits, “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet,” 2024. https://transformer-circuits.pub/2024/scaling-monosemanticity/

  18. Springer Nature, “Recent Emerging Techniques in Explainable Artificial Intelligence,” 2025. https://link.springer.com/article/10.1007/s11063-025-11732-2

  19. Futurum Group, “Yann LeCun's AMI Raises $1BN Seed Round – Is the World Model Era Finally Here?” March 2026. https://futurumgroup.com/insights/yann-lecuns-ami-raises-1bn-seed-round-is-the-world-model-era-finally-here/

  20. The Next Web, “Yann LeCun just raised $1bn to prove the AI industry has got it wrong,” March 2026. https://thenextweb.com/news/yann-lecun-ami-labs-world-models-billion

  21. Corti, “Corti introduces GIM: Benchmark-leading method for understanding AI model behavior,” 2025. https://www.corti.ai/stories/gim-a-new-standard-for-mechanistic-interpretability

  22. PhysOrg, “Kolmogorov-Arnold networks bridge AI and scientific discovery by increasing interpretability,” December 2025. https://phys.org/news/2025-12-kolmogorov-arnold-networks-bridge-ai.html

  23. Sombrainc, “An Ultimate Guide to AI Regulations and Governance in 2026,” 2026. https://sombrainc.com/blog/ai-regulations-2026-eu-ai-act

  24. Zaruko, “The Einstein Test: Why AGI Is Not Around the Corner,” 2026. https://zaruko.com/insights/the-einstein-test


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
