When 20 Watts Beats 20 Megawatts: Rethinking Computer Design

The human brain is an astonishing paradox. It consumes roughly 20 watts of power, about the same as a dim light bulb, yet by common estimates it performs the equivalent of an exaflop, a billion billion operations per second. To put that in perspective, when Oak Ridge National Laboratory's Frontier supercomputer achieves the same computational feat, it draws around 20 megawatts, a million times more power. Your brain is, in effect, a million times more energy-efficient at learning, reasoning, and making sense of the world than the most advanced artificial intelligence systems we can build.
This isn't just an interesting quirk of biology. It's a clue to one of the most pressing technological problems of our age: the spiralling energy consumption of artificial intelligence. In 2024, data centres consumed approximately 415 terawatt-hours of electricity globally, about 1.5 per cent of worldwide electricity consumption. In the United States alone, data centres consumed 183 terawatt-hours, more than 4 per cent of the country's total electricity use. And AI is the primary driver of this surge: its share of data centre power use, put at 5 to 15 per cent in recent years, could balloon to 35 to 50 per cent by 2030, according to projections from the International Energy Agency.
The environmental implications are staggering. For the 12 months ending August 2024, US data centres alone were responsible for 105 million metric tonnes of CO2, accounting for 2.18 per cent of national emissions. Under the IEA's central scenario, global data centre electricity consumption could more than double between 2024 and 2030, reaching 945 terawatt-hours by the decade's end. Training a single large language model like OpenAI's GPT-3 required about 1,300 megawatt-hours of electricity, equivalent to the annual consumption of 130 US homes. And that's just for training. The energy cost of running these models for billions of queries adds another enormous burden.
We are, quite simply, hitting a wall. Not a wall of what's computationally possible, but a wall of what's energetically sustainable. And the reason, an increasing number of researchers believe, lies not in our algorithms or our silicon fabrication techniques, but in something far more fundamental: the very architecture of how we build computers.
The Bottleneck We've Lived With for 80 Years
In 1977, John Backus stood before an audience at the ACM Turing Award ceremony and delivered what would become one of the most influential lectures in computer science history. Backus, the inventor of FORTRAN, didn't use the occasion to celebrate his achievements. Instead, he delivered a withering critique of the foundation upon which nearly all modern computing rests: the von Neumann architecture.
Backus described the von Neumann computer as having three parts: a CPU, a store, and a connecting tube that could transmit a single word between the CPU and the store. He proposed calling this tube “the von Neumann bottleneck.” The problem wasn't just physical, the limited bandwidth between processor and memory. It was, he argued, “an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand.”
Nearly 50 years later, we're still living with that bottleneck. And its energy implications have become impossible to ignore.
In a conventional computer, the CPU and memory are physically separated. Data must be constantly shuttled back and forth across this divide. Every time the processor needs information, it must fetch it from memory. Every time it completes a calculation, it must send the result back. This endless round trip is called the von Neumann bottleneck, and it's murderously expensive in energy terms.
The numbers are stark. Estimates vary, but fetching data from dynamic random access memory can cost on the order of 1,000 times more energy than the computation performed on it, and by some measures far more: moving data between the CPU and cache memory costs roughly 100 times the energy of a basic arithmetic operation, while moving it between the CPU and DRAM costs around 10,000 times as much. The vast majority of energy in modern computing isn't spent calculating. It's spent moving data around.
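To see how quickly data movement swamps arithmetic, here is a small back-of-the-envelope sketch in Python. The relative costs are the rough ratios quoted above (cache at about 100 times, DRAM at about 10,000 times a basic operation); the workload mix is invented purely for illustration.

```python
# Illustrative energy budget using the rough ratios quoted above.
# The unit is "energy of one basic arithmetic operation"; absolute values
# vary widely with process node and memory technology.
ENERGY_OP = 1.0
ENERGY_CACHE = 100.0      # ~100x a basic operation
ENERGY_DRAM = 10_000.0    # ~10,000x a basic operation

def energy_budget(n_ops, cache_accesses_per_op, dram_accesses_per_op):
    """Return (compute energy, data-movement energy) for a hypothetical workload."""
    compute = n_ops * ENERGY_OP
    movement = n_ops * (cache_accesses_per_op * ENERGY_CACHE
                        + dram_accesses_per_op * ENERGY_DRAM)
    return compute, movement

# Even a workload that touches DRAM only once per hundred operations
# spends around 99 per cent of its energy moving data, not computing.
compute, movement = energy_budget(1_000_000, cache_accesses_per_op=0.1,
                                  dram_accesses_per_op=0.01)
print(f"movement share: {movement / (compute + movement):.1%}")
```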
For AI and machine learning, which involve processing vast quantities of data through billions or trillions of parameters, this architectural separation becomes particularly crippling. The amount of data movement required is astronomical. And every byte moved is energy wasted. IBM Research, which has been at the forefront of developing alternatives to the von Neumann model, notes that data fetching incurs “significant energy and latency costs due to the requirement of shuttling data back and forth.”
How the Brain Solves the Problem We Can't
The brain takes a radically different approach. It doesn't separate processing and storage. In the brain, these functions happen in the same place: the synapse.
Synapses are the junctions between neurons where signals are transmitted. But they're far more than simple switches. Each synapse stores information in its synaptic weight, the strength of the connection between two neurons, and participates directly in computation as the receiving neuron integrates its weighted inputs and decides whether to fire. The brain has approximately 100 billion neurons and 100 trillion synaptic connections, each acting as both a storage element and a processing element, operating in parallel.
This co-location of memory and processing eliminates the energy cost of data movement. When your brain learns something, it modifies the strength of synaptic connections. When it recalls that information, those same synapses participate in the computation. There's no fetching data from a distant memory bank. The memory is the computation.
The energy efficiency this enables is extraordinary. Research published in eLife in 2020 investigated the metabolic costs of synaptic plasticity, the brain's mechanism for learning and memory. The researchers found that synaptic plasticity is metabolically demanding, which makes sense given that most of the energy used by the brain is associated with synaptic transmission. But the brain has evolved sophisticated mechanisms to optimise this energy use.
One such mechanism is called synaptic caching. The researchers discovered that the brain uses a hierarchy of plasticity mechanisms with different energy costs and timescales. Transient, low-energy forms of plasticity allow the brain to explore different connection strengths cheaply. Only when a pattern proves important does the brain commit energy to long-term, stable changes. This approach, the study found, “boosts energy efficiency manifold.”
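A deliberately simplified toy model can make the caching idea concrete. The sketch below illustrates the principle described above rather than the study's actual model; the relative costs, decay rate, and consolidation threshold are arbitrary values chosen for clarity.

```python
import numpy as np

TRANSIENT_COST = 1.0        # cheap, short-lived plasticity (arbitrary units)
CONSOLIDATION_COST = 100.0  # expensive, long-term commitment (arbitrary units)

def learn(updates, decay=0.9, threshold=0.5):
    """Accumulate weight changes in a cheap transient store; consolidate only
    when they build up enough to matter. Returns (stable weight, energy spent)."""
    transient, stable, energy = 0.0, 0.0, 0.0
    for delta in updates:
        transient = transient * decay + delta   # low-energy, decaying change
        energy += TRANSIENT_COST
        if abs(transient) > threshold:          # persistent pattern detected:
            stable += transient                 # pay the high consolidation cost
            transient = 0.0
            energy += CONSOLIDATION_COST
    return stable, energy

rng = np.random.default_rng(0)
print(learn(rng.normal(0.0, 0.05, 200)))  # noise mostly decays away cheaply
print(learn(np.full(200, 0.1)))           # a consistent signal gets consolidated
```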
The brain also employs sparse connectivity. Because synaptic transmission dominates energy consumption, the brain ensures that only a small fraction of synapses are active at any given time. Through mechanisms like imbalanced plasticity, where depression of synaptic connections is stronger than their potentiation, the brain continuously prunes unnecessary connections, maintaining a lean, energy-efficient network.
While the brain accounts for only about 2 per cent of body weight, it's responsible for about 20 per cent of our energy use at rest. That sounds like a lot until you realise that those 20 watts are supporting conscious thought, sensory processing, motor control, memory formation and retrieval, emotional regulation, and countless automatic processes. No artificial system comes close to that level of computational versatility per watt.
The question that's been nagging at researchers for decades is this: why can't we build computers that work the same way?
The Neuromorphic Revolution
Carver Mead had been thinking about this problem since the 1960s. A pioneer in microelectronics at Caltech, Mead's interest in biological models dated back to at least 1967, when he met biophysicist Max Delbrück, who stimulated Mead's fascination with transducer physiology. Observing graded synaptic transmission in the retina, Mead became interested in treating transistors as analogue devices rather than digital switches, noting parallels between charges moving in MOS transistors operated in weak inversion and charges flowing across neuronal membranes.
In the 1980s, after intense discussions with John Hopfield and Richard Feynman, Mead's thinking crystallised. In 1989, he published “Analog VLSI and Neural Systems,” the first book on what he termed “neuromorphic engineering”: the use of very-large-scale integration systems containing electronic analogue circuits to mimic neuro-biological architectures present in the nervous system.
Mead is credited with coining the term “neuromorphic.” His insight was that we could build silicon hardware operating on principles similar to the brain's: massively parallel, event-driven, and with computation and memory tightly integrated. In 1986, Mead and Federico Faggin founded Synaptics Inc. to develop analogue circuits based on neural networking theories. Mead went on to create an analogue silicon retina and inner ear, demonstrating that neuromorphic principles could be implemented in physical hardware.
For decades, neuromorphic computing remained largely in research labs. The von Neumann architecture, despite its inefficiencies, was well understood, easy to program, and benefited from decades of optimisation. Neuromorphic chips were exotic, difficult to program, and lacked the software ecosystems that made conventional processors useful.
But the energy crisis of AI has changed the calculus. As the costs, both financial and environmental, of training and running large AI models have exploded, the appeal of radically more efficient architectures has grown irresistible.
A New Generation of Brain-Inspired Machines
The landscape of neuromorphic computing has transformed dramatically in recent years, with multiple approaches emerging from research labs and entering practical deployment. Each takes a different strategy, but all share the same goal: escape the energy trap of the von Neumann architecture.
Intel's neuromorphic research chip, Loihi 2, represents one vision of this future. A single Loihi 2 chip supports up to 1 million neurons and 120 million synapses, implementing spiking neural networks with programmable dynamics and modular connectivity. In April 2024, Intel introduced Hala Point, claimed to be the world's largest neuromorphic system. Hala Point packages 1,152 Loihi 2 processors in a six-rack-unit chassis and supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores. The entire system consumes 2,600 watts of power. That's more than your brain's 20 watts, certainly, but consider what it's doing: supporting over a billion neurons, more than some mammalian brains, with a tiny fraction of the power a conventional supercomputer would require. Research using Loihi 2 has demonstrated “orders of magnitude gains in the efficiency, speed, and adaptability of small-scale edge workloads.”
IBM has pursued a complementary path focused on inference efficiency. Their TrueNorth microchip architecture, developed in 2014, was designed to be closer in structure to the human brain than the von Neumann architecture. More recently, IBM's proof-of-concept NorthPole chip achieved remarkable performance in image recognition, blending approaches from TrueNorth with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. In tests, NorthPole was 47 times faster than the next most energy-efficient GPU and 73 times more energy-efficient than the next lowest latency GPU. These aren't incremental improvements. They represent fundamental shifts in what's possible when you abandon the traditional separation of memory and computation.
Europe has contributed two distinct neuromorphic platforms through the Human Brain Project, which ran from 2013 to 2023. The SpiNNaker machine, located in Manchester, connects 1 million ARM processor cores with a packet-based network optimised for the exchange of neural action potentials, or spikes. It runs in real time and is the world's largest neuromorphic computing platform. In Heidelberg, the BrainScaleS system takes a different approach entirely, implementing analogue electronic models of neurons and synapses. Because its analogue circuits evolve at their own intrinsic timescales, BrainScaleS emulates neurons at 1,000 times real time while omitting energy-hungry digital calculations. Where SpiNNaker prioritises scale and biological realism, BrainScaleS optimises for speed and energy efficiency. Both systems are integrated into the EBRAINS Research Infrastructure and offer free access for test usage, democratising access to neuromorphic computing for researchers worldwide.
At the ultra-low-power end of the spectrum, BrainChip's Akida processor targets edge computing applications where every milliwatt counts. Its name means “spike” in Greek, a nod to its spiking neural network architecture. Akida employs event-based processing, performing computations only when new sensory input is received, dramatically reducing the number of operations. The processor supports on-chip learning, allowing models to adapt without connecting to the cloud, critical for applications in remote or secure environments. BrainChip focuses on markets with sub-1-watt usage per chip. In October 2024, the company announced the Akida Pico, a miniaturised version that consumes roughly 1 milliwatt of power, or even less depending on the application. To put that in context, a single AA battery holds about 3 to 4 watt-hours of energy, enough to sustain a 1-milliwatt draw for several months of continuous operation.
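The arithmetic behind that claim is worth spelling out. A rough estimate, assuming a typical alkaline AA cell of about 2,500 milliamp-hours at a nominal 1.5 volts:

```python
# Rough estimate; real batteries vary, and event-driven duty cycling
# would push the average draw well below 1 mW.
aa_capacity_wh = 2.5 * 1.5        # ~2,500 mAh x 1.5 V ≈ 3.75 Wh
power_w = 0.001                   # Akida Pico's quoted ~1 mW draw
hours = aa_capacity_wh / power_w
print(f"~{hours:,.0f} hours, roughly {hours / 24 / 30:.1f} months of continuous operation")
```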
Rethinking the Architecture
Neuromorphic chips that mimic biological neurons represent one approach to escaping the von Neumann bottleneck. But they're not the only one. A broader movement is underway to fundamentally rethink the relationship between memory and computation, and it doesn't require imitating neurons at all.
In-memory computing, or compute-in-memory, represents a different strategy with the same goal: eliminate the energy cost of data movement by performing computations where the data lives. Rather than fetching data from memory to process it in the CPU, in-memory computing performs certain computational tasks in place in memory itself.
The potential energy savings are massive. A memory access typically consumes 100 to 1,000 times more energy than a processor operation. By keeping computation and data together, in-memory computing can cut the latency and energy consumption of transformer attention computations by up to two and four orders of magnitude respectively, compared with GPUs, according to research published in Nature Computational Science in 2025.
Recent developments have been striking. One compute-in-memory processing unit delivered GPU-class performance at a fraction of the energy cost, consuming over 98 per cent less energy than a GPU across a range of large text corpora. These aren't marginal improvements. They're transformative, suggesting that the energy crisis in AI might not be an inevitable consequence of computational complexity, but rather a symptom of architectural mismatch.
The technology enabling much of this progress is the memristor, a portmanteau of “memory” and “resistor.” Memristors are electronic components that can remember the amount of charge that has previously flowed through them, even when power is turned off. This property makes them ideal for implementing synaptic functions in hardware.
Research into memristive devices has exploded in recent years. Studies have demonstrated that memristors can replicate synaptic plasticity through long-term and short-term changes in synaptic efficacy. They have successfully reproduced many synaptic characteristics, including short-term plasticity, long-term plasticity, paired-pulse facilitation, spike-timing-dependent plasticity, and spike-rate-dependent plasticity: the mechanisms the brain uses for learning and memory.
The power efficiency achieved is remarkable. Some flexible memristor arrays have exhibited ultralow energy consumption down to 4.28 attojoules per synaptic spike. That's 4.28 × 10⁻¹⁸ joules, a number so small it's difficult to comprehend. For context, it's roughly 2,000 times lower than a biological synapse, which operates at around 10 femtojoules, or 10⁻¹⁴ joules, per event. We've built artificial devices that, in at least this one respect, are more energy-efficient than biology.
Memristor-based artificial neural networks have achieved recognition accuracy up to 88.8 per cent on the MNIST pattern recognition dataset, demonstrating that these ultralow-power devices can perform real-world AI tasks. And because memristors process operands at the location of storage, they obviate the need to transfer data between memory and processing units, directly addressing the von Neumann bottleneck.
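The principle behind memristor-based computation can be sketched in a few lines. A crossbar stores a weight matrix as an array of conductances; applying an input as row voltages and reading the column currents performs a matrix-vector multiplication in a single analogue step, via Ohm's and Kirchhoff's laws. The sizes, conductance range, and noise level below are illustrative assumptions, not the parameters of any particular device.

```python
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(128, 64))   # conductances (siemens): the stored weights
V = rng.uniform(0.0, 0.2, size=128)           # read voltages applied to the 128 rows

# Each column current is the sum of voltage x conductance down that column,
# i.e. the whole matrix-vector product happens "where the data lives".
I_ideal = G.T @ V

# Real devices are analogue and imperfect: conductance drift and read noise
# limit precision, which is why crossbars suit error-tolerant neural inference.
G_noisy = G * (1.0 + rng.normal(0.0, 0.05, size=G.shape))
I_measured = G_noisy.T @ V
print(np.max(np.abs(I_measured - I_ideal) / np.abs(I_ideal)))
```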
The Spiking Difference
Traditional artificial neural networks, the kind that power systems like ChatGPT and DALL-E, use continuous-valued activations. Information flows through the network as real numbers, with each neuron applying an activation function to its weighted inputs to produce an output. This approach is mathematically elegant and has proven phenomenally successful. But it's also computationally expensive.
Spiking neural networks, or SNNs, take a different approach inspired directly by biology. Instead of continuous values, SNNs communicate through discrete events called spikes, mimicking the action potentials that biological neurons use. A neuron in an SNN only fires when its membrane potential crosses a threshold, and information is encoded in the timing and frequency of these spikes.
This event-driven computation offers significant efficiency advantages. In conventional neural networks, every neuron performs a multiply-and-accumulate operation for each input, regardless of whether that input is meaningful. SNNs, by contrast, only perform computations when spikes occur. This sparsity, the fact that most neurons are silent most of the time, mirrors the brain's strategy and dramatically reduces the number of operations required.
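A minimal leaky integrate-and-fire neuron shows the event-driven idea in a dozen lines. This is a textbook-style sketch, not the neuron model of any specific chip; the time constant, threshold, and input drive are illustrative.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input, leak towards rest, spike on threshold."""
    v, spike_times = 0.0, []
    for t, i_in in enumerate(input_current):
        v += (-v + i_in) * (dt / tau)   # leak plus integration of the input
        if v >= v_threshold:
            spike_times.append(t)       # a discrete event; downstream work happens only now
            v = v_reset
    return spike_times

# A modest, noisy drive produces a handful of spikes across 200 timesteps,
# rather than a continuous-valued activation at every step.
rng = np.random.default_rng(1)
print(simulate_lif(rng.uniform(0.0, 2.5, size=200)))
```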
The utilisation of binary spikes allows SNNs to adopt low-power accumulation instead of the traditional high-power multiply-accumulation operations that dominate energy consumption in conventional neural networks. Research has shown that a sparse spiking network pruned to retain only 0.63 per cent of its original connections can achieve a remarkable 91 times increase in energy efficiency compared to the original dense network, requiring only 8.5 million synaptic operations for inference, with merely 2.19 per cent accuracy loss on the CIFAR-10 dataset.
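The multiply-free trick is easy to demonstrate. With binary spikes, a layer's weighted sum reduces to adding up the weight columns of the inputs that fired, as in this NumPy sketch (sizes and sparsity are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 1000))   # 256 outputs, 1,000 inputs
spikes = rng.random(1000) < 0.02         # binary spike vector, roughly 2% active

# Conventional ANN layer: a multiply-accumulate for every weight (256,000 MACs).
dense_out = weights @ rng.normal(size=1000)

# Spiking layer: inputs are 0 or 1, so multiplication disappears; we just sum
# the ~20 active columns (~5,000 additions instead of 256,000 MACs).
snn_out = weights[:, np.flatnonzero(spikes)].sum(axis=1)
```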
SNNs are also naturally compatible with neuromorphic hardware. Because neuromorphic chips like Loihi and TrueNorth implement spiking neurons in silicon, they can run SNNs natively and efficiently. The event-driven nature of spikes means these chips can spend most of their time in low-power states, only activating when computation is needed.
The challenges lie in training. Backpropagation, the algorithm that enabled the deep learning revolution, doesn't work straightforwardly with spikes because the discrete nature of firing events creates discontinuities that make gradients undefined. Researchers have developed various workarounds, including surrogate gradient methods and converting pre-trained conventional networks to spiking versions, but training SNNs remains more difficult than training their conventional counterparts.
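The surrogate-gradient workaround can be sketched in a few lines of PyTorch: keep the hard threshold in the forward pass, but substitute a smooth, well-behaved derivative in the backward pass. The fast-sigmoid shape and its slope constant below are one common choice among several, picked here purely for illustration.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike going forward; a smooth surrogate derivative going backward."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential >= 1.0).float()   # non-differentiable step at threshold 1.0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate, peaked near the threshold; the slope constant is arbitrary.
        return grad_output / (10.0 * (v - 1.0).abs() + 1.0) ** 2

spike = SurrogateSpike.apply

v = torch.randn(8, requires_grad=True)
spike(v).sum().backward()   # gradients now flow through the discrete spiking step
print(v.grad)
```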
Still, the efficiency gains are compelling enough that hybrid approaches are emerging, combining conventional and spiking architectures to leverage the best of both worlds. The first layers of a network might process information in conventional mode for ease of training, while later layers operate in spiking mode for efficiency. This pragmatic approach acknowledges that the transition from von Neumann to neuromorphic computing won't happen overnight, but suggests a path forward that delivers benefits today whilst building towards a more radical architectural shift tomorrow.
The Fundamental Question
All of this raises a profound question: is energy efficiency fundamentally about architecture, or is it about raw computational power?
The conventional wisdom for decades has been that computational progress follows Moore's Law: transistors get smaller, chips get faster and more power-efficient, and we solve problems by throwing more computational resources at them. The assumption has been that if we want more efficient AI, we need better transistors, better cooling, better power delivery, better GPUs.
But the brain suggests something radically different. The brain's efficiency doesn't come from having incredibly fast, advanced components. Neurons operate on timescales of milliseconds, glacially slow compared to the nanosecond speeds of modern transistors. Synaptic transmission is inherently noisy and imprecise. The brain's “clock speed,” if we can even call it that, is measured in tens to hundreds of hertz, compared to gigahertz for CPUs.
The brain's advantage is architectural. It's massively parallel, with billions of neurons operating simultaneously. It's event-driven, activating only when needed. It co-locates memory and processing, eliminating data movement costs. It uses sparse, adaptive connectivity that continuously optimises for the tasks at hand. It employs multiple timescales of plasticity, from milliseconds to years, allowing it to learn efficiently at every level.
The emerging evidence from neuromorphic computing and in-memory architectures suggests that the brain's approach isn't just one way to build an efficient computer. It might be the only way to build a truly efficient computer for the kinds of tasks that AI systems need to perform.
Consider the numbers. Modern AI training runs consume megawatt-hours or even gigawatt-hours of electricity. The human brain, over an entire lifetime, consumes perhaps 10 to 15 megawatt-hours total. A child can learn to recognise thousands of objects from a handful of examples. Current AI systems require millions of labelled images and vast computational resources to achieve similar performance. The child's brain is doing something fundamentally different, and that difference is architectural.
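That lifetime figure follows from simple arithmetic, assuming roughly 20 watts sustained over about 80 years:

```python
hours = 24 * 365.25 * 80          # ~80 years in hours
lifetime_mwh = 20 * hours / 1e6   # 20 W sustained, converted to megawatt-hours
print(f"{lifetime_mwh:.1f} MWh")  # ≈ 14 MWh, within the 10-15 MWh range quoted above
```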
This realisation has profound implications. It suggests that the path to sustainable AI isn't primarily about better hardware in the conventional sense. It's about fundamentally different hardware that embodies different architectural principles.
The Remaining Challenges
The transition to neuromorphic and in-memory architectures faces three interconnected obstacles: programmability, task specificity, and manufacturing complexity.
The programmability challenge is perhaps the most significant. The von Neumann architecture comes with 80 years of software development, debugging tools, programming languages, libraries, and frameworks. Every computer science student learns to program von Neumann machines. Neuromorphic chips and in-memory computing architectures lack this mature ecosystem. Programming a spiking neural network requires thinking in terms of spikes, membrane potentials, and synaptic dynamics rather than the familiar abstractions of variables, loops, and functions. This creates a chicken-and-egg problem: hardware companies hesitate to invest without clear demand, whilst software developers hesitate without available hardware. Progress happens, but slower than the energy crisis demands.
Task specificity presents another constraint. These architectures excel at parallel, pattern-based tasks involving substantial data movement, precisely the characteristics of machine learning and AI. But they're less suited to sequential, logic-heavy tasks. A neuromorphic chip might brilliantly recognise faces or navigate a robot through a cluttered room, but it would struggle to calculate your taxes. This suggests a future of heterogeneous computing, where different architectural paradigms coexist, each handling the tasks they're optimised for. Intel's chips already combine conventional CPU cores with specialised accelerators. Future systems might add neuromorphic cores to this mix.
Manufacturing at scale remains challenging. Memristors hold enormous promise, but manufacturing them reliably and consistently is difficult. Analogue circuits, which many neuromorphic designs use, are more sensitive to noise and variation than digital circuits. Integrating radically different computing paradigms on a single chip introduces complexity in design, testing, and verification. These aren't insurmountable obstacles, but they do mean that the transition won't happen overnight.
What Happens Next
Despite these challenges, momentum is building. The energy costs of AI have become too large to ignore, both economically and environmentally. Data centre operators are facing hard limits on available power. Countries are setting aggressive carbon reduction targets. The financial costs of training ever-larger models are becoming prohibitive. The incentive to find alternatives has never been stronger.
Investment is flowing into neuromorphic and in-memory computing. Intel's Hala Point deployment at Sandia National Laboratories represents a serious commitment to scaling neuromorphic systems. IBM's continued development of brain-inspired architectures demonstrates sustained research investment. Start-ups like BrainChip are bringing neuromorphic products to market for edge computing applications where energy efficiency is paramount.
Research institutions worldwide are contributing. Beyond Intel, IBM, and BrainChip, teams at universities and national labs are exploring everything from novel materials for memristors to new training algorithms for spiking networks to software frameworks that make neuromorphic programming more accessible.
The applications are becoming clearer. Edge computing, where devices must operate on battery power or energy harvesting, is a natural fit for neuromorphic approaches. The Internet of Things, with billions of low-power sensors and actuators, could benefit enormously from chips that consume milliwatts rather than watts. Robotics, which requires real-time sensory processing and decision-making, aligns well with event-driven, spiking architectures. Embedded AI in smartphones, cameras, and wearables could become far more capable with neuromorphic accelerators.
Crucially, the software ecosystem is maturing. PyNN, an API for programming spiking neural networks, works across multiple neuromorphic platforms. Intel's Lava software framework aims to make Loihi more accessible. Frameworks for converting conventional neural networks to spiking versions are improving. The learning curve is flattening.
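To give a flavour of what that ecosystem looks like, here is a minimal PyNN-style script: a Poisson spike source driving a small population of integrate-and-fire neurons. It assumes a recent PyNN release with a simulator backend such as NEST installed, and the specific parameters are illustrative.

```python
import pyNN.nest as sim   # swap in pyNN.brian2 or a neuromorphic backend as available

sim.setup(timestep=1.0)
source = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))   # 100 Poisson inputs
neurons = sim.Population(10, sim.IF_curr_exp())                   # 10 integrate-and-fire cells
sim.Projection(source, neurons, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.05))
neurons.record("spikes")
sim.run(1000.0)                                                    # simulate one second
print(neurons.get_data().segments[0].spiketrains)
sim.end()
```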
Researchers have also discovered that neuromorphic computers may prove well suited to applications beyond AI. Monte Carlo methods, commonly used in physics simulations, financial modelling, and risk assessment, show a “neuromorphic advantage” when implemented on spiking hardware. The event-driven nature of neuromorphic chips maps naturally to stochastic processes. This suggests that the architectural benefits extend beyond pattern recognition and machine learning to a broader class of computational problems.
The Deeper Implications
Stepping back, the story of neuromorphic computing and in-memory architectures is about more than just building faster or cheaper AI. It's about recognising that the way we've been building computers for 80 years, whilst extraordinarily successful, isn't the only way. It might not even be the best way for the kinds of computing challenges that increasingly define our technological landscape.
The von Neumann architecture emerged in an era when computers were room-sized machines used by specialists to perform calculations. The separation of memory and processing made sense in that context. It simplified programming. It made the hardware easier to design and reason about. It worked.
But computing has changed. We've gone from a few thousand computers performing scientific calculations to billions of devices embedded in every aspect of life, processing sensor data, recognising speech, driving cars, diagnosing diseases, translating languages, and generating images and text. The workloads have shifted from calculation-intensive to data-intensive. And for data-intensive workloads, the von Neumann bottleneck is crippling.
The brain evolved over hundreds of millions of years to solve exactly these kinds of problems: processing vast amounts of noisy sensory data, recognising patterns, making predictions, adapting to new situations, all whilst operating on a severely constrained energy budget. The architectural solutions the brain arrived at, co-located memory and processing, event-driven computation, massive parallelism, sparse adaptive connectivity, are solutions to the same problems we now face in artificial systems.
We're not trying to copy the brain exactly. Neuromorphic computing isn't about slavishly replicating every detail of biological neural networks. It's about learning from the principles the brain embodies and applying those principles in silicon and software. It's about recognising that there are multiple paths to intelligence and efficiency, and the path we've been on isn't the only one.
The energy consumption crisis of AI might turn out to be a blessing in disguise. It's forcing us to confront the fundamental inefficiencies in how we build computing systems. It's pushing us to explore alternatives that we might otherwise have ignored. It's making clear that incremental improvements to the existing paradigm aren't sufficient. We need a different approach.
The question the brain poses to computing isn't “why can't computers be more like brains?” It's deeper: “what if the very distinction between memory and processing is artificial, a historical accident rather than a fundamental necessity?” What if energy efficiency isn't something you optimise for within a given architecture, but something that emerges from choosing the right architecture in the first place?
The evidence increasingly suggests that this is the case. Energy efficiency, for the kinds of intelligent, adaptive, data-processing tasks that AI systems perform, is fundamentally architectural. No amount of optimisation of von Neumann machines will close the million-fold efficiency gap between artificial and biological intelligence. We need different machines.
The good news is that we're learning how to build them. The neuromorphic chips and in-memory computing architectures emerging from labs and starting to appear in products demonstrate that radically more efficient computing is possible. The path forward exists.
The challenge now is scaling these approaches, building the software ecosystems that make them practical, and deploying them widely enough to make a difference. Given the stakes, both economic and environmental, that work is worth doing. The brain has shown us what's possible. Now we have to build it.
Sources and References
Energy Consumption and AI: – International Energy Agency (IEA), “Energy demand from AI,” Energy and AI Report, 2024. Available: https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai – Pew Research Center, “What we know about energy use at U.S. data centers amid the AI boom,” October 24, 2025. Available: https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/ – Global Efficiency Intelligence, “Data Centers in the AI Era: Energy and Emissions Impacts in the U.S. and Key States,” 2024. Available: https://www.globalefficiencyintel.com/data-centers-in-the-ai-era-energy-and-emissions-impacts-in-the-us-and-key-states
Brain Energy Efficiency: – MIT News, “The brain power behind sustainable AI,” October 24, 2025. Available: https://news.mit.edu/2025/brain-power-behind-sustainable-ai-miranda-schwacke-1024 – Texas A&M University, “Artificial Intelligence That Uses Less Energy By Mimicking The Human Brain,” March 25, 2025. Available: https://stories.tamu.edu/news/2025/03/25/artificial-intelligence-that-uses-less-energy-by-mimicking-the-human-brain/
Synaptic Plasticity and Energy: – Schieritz, P., et al., “Energy efficient synaptic plasticity,” eLife, vol. 9, e50804, 2020. DOI: 10.7554/eLife.50804. Available: https://elifesciences.org/articles/50804
Von Neumann Bottleneck: – IBM Research, “How the von Neumann bottleneck is impeding AI computing,” 2024. Available: https://research.ibm.com/blog/why-von-neumann-architecture-is-impeding-the-power-of-ai-computing – Backus, J., “Can Programming Be Liberated from the Von Neumann Style? A Functional Style and Its Algebra of Programs,” ACM Turing Award Lecture, 1977.
Neuromorphic Computing – Intel: – Sandia National Laboratories / Next Platform, “Sandia Pushes The Neuromorphic AI Envelope With Hala Point 'Supercomputer',” April 24, 2024. Available: https://www.nextplatform.com/2024/04/24/sandia-pushes-the-neuromorphic-ai-envelope-with-hala-point-supercomputer/ – Open Neuromorphic, “A Look at Loihi 2 – Intel – Neuromorphic Chip,” 2024. Available: https://open-neuromorphic.org/neuromorphic-computing/hardware/loihi-2-intel/
Neuromorphic Computing – IBM: – IBM Research, “In-memory computing,” 2024. Available: https://research.ibm.com/projects/in-memory-computing
Neuromorphic Computing – Europe: – Human Brain Project, “Neuromorphic Computing,” 2023. Available: https://www.humanbrainproject.eu/en/science-development/focus-areas/neuromorphic-computing/ – EBRAINS, “Neuromorphic computing – Modelling, simulation & computing,” 2024. Available: https://www.ebrains.eu/modelling-simulation-and-computing/computing/neuromorphic-computing/
Neuromorphic Computing – BrainChip: – Open Neuromorphic, “A Look at Akida – BrainChip – Neuromorphic Chip,” 2024. Available: https://open-neuromorphic.org/neuromorphic-computing/hardware/akida-brainchip/ – IEEE Spectrum, “BrainChip Unveils Ultra-Low Power Akida Pico for AI Devices,” October 2024. Available: https://spectrum.ieee.org/neuromorphic-computing
History of Neuromorphic Computing: – Wikipedia, “Carver Mead,” 2024. Available: https://en.wikipedia.org/wiki/Carver_Mead – History of Information, “Carver Mead Writes the First Book on Neuromorphic Computing,” 2024. Available: https://www.historyofinformation.com/detail.php?entryid=4359
In-Memory Computing: – Nature Computational Science, “Analog in-memory computing attention mechanism for fast and energy-efficient large language models,” 2025. DOI: 10.1038/s43588-025-00854-1 – ERCIM News, “In-Memory Computing: Towards Energy-Efficient Artificial Intelligence,” Issue 115, 2024. Available: https://ercim-news.ercim.eu/en115/r-i/2115-in-memory-computing-towards-energy-efficient-artificial-intelligence
Memristors: – Nature Communications, “Experimental demonstration of highly reliable dynamic memristor for artificial neuron and neuromorphic computing,” 2022. DOI: 10.1038/s41467-022-30539-6 – Nano-Micro Letters, “Low-Power Memristor for Neuromorphic Computing: From Materials to Applications,” 2025. DOI: 10.1007/s40820-025-01705-4
Spiking Neural Networks: – PMC / NIH, “Spiking Neural Networks and Their Applications: A Review,” 2022. Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC9313413/ – Frontiers in Neuroscience, “Optimizing the Energy Consumption of Spiking Neural Networks for Neuromorphic Applications,” 2020. Available: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2020.00662/full

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk