The Brain Metaphor Trap: Why Silicon Should Not Mimic Neurons

The human brain runs on roughly 20 watts. That is less power than the light bulb illuminating your desk, yet it orchestrates consciousness, creativity, memory, and the ability to read these very words. Within that modest thermal envelope, approximately 100 billion neurons fire in orchestrated cascades, connected by an estimated 100 trillion synapses, each consuming roughly 10 femtojoules per synaptic event. To put that in perspective: the energy powering a single thought could not warm a thimble of water by a measurable fraction of a degree.
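A quick back-of-envelope check shows how those figures hang together. The Python snippet below uses round-number assumptions (one synaptic event per synapse per second, ten femtojoules per event), not measured values, but it lands in the right territory:

```python
# Back-of-envelope estimate using the round numbers quoted above;
# the firing rate is an illustrative assumption, not a measurement.
synapses = 1e14             # ~100 trillion synapses
joules_per_event = 10e-15   # ~10 femtojoules per synaptic event
events_per_second = 1.0     # assume each synapse is active roughly once per second

synaptic_power_watts = synapses * joules_per_event * events_per_second
print(f"synaptic signalling power ≈ {synaptic_power_watts:.0f} W")  # ≈ 1 W

# Even with generous assumptions, synaptic signalling sits comfortably
# inside the brain's ~20 W total budget.
```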
Meanwhile, the graphics processing units training today's large language models consume megawatts and require industrial cooling systems. Training a single frontier AI model can cost millions in electricity alone. The disparity is so stark, so seemingly absurd, that it has launched an entire field of engineering dedicated to a single question: can we build computers that think like brains?
The answer, it turns out, is far more complicated than the question implies.
The Efficiency Enigma
The numbers sound almost fictional. According to research published in the Proceedings of the National Academy of Sciences, communication in the human cortex consumes approximately 35 times more energy than computation itself: the cortex's entire computational budget amounts to a mere fraction of a watt, paid in ATP, while long-distance neural communication accounts for roughly 3.5 watts. This audit reveals something profound: biological computation is not merely efficient; it is efficient in ways that conventional computing architectures cannot easily replicate.
Dig deeper into the cellular machinery, and the efficiency story becomes even more remarkable. Research published in the Journal of Cerebral Blood Flow and Metabolism has mapped the energy budget of neural computation with extraordinary precision. In the cerebral cortex, resting potentials account for approximately 20% of total energy use, action potentials consume 21%, and synaptic processes dominate at 59%. The brain has evolved an intricate accounting system for every molecule of ATP.
The reason for this efficiency lies in the fundamental architecture of biological neural networks. Unlike the von Neumann machines that power our laptops and data centres, where processors and memory exist as separate entities connected by data buses, biological neurons are both processor and memory simultaneously. Each synapse stores information in its connection strength while also performing the computation that determines whether to pass a signal forward. There is no memory bottleneck because there is no separate memory.
This architectural insight drove Carver Mead, the Caltech professor who coined the term “neuromorphic” in the mid-1980s, to propose a radical alternative to conventional computing. Observing that charges moving through MOS transistors operating in weak inversion bear striking parallels to charges flowing across neuronal membranes, Mead envisioned silicon systems that would exploit the physics of transistors rather than fight against it. His 1989 book, Analog VLSI and Neural Systems, became the foundational text for an entire field. Working with Nobel laureates John Hopfield and Richard Feynman, Mead helped create three new fields: neural networks, neuromorphic engineering, and the physics of computation.
The practical fruits of Mead's vision arrived early. In 1986, he co-founded Synaptics with Federico Faggin to develop analog circuits based on neural networking theories. The company's first commercial product, a pressure-sensitive computer touchpad, eventually captured 70% of the touchpad market, a curious reminder that brain-inspired computing first succeeded not through cognition but through touch.
Three and a half decades later, that field has produced remarkable achievements. Intel's original Loihi chip, fabricated on a 14-nanometre process, integrates 128 neuromorphic cores capable of simulating up to 130,000 synthetic neurons and 130 million synapses; its successor, Loihi 2, raises the ceiling to around one million neurons per chip. A distinctive feature of the Loihi architecture is its integrated learning engine, which enables full on-chip learning via programmable microcode learning rules. IBM's TrueNorth, unveiled in 2014, packs one million neurons and 256 million synapses onto a chip consuming just 70 milliwatts, with a power density one ten-thousandth that of conventional microprocessors. The SpiNNaker system at the University of Manchester, conceived by Steve Furber (one of the original designers of the ARM microprocessor), contains over one million ARM processors capable of simulating a billion neurons in biological real-time.
These are genuine engineering marvels. But are they faithful translations of biological principles, or are they something else entirely?
The Translation Problem
The challenge of neuromorphic computing is fundamentally one of translation. Biological neurons operate through a bewildering array of mechanisms: ion channels opening and closing across cell membranes, neurotransmitters diffusing across synaptic clefts, calcium cascades triggering long-term changes in synaptic strength, dendritic trees performing complex nonlinear computations, glial cells modulating neural activity in ways we are only beginning to understand. The system is massively parallel, deeply interconnected, operating across multiple timescales from milliseconds to years, and shot through with stochasticity at every level.
Silicon, by contrast, prefers clean digital logic. Transistors want to be either fully on or fully off. The billions of switching events in a modern processor are choreographed with picosecond precision. Randomness is the enemy, meticulously engineered out through redundancy and error correction. The very physics that makes digital computing reliable makes biological fidelity difficult.
Consider spike-timing-dependent plasticity, or STDP, one of the fundamental learning mechanisms in biological neural networks. The principle is elegant: if a presynaptic neuron fires just before a postsynaptic neuron, the connection between them strengthens. If the timing is reversed, the connection weakens. This temporal precision, operating on timescales of milliseconds, allows networks to learn temporal patterns and causality.
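To make the rule concrete, here is a minimal Python sketch of pair-based STDP. The exponential time constants and learning rates are illustrative assumptions, not values taken from any particular chip or paper:

```python
import numpy as np

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: strengthen the synapse if the presynaptic spike
    precedes the postsynaptic spike, weaken it otherwise (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                    # pre before post: potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:                                         # post before pre: depression
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

# A pre-then-post pair 5 ms apart strengthens the weight;
# the reverse ordering weakens it.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # slightly above 0.5
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # slightly below 0.5
```

Evaluating a rule of this shape on every spike pair is cheap for one synapse and expensive for millions, which is precisely where the implementation trade-offs below begin.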
Implementing STDP in silicon requires trade-offs. Digital implementations on platforms like SpiNNaker must maintain precise timing records for potentially millions of synapses, consuming memory and computational resources. Analog implementations face challenges with device variability and noise. Memristor-based approaches, which exploit the physics of resistive switching to store synaptic weights, offer elegant solutions for weight storage but struggle with the temporal dynamics. Each implementation captures some aspects of biological STDP while necessarily abandoning others.
The BrainScaleS system at Heidelberg University takes perhaps the most radical approach to biological fidelity. Unlike digital neuromorphic systems that simulate neural dynamics, BrainScaleS uses analog circuits to physically emulate them. The silicon neurons and synapses implement the underlying differential equations through the physics of the circuits themselves. No equation gets explicitly solved; instead, the solution emerges from the natural evolution of voltages and currents. The system runs up to ten thousand times faster than biological real-time, offering both a research tool and a demonstration that analog approaches can work.
Yet even BrainScaleS makes profound simplifications. Its 512 neuron circuits and 131,000 synapses per chip are a far cry from the billions of neurons in a human cortex. The neuron model it implements, while sophisticated, omits countless biological details. The dendrites are simplified. The glial cells are absent. The stochasticity is controlled rather than embraced.
The Stochasticity Question
Here is where neuromorphic computing confronts one of its deepest challenges. Biological neural networks are noisy. Synaptic vesicle release is probabilistic, with transmission rates measured in vivo ranging from as low as 10% to as high as 50% at different synapses. Ion channel opening is stochastic. Spontaneous firing occurs. The system is bathed in noise at every level. How such a noisy system computes reliably at all is one of nature's great mysteries.
For decades, this noise was viewed as a bug, a constraint that biological systems had to work around. But emerging research suggests it may be a feature. According to work published in Nature Communications, synaptic noise has the distinguishing characteristic of being multiplicative, and this multiplicative noise plays a key role in learning and probabilistic inference. The brain may be implementing a form of Bayesian computation, sampling from probability distributions to represent uncertainty and make decisions under incomplete information.
The highly irregular spiking activity of cortical neurons and behavioural variability suggest that the brain could operate in a fundamentally probabilistic way. One prominent idea in neuroscience is that neural computing is inherently stochastic and that noise is an integral part of the computational process rather than an undesirable side effect. Mimicking how the brain implements and learns probabilistic computation could be key to developing machine intelligence that can think more like humans.
This insight has spawned a new field: probabilistic or stochastic computing. Artificial neuron devices based on memristors and ferroelectric field-effect transistors can produce uncertain, nonlinear output spikes that may be key to bringing machine learning closer to human cognition.
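What such a device computes can be caricatured in a few lines of code. The sketch below is a software toy, not a model of any specific memristive or ferroelectric device: synaptic release is drawn from a Bernoulli distribution and the firing decision from a sigmoid, both of which are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(inputs, weights, p_release=0.3, bias=0.0):
    """A toy probabilistic unit: each synapse transmits with probability
    p_release (multiplicative noise on the drive), and the neuron emits a
    spike with probability sigmoid(net input)."""
    released = rng.random(len(inputs)) < p_release      # did each vesicle release?
    drive = np.dot(inputs * released, weights) + bias
    p_spike = 1.0 / (1.0 + np.exp(-drive))
    return int(rng.random() < p_spike)

# Repeated trials of the same input yield a distribution of responses:
# the unit samples rather than computing a deterministic output.
x = np.array([1.0, 1.0, 0.0, 1.0])
w = np.array([0.8, -0.2, 0.5, 1.1])
rate = np.mean([stochastic_neuron(x, w) for _ in range(1000)])
print(f"empirical firing probability ≈ {rate:.2f}")
```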
But here lies a paradox. Traditional silicon fabrication spends enormous effort eliminating variability and noise. Device-to-device variation is a manufacturing defect to be minimised. Thermal noise is interference to be filtered. The entire thrust of semiconductor engineering for seventy years has been toward determinism and precision. Now neuromorphic engineers are asking: what if we need to engineer the noise back in?
Some researchers are taking this challenge head-on. Work on exploiting noise as a resource for computation demonstrates that the inherent noise and variation in memristor nanodevices can be harnessed as features for energy-efficient on-chip learning rather than fought as bugs. The stochastic behaviour that conventional computing spends energy suppressing becomes, in this framework, a computational asset.
The Memristor Revolution
The memristor, theorised by Leon Chua in 1971 and first physically realised by HP Labs in 2008, has become central to the neuromorphic vision. Unlike conventional transistors that forget their state when power is removed, memristors remember. Their resistance depends on the history of current that has flowed through them, a property that maps naturally onto synaptic weight storage.
Moreover, memristors can be programmed with multiple resistance levels, enhancing information density within a single cell. This technology truly shines when memristors are organised into crossbar arrays, performing analog computing that leverages physical laws to accelerate matrix operations. The physics of Ohm's law and Kirchhoff's current law perform the multiplication and addition operations that form the backbone of neural network computation.
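In code, the analog trick amounts to nothing more than a matrix-vector product carried out by physics. The sketch below simulates an idealised crossbar; the conductance range, the variability figure, and the restriction to non-negative weights are simplifying assumptions (real arrays typically use differential column pairs to represent signed weights).

```python
import numpy as np

rng = np.random.default_rng(1)

def crossbar_mvm(weights, voltages, g_min=1e-6, g_max=1e-4, variability=0.05):
    """Idealised memristor crossbar: non-negative weights are mapped onto
    device conductances, inputs onto row voltages, and each column current
    I_j = sum_i G_ij * V_i is read out in a single analog step
    (Ohm's law for the products, Kirchhoff's current law for the sums)."""
    w = np.asarray(weights, dtype=float)
    g = g_min + (g_max - g_min) * w / w.max()                    # weight -> conductance
    g = g * (1.0 + variability * rng.standard_normal(g.shape))   # device-to-device spread
    return np.asarray(voltages) @ g                              # column currents

w = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 inputs, 3 outputs
v = np.array([0.1, 0.2, 0.0, 0.3])       # input voltages
print(crossbar_mvm(w, v))                # three column currents, in amperes
```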
Recent progress has been substantial. In February 2024, researchers demonstrated a circuit architecture that enables low-precision analog devices to perform high-precision computing tasks. The secret lies in using a weighted sum of multiple devices to represent one number, with subsequently programmed devices compensating for preceding programming errors. This breakthrough was achieved not just in academic settings but in cutting-edge System-on-Chip designs, with memristor-based neural processing units fabricated in standard commercial foundries.
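The published circuit is considerably more sophisticated, but the core idea can be sketched in a few lines: represent one number as a weighted sum of several imprecise devices, and let each newly programmed device absorb the error left behind by its predecessors. The write-noise level and the scaling factor below are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def program_device(target, write_noise=0.05):
    """Programming an analog device only lands near the requested value."""
    return target + write_noise * rng.standard_normal()

def encode(value, n_devices=4, base=0.1):
    """Represent one number as sum_k (base**k) * d_k, programming each
    device to cancel the residual error left by the previous ones."""
    devices = []
    for k in range(n_devices):
        residual = value - sum(d * base**i for i, d in enumerate(devices))
        devices.append(program_device(residual / base**k))
    return devices

value = 0.7314
devices = encode(value)
reconstructed = sum(d * 0.1**i for i, d in enumerate(devices))
print(f"target {value:.4f}, reconstructed {reconstructed:.6f}")
```

Each extra device buys roughly one more digit of effective precision, which is the sense in which low-precision devices can deliver high-precision results.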
In 2025, researchers presented a memristor-based analog-to-digital converter featuring adaptive quantisation for diverse output distributions. Compared to state-of-the-art designs, this converter achieved a 15-fold improvement in energy efficiency and nearly 13-fold reduction in area. The trajectory is clear: memristor technology is maturing from laboratory curiosity to commercial viability.
Yet challenges remain. Current research highlights key issues including device variation, the need for efficient peripheral circuitry, and systematic co-design and optimisation. By integrating advances in flexible electronics, AI hardware, and three-dimensional packaging, memristor logic gates are expected to support scalable, reconfigurable computing in edge intelligence and in-memory processing systems.
The Economics of Imitation
Even if neuromorphic systems could perfectly replicate biological neural function, the economics of silicon manufacturing impose their own constraints. The global neuromorphic computing market was valued at approximately 28.5 million US dollars in 2024, projected to grow to over 1.3 billion by 2030. These numbers, while impressive in growth rate, remain tiny compared to the hundreds of billions spent annually on conventional semiconductor manufacturing.
Scale matters in chip production. The fabs that produce cutting-edge processors cost tens of billions of dollars to build and require continuous high-volume production to amortise those costs. Neuromorphic chips, with their specialised architectures and limited production volumes, cannot access the same economies of scale. The manufacturing processes are not yet optimised for large-scale production, resulting in high costs per chip.
This creates a chicken-and-egg problem. Without high-volume applications, neuromorphic chips remain expensive. Without affordable chips, applications remain limited. The industry is searching for what some call a “killer app,” the breakthrough use case that would justify the investment needed to scale production.
Energy costs may provide that driver. Training a single large language model can consume electricity worth millions of dollars. Data centres worldwide consume over one percent of global electricity, and that fraction is rising. If neuromorphic systems can deliver on their promise of dramatically reduced power consumption, the economic equation shifts.
In April 2025, during the annual International Conference on Learning Representations, researchers demonstrated the first large language model adapted to run on Intel's Loihi 2 chip. It achieved accuracy comparable to GPU-based models while using half the energy. This milestone represents meaningful progress, but “half the energy” is still a long way from the femtojoule-per-operation regime of biological synapses. The gap between silicon neuromorphic systems and biological brains remains measured in orders of magnitude.
Beyond the Brain Metaphor
And this raises a disquieting question: what if the biological metaphor is itself a constraint?
The brain evolved under pressures that have nothing to do with the tasks we ask of artificial intelligence. It had to fit inside a skull. It had to run on the chemical energy of glucose. It had to develop through embryogenesis and remain plastic throughout a lifetime. It had to support consciousness, emotion, social cognition, and motor control simultaneously. These constraints shaped its architecture in ways that may be irrelevant or even counterproductive for artificial systems.
Consider memory. Biological memory is reconstructive rather than reproductive. We do not store experiences like files on a hard drive; we reassemble them from distributed traces each time we remember, which is why memories are fallible and malleable. This is fine for biological organisms, where perfect recall is less important than pattern recognition and generalisation. But for many computing tasks, we want precise storage and retrieval. The biological approach is a constraint imposed by wet chemistry, not an optimal solution we should necessarily imitate.
Or consider the brain's operating frequency. Neurons fire at roughly 10 hertz, while transistors switch at gigahertz, a factor of one hundred million faster. IBM researchers realised that event-driven spikes use silicon-based transistors inefficiently. If synapses in the human brain operated at the same rate as a laptop, as one researcher noted, “our brain would explode.” The slow speed of biological neurons is an artefact of electrochemical signalling, not a design choice. Forcing silicon to mimic this slowness wastes most of its speed advantage.
These observations suggest that the most energy-efficient computing paradigm for silicon may have no biological analogue at all.
Alternative Paradigms Without Biological Parents
Thermodynamic computing represents perhaps the most radical departure from both conventional and neuromorphic approaches. Instead of fighting thermal noise, it harnesses it. The approach exploits the natural stochastic behaviour of physical systems, treating heat and electrical noise not as interference but as computational resources.
The startup Extropic has developed what they call a thermodynamic sampling unit, or TSU. Unlike CPUs and GPUs that perform deterministic computations, TSUs produce samples from programmable probability distributions. The fundamental insight is that the random behaviour of “leaky” transistors, the very randomness that conventional computing engineering tries to eliminate, is itself a powerful computational resource. Simulations suggest that running denoising thermodynamic models on TSUs could be 10,000 times more energy-efficient than equivalent algorithms on GPUs.
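In conventional software terms, what a TSU is claimed to do natively looks like the small Gibbs sampler below, which draws samples from a programmable energy-based distribution. This is a deterministic-hardware analogy written for illustration, not a description of Extropic's circuits; the coupling matrix and biases are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_states(coupling, bias, n_sweeps=2000):
    """Gibbs-sample binary states s_i in {-1, +1} from a distribution
    proportional to exp(s.J.s/2 + b.s), the kind of programmable
    probability distribution a thermodynamic sampler would draw from
    physically rather than iteratively."""
    n = len(bias)
    s = rng.choice([-1, 1], size=n)
    for _ in range(n_sweeps):
        i = rng.integers(n)
        local_field = coupling[i] @ s - coupling[i, i] * s[i] + bias[i]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * local_field))
        s[i] = 1 if rng.random() < p_up else -1
    return s

J = np.array([[0.0, 1.0],     # two units coupled so they prefer to agree
              [1.0, 0.0]])
b = np.array([0.2, -0.1])
print(sample_states(J, b))    # one sample, typically [1 1] or [-1 -1]
```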
Crucially, thermodynamic computing sidesteps the scaling challenges that plague quantum computing. While quantum computers require cryogenic temperatures, isolation from environmental noise, and exotic fabrication processes, thermodynamic computers can potentially be built using standard CMOS manufacturing. They embrace the thermal environment that quantum computers must escape.
Optical computing offers another path forward. In December 2024, researchers at MIT demonstrated a fully integrated photonic processor that performs all key computations of a deep neural network optically on-chip. The device completed machine-learning classification tasks in less than half a nanosecond while achieving over 92% accuracy. Notably, the chip was fabricated using commercial foundry processes, suggesting a path to scalable production.
The advantages of photonics are fundamental. Optical signals travel at the speed of light, free of the resistive and capacitive delays that slow electrical interconnects. Photons do not interact with each other, enabling massive parallelism without interference. Heat dissipation is minimal. Bandwidth is essentially unlimited. Work at the quantum limit has demonstrated optical neural networks operating at just 0.038 photons per multiply-accumulate operation, approaching fundamental physical limits of energy efficiency.
Yet photonic computing faces its own challenges. Implementing nonlinear functions, essential for neural network computation, is difficult in optics precisely because photons do not interact easily. The MIT team's solution was to create nonlinear optical function units that combine electronics and optics, a hybrid approach that sacrifices some of the purity of all-optical computing for practical functionality.
Hyperdimensional computing takes inspiration from the brain but in a radically simplified form. Instead of modelling individual neurons and synapses, it represents concepts as very high-dimensional vectors, typically with thousands of dimensions. These vectors can be combined using simple operations like addition and multiplication, with the peculiar properties of high-dimensional spaces ensuring that similar concepts remain similar and different concepts remain distinguishable.
The approach is inherently robust to noise and errors, properties that emerge from the mathematics of high-dimensional spaces rather than from any biological mechanism. Because the operations are simple, implementations can be extremely efficient, and the paradigm maps well onto both conventional digital hardware and novel analog substrates.
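A toy implementation makes the arithmetic tangible. The sketch below uses random bipolar vectors with ten thousand dimensions, elementwise multiplication for binding and majority-sign addition for bundling; these particular choices are one common convention among several, not a canonical standard.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000                                           # hypervector dimensionality

def hv():          return rng.choice([-1, 1], size=D)     # random bipolar vector
def bind(a, b):    return a * b                            # associate two concepts
def bundle(*vs):   return np.sign(np.sum(vs, axis=0))      # superpose concepts
def sim(a, b):     return float(a @ b) / D                 # cosine-like similarity

# Encode "capital-of(France) = Paris" and "capital-of(Italy) = Rome" in one vector.
france, paris, italy, rome = hv(), hv(), hv(), hv()
memory = bundle(bind(france, paris), bind(italy, rome))

# Unbinding with france recovers something close to paris and unlike rome.
query = bind(memory, france)
print(f"sim(query, paris) = {sim(query, paris):.2f}")   # noticeably positive
print(f"sim(query, rome)  = {sim(query, rome):.2f}")    # near zero
```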
Reservoir computing exploits the dynamics of fixed nonlinear systems to perform computation. The “reservoir” can be almost anything: a recurrent neural network, a bucket of water, a beam of light, or even a cellular automaton. Input signals perturb the reservoir, and a simple readout mechanism learns to extract useful information from the reservoir's state. Training occurs only at the readout stage; the reservoir itself remains fixed.
This approach has several advantages. By treating the reservoir as a “black box,” it can exploit naturally available physical systems for computation, reducing the engineering burden. Classical and quantum mechanical systems alike can serve as reservoirs. The computational power of the physical world is pressed into service directly, rather than laboriously simulated in silicon.
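An echo state network is the textbook software instance of this idea, and a minimal one fits in a screenful of Python. The reservoir below is a fixed random recurrent network; only the final linear readout is fitted, here by ridge regression on a toy next-step prediction task. The sizes and scaling constants are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fixed random reservoir: only the linear readout is ever trained.
n_res, n_steps = 200, 1000
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep the dynamics stable

u = np.sin(0.2 * np.arange(n_steps))[:, None]              # input signal
target = np.roll(u, -1, axis=0)                            # task: predict the next step

states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W_res @ x + W_in @ u[t])                    # perturb the reservoir
    states[t] = x

# Ridge-regression readout on the reservoir states (skipping the warm-up).
warm, ridge = 100, 1e-6
S, y = states[warm:-1], target[warm:-1]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
pred = states[-2] @ W_out
print(f"prediction {pred[0]:.3f} vs target {target[-2, 0]:.3f}")
```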
The Fidelity Paradox
So we return to the question posed at the outset: to what extent do current neuromorphic and in-memory computing approaches represent faithful translations of biological principles versus engineering approximations constrained by silicon physics and manufacturing economics?
The honest answer is: mostly the latter. Current neuromorphic systems capture certain aspects of biological neural computation, principally the co-location of memory and processing, the use of spikes as information carriers, and some forms of synaptic plasticity, while necessarily abandoning others. The stochasticity, the temporal dynamics, the dendritic computation, the neuromodulation, the glial involvement, and countless other biological mechanisms are simplified, approximated, or omitted entirely.
This is not necessarily a criticism. Engineering always involves abstraction and simplification. The question is whether the aspects retained are the ones that matter for efficiency, and whether the aspects abandoned would matter if they could be practically implemented.
Here the evidence is mixed. Neuromorphic systems do demonstrate meaningful gains for certain tasks. Intel's Loihi achieves energy-efficiency improvements of 100 to 10,000 times over conventional approaches on specific workloads. IBM's TrueNorth can perform 46 billion synaptic operations per second per watt. These are substantial achievements.
But they remain far from biological efficiency. The brain achieves femtojoule-per-operation efficiency; current neuromorphic hardware typically operates in the picojoule range or above, a gap of three to six orders of magnitude. Researchers have achieved artificial synapses operating at approximately 1.23 femtojoules per synaptic event, rivalling biological efficiency, but scaling these laboratory demonstrations to practical systems remains a formidable challenge.
The SpiNNaker 2 system under construction at TU Dresden, projected to incorporate 5.2 million ARM cores distributed across 70,000 chips in 10 server racks, represents the largest neuromorphic system yet attempted. One SpiNNaker2 chip contains 152,000 neurons and 152 million synapses across its 152 cores. It targets applications in neuroscience simulation and event-based AI, but widespread commercial deployment remains on the horizon rather than in the present.
Manufacturing Meets Biology
The constraints of silicon manufacturing interact with biological metaphors in complex ways. Neuromorphic chips require novel architectures that depart from the highly optimised logic and memory designs that dominate conventional fabrication. This means they cannot fully leverage the massive investments that have driven conventional chip performance forward for decades.
The BrainScaleS-2 system uses a mixed-signal design that combines analog neural circuits with digital control logic. This approach captures more biological fidelity than purely digital implementations but requires specialised fabrication and struggles with device-to-device variation. Memristor-based approaches offer elegant physics but face reliability and manufacturing challenges that CMOS transistors solved decades ago.
Some researchers are looking to materials beyond silicon entirely. Two-dimensional materials like graphene and transition metal dichalcogenides offer unique electronic properties that could enable new computational paradigms. By virtue of their atomic thickness, 2D materials represent the ultimate limit for downscaling. Spintronics exploits electron spin rather than charge for computation, with device architectures achieving approximately 0.14 femtojoules per operation. Organic electronics promise flexible, biocompatible substrates. Each of these approaches trades the mature manufacturing ecosystem of silicon for potentially transformative new capabilities.
The Deeper Question
Perhaps the deepest question is whether we should expect biological and silicon-based computing to converge at all. The brain and the processor evolved under completely different constraints. The brain is an electrochemical system that developed over billions of years of evolution, optimised for survival in unpredictable environments with limited and unreliable energy supplies. The processor is an electronic system engineered over decades, optimised for precise, repeatable operations in controlled environments with reliable power.
The brain's efficiency arises from its physics: the slow propagation of electrochemical signals, the massive parallelism of synaptic computation, the integration of memory and processing at the level of individual connections, the exploitation of stochasticity for probabilistic inference. These characteristics are not arbitrary design choices but emergent properties of wet, carbon-based, ion-channel-mediated computation. The brain's cognitive power emerges from a collective form of computation extending over very large ensembles of sluggish, imprecise, and unreliable components.
Silicon's strengths are different: speed, precision, reliability, manufacturability, and the ability to perform billions of identical operations per second with deterministic outcomes. These characteristics emerge from the physics of electron transport in crystalline semiconductors and the engineering sophistication of nanoscale fabrication.
Forcing biological metaphors onto silicon may obscure computational paradigms that exploit silicon's native strengths rather than fighting against them. Thermodynamic computing, which embraces thermal noise as a resource, may be one such paradigm. Photonic computing, which exploits the speed and parallelism of light, may be another. Hyperdimensional computing, which relies on mathematical rather than biological principles, may be a third.
None of these paradigms is necessarily “better” than neuromorphic computing. Each offers different trade-offs, different strengths, different suitabilities for different applications. The landscape of post-von Neumann computing is not a single path but a branching tree of possibilities, some inspired by biology and others inspired by physics, mathematics, or pure engineering intuition.
Where We Are, and Where We Might Go
The current state of neuromorphic computing is one of tremendous promise constrained by practical limitations. The theoretical advantages are clear: co-located memory and processing, event-driven operation, native support for temporal dynamics, and potential for dramatic energy efficiency improvements. The practical achievements are real but modest: chips that demonstrate order-of-magnitude improvements for specific workloads but remain far from the efficiency of biological systems and face significant scaling challenges.
The field is at an inflection point. The projected 45-fold growth in the neuromorphic computing market by 2030 reflects genuine excitement about the potential of these technologies. The demonstration of large language models on neuromorphic hardware in 2025 suggests that even general-purpose AI applications may become accessible. The continued investment by major companies like Intel, IBM, Sony, and Samsung, alongside innovative startups, ensures that development will continue.
But the honest assessment is that we do not yet know whether neuromorphic computing will deliver on its most ambitious promises. The biological brain remains, for now, in a category of its own when it comes to energy-efficient general intelligence. Whether silicon can ever reach biological efficiency, and whether it should try to or instead pursue alternative paradigms that play to its own strengths, remain open questions.
What is becoming clear is that the future of computing will not look like the past. The von Neumann architecture that has dominated for seventy years is encountering fundamental limits. The separation of memory and processing, which made early computers tractable, has become a bottleneck that consumes energy and limits performance. In-memory computing is an emerging non-von Neumann computational paradigm that keeps alive the promise of achieving energy efficiencies on the order of one femtojoule per operation. Something different is needed.
That something may be neuromorphic computing. Or thermodynamic computing. Or photonic computing. Or hyperdimensional computing. Or reservoir computing. Or some hybrid not yet imagined. More likely, it will be all of these and more, a diverse ecosystem of computational paradigms each suited to different applications, coexisting rather than competing.
The brain, after all, is just one solution to the problem of efficient computation, shaped by the particular constraints of carbon-based life on a pale blue dot orbiting an unremarkable star. Silicon, and the minds that shape it, may yet find others.
References and Sources
“Communication consumes 35 times more energy than computation in the human cortex, but both costs are needed to predict synapse number.” Proceedings of the National Academy of Sciences (PNAS). https://www.pnas.org/doi/10.1073/pnas.2008173118
“Can neuromorphic computing help reduce AI's high energy cost?” PNAS, 2025. https://www.pnas.org/doi/10.1073/pnas.2528654122
“Organic core-sheath nanowire artificial synapses with femtojoule energy consumption.” Science Advances. https://www.science.org/doi/10.1126/sciadv.1501326
Intel Loihi Architecture and Specifications. Open Neuromorphic. https://open-neuromorphic.org/neuromorphic-computing/hardware/loihi-intel/
Intel Loihi 2 Specifications. Open Neuromorphic. https://open-neuromorphic.org/neuromorphic-computing/hardware/loihi-2-intel/
SpiNNaker Project, University of Manchester. https://apt.cs.manchester.ac.uk/projects/SpiNNaker/
SpiNNaker 2 Specifications. Open Neuromorphic. https://open-neuromorphic.org/neuromorphic-computing/hardware/spinnaker-2-university-of-dresden/
BrainScaleS-2 System Documentation. Heidelberg University. https://electronicvisions.github.io/documentation-brainscales2/latest/brainscales2-demos/fp_brainscales.html
“Emerging Artificial Neuron Devices for Probabilistic Computing.” Frontiers in Neuroscience, 2021. https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2021.717947/full
“Exploiting noise as a resource for computation and learning in spiking neural networks.” Cell Patterns, 2023. https://www.sciencedirect.com/science/article/pii/S2666389923002003
“Thermodynamic Computing: From Zero to One.” Extropic. https://extropic.ai/writing/thermodynamic-computing-from-zero-to-one
“Thermodynamic computing system for AI applications.” Nature Communications, 2025. https://www.nature.com/articles/s41467-025-59011-x
“Photonic processor could enable ultrafast AI computations with extreme energy efficiency.” MIT News, December 2024. https://news.mit.edu/2024/photonic-processor-could-enable-ultrafast-ai-computations-1202
“Quantum-limited stochastic optical neural networks operating at a few quanta per activation.” PMC, 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC11698857/
“2025 IEEE Study Leverages Silicon Photonics for Scalable and Sustainable AI Hardware.” IEEE Photonics Society. https://ieeephotonics.org/announcements/2025ieee-study-leverages-silicon-photonics-for-scalable-and-sustainable-ai-hardwareapril-3-2025/
“Recent advances in physical reservoir computing: A review.” Neural Networks, 2019. https://www.sciencedirect.com/science/article/pii/S0893608019300784
“Brain-inspired computing systems: a systematic literature review.” The European Physical Journal B, 2024. https://link.springer.com/article/10.1140/epjb/s10051-024-00703-6
“Current opinions on memristor-accelerated machine learning hardware.” Solid-State Electronics, 2025. https://www.sciencedirect.com/science/article/pii/S1359028625000130
“A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks.” PMC, 2015. https://pmc.ncbi.nlm.nih.gov/articles/PMC4438254/
“Updated energy budgets for neural computation in the neocortex and cerebellum.” Journal of Cerebral Blood Flow & Metabolism, 2012. https://pmc.ncbi.nlm.nih.gov/articles/PMC3390818/
“Stochasticity from function – Why the Bayesian brain may need no noise.” Neural Networks, 2019. https://www.sciencedirect.com/science/article/pii/S0893608019302199
“Deterministic networks for probabilistic computing.” PMC, 2019. https://ncbi.nlm.nih.gov/pmc/articles/PMC6893033
“Programming memristor arrays with arbitrarily high precision for analog computing.” USC Viterbi, 2024. https://viterbischool.usc.edu/news/2024/02/new-chip-design-to-enable-arbitrarily-high-precision-with-analog-memories/
“Advances of Emerging Memristors for In-Memory Computing Applications.” PMC, 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12508526/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk