Cognitive Foreclosure: How AI Is Quietly Reshaping a Generation of Minds

On a Tuesday morning in a primary school on the outskirts of Melbourne, a nine-year-old is asked to work out, without help, why a character in a short story is lying to his mother. She reads the paragraph twice. She frowns. Then she reaches for the tablet on the desk beside her, not out of defiance, but out of something that looks more like a reflex, the way a left-handed child reaches for a pencil. Her teacher, watching from the back of the room, later describes the gesture as “the most ordinary thing in the world, and the most frightening thing I see all day”. The girl has been using a chatbot to answer comprehension questions since she was seven. When her teacher gently removes the tablet and asks her to try again, the girl sits very still for a long moment, and then she begins to cry. Not because she is upset about the story. Because she does not know where to start.
The teacher who told me this story, and who asked that neither she nor her school be named because the parents in her catchment are already litigious about screen-time policies, says she has been teaching for twenty-two years. She has seen phonics wars, whole-language revivals, iPads promised as the saviour of literacy and then quietly stripped from her classroom, a pandemic, a long tail of pandemic, and the slow arrival of tools she still struggles to describe without sounding apocalyptic or ridiculous. What she has not seen before, she says, is a child who reaches for a machine not to cheat, but because she genuinely does not understand that thinking is something a person can do by herself.
That scene, or some version of it, is the one haunting a quieter argument now running beneath the louder one about AI and work. The loud argument is about jobs: which ones the models will take, which ones they will refashion, whether the productivity dividend will be broadly shared or narrowly hoarded. It is a serious argument, and it is the argument most of the research funding is chasing. But the quieter one, the one that turns up in developmental psychology journals, in Senate committee testimony, in the footnotes of arXiv preprints, is about something else. It is about whether a generation of children is growing up in an environment where the mental work that would have built their minds is being done for them, so reliably and so invisibly, that nobody, not even the children themselves, will be able to tell what has been lost until the loss is structural and the windows for repair have already shut.
The distinction nobody was making
In March 2026, a piece called “Adults Lose Skills to AI. Children Never Build Them.” appeared on the Psychology Today site under the byline of a researcher writing in its Algorithmic Mind column. The argument it makes is small and precise, and once you have seen it, the rest of the debate looks blurry. Adults who hand cognitive tasks to AI, the piece says, are offloading skills they already possess. The capacity existed; the neural scaffolding was built; the effortful years of doing the thing for themselves left behind an internal model that persists even when the external crutch is taken away. An accountant who uses a spreadsheet still knows, in some muscle-memory way, how the calculation should go. A journalist who leans on autocomplete still has, somewhere, the instinct for the shape of a sentence. This kind of offloading is what the piece calls atrophy. It is recoverable. Pull the tool away, do the exercise for a while, and the capacity comes back, stiff at first and then easier, like a limb out of a cast.
What happens to children, the piece argues, is not atrophy. It is foreclosure. A child who has never learnt to structure an argument, but who has been using AI to structure arguments since she was seven, is not weakening a capacity she already owns. She is skipping the developmental step at which the capacity would have been assembled in the first place. There is no cast to remove because there is no limb underneath. And because the child has no independent baseline, no memory of a self who used to be able to do this without help, she cannot recognise what is missing. She cannot mourn what she never had. From the inside, foreclosure does not feel like a loss. It feels like the way the world has always been.
This is the framing that the wider AI-and-cognition debate has largely missed, and its usefulness is that it cuts cleanly through a conversation that has been going round in circles since at least the mid-2010s. The calculator analogy, which is the default comfort blanket reached for whenever anyone raises concerns about AI in classrooms, assumes an adult model of cognition: people who already know their times tables can use a calculator without forgetting them, so children who already know how to write can use a chatbot without forgetting how. The problem is that the second clause is doing an enormous amount of quiet work. It presupposes the very thing AI in early education calls into question, which is whether the children in front of the tablet ever acquired the underlying capacity to begin with.
The Psychology Today framing also clarifies why “AI is just the new calculator” has always been the wrong metaphor, even for adults. Calculators replaced a narrow, visible, easily measurable skill: arithmetic drill. You could tell, at a glance, whether a sixteen-year-old could do long division. You could not tell, at a glance, whether a sixteen-year-old could construct an argument, weigh contradictory evidence, or notice when a paragraph did not quite make sense. The cognitive work that large language models absorb is precisely the invisible, foundational, harder-to-assess kind. You do not find out what has been foreclosed until the child is twenty-three, in her first real job, staring at a problem that no prompt will dissolve.
What the Fortune story actually said
The Psychology Today piece was not written in a vacuum. A few weeks earlier, Fortune had published a story, drawing on testimony the neuroscientist Jared Cooney Horvath gave to the United States Senate Committee on Commerce, Science, and Transportation in January 2026, with a headline sharp enough to survive the algorithmic churn: Gen Z, Horvath told senators, appeared to be the first generation in modern history to test as less cognitively capable than their parents. The follow-up Fortune story in March put a figure on the problem. The United States, the piece argued, had spent around thirty billion dollars since the mid-2000s replacing textbooks with laptops and tablets, and what it had bought for the money was not smarter children. It was the reversal of a century-long trend.
Horvath's headline claim is not, strictly, a claim about AI. It is a claim about screens, edtech, and the accumulated effects of two decades in which classrooms were rebuilt around the assumption that digital tools would make children sharper. What the actual data show, according to his Senate testimony, is something closer to the opposite. He cited the OECD's Programme for International Student Assessment, whose 2022 round, the most recent for which full results are public, recorded what the OECD itself described as an unprecedented drop in fifteen-year-olds' performance: reading down ten score points, mathematics down almost fifteen, compared with the 2018 cycle, with the mathematics decline three times larger than any previous consecutive change and not attributable solely to the pandemic. Science was flat. Reading had been drifting downward for about a decade. These are, by the OECD's own accounting, equivalent to roughly three-quarters of a year of lost learning, across 81 participating countries and economies, involving around 700,000 children.
It is worth being careful about what Horvath did and did not say. He did not say that AI has broken the minds of Generation Z. The large language models that most worry the developmental psychologists arrived too recently to have shaped the cohorts PISA was measuring. What he said was that the decline began somewhere around 2010, which is the moment smartphones became ambient in teenagers' lives and the moment American school districts started buying laptops by the truckload. The declines, he added, cut across attention, memory, literacy, numeracy, executive function and general IQ. He argued that this is consistent with a structural mismatch between how human cognition develops and how digital platforms are engineered to harvest attention, fragment focus and reward task-switching. He also argued, importantly, that the effects appear to be environmental rather than genetic, and therefore at least in principle reversible.
Taken alone, the Horvath testimony would be a disputable but interesting data point. Taken together with the wider Flynn-effect-reversal literature, it becomes harder to wave away. The Flynn effect, named for the political scientist James Flynn, was the observation that IQ scores rose steadily, by roughly three points per decade, across most of the twentieth century in most of the developed world. It is one of the most replicated findings in psychometrics. What recent work, including the Bratsberg and Rogeberg sibling study in Norway, has found is that this rise began to stall in the 1990s and, in some countries, has reversed. Norway, Denmark, Finland, the United Kingdom and France have all produced cohorts whose measured IQ is lower than their parents'. The Bratsberg and Rogeberg work is particularly hard to explain away because it uses within-family comparisons, which rule out the usual dysgenic stories about immigration or differential fertility. Whatever is causing the reversal is environmental, which means it was built by choices and could be unbuilt by different ones.
This does not mean Horvath's stronger framing is uncontested. Critics point out, fairly, that the skills PISA tests, and the skills IQ tests were built to measure, are not the whole of cognition. Some of what looks like decline may be a genuine loss of older competences while newer ones, digital navigation, rapid information filtering, cross-modal search, are not being captured by instruments designed in the 1960s. Some of it may be a confound with the pandemic. Some of it may be a sampling artefact as participation rates drift. These are real objections. They are also, collectively, not enough to dispose of the trend. The honest reading of the evidence is that something is happening to the cognitive capacities of young people across several developed countries, that it predates generative AI by at least a decade, and that the arrival of generative AI has dropped an accelerant onto whatever fire was already lit.
How effort becomes capacity
The reason the Fortune story and the Psychology Today framing matter, and the reason they are more than just another moral panic about screens, is that there is a mechanism. The mechanism is old, well replicated, and wildly inconvenient for anyone who would like to believe that an AI tutor is the same as a human one with lower overheads.
Robert Bjork, the UCLA cognitive psychologist who, with his wife Elizabeth Bjork, spent the better part of four decades mapping how people actually learn, coined the term “desirable difficulties” in 1994. The phrase is counterintuitive by design. What Bjork's work showed, across hundreds of studies in his lab and elsewhere, is that conditions which make learning feel slower and harder in the moment, such as spacing practice sessions out, interleaving different topics, forcing yourself to retrieve an answer before checking it, generating your own examples, produce dramatically better long-term retention and transfer than conditions which make learning feel smooth. The cognitive struggle is not a bug on the way to understanding. It is the thing that builds the understanding. The feeling of effortful recall, the moment when your brain has to fetch something that is almost but not quite there, is, as far as anyone can tell, the moment at which the neural trace is strengthened. Easy learning is forgettable learning. Hard, but achievable, learning is the kind that lasts.
Retrieval practice, the Bjorks' most famous technique, is the clearest illustration. In a now-canonical 2006 study, the memory researchers Henry Roediger and Jeffrey Karpicke showed that students who spent part of their study time testing themselves on the material, rather than simply re-reading it, recalled roughly fifty per cent more of it a week later, even though in the moment the re-readers felt they knew the material better. The test-takers felt worse about their own learning and had actually learnt more. This gap between the feeling of fluency and the reality of competence is, for the Bjorks, the central pedagogical fact of the twentieth century, and it is exactly the fact that AI tools are engineered, by commercial necessity, to flatter.
Now consider what happens when a child faces a writing task and asks a chatbot to help. The child types a prompt. The model returns a draft. The child reads the draft, perhaps edits it, perhaps not, and submits. Somewhere in that loop, the part where the child had to sit with the blank page, feel the discomfort of not knowing where to start, retrieve the half-remembered fragment of an idea, generate a sentence and then judge whether the sentence was any good, has been excised. The child experiences a product. What has been bypassed is the process, and the process is the learning. The writing task, in Bjork's terms, has been stripped of every desirable difficulty that made it pedagogically useful in the first place, and what is left is a performance.
It is tempting to assume this is a problem only for writing. It is not. A preprint posted to arXiv by the Anthropic fellows Judy Hanwen Shen and Alex Tamkin in late January 2026, titled “How AI Impacts Skill Formation” (arXiv:2601.20245), ran a randomised controlled trial with fifty-two professional software engineers who used Python regularly but had not worked with Trio, a library for asynchronous programming. Half used an AI assistant to complete two feature-building tasks. Half did the tasks by hand. Both groups then took a comprehension quiz covering code reading, debugging, conceptual understanding and related competences. The AI-assisted engineers finished the tasks only marginally faster than the controls, but they scored seventeen percentage points lower on the comprehension quiz, fifty per cent versus sixty-seven per cent on average, with the steepest deficit in debugging. The paper's bluntest line is that AI assistance, in this setup, bought almost no productivity and cost a substantial chunk of learning.
The Shen and Tamkin paper is important for two reasons. The first is its methodological cleanness: it is a randomised trial, with adults, in a domain where the outputs can be scored objectively, and it still finds that AI use impairs skill formation. Adults are the easy case, the case the Psychology Today framing says should be recoverable, and the study shows the effect arriving even there. The second reason is the paper's subtler finding, which is that not all AI interactions are equivalent. The authors identify six distinct patterns of how participants used the model, and three of them, broadly, the ones where users asked the AI conceptual questions, asked for explanations of code rather than code itself, or treated the model as a tutor rather than a dictation machine, preserved learning outcomes. The other three did not. The difference is precisely the amount of effortful processing the user still did for themselves. When the AI absorbed the cognitive work, skill formation suffered. When the AI augmented the cognitive work without replacing it, skill formation survived.
This is the mechanism that explains why the child in the Melbourne classroom cried. For her, every piece of writing she had ever done was an interaction pattern in which the model absorbed the cognitive work. The capacity to sit with a blank page and do the effortful retrieval herself had not atrophied; it had never been built. When the scaffold was removed, there was nothing underneath it, because the scaffold, in her experience, was what a paragraph was.
The windows that close in the dark
Developmental neuroscience has a concept that makes all of this more alarming than it would otherwise be, and that is the concept of the critical period. The idea, first established in work on the visual cortex by David Hubel and Torsten Wiesel in the 1960s, which won them the Nobel Prize, is that brains are unusually plastic at specific points in development and then harden into something more fixed. If a kitten's eye is sewn shut during the critical period for binocular vision, the animal never develops normal depth perception, even after the eye is opened. The relevant machinery has simply been pruned away. The window closes. The brain moves on.
The critical-period literature has since been extended, with varying degrees of confidence, to language, hearing, phonological discrimination, some aspects of social cognition, and, more cautiously, to higher-order skills like executive function and abstract reasoning. Nobody serious claims that essay writing has a critical period in the Hubel-Wiesel sense. The developmental windows for the cognitive skills most relevant to schoolwork are longer, softer, more “sensitive periods” than hard critical ones, more like doors that gradually narrow than doors that slam. But the general principle holds: the brain you have at thirty is substantially shaped by which circuits got exercised between the ages of four and fourteen, and the circuits that do not get exercised are quietly pruned in favour of the ones that do. The developing brain is ruthless about not maintaining capacity it does not seem to need.
What Psychology Today's March 2026 piece is really proposing, if you follow the logic through, is that the sensitive period for a whole cluster of cognitive capacities, not just reading and writing but the habits of retrieval, argument, patience with uncertainty, willingness to sit inside a problem, is being spent in environments where those capacities are not needed, because something else is doing the work. The child is not lazy. The child is responding, correctly, to the affordances of her environment. If the environment rewards prompting over thinking, the environment will get children who are very good at prompting and have never developed the cognitive muscle for thinking. The pruning is not a moral failure. It is how brains work.
This is the part of the argument where sensible people want to reach for the calculator analogy again, and it is the part where the analogy most obviously breaks. Calculators do not build arguments or interpret metaphors or quietly suggest that your reasoning is unsound. They do one narrow thing. A large language model does the whole general-purpose cognitive stack. The relevant comparison is not “what happened to mental arithmetic when calculators arrived” but “what would happen to reading if, from the age of four, a machine read everything aloud for you, summarised it, and told you what to think about it”. We have reasonable confidence, from decades of reading research, that the answer would not be “children who read as well as their parents, plus more”. It would be children who never acquired the circuitry that reading builds, and who would struggle to acquire it later, because the window would be smaller and the pruning already done.
The detection problem
If foreclosure is the worry, the next question is how you would even know. This is the problem that makes the whole subject genuinely difficult, because the honest answer is: at the moment, you would not. Not in time.
Consider the instruments. PISA runs every three years and publishes results with a lag of about eighteen months. The most recent full cycle for which results exist is 2022. The next, 2025, will tell us something about the cohort of fifteen-year-olds who were twelve when ChatGPT arrived, but it will tell us in 2026 or 2027, about a tool that reached the public in late 2022, so the lag between capacity loss and its measurement is already four or five years, and those are the fast instruments. Standardised tests administered in individual countries have their own lags, their own methodological controversies, their own periodic rewritings. IQ testing is rare, expensive and freighted with political baggage. The longitudinal studies that produced the Flynn-effect literature take decades to run and decades more to analyse. None of this machinery is built to detect a capacity collapse in real time.
Worse, the instruments we have are disproportionately good at measuring the things AI is already good at. A child who can prompt a chatbot to write a competent five-paragraph essay will produce a competent five-paragraph essay. The assessment, if it is marking surface features, will record a capable student. What the assessment cannot easily see is whether the child could have produced the essay without the machine, whether she could defend any of its claims under gentle questioning, whether she could identify the one sentence in it that is subtly wrong. The symptoms of foreclosure are, by construction, visible only in the conditions the test is not running. This is not a new problem in education. It is the old problem of fluency illusions, the Bjorks' observation that students routinely mistake the feeling of understanding for actual understanding, applied at population scale and accelerated by tools that are very good at generating the feeling.
There are earlier warning lights, but they are easy to miss. Teachers, if you ask them, will often tell you that something has changed. The sort of story the Melbourne teacher told me turns up in quiet rooms at education conferences more and more often: children who do not know how to begin, children who panic when the Wi-Fi goes down, children who can summarise a text without being able to explain what it meant, children who will tell you the answer is “whatever the AI said” and cannot say more. These are noisy anecdotes, easily dismissed as the usual generational grumbling. But teachers were also the first to notice that reading stamina was collapsing, years before any national test caught it, and the national tests eventually caught up. Anecdote at scale is data with the p-values stripped off.
Better instruments exist in principle. Cognitive load tasks, where a child is asked to reason aloud through a problem without a screen, can distinguish between the child who has internalised the process and the child who has only ever observed it. “Structured desisting” protocols, in which pupils are asked to complete a task the hard way while being observed, expose the difference between performance and competence. Neuropsychological batteries can pick up executive-function deficits that do not show up on content tests. None of these are new. All of them are more expensive, more intrusive and less media-friendly than a headline number. None of them are being rolled out at anything like the scale the problem would justify.
The deeper detection problem is temporal. Cognitive capacities, like compound interest, reveal themselves most obviously in the long run. A child who has not built argumentative stamina at nine may look fine at nine, because nine-year-olds are not asked to sustain long arguments. She may look fine at fourteen, when her assessments reward short-form production at which AI excels. The capacity she is missing only becomes load-bearing at nineteen, when she is asked to write a dissertation, or at twenty-six, when she is asked to lead a meeting nobody in the room quite understands, or at thirty-one, when she is the one expected to notice that a model's output is wrong. By that point, the window she would have needed to build the missing capacity in has long since narrowed, and the environment she is in has no incentive to reopen it.
This is what makes the foreclosure framing morally serious rather than merely alarming. If the worry were “children will do less well on tests next year”, we would notice next year. The worry is that children will do roughly as well on tests next year, and the year after, and the year after that, because the tests measure the thing the machine is doing, and the underlying cognitive formation will show up missing only much later, in contexts nobody is tracking, to people who have no baseline against which to know what they lost.
What knowing would demand
It is tempting, at this point in an argument of this kind, to reach for the policy conclusion most congenial to the writer's prior commitments. The restrictionists will want phone bans, chatbot bans, a return to pencils. The optimists will want more AI, of a better kind, with better pedagogical design, and will point, correctly, to the Shen and Tamkin finding that some interaction patterns preserve learning. Both of these are reflexes. Neither of them takes the detection problem seriously.
The harder thing to say is that if the Psychology Today framing is right, even approximately, the response has to be architectural rather than prohibitive. You cannot ban children out of the environment they live in. The environment is the internet, and the internet now has generative models woven into most of its surfaces, and that genie is not returning to its bottle. But you can, in the environments you control, engineer deliberate zones of desirable difficulty: places where the cognitive work is protected from outsourcing not because AI is bad, but because the work is the point. Classrooms that do some things on paper, not as a nostalgic gesture but as a cognitive-science intervention. Assessments that measure process, not just product. Homework that cannot be plausibly completed by a chatbot because it requires the child to explain her reasoning in real time, to a human, without a screen. The Danish school reforms Horvath cited in his Senate testimony, which pulled tablets out of early years and reintroduced pencils and books, are not a Luddite gesture. They are a bet that the developmental window matters more than the device.
Architectural responses also mean taking the detection problem as seriously as the problem itself. If we cannot know whether capacities are foreclosing until the cohort in question is adult and the window has shut, then the only responsible posture is to build, now, the instruments we will need then: longitudinal studies that follow today's seven-year-olds through to adulthood with periodic process-oriented assessments, funding for the boring, non-headline-grabbing work of measuring what is actually happening to attention spans and retrieval ability and argumentative stamina, independence for those studies from the platforms that would rather the results were flattering. This is expensive and unsexy and will produce results on a timescale longer than any electoral cycle. It is also the only way to avoid waking up in 2040 with a generation of adults who cannot do things their parents took for granted, and without the data to show how it happened.
What genuine concern looks like, if you take the evidence seriously, is neither the panic of the restrictionists nor the deflection of the optimists. It looks like a grown-up willingness to say that some things children used to do for themselves were not decoration; they were how the child's mind got built. It looks like designing schools and homes and apps on the assumption that effort is not friction to be smoothed away but the scaffolding on which capacity accretes. It looks like accepting that AI is a permanent feature of the adult environment, and therefore that the business of childhood, more urgently than ever, is to build the cognitive machinery the child will need in order to use those tools as an augmentation rather than a replacement. It looks, finally, like humility about what we do not yet know, and a willingness to act under uncertainty, because the alternative, waiting for proof that will only arrive when it is too late to act on, is a kind of negligence we have rehearsed before, with lead paint and with sugar and with tobacco, and which we keep promising ourselves we will not rehearse again.
The teacher in Melbourne told me the girl who cried over the comprehension question eventually, with coaxing, produced three sentences of her own. They were not very good. They were hers. “That's the first time this term she's thought on the page,” the teacher said. “And I had to physically take the tablet away. I had to sit there and wait. And the worst thing is, I kept wanting to give it back to her. Because it felt cruel. Because she was struggling. And the whole point is that she was supposed to be struggling. That was the lesson. That was the only lesson.”
What Psychology Today's March 2026 piece names is the possibility that the struggle, the messy, tearful, unproductive-looking work of a child sitting with a problem she cannot solve yet, is the developmental window. And the window closes in the dark, unremarked, while everyone is congratulating the child on how fluent her outputs have become. You will not notice when it shuts. You will notice, years later, what does not walk through it.
References
- Psychology Today, March 2026. “Adults Lose Skills to AI. Children Never Build Them.” The Algorithmic Mind column. https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them
- Fortune, 21 February 2026. “Neuroscientist warns Gen Z first generation less cognitively capable than their parents.” https://fortune.com/2026/02/21/laptops-tablets-schools-gen-z-less-cognitively-capable-parents-first-time-cellphone-bans-standardized-test-scores/
- Fortune, 1 March 2026. “American schools are broken: Silicon Valley pushed computers in classrooms, plummeting test scores.” https://fortune.com/2026/03/01/american-schools-broken-silicon-valley-edtech-gen-z-test-scores/
- Shen, Judy Hanwen, and Tamkin, Alex. “How AI Impacts Skill Formation.” arXiv preprint arXiv:2601.20245, January 2026. https://arxiv.org/abs/2601.20245
- Horvath, Jared Cooney. Written testimony before the United States Senate Committee on Commerce, Science, and Transportation, January 2026. https://www.commerce.senate.gov/services/files/A19DF2E8-3C69-4193-A676-430CF0C83DC2
- OECD. PISA 2022 Results (Volume I): The State of Learning and Equity in Education. OECD Publishing, Paris, 2023. https://www.oecd.org/en/publications/pisa-2022-results-volume-i_53f23881-en.html
- Bratsberg, Bernt, and Rogeberg, Ole. “Flynn effect and its reversal are both environmentally caused.” Proceedings of the National Academy of Sciences, 115(26), 2018, pp. 6674-6678.
- Bjork, Robert A., and Bjork, Elizabeth L. “Desirable Difficulties in Theory and Practice.” Journal of Applied Research in Memory and Cognition, 2020. https://bjorklab.psych.ucla.edu/wp-content/uploads/sites/13/2016/07/RBjork_inpress.pdf
- Roediger, Henry L., and Karpicke, Jeffrey D. “Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention.” Psychological Science, 17(3), 2006, pp. 249-255.
- Lee, Hao-Ping, et al. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.” Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Microsoft Research, 2025. https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/
- Hensch, Takao K. “Critical periods of brain development.” Handbook of Clinical Neurology, 2020. https://pubmed.ncbi.nlm.nih.gov/32958196/
- Anthropic. “How AI assistance impacts the formation of coding skills.” Anthropic Research, 2026. https://www.anthropic.com/research/AI-assistance-coding-skills

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk