AI Exposed the Lie: Schools Never Taught Critical Thinking

Nearly seven in ten middle and high school students now say they believe artificial intelligence is eroding their critical thinking skills. They reported this in a December 2025 survey conducted by the RAND Corporation's American Youth Panel. They also reported, in the very same survey, that they are using AI for homework more than ever before, with usage climbing from 48 per cent to 62 per cent in barely seven months. The students, in other words, can see the problem clearly. They simply cannot stop participating in it.
This is an extraordinarily revealing paradox, and it deserves more scrutiny than the predictable hand-wringing it has generated. Because the most uncomfortable question here is not whether ChatGPT is making teenagers worse at thinking. It is whether the education system that ushered AI into classrooms with such breathless enthusiasm ever genuinely valued the kind of independent, rigorous, critical thought it now claims to be losing.
The answer, if you follow the evidence, is not encouraging.
The Paradox in the Numbers
The RAND data is striking in its internal contradictions. Among the 1,214 young people surveyed (aged 12 to 29, all enrolled in school during the 2025-26 academic year), 67 per cent endorsed the statement that “the more students use AI for their schoolwork, the more it will harm their critical thinking skills.” That figure had risen more than ten percentage points in just ten months. The concern was especially pronounced among female students, 75 per cent of whom agreed, compared with 59 per cent of male students.
Yet during the same period, the percentage of middle schoolers using AI for homework leapt from 30 per cent to 46 per cent, and among high schoolers it jumped from 49 per cent to 60 per cent. Most of these students (60 per cent) also expressed concern about using AI for school-related purposes. So they are worried and they are doing it anyway. This is not cognitive dissonance in any simple sense. It is something more structurally interesting: students have correctly diagnosed a systemic problem, but they exist within a system that gives them no rational incentive to behave differently.
Consider the logic from a student's perspective. Assignments are graded. Grades determine university admissions. University admissions determine (or are perceived to determine) life outcomes. If your peers are using AI and getting better grades, opting out is not a principled stand. It is a competitive disadvantage. The students are not confused. They are trapped.
Think of it another way. You are sixteen. You have five GCSEs to revise for, a personal statement to write, and a part-time job. Your classmates are producing polished coursework in half the time it takes you to write a first draft because they are running their ideas through ChatGPT. Your teachers, overwhelmed and under-resourced, cannot reliably tell the difference. The system rewards the output, not the process. In this environment, choosing not to use AI is not intellectual integrity. It is self-sabotage.
Meanwhile, faculty at the university level are sounding alarms with even greater urgency. A national survey conducted by the American Association of Colleges and Universities and Elon University's Imagining the Digital Future Center in November 2025 found that 95 per cent of the 1,057 faculty respondents feared that generative AI would increase student overreliance on the technology. Ninety per cent said it would diminish students' critical thinking skills. Eighty-three per cent said AI would decrease student attention spans. And 78 per cent said cheating on their campuses had increased since these tools became widely available, with 57 per cent saying it had increased significantly.
The teachers see the same thing the students see. The difference is that teachers are surprised. The students are not.
A System That Never Quite Got Round to Critical Thinking
Here is where the conversation gets genuinely uncomfortable. Long before ChatGPT existed, education reformers, cognitive scientists, and classroom teachers themselves were raising the alarm about a system that was systematically undermining higher-order thinking. The culprit was not artificial intelligence. It was standardised testing.
The No Child Left Behind Act of 2001 (NCLB) represented, in the United States at least, the triumph of measurable outcomes over meaningful learning. Under its regime, schools were judged by their students' performance on standardised assessments. The consequences of poor scores were severe: funding cuts, staff dismissals, school closures. The entirely predictable result was what educators came to call “teaching to the test,” a practice in which classroom instruction was narrowed to the specific content and formats that would appear on state exams.
The effects were devastating and well-documented. Subjects not covered by standardised tests, including art, music, physical education, and social studies, were minimised or eliminated outright. Some principals eliminated recess to devote more time to test preparation. Science was replaced with additional maths drills. Social studies gave way to language arts worksheets. The phrase that captured this era most succinctly was “sit, get, spit, forget,” a cycle in which students received information passively, regurgitated it on an exam, and promptly forgot it, having never engaged with it at any depth.
The situation in the United Kingdom has followed a parallel trajectory. Successive reforms, from the introduction of the National Curriculum in 1988 through the expansion of league tables in the 1990s to the intensification of Ofsted inspections, have created an accountability culture that rewards measurable outcomes above all else. Teachers in England report spending enormous amounts of time on assessment preparation, data tracking, and administrative compliance, time that might otherwise be devoted to the kind of open-ended, inquiry-driven teaching that develops critical thinking. The Department for Education published expanded guidance on AI in education in June 2025, stressing that AI tools should support rather than replace subject knowledge and that students still need a strong foundation in reading, writing, and critical thinking to use these tools effectively. But guidance is one thing; structural reform is quite another.
Paulo Freire, the Brazilian educator and philosopher, would have recognised all of this instantly. In his seminal 1968 work “Pedagogy of the Oppressed,” Freire described what he called the “banking model” of education, in which teachers deposit knowledge into the passive receptacles of students' minds, and students are expected to receive, memorise, and repeat. Freire argued that this approach was fundamentally hostile to critical consciousness; the more students worked at storing deposits, the less they developed the critical thinking that would allow them to intervene in the world as transformers of that world. His alternative, critical pedagogy, was rooted in dialogue, in treating students as co-creators of knowledge rather than empty vessels to be filled.
NCLB was, in Freire's terms, the banking model with federal enforcement mechanisms. The UK's accountability framework achieved much the same outcome through different institutional channels. And while NCLB was eventually replaced by the Every Student Succeeds Act (ESSA) in 2015, which offered states greater flexibility in assessment design, the deeper cultural damage had been done. An entire generation of teachers on both sides of the Atlantic had been trained in a system that rewarded compliance over curiosity, memorisation over analysis, and standardised answers over independent thought.
So when commentators now lament that AI is destroying students' capacity for critical thinking, the honest follow-up question is: which critical thinking? When, precisely, was this golden age of independent thought in schools? Because the evidence suggests it was already in serious trouble long before a single student typed a homework question into ChatGPT.
Cognitive Offloading and the Science of Thinking Less
The cognitive science, meanwhile, tells a more nuanced story than either technophiles or technophobes would prefer. Research published in 2025 by Michael Gerlich of SBS Swiss Business School, in the journal Societies, investigated the relationship between AI tool usage and critical thinking through the lens of cognitive offloading, the well-established phenomenon in which humans delegate cognitive tasks to external resources to reduce mental demand.
Gerlich's study surveyed and interviewed 666 participants across diverse age groups and educational backgrounds, finding a significant negative correlation between frequent AI tool use and critical thinking abilities. The numbers were stark: cognitive offloading was strongly correlated with AI tool usage (r = +0.72) and inversely related to critical thinking (r = -0.75). Younger participants, those aged 17 to 25, showed higher dependence on AI tools and lower critical thinking scores compared to older age groups. However, and this is crucial, advanced educational attainment correlated positively with critical thinking skills, suggesting that education, when it works properly, can mitigate some of the cognitive costs of AI reliance. The implication is clear: the problem is not that education cannot protect against cognitive offloading, but that most education systems are not currently designed to do so.
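To make those coefficients concrete, here is a minimal sketch in Python using synthetic data shaped to echo the reported effect sizes. The numbers below are illustrative only; they are not Gerlich's dataset, merely a demonstration of what correlations of that magnitude look like.

```python
# Illustrative only: synthetic data constructed to echo the effect sizes
# Gerlich reports (r = +0.72 for AI use vs cognitive offloading, r = -0.75
# for AI use vs critical thinking). This is NOT the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 666  # matches the study's sample size, purely for flavour

ai_use = rng.normal(0, 1, n)                          # standardised AI-use score
offloading = 0.72 * ai_use + rng.normal(0, 0.69, n)   # positively coupled
critical = -0.75 * ai_use + rng.normal(0, 0.66, n)    # negatively coupled

r_off, _ = pearsonr(ai_use, offloading)
r_crit, _ = pearsonr(ai_use, critical)
print(f"AI use vs cognitive offloading: r = {r_off:+.2f}")
print(f"AI use vs critical thinking:    r = {r_crit:+.2f}")
```

By the conventional benchmarks of social science, where a correlation of 0.5 already counts as large, effects of this size are rare in survey research, which is part of what makes Gerlich's findings so arresting.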
A separate study from Microsoft Research, presented at CHI 2025 (the Conference on Human Factors in Computing Systems), surveyed 319 knowledge workers about their experiences with generative AI. The findings revealed a telling dynamic: higher confidence in AI was associated with less critical thinking, while higher self-confidence was associated with more critical thinking. The research also identified a fundamental shift in the nature of cognitive work, from information gathering to information verification, from problem-solving to AI response integration, and from doing tasks to supervising them.
This matters enormously for students, who are still in the process of building the very cognitive capacities that adults are now choosing to offload. A knowledge worker who has spent twenty years learning to construct arguments, evaluate evidence, and synthesise information can afford to delegate some of those tasks to AI without losing the underlying skill. A teenager who has never fully developed those skills in the first place is in a fundamentally different position. For them, cognitive offloading is not a convenience. It is a developmental short-circuit.
This is not merely a problem of laziness or moral failure. It is a predictable consequence of how human cognition interacts with powerful tools. We have always offloaded cognitive tasks onto external supports, from written language to calculators to search engines. The question with AI is whether the offloading is so comprehensive, and so seamless, that it crosses the line from scaffolding (which is temporary and empowering) to substitution (which is permanent and diminishing).
The critical distinction, as cognitive scientists have noted, is whether AI operates as a scaffold or a substitute. Scaffolding is characterised by temporariness, adaptability, and the goal of strengthening internal capacities. Substitution simply does the thinking for you. And the educational system, in its rush to adopt AI tools, has devoted remarkably little attention to ensuring the former rather than the latter.
The Teacher's Impossible Position
Any honest account of this situation must reckon with the position of teachers themselves, who are caught between contradictory demands with diminishing resources to meet any of them. Nearly half of teachers in the United States and the United Kingdom report chronic burnout. Teacher shortages are endemic. Class sizes in many state schools have grown. Administrative demands consume ever-larger portions of the working week.
Into this environment of exhaustion and scarcity comes AI, marketed to schools and teachers as a solution to the very problems the system has created. District leaders implementing AI tools report that teachers can reclaim an average of 5.9 hours per week by automating lesson planning, grading, and communication tasks. For a profession in crisis, this is not a trivial proposition. If a teacher can use AI to handle routine administrative work and spend more time on meaningful instruction, that sounds like progress.
But the reality is more complicated. Only about one in five teachers works at a school that has an AI policy. Teacher training on the pedagogical use of AI remains inconsistent and often superficial. The gap between the promise of AI as a teaching aid and the lived reality of its implementation is vast. Teachers are being asked to integrate a transformative technology into their practice while simultaneously meeting accountability targets, managing behaviour, differentiating instruction for diverse learners, and coping with the emotional demands of working with young people in an era of escalating mental health challenges.
The result is that AI adoption in schools is happening not through careful pedagogical planning, but through exhaustion. Teachers are adopting AI not because they have been trained to use it well, but because they are too stretched to do without it. And students are adopting AI not because they have been taught to use it critically, but because nobody has given them a compelling reason not to.
The Whiplash of Institutional Adoption
The speed at which schools reversed their positions on AI is itself a revealing story. In January 2023, New York City's Department of Education became one of the first major school systems to ban ChatGPT from its networks and devices. The ban was announced with the gravity of a public health measure, citing concerns about academic integrity and the tool's potential to provide students with answers that lacked critical thinking. Fairfax County Public Schools in Virginia and Austin Independent School District in Texas followed suit, citing child safety and academic integrity.
Within four months, New York City reversed its ban. The reversal came after convening tech industry representatives and educators to evaluate the technology's potential benefits. By 2024, more than three-quarters of educators reported that their districts had not banned ChatGPT or similar tools. The pattern, ban first, then embrace, played out across districts nationwide. Seattle Public Schools, which had initially banned ChatGPT and six additional AI writing assistance websites, similarly softened its stance.
This institutional whiplash is instructive. The initial bans suggested that schools understood, at least intuitively, that AI posed a genuine threat to the learning process. The rapid reversals suggested that this understanding was no match for the combined pressures of industry lobbying, parental expectations, competitive anxiety, and the sheer momentum of a technology that students were already using at home.
The AI in education market tells its own story of institutional capture. Valued at approximately 7 billion dollars in 2025, the sector is projected to grow to nearly 137 billion dollars by 2035, expanding at a compound annual growth rate of over 34 per cent. Major technology companies, including Microsoft, Google, Amazon, and Pearson, have invested heavily in educational AI products. In July 2025 alone, Microsoft announced plans to invest over 4 billion dollars in AI education initiatives. These investments are not philanthropic gestures. They are strategic plays for long-term market dominance in an industry that touches every child in the developed world.
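The arithmetic behind that projection is straightforward to check. A two-line calculation, using the Precedence Research figures cited above, confirms that the stated growth rate and the stated endpoints are consistent:

```python
# Sanity check: does USD 7bn in 2025 growing to ~USD 136.79bn by 2035
# really imply a compound annual growth rate above 34 per cent?
start, end, years = 7.0, 136.79, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~34.6 per cent, matching the cited figure
```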
These are not neutral actors offering disinterested tools. They are companies with revenue models that depend on deep integration into educational infrastructure. When schools adopt their platforms, they are not just choosing a product; they are choosing a pedagogical philosophy, one that often prioritises efficiency, personalisation through algorithmic recommendation, and scalable delivery over the messy, slow, deeply human process of learning to think for oneself.
The Khanmigo Question
Not all educational AI is created equal, and the differences matter. Khan Academy's Khanmigo, launched in limited beta in 2023 and reaching approximately 1.5 million users across 130 countries by the end of 2025, represents a philosophically distinct approach to AI in education. Unlike ChatGPT, Khanmigo is designed not to give answers directly. Instead, it employs a Socratic method, offering hints and guiding questions intended to help students find answers themselves.
According to Khan Academy's own data, 68 per cent of students preferred Khanmigo's approach over ChatGPT for homework help, citing reduced anxiety about cheating. There is, students reported, a real psychological difference between “the AI gave me the answer” and “I figured it out with help.” This is a meaningful distinction. The student who works through a problem with Socratic guidance is still engaging in the cognitive labour that builds understanding. The student who pastes an essay prompt into ChatGPT and submits the output is not.
This distinction matters because it reveals that the problem is not AI per se, but how AI is designed and deployed. A tool built to scaffold learning is fundamentally different from a tool optimised to generate complete, polished outputs on demand. Yet in practice, most students are not using carefully designed educational AI. They are using general-purpose large language models, tools built for productivity, not pedagogy. And the education system has done remarkably little to shape how students interact with these tools.
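The design difference is not mysterious; a large part of it lives in the instructions wrapped around the model. What follows is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name. Khanmigo's actual implementation is proprietary and certainly far more elaborate; the point is only that the same underlying model can be steered from answer engine towards Socratic guide with little more than a system prompt.

```python
# A sketch of the design lever discussed above: the same model behaves as an
# answer engine or a Socratic guide depending on how it is instructed.
# Assumptions: the OpenAI Python SDK (v1+), an OPENAI_API_KEY environment
# variable, and an illustrative model name. This is not Khanmigo's code.
from openai import OpenAI

client = OpenAI()

SOCRATIC_PROMPT = (
    "You are a tutor. Never state the final answer. Respond only with hints, "
    "guiding questions, and requests that the student show their reasoning. "
    "If the student asks for the answer outright, redirect them to the next "
    "smallest step they can take themselves."
)

def tutor_reply(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not Khanmigo's model
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("What is the derivative of x^3 + 2x?"))
```

A prompt does not, of course, solve the problem on its own; students can simply open a different chat window. But it illustrates that scaffold versus substitute is a deliberate design choice, one that general-purpose chatbots do not make by default.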
The gap between what is possible and what is actually happening is enormous. Khanmigo demonstrates that AI can be designed to support critical thinking rather than replace it. But Khanmigo also requires institutional investment, teacher training, and a deliberate pedagogical framework, precisely the things that the current system, oriented toward rapid adoption and measurable outcomes, is least equipped to provide.
We Have Been Here Before, Sort Of
The temptation to draw neat historical parallels is strong, and partly justified. In 1986, the Christian Science Monitor reported on fierce debates over calculator use in schools, with one Oregon teacher of the year warning that “once you have a crutch, you rely on it more and more.” The National Council of Teachers of Mathematics had urged the integration of calculators at all grade levels, and maths teachers in Washington, D.C. picketed their meetings in protest.
The pro-calculator camp cited studies showing that students with calculators performed at least as well on tests as those without them (except, curiously, in the fourth grade). The anti-calculator camp warned of atrophied mental arithmetic skills and dangerous dependency. Eventually, calculators became ubiquitous, and the debate faded into the background noise of educational history.
The AI parallel writes itself, but it is also misleading in important ways. A calculator is a tool for performing a specific, well-defined operation. It computes. AI, by contrast, is a tool for generating language, analysing arguments, synthesising information, and producing written outputs that closely mimic (and sometimes surpass) the kinds of work that students are assessed on. The calculator could not write your essay. ChatGPT can. The calculator did not threaten the process by which students learned to construct arguments, weigh evidence, or develop original perspectives. AI does. The scope of the offloading is categorically different, and so the historical precedent offers less comfort than its proponents suggest.
The more honest historical parallel might be the introduction of television in the 1950s and 1960s, when educators initially hailed the new medium as a revolutionary learning tool before gradually recognising that passive consumption of information was not the same as active engagement with ideas. The lesson from that era was not that television was inherently bad, but that it was easy to confuse exposure to information with genuine understanding. AI presents the same confusion in a more insidious form: the output looks like understanding. It reads like comprehension. But the student who submits it may not have comprehended anything at all.
The International View
The global picture offers both cautionary tales and faint glimmers of hope. The OECD's PISA 2022 assessment, which for the first time evaluated creative thinking skills across 64 countries and economies, revealed enormous international variation in how well education systems prepare students for higher-order cognition. Singapore, South Korea, Canada, Australia, New Zealand, Estonia, and Finland topped the creative thinking rankings, with Singapore's students scoring a mean of 41 points, well above the OECD average of 33. In Singapore, South Korea, and Canada, over 70 per cent of students performed at or above Level 4.
What distinguishes these high-performing systems is not the presence or absence of technology, but the pedagogical philosophy that underpins its use. Finland, consistently celebrated for its educational outcomes, emphasises teacher autonomy, minimal standardised testing, and a holistic approach in which children are encouraged to explore their interests rather than conform to rigid assessment frameworks. Finnish teachers enjoy the freedom to craft lessons tailored to their students' needs, a dynamic that fosters precisely the kind of critical and creative thinking that AI threatens to undermine elsewhere. Crucially, Finland has also launched national AI literacy programmes, including free online coursework, ensuring that citizens understand the technology rather than simply consuming it.
Singapore, meanwhile, has announced a national initiative to build AI literacy among students and teachers, with training to be offered at all levels by 2026. But Singapore's approach is embedded within its broader “Smart Nation” strategy, which explicitly aims to help teachers customise education for individual students rather than replace teacher judgement with algorithmic recommendation. The emphasis is on AI literacy, understanding what these tools are, what they can and cannot do, and how to use them critically, rather than mere AI adoption.
The contrast with the prevailing approach in the United States and United Kingdom is instructive. Where Finland and Singapore have invested in teacher preparation, pedagogical frameworks, and critical AI literacy, many anglophone systems have prioritised speed of adoption, market-driven solutions, and measurable outcomes, precisely the conditions under which AI is most likely to substitute for, rather than scaffold, genuine thinking. The PISA data suggests this is not a coincidence. Systems that invest in the conditions for critical thinking produce students who think critically. Systems that invest in accountability metrics produce students who are good at meeting metrics.
The Systemic Trap
What emerges from all of this is not a simple story about technology corrupting youth. It is a story about institutional incentives, structural pressures, and a decades-long failure to prioritise the very capacities that AI now threatens.
Consider the chain of causation. Standardised testing regimes devalued critical thinking in favour of measurable performance. This created an educational culture oriented toward right answers rather than good questions. Into this culture arrived AI tools optimised to produce right answers at unprecedented speed. Students, trained since primary school to value correct outputs over thoughtful processes, adopted these tools with the perfectly rational logic of the system they inhabit. And institutions, pressed by market forces, parental expectations, and competitive dynamics, facilitated this adoption with minimal safeguards.
The students who told RAND researchers that AI is harming their critical thinking are not confused. They are articulating something that adults in the system have been reluctant to say: that the educational infrastructure was never really set up to produce independent thinkers. It was set up to produce compliant test-takers. AI simply automated the compliance.
This framing shifts the burden of responsibility from individual students (who are often blamed for laziness or moral weakness) to the system that shaped their incentives. A 15-year-old who uses ChatGPT to complete an essay is not failing the education system. The education system is failing that 15-year-old, not because it allowed access to AI, but because it created conditions in which using AI to generate a polished essay and submitting it for a grade is the most rational thing a student can do.
What Would a Genuine Alternative Look Like?
If the diagnosis is systemic, the treatment must be too. Banning AI, as the brief experiment of early 2023 demonstrated, is neither practical nor effective. Students will use these tools regardless of school policies, just as they use mobile phones in classrooms despite decades of prohibition attempts. The question is not whether students will interact with AI, but what kind of interaction the education system enables.
A genuinely transformative response would begin by acknowledging what the PISA data and international comparisons make clear: that systems emphasising teacher autonomy, reduced standardised testing, and inquiry-based learning produce students who are better equipped for creative and critical thought. This is not a new insight. It is a well-established finding that anglophone education systems have spent decades ignoring in favour of accountability frameworks and market-based reforms.
It would continue by investing in the kind of deliberate AI pedagogy that tools like Khanmigo gesture toward, in which AI is designed to support the development of thinking skills rather than bypass them. This requires not just better software, but better teacher training, smaller class sizes, and assessment reforms that reward the process of thinking rather than the product of having thought. It requires, in short, treating teachers as professionals with the autonomy and resources to teach well, rather than as data-entry operatives tasked with hitting numerical targets.
It would also require a fundamental rethinking of what education is for. If the purpose of schooling is to produce graduates who can pass standardised assessments and demonstrate competence on measurable metrics, then AI is not a threat; it is an upgrade. It does what the system was always asking students to do, only faster and more efficiently. If, however, the purpose of education is to cultivate human beings capable of independent judgement, ethical reasoning, creative problem-solving, and the ability to navigate complexity without algorithmic assistance, then the arrival of AI is not the crisis. It is the revelation that the crisis was already here.
The DfE's guidance in the United Kingdom acknowledges as much, at least implicitly. Its insistence that AI must operate under human oversight, that professional judgement and critical thinking remain essential, and that AI is a tool to inform decisions rather than make them, articulates a philosophy that is sound. Whether the institutional structures, the funding, the teacher training, and the assessment frameworks exist to make that philosophy real is an entirely different question.
The Revelation Nobody Wanted
The most provocative implication of the RAND data is not that AI is making students less capable. It is that the students themselves are more honest about the situation than the institutions that serve them. When 67 per cent of young people say AI is harming their critical thinking, they are not just reporting a technology problem. They are reporting a system problem. They are saying, in effect: we know this is making us worse at thinking, and we know the system gives us no reason to care.
That honesty deserves a response that is equally honest. Not more bans. Not more surveillance software. Not more hand-wringing opinion pieces from adults who themselves rely on AI for their professional work. What the moment demands is a structural reckoning with the values that education systems actually embody, as opposed to the values they claim in their mission statements.
The 95 per cent of faculty who fear student overreliance on AI are right to be concerned. But the overreliance they fear is not a new phenomenon introduced by ChatGPT. It is the logical extension of an educational philosophy that has been cultivating dependency on external authority, whether in the form of textbooks, standardised curricula, or high-stakes assessments, for generations. AI did not break the system. It revealed, with uncomfortable clarity, what the system was always building toward: a model of education in which the appearance of learning matters more than learning itself, and in which the correct output is valued infinitely more than the process of arriving at it.
The students, it turns out, were paying closer attention than anyone gave them credit for. They can see the trap. They can describe it with remarkable precision when asked. They just need the adults in the room to stop pretending it is not there.
References
RAND Corporation. “More Students Use AI for Homework, and More Believe It Harms Critical Thinking: Selected Findings from the American Youth Panel.” RAND Research Report RRA4742-1, March 2026. https://www.rand.org/pubs/research_reports/RRA4742-1.html
RAND Corporation. “Student Use of AI for Homework Rises as Concerns Grow About Critical Thinking Skills.” RAND Press Release, March 2026. https://www.rand.org/news/press/2026/03/student-use-of-ai-for-homework-rises-as-concerns-grow.html
Watson, C. Edward, and Rainie, Lee. “The AI Challenge: How College Faculty Assess the Present and Future of Higher Education in the Age of AI.” American Association of Colleges and Universities and Elon University, January 2026. https://www.aacu.org/newsroom/national-survey-95-of-college-faculty-fear-student-overreliance-on-ai-and-diminished-critical-thinking-among-learners-who-use-generative-ai-tools
Gerlich, Michael. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, 15(1), 6, 2025. https://www.mdpi.com/2075-4698/15/1/6
Lee et al. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.” Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/full/10.1145/3706598.3713778
Freire, Paulo. “Pedagogy of the Oppressed.” Continuum Publishing, 1968.
National Education Association. “Standardized Testing is Still Failing Students.” NEA Today. https://www.nea.org/nea-today/all-news-articles/standardized-testing-still-failing-students
CNN. “New York City public schools ban access to AI tool that could help students cheat.” CNN Business, January 2023. https://www.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html
NBC News. “New York City public schools remove ChatGPT ban.” NBC News, May 2023. https://www.nbcnews.com/tech/chatgpt-ban-dropped-new-york-city-public-schools-rcna85089
Education Week. “Students Are Worried That AI Will Hurt Their Critical Thinking Skills.” Education Week, March 2026. https://www.edweek.org/technology/students-are-worried-that-ai-will-hurt-their-critical-thinking-skills/2026/03
OECD. “PISA 2022 Results (Volume III): Creative Minds, Creative Schools.” OECD Publishing, June 2024. https://www.oecd.org/en/publications/pisa-2022-results-volume-iii_765ee8c2-en.html
Khan Academy. “Meet Khanmigo: Khan Academy's AI-powered teaching assistant and tutor.” 2025. https://www.khanmigo.ai/
Precedence Research. “AI in Education Market Size to Surge USD 136.79 Bn by 2035.” Precedence Research, 2025. https://www.precedenceresearch.com/ai-in-education-market
Christian Science Monitor. “The great calculator debate: Educators disagree over their place in the classroom.” CSMonitor.com, 9 May 1986. https://www.csmonitor.com/1986/0509/dcalc-f.html
Centre on Reinventing Public Education. “Shockwaves and Innovations: How Nations Worldwide Are Approaching AI in Education.” CRPE, 2025. https://crpe.org/shockwaves-and-innovations-how-nations-worldwide-are-dealing-with-ai-in-education/
Emerald Publishing. “AI policies in school education: a comparative study on China, Singapore, Finland, and the US.” Journal of Science and Technology Policy Management, 2025. https://www.emerald.com/jstpm/article/doi/10.1108/JSTPM-06-2024-0218/1302351/
Brookings Institution. “The Impact of No Child Left Behind on Students, Teachers, and Schools.” Brookings Papers on Economic Activity, 2010. https://www.brookings.edu/wp-content/uploads/2010/09/2010b_bpea_dee.pdf
Education Week. “Does Your District Ban ChatGPT? Here's What Educators Told Us.” Education Week, February 2024. https://www.edweek.org/technology/does-your-district-ban-chatgpt-heres-what-educators-told-us/2024/02
Department for Education. “Generative Artificial Intelligence (AI) in Education.” UK Government, June 2025. https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education
K-12 Dive. “Lighten teacher workloads and reduce burnout with AI designed for education.” K-12 Dive, 2025. https://www.k12dive.com/spons/lighten-teacher-workloads-and-reduce-burnout-with-ai-designed-for-education/758435/
Education Futures. “How did we get from 'schools kill creativity' to 'AI kills critical thinking in schools?'” Education Futures, 2025. https://educationfutures.com/post/how-did-we-get-from-schools-kill-creativity-to-ai-kills-creativity-in-schools/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk