The Mind in the Machine: Can We Think for Ourselves in the Age of AI?

Every morning, millions of us wake up and immediately reach for our phones. We ask our AI assistants about the weather, let algorithms choose our music, rely on GPS to navigate familiar routes, and increasingly, delegate our decisions to systems that promise to optimise everything from our calendars to our career choices. It's convenient, efficient, and increasingly inescapable. But as artificial intelligence becomes our constant companion, a more unsettling question emerges: are we outsourcing not just our tasks, but our ability to think?

The promise of AI has always been liberation. Free yourself from the mundane, the pitch goes, and focus on what really matters. Yet mounting evidence suggests we're trading something far more valuable than time. We're surrendering the very cognitive capabilities that make us human: our capacity for critical reflection, independent thought, and moral reasoning. And unlike a subscription we can cancel, the effects of this cognitive offloading may prove difficult to reverse.

The Erosion We Don't See

In January 2025, researcher Michael Gerlich from SBS Swiss Business School published findings that should alarm anyone who uses AI tools regularly. His study of 666 participants across the United Kingdom revealed a stark correlation: the more people relied on AI tools, the worse their critical thinking tended to be. The numbers tell a troubling story. The study found a strong inverse relationship between AI usage and critical thinking scores, meaning the heaviest AI users tended to score markedly lower. Even more concerning, it showed that people who frequently delegated mental tasks to AI (a phenomenon called cognitive offloading) had notably worse critical thinking skills. The pattern was remarkably consistent and statistically robust across the entire study population.

This isn't just about getting rusty at maths or forgetting phone numbers. Gerlich's research, published in the journal Societies, demonstrated that frequent AI users exhibited “diminished ability to critically evaluate information and engage in reflective problem-solving.” The study employed the Halpern Critical Thinking Assessment alongside a 23-item questionnaire, using statistical techniques including ANOVA, correlation analysis, and random forest regression. What they found was a dose-dependent relationship: the more you use AI, the more your critical thinking skills decline.
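
To make the shape of that analysis concrete, here is a minimal sketch in Python using simulated data rather than the study's own: it correlates a self-reported AI-usage score with a critical thinking score and asks a random forest which factors best predict the latter. The variable names and numbers are illustrative assumptions, not Gerlich's data.

```python
# Minimal sketch (hypothetical data): the kind of correlation and random forest
# analysis described above, relating self-reported AI usage to critical thinking.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 666  # participants, as in the study; the data here are simulated

ai_usage = rng.uniform(1, 7, n)                      # e.g. Likert-style usage score
cognitive_offloading = 0.6 * ai_usage + rng.normal(0, 1, n)
age = rng.integers(17, 65, n).astype(float)
# Simulate an inverse relationship: heavier AI use -> lower critical thinking score
critical_thinking = 80 - 4 * ai_usage - 2 * cognitive_offloading + 0.1 * age + rng.normal(0, 5, n)

r, p = pearsonr(ai_usage, critical_thinking)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")           # strongly negative in this simulation

X = np.column_stack([ai_usage, cognitive_offloading, age])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, critical_thinking)
for name, importance in zip(["ai_usage", "offloading", "age"], model.feature_importances_):
    print(f"{name}: importance {importance:.2f}")
```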

Younger participants, aged 17 to 25, showed the highest dependence on AI tools and the lowest critical thinking scores compared to older age groups. This demographic pattern suggests we may be witnessing the emergence of a generation that has never fully developed the cognitive muscles required for independent reasoning. They've had a computational thought partner from the start.

The mechanism driving this decline is what researchers call cognitive offloading: the process of using external tools to reduce mental effort. Whilst this sounds efficient in theory, in practice it's more like a muscle that atrophies from disuse. “As individuals increasingly offload cognitive tasks to AI tools, their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes,” Gerlich's study concluded. Like physical fitness, cognitive skills follow a use-it-or-lose-it principle.

But here's the troubling paradox: moderate AI usage didn't significantly affect critical thinking. Only excessive reliance led to diminishing cognitive returns. The implication is clear. AI itself isn't the problem. Our relationship with it is. We're not being forced to surrender our thinking; we're choosing to, seduced by the allure of algorithmic efficiency.

The GPS Effect

If you want to understand where unchecked AI adoption leads, look at what GPS did to our sense of direction. Research published in Scientific Reports found that habitual GPS users had measurably worse spatial memory during self-guided navigation. The relationship was dose-dependent: those who used GPS more heavily between two assessment points showed larger declines across several facets of spatial memory, including strategy use, cognitive mapping, landmark encoding, and spatial learning.

What makes this particularly instructive is that people didn't use GPS because they had a poor sense of direction. The causation ran the other way: extensive GPS use led to decline in spatial memory. The technology didn't compensate for a deficiency; it created one.

The implications extend beyond navigation. Studies have found that exercising spatial cognition might protect against age-related memory decline. The hippocampus, the brain region responsible for spatial navigation, naturally declines with age and its deterioration can predict conversion from mild cognitive impairment to Alzheimer's disease. By removing the cognitive demands of wayfinding, GPS doesn't just make us dependent; it may accelerate cognitive decline.

This is the template for what's happening across all cognitive domains. When we apply the GPS model to decision-making, creative thinking, problem-solving, and moral reasoning, we're running a civilisation-wide experiment with our collective intelligence. The early results aren't encouraging. Just as turn-by-turn navigation replaced the mental work of route planning and spatial awareness, AI tools threaten to replace the mental work of analysis, synthesis, and critical evaluation. The convenience is immediate; the cognitive cost accumulates silently.

The Paradox of Personal Agency

The Decision Lab, a behavioural science research organisation, emphasises a crucial distinction that helps explain why AI feels so seductive even as it diminishes us. As Dr. Krastev of the organisation notes, “our well-being depends on a feeling of agency, not on our actual ability to make decisions themselves.”

This reveals the psychological sleight of hand at work in our AI-mediated lives. We can technically retain the freedom to choose whilst simultaneously losing the sense that our choices matter. When an algorithm recommends and we select from its suggestions, are we deciding or merely ratifying? When AI drafts our emails and we edit them, are we writing or just approving? The distinction matters because the subjective feeling of meaningful control, not just theoretical choice, determines our wellbeing and sense of self.

Research by Hojman and Miranda demonstrates that agency can have effects on wellbeing comparable to income levels. Autonomy isn't a luxury; it's a fundamental human need. Yet it's also, as The Decision Lab stresses, “a fragile thing” requiring careful nurturing. People may unknowingly lose their sense of agency even when technically retaining choice.

This fragility manifests in workplace transformations already underway. McKinsey's 2025 research projects that by 2030, agentic AI could automate up to 70 per cent of office tasks. But the report emphasises a crucial shift: as automation redefines task boundaries, roles must move towards “exception handling, judgement-based decision-making, and customer experience.” The question is whether we'll have the cognitive capacity for these higher-order functions if we've spent a decade offloading them to machines.

The agentic AI systems emerging in 2025 don't just execute tasks; they reason across time horizons, learn from outcomes, and collaborate with other AI agents in areas such as fraud detection, compliance, and capital allocation. When AI handles routine and complex tasks alike, workers may find themselves “less capable of addressing novel or unexpected challenges.” The shift isn't just about job displacement; it's about cognitive displacement. We risk transforming from active decision-makers into passive algorithm overseers, monitoring systems we no longer fully understand.

The workplace of 2025 offers a preview of this transformation. Knowledge workers increasingly find themselves in a curious position: managing AI outputs rather than producing work directly. This shift might seem liberating, but it carries hidden costs. When your primary role becomes quality-checking algorithmic work rather than creating it yourself, you lose the deep engagement that builds expertise. You become a validator without the underlying competence to truly validate.

Why We Trust the Algorithm (Even When We Shouldn't)

Here's where things get psychologically complicated. Research published in journals including the Journal of Management Information Systems reveals something counterintuitive: people often prefer algorithmic decisions to human ones. Studies have found that participants viewed algorithmic decisions as fairer, more competent, more trustworthy, and more useful than those made by humans.

When comparing GPT-4, simple rules, and human judgement for innovation assessment, research indexed in PubMed Central found striking differences in predictive accuracy: human judgement achieved an R-squared of just 0.02, simple decision rules reached 0.3, and GPT-4 reached 0.713. In narrow, well-defined domains, algorithms genuinely outperform human intuition.
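
For readers unfamiliar with the metric, R-squared is simply the share of variance in the outcome that the predictions explain: 1.0 means perfect prediction, 0 means no better than guessing the average. A small worked sketch with invented numbers, not the study's data:

```python
# R^2 = 1 - (residual sum of squares / total sum of squares).
# Toy illustration only; the figures quoted above come from the cited study.
import numpy as np

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1 - ss_res / ss_tot

actual = [3, 7, 5, 9, 4, 8]
close_predictions = [3.2, 6.5, 5.4, 8.8, 4.3, 7.6]   # tracks the outcome well
constant_guess = [6, 6, 6, 6, 6, 6]                  # ignores the outcome entirely

print(r_squared(actual, close_predictions))  # high, like GPT-4's 0.713
print(r_squared(actual, constant_guess))     # exactly 0, like near-chance prediction
```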

This creates a rational foundation for deference to AI. Why shouldn't we rely on systems that demonstrably make better predictions and operate more consistently? The answer lies in what we lose even when the algorithm is right.

First, we lose the tacit knowledge that comes from making decisions ourselves. Research on algorithmic versus human advice notes that “procedural and tacit knowledge are difficult to codify or transfer, often acquired from hands-on experiences.” When we skip directly to the answer, we miss the learning embedded in the process.

Second, we lose the ability to recognise when the algorithm is wrong. A particularly illuminating study found that students using ChatGPT to solve maths problems initially outperformed their peers by 48 per cent. But when tested without AI, they scored 17 per cent below their unassisted counterparts. They'd learned to rely on the tool without developing the underlying competence to evaluate its outputs. They couldn't distinguish good answers from hallucinations because they'd never built the mental models required for verification.

Third, we risk losing skills that remain distinctly human. As research on cognitive skills emphasises, “making subjective and intuitive judgements, understanding emotion, and navigating social nuances are still regarded as difficult for computers.” These capabilities require practice. When we delegate the adjacent cognitive tasks to AI, we may inadvertently undermine the foundations these distinctly human skills rest upon.

The Invisible Hand Shaping Our Thoughts

Recent philosophical research provides crucial frameworks for understanding what's at stake. A paper in Philosophical Psychology published in January 2025 investigates how recommender systems and generative models impact human decisional and creative autonomy, adopting philosopher Daniel Dennett's conception of autonomy as self-control.

The research reveals that recommender systems play a double role. As information filters, they can augment self-control in decision-making by helping us manage overwhelming choice. But they simultaneously “act as mechanisms of remote control that clamp degrees of freedom.” The system that helps us choose also constrains what we consider. The algorithm that saves us time also shapes our preferences in ways we may not recognise or endorse upon reflection.

Work published in Philosophy & Technology in 2025 analyses how AI decision-support systems affect domain-specific autonomy through two key components: skilled competence and authentic value-formation. The research presents emerging evidence that “AI decision support can generate shifts of values and beliefs of which decision-makers remain unaware.”

This is perhaps the most insidious effect: inaccessible value shifts that erode autonomy by undermining authenticity. When we're unaware that our values have been shaped by algorithmic nudges, we lose the capacity for authentic self-governance. We may believe we're exercising free choice whilst actually executing preferences we've been steered towards through mechanisms invisible to us.

Self-determination theory views autonomy as “a sense of willingness and volition in acting.” This reveals why algorithmically mediated decisions can feel hollow even when objectively optimal. The efficiency gain comes at the cost of the subjective experience of authorship. We become curators of algorithmic suggestions rather than authors of our own choices, and this subtle shift in role carries profound psychological consequences.

The Thought Partner Illusion

A Nature Human Behaviour study from October 2024 notes that computer systems are increasingly referred to as “copilots,” representing a shift from “designing tools for thought to actual partners in thought.” But this framing is seductive and potentially misleading. The metaphor of partnership implies reciprocity and mutual growth. Yet the relationship between humans and AI isn't symmetrical. The AI doesn't grow through our collaboration. We're the only ones at risk of atrophy.

Research on human-AI collaboration published in Scientific Reports found a troubling pattern: whilst GenAI enhances output quality, it undermines key psychological experiences including sense of control, intrinsic motivation, and feelings of engagement. Individuals perceived “a reduction in personal agency when GenAI contributes substantially to task outcomes.” The productivity gain came with a psychological cost.

The researchers recommend that “AI system designers should emphasise human agency in collaborative platforms by integrating user feedback, input, and customisation to ensure users retain a sense of control during AI collaborations.” This places the burden on designers to protect us from tools we've invited into our workflows, but design alone cannot solve a problem that's fundamentally about how we choose to use technology.

The European Commission's guidelines present three levels of human agency: human-in-the-loop (HITL), where humans intervene in each decision cycle; human-on-the-loop (HOTL), where humans oversee the system; and human-in-command (HIC), where humans maintain ultimate control. These frameworks recognise that preserving agency requires intentional design, not just good intentions.
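
What human-in-the-loop looks like in practice is simple enough to sketch. The fragment below is a minimal illustration with invented names and thresholds, not an implementation of the Commission's guidelines: the system may recommend, but nothing consequential executes without an explicit human yes.

```python
# Minimal human-in-the-loop (HITL) gate: the AI proposes, a human disposes.
# Names and structure are illustrative assumptions, not any standard API.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str     # making the reasoning visible supports meaningful oversight
    confidence: float  # the model's own estimate, itself to be treated sceptically

def execute_with_oversight(rec: Recommendation, ask_human) -> bool:
    """Run the action only if a human, shown the rationale, explicitly approves."""
    prompt = (f"Proposed action: {rec.action}\n"
              f"Rationale: {rec.rationale}\n"
              f"Model confidence: {rec.confidence:.0%}\n"
              "Approve? [y/N] ")
    approved = ask_human(prompt).strip().lower() == "y"
    if approved:
        print(f"Executing: {rec.action}")
    else:
        print("Held for human decision; nothing executed.")
    return approved

# Usage: ask_human could be input() in a CLI, or a review step in a web interface.
rec = Recommendation("refund customer #1042", "duplicate charge detected", 0.92)
execute_with_oversight(rec, ask_human=input)
```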

But frameworks aren't enough if individual users don't exercise the agency these structures are meant to preserve. We need more than guardrails; we need the will to remain engaged even when offloading is easier.

What We Risk Losing

The conversation about AI and critical thinking often focuses on discrete skills: the ability to evaluate sources, detect bias, or solve problems. But the risks run deeper. We risk losing what philosopher Harry Frankfurt called our capacity for second-order desires, the ability to reflect on our desires and decide which ones we want to act on. We risk losing the moral imagination required to recognise ethical dimensions algorithms aren't programmed to detect.

Consider moral reasoning. It isn't algorithmic. It requires contextual understanding, emotional intelligence, recognition of competing values, and the wisdom to navigate ambiguity. Research on AI's ethical dilemmas acknowledges that as AI handles more decisions, questions arise about accountability, fairness, and the potential loss of human oversight.

The Pew Research Centre found that 68 per cent of Americans worry about AI being used unethically in decision-making. But the deeper concern isn't just that AI will make unethical decisions; it's that we'll lose the capacity to recognise when decisions have ethical dimensions at all. If we've offloaded decision-making for years, will we still have the moral reflexes required to intervene when the algorithm optimises for efficiency at the expense of human dignity?

The OECD Principles on Artificial Intelligence, the EU AI Act with its risk-based classification system, the NIST AI Risk Management Framework, and the Ethics Guidelines for Trustworthy AI outline principles including accountability, transparency, fairness, and human agency. But governance frameworks can only do so much. They can prevent the worst abuses and establish baseline standards. They can't force us to think critically about algorithmic outputs. That requires personal commitment to preserving our cognitive independence.

Practical Strategies for Cognitive Independence

The research points towards solutions, though they require discipline and vigilance. The key is recognising that AI isn't inherently harmful to critical thinking; excessive reliance without active engagement is.

Continue Active Learning in Ostensibly Automated Domains

Even when AI can perform a task, continue building your own competence. When AI drafts your email, occasionally write from scratch. When it suggests code, implement solutions yourself periodically. The point isn't rejecting AI but preventing complete dependence. Research on critical thinking in the AI era emphasises that continuing to build knowledge and skills, “even if it is seemingly something that a computer could do for you,” provides the foundation for recognising when AI outputs are inadequate.

Think of it as maintaining parallel competence. You don't need to reject AI assistance, but you do need to ensure you could function without it if necessary. This dual-track approach builds resilience and maintains the cognitive infrastructure required for genuine oversight.

Apply Systematic Critical Evaluation

Experts recommend “cognitive forcing tools” such as diagnostic timeouts and mental checklists. When reviewing AI output, systematically ask: Can this be verified? What perspectives might be missing? Could this be biased? What assumptions underlie this recommendation? Research on maintaining critical thinking highlights the importance of applying “healthy scepticism” especially to AI-generated content, which can hallucinate convincingly whilst being entirely wrong.
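
One way to make such a checklist concrete is to force an explicit pass through it before accepting any AI output. The sketch below simply encodes the questions from the paragraph above; the structure is an illustration, the questions are the point.

```python
# A 'cognitive forcing' checklist made explicit: before accepting an AI output,
# record an answer to each question. Purely illustrative structure.
CHECKLIST = [
    "Can this be verified against an independent source?",
    "What perspectives or stakeholders might be missing?",
    "Could this output be biased, and in whose favour?",
    "What assumptions does this recommendation rest on?",
]

def review(ai_output: str) -> dict:
    """Interactive diagnostic timeout: collect an answer for every question."""
    print(f"AI output under review:\n{ai_output}\n")
    return {question: input(f"{question}\n> ") for question in CHECKLIST}

notes = review("Quarterly forecast: demand will rise 12% next quarter.")
unanswered = [q for q, answer in notes.items() if not answer.strip()]
if unanswered:
    print("Do not accept yet; unanswered checks:", *unanswered, sep="\n- ")
```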

The Halpern Critical Thinking Assessment used in Gerlich's study evaluates skills including hypothesis testing, argument analysis, and likelihood and uncertainty reasoning. Practising these skills deliberately, even when AI could shortcut the process, maintains the cognitive capacity to evaluate AI outputs critically.

Declare AI-Free Zones

The most direct path to preserving your intellectual faculties may be to declare certain periods AI-free: an hour, a day, or an entire project. Just as regular self-guided navigation maintains spatial memory, regular unassisted thinking maintains critical reasoning abilities. Treat it like a workout regimen for your mind.

These zones serve multiple purposes. They maintain cognitive skills, they remind you of what unassisted thinking feels like, and they provide a baseline against which to evaluate whether AI assistance is genuinely helpful or merely convenient. Some tasks might be slower without AI, but that slower pace allows for the deeper engagement that builds understanding.

Practise Reflective Evaluation

After working with an AI, engage in deliberate reflection. How did it perform? What did it miss? Where did you need to intervene? What patterns do you notice in its strengths and weaknesses? This metacognitive practice strengthens your ability to recognise AI's limitations and your own cognitive processes. When you delegate a task to AI, you miss the reflective opportunity embedded in struggling with the problem yourself. Compensate by reflecting explicitly on the collaboration.

Verify and Cross-Check Information

Research on AI literacy emphasises verifying “the accuracy of AI outputs by comparing AI-generated content to authoritative sources, evaluating whether citations provided by AI are real or fabricated, and cross-checking facts for consistency.” This isn't just about catching errors; it's about maintaining the habit of verification. When we accept AI outputs uncritically, we atrophy the skills required to evaluate information quality.
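
Some of this verification can itself become routine. The sketch below shows one such habit: checking whether a DOI an AI has cited actually resolves in Crossref's public index and whether the title roughly matches the claim. The Crossref endpoint is real; the surrounding function names and workflow are illustrative assumptions.

```python
# Check whether a DOI cited by an AI exists in Crossref and whether the title
# matches the claim. The https://api.crossref.org/works/{doi} endpoint is real;
# everything else here is an illustrative sketch.
import requests

def lookup_doi(doi: str):
    """Return Crossref metadata for a DOI, or None if it does not resolve."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    return resp.json()["message"]

def check_citation(doi: str, claimed_phrase: str) -> None:
    record = lookup_doi(doi)
    if record is None:
        print(f"{doi}: not found -- possibly fabricated, verify by hand")
        return
    real_title = (record.get("title") or [""])[0]
    match = claimed_phrase.lower() in real_title.lower()
    print(f"{doi}: found. Title contains claimed phrase: {match} ({real_title!r})")

check_citation("10.3390/soc15010006", "critical thinking")
```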

Seek Diverse Perspectives Beyond Algorithmic Recommendations

Recommender systems narrow our information diet towards predicted preferences. Deliberately seek perspectives outside your algorithmic bubble. Read sources AI wouldn't recommend. Engage with viewpoints that challenge your assumptions. Research on algorithmic decision-making notes that whilst efficiency is valuable, over-optimisation can lead to filter bubbles and value shifts we don't consciously endorse. Diverse information exposure maintains cognitive flexibility.

Maintain Domain Expertise

Research on autonomy by design emphasises that domain-specific autonomy requires “skilled competence: the ability to make informed judgements within one's domain.” Don't let AI become a substitute for developing genuine expertise. Use it to augment competence you've already built, not to bypass the process of building it. The students who used ChatGPT for maths problems without understanding the concepts exemplify this risk. They had access to correct answers but lacked the competence to generate or evaluate them independently.

Understand AI's Capabilities and Limitations

Genuine AI literacy requires understanding how these systems work, their inherent limitations, and where they're likely to fail. When you understand that large language models predict statistically likely token sequences rather than reasoning from first principles, you're better equipped to recognise when their outputs might be plausible-sounding nonsense. This technical understanding provides cognitive defences against uncritical acceptance.
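
A toy example makes the point. The probability table below is entirely invented, but it shows the mechanism: the model ranks continuations by statistical likelihood, and a fluent, confident answer can still be wrong.

```python
# Toy illustration of next-token prediction: continuations are ranked by
# likelihood, not checked against the world. The probabilities are made up.
import random

next_token_probs = {
    ("The", "capital", "of", "Australia", "is"): {
        "Sydney": 0.45,     # plausible-sounding and statistically common, but wrong
        "Canberra": 0.40,   # correct, yet not guaranteed to be sampled
        "Melbourne": 0.10,
        "a": 0.05,
    }
}

def sample_next(context: tuple) -> str:
    distribution = next_token_probs[context]
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

context = ("The", "capital", "of", "Australia", "is")
print(" ".join(context), sample_next(context))
# Fluency here comes from likelihood, not reasoning; hence confident,
# plausible-sounding errors when likelihood and truth diverge.
```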

Designing for Human Autonomy

Individual strategies matter, but system design matters more. Research on supporting human autonomy in AI systems proposes multi-dimensional models examining how AI can support or hinder autonomy across various aspects, from interface design to institutional considerations.

The key insight from autonomy-by-design research is that AI systems aren't neutral. They embody choices about how much agency to preserve, how transparently to operate, and how much to nudge versus inform. Research on consumer autonomy in generative AI services found that “both excessive automation and insufficient autonomy can negatively affect consumer perceptions.” Systems that provide recommendations whilst clearly preserving human decision authority, that allow users to refine AI-generated outputs, and that make their reasoning transparent tend to enhance rather than undermine autonomy.

Shared responsibility mechanisms, such as explicitly acknowledging the user's role in final decisions, reinforce autonomy, trust, and engagement. The interface design choice of presenting options versus making decisions, of explaining reasoning versus delivering conclusions, profoundly affects whether users remain cognitively engaged or slide into passive acceptance. Systems should be built to preserve agency by default, not as an afterthought.

Research on ethical AI evolution proposes frameworks ensuring that even as AI systems become more autonomous, they remain governed by an “immutable ethical principle: AI must not harm humanity or violate fundamental values.” This requires building in safeguards, keeping humans meaningfully in the loop, and designing for comprehensibility, not just capability.

The Path Forward

The question, then, is how to ensure technology enhances rather than diminishes our uniquely human abilities. The research suggests answers, though they require commitment.

First, we must recognise that cognitive offloading exists on a spectrum. Moderate AI use doesn't harm critical thinking; excessive reliance does. The dose makes the poison. We need cultural norms around AI usage that parallel our evolving norms around social media: awareness that whilst useful, excessive engagement carries cognitive costs.

Second, we must design AI systems that preserve agency by default. This means interfaces that inform rather than decide, that explain their reasoning, that make uncertainty visible, and that require human confirmation for consequential decisions.

Third, we need education that explicitly addresses AI literacy and critical thinking. Research emphasises that younger users show higher AI dependence and lower critical thinking scores. Educational interventions should start early, teaching students not just how to use AI but how to maintain cognitive independence whilst doing so. Schools and universities must become laboratories for sustainable AI integration, teaching students to use these tools as amplifiers of their own thinking rather than replacements for it.

Fourth, we must resist the algorithm appreciation bias that makes us overly deferential to AI outputs. In narrow domains, algorithms outperform human intuition. But many important decisions involve contextual nuances, ethical dimensions, and value trade-offs that algorithms aren't equipped to navigate. Knowing when to trust and when to override requires maintained critical thinking capacity.

Fifth, organisations implementing AI must prioritise upskilling in critical thinking, systems thinking, and judgement-based decision-making. McKinsey's research emphasises that as routine tasks automate, human roles shift towards exception handling and strategic thinking. Workers will only be capable of these higher-order functions if they've maintained the underlying cognitive skills. Organisations that treat AI as a replacement rather than an augmentation risk creating workforce dependency that undermines adaptation.

Finally, we need ongoing research into the long-term cognitive effects of AI usage. Gerlich's study provides crucial evidence, but we need longitudinal research tracking how AI reliance affects cognitive development in children, cognitive maintenance in adults, and cognitive decline in ageing populations. We need studies examining which usage patterns preserve versus undermine critical thinking, and interventions that can mitigate negative effects.

Choosing Our Cognitive Future

We are conducting an unprecedented experiment in cognitive delegation. Never before has a species had access to tools that can so comprehensively perform its thinking for it. The outcomes aren't predetermined. AI can enhance human cognition if we use it thoughtfully, maintain our own capabilities, and design systems that preserve agency. But it can also create intellectual learned helplessness if we slide into passive dependence.

The research is clear about the mechanism: cognitive offloading, when excessive, erodes the skills we fail to exercise. The solution is equally clear but more challenging to implement: we must choose engagement over convenience, critical evaluation over passive acceptance, and maintained competence over expedient delegation.

This doesn't mean rejecting AI. The productivity gains, analytical capabilities, and creative possibilities these tools offer are genuine and valuable. But it means using AI as a genuine thought partner, not a thought replacement. It means treating AI outputs as starting points for reflection, not endpoints to accept. It means maintaining the cognitive fitness required to evaluate, override, and contextualise algorithmic recommendations.

The calculator didn't destroy mathematical ability for everyone, but it did for those who stopped practising arithmetic entirely. GPS hasn't eliminated everyone's sense of direction, but it has for those who navigate exclusively through turn-by-turn instructions. AI won't eliminate critical thinking for everyone, but it will for those who delegate thinking entirely to algorithms.

The question isn't whether to use AI but how to use it in ways that enhance rather than replace our cognitive capabilities. The answer requires individual discipline, thoughtful system design, educational adaptation, and cultural norms that value cognitive independence as much as algorithmic efficiency.

Autonomy is fragile. It requires nurturing, protection, and active cultivation. In an age of increasingly capable AI, preserving our capacity for critical reflection, independent thought, and moral reasoning isn't a nostalgic refusal of progress. It's a commitment to remaining fully human in a world of powerful machines.

The technology will continue advancing. The question is whether our thinking will keep pace, or whether we'll wake up one day to discover we've outsourced not just our decisions but our very capacity to make them. The choice, for now, remains ours. Whether it will remain so depends on the choices we make today about how we engage with the algorithmic thought partners increasingly mediating our lives.

We have the research, the frameworks, and the strategies. What we need now is the will to implement them, the discipline to resist convenience when it comes at the cost of competence, and the wisdom to recognise that some things are worth doing ourselves even when machines can do them faster. Our cognitive independence isn't just a capability; it's the foundation of meaningful human agency. In choosing to preserve it, we choose to remain authors of our own lives rather than editors of algorithmic suggestions.


Sources and References

Academic Research

  1. Gerlich, M. (2025). “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, 15(1), 6. DOI: 10.3390/soc15010006. News coverage: https://phys.org/news/2025-01-ai-linked-eroding-critical-skills.html

  2. Collins, K. M., et al. (2024). “Building machines that learn and think with people.” Nature Human Behaviour, 8, 1851-1863. https://cocosci.princeton.edu/papers/Collins2024a.pdf

  3. Scientific Reports. (2020). “Habitual use of GPS negatively impacts spatial memory during self-guided navigation.” https://www.nature.com/articles/s41598-020-62877-0

  4. Philosophical Psychology. (2025, January). “Human autonomy with AI in the loop.” https://www.tandfonline.com/doi/full/10.1080/09515089.2024.2448217

  5. Philosophy & Technology. (2025). “Autonomy by Design: Preserving Human Autonomy in AI Decision-Support.” https://link.springer.com/article/10.1007/s13347-025-00932-2

  6. Frontiers in Artificial Intelligence. (2025). “Ethical theories, governance models, and strategic frameworks for responsible AI adoption and organizational success.” https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1619029/full

  7. Journal of Management Information Systems. (2022). “Algorithmic versus Human Advice: Does Presenting Prediction Performance Matter for Algorithm Appreciation?” Vol 39, No 2. https://www.tandfonline.com/doi/abs/10.1080/07421222.2022.2063553

  8. PNAS Nexus. (2024). “Public attitudes on performance for algorithmic and human decision-makers.” Vol 3, Issue 12. https://academic.oup.com/pnasnexus/article/3/12/pgae520/7915711

  9. PMC. (2023). “Machine vs. human, who makes a better judgement on innovation? Take GPT-4 for example.” https://pmc.ncbi.nlm.nih.gov/articles/PMC10482032/

  10. Scientific Reports. (2021). “Rethinking GPS navigation: creating cognitive maps through auditory clues.” https://www.nature.com/articles/s41598-021-87148-4

Industry and Policy Research

  1. McKinsey & Company. (2025). “AI in the workplace: A report for 2025.” https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

  2. McKinsey & Company. (2024). “Rethinking decision making to unlock AI potential.” https://www.mckinsey.com/capabilities/operations/our-insights/when-can-ai-make-good-decisions-the-rise-of-ai-corporate-citizens

  3. Pew Research Centre. (2023). “The Future of Human Agency.” https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/

  4. Pew Research Centre. (2017). “Humanity and human judgement are lost when data and predictive modelling become paramount.” https://www.pewresearch.org/internet/2017/02/08/theme-3-humanity-and-human-judgment-are-lost-when-data-and-predictive-modeling-become-paramount/

  5. World Health Organisation. (2024, January). “WHO releases AI ethics and governance guidance for large multi-modal models.” https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models

Organisational and Think Tank Sources

  1. The Decision Lab. (2024). “How to Preserve Agency in an AI-Driven Future.” https://thedecisionlab.com/insights/society/autonomy-in-ai-driven-future

  2. Hojman, D. & Miranda, A. (cited research on agency and wellbeing).

  3. European Commission. (2019, updated 2024). “Ethics Guidelines for Trustworthy AI.”

  4. OECD. (2019, updated 2024). “Principles on Artificial Intelligence.”

  5. NIST. “AI Risk Management Framework.”

  6. Harvard Business Review. (2018). “Collaborative Intelligence: Humans and AI Are Joining Forces.” https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces

Additional Research Sources

  1. IE University Centre for Health and Well-being. (2024). “AI's cognitive implications: the decline of our thinking skills?” https://www.ie.edu/center-for-health-and-well-being/blog/ais-cognitive-implications-the-decline-of-our-thinking-skills/

  2. Big Think. (2024). “Is AI eroding our critical thinking?” https://bigthink.com/thinking/artificial-intelligence-critical-thinking/

  3. MIT Horizon. (2024). “Critical Thinking in the Age of AI.” https://horizon.mit.edu/critical-thinking-in-the-age-of-ai

  4. Advisory Board. (2024). “4 ways to keep your critical thinking skills sharp in the ChatGPT era.” https://www.advisory.com/daily-briefing/2025/09/08/chat-gpt-brain

  5. NSTA. (2024). “To Think or Not to Think: The Impact of AI on Critical-Thinking Skills.” https://www.nsta.org/blog/think-or-not-think-impact-ai-critical-thinking-skills

  6. Duke Learning Innovation. (2024). “Does AI Harm Critical Thinking.” https://lile.duke.edu/ai-ethics-learning-toolkit/does-ai-harm-critical-thinking/

  7. IEEE Computer Society. (2024). “Cognitive Offloading: How AI is Quietly Eroding Our Critical Thinking.” https://www.computer.org/publications/tech-news/trends/cognitive-offloading

  8. IBM. (2024). “What is AI Governance?” https://www.ibm.com/think/topics/ai-governance

  9. Vinod Sharma's Blog. (2025, January). “2025: The Rise of Powerful AI Agents Transforming the Future.” https://vinodsblog.com/2025/01/01/2025-the-rise-of-powerful-ai-agents-transforming-the-future/

  10. SciELO. (2025). “Research Integrity and Human Agency in Research Intertwined with Generative AI.” https://blog.scielo.org/en/2025/05/07/research-integrity-and-human-agency-in-research-gen-ai/

  11. Nature. (2024). “Trust in AI: progress, challenges, and future directions.” Humanities and Social Sciences Communications. https://www.nature.com/articles/s41599-024-04044-8

  12. Camilleri. (2024). “Artificial intelligence governance: Ethical considerations and implications for social responsibility.” Expert Systems, Wiley Online Library. https://onlinelibrary.wiley.com/doi/full/10.1111/exsy.13406


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
