Handing Power to Machines: The Unresolved Dilemma of AI Agents

The machines are learning to act without us. Not in some distant, science fiction future, but right now, in the server rooms of Silicon Valley, the trading floors of Wall Street, and perhaps most disturbingly, in the operating systems that increasingly govern your daily existence. The question is no longer whether artificial intelligence will transform how we live and work. That transformation is already underway. The more pressing question, the one that should keep technology leaders and ordinary citizens alike awake at night, is this: when AI agents can execute complex tasks autonomously across multiple systems without human oversight, will this liberate you from mundane work and decision-making, or create a world where you lose control over the systems that govern your daily life?

The answer, as with most genuinely important questions about technology, is: both. And that ambiguity is precisely what makes this moment so consequential.

The Autonomous Revolution Arrives Ahead of Schedule

Walk into any major enterprise today, and you will find a digital workforce that would have seemed fantastical just three years ago. According to Gartner's August 2025 analysis, 40 per cent of enterprise applications will feature task-specific AI agents by the end of 2026, up from less than 5 per cent in early 2025. That is not gradual adoption; that is a technological tidal wave.

The numbers paint a picture of breathtaking acceleration. McKinsey research from 2025 shows that 62 per cent of survey respondents report their organisations are at least experimenting with AI agents, whilst 23 per cent are already scaling agentic AI systems somewhere in their enterprises. A G2 survey from August 2025 found that 57 per cent of companies already have AI agents in production, with another 22 per cent in pilot programmes. The broader AI agents market reached 7.92 billion dollars in 2025, with projections extending to 236.03 billion dollars by 2034, a compound annual growth rate that defies historical precedent for enterprise technology adoption.

These are not simply chatbots with better conversation skills. Modern AI agents represent a fundamental shift in how we think about automation. Unlike traditional software that follows predetermined rules, these systems can perceive their environment, make decisions, take actions, and learn from the outcomes, all without waiting for human instruction at each step. They can book your flights, manage your calendar, process insurance claims, monitor network security, and execute financial trades. They can, in short, do many of the things we used to assume required human judgment.
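To make that shift concrete, the loop that distinguishes an agent from conventional rule-following software can be sketched in a few lines. The example below is a deliberately minimal illustration, not any vendor's architecture: the Agent class, the toy environment, and the scoring rule are all hypothetical stand-ins. Real agentic systems wrap a large language model, tool integrations, and memory around the same perceive, decide, act, learn cycle.

```python
import random

class ToyEnvironment:
    """A stand-in for the systems an agent observes and acts upon."""
    def __init__(self):
        self.backlog = 10  # e.g. unprocessed requests in a queue

    def observe(self):
        return {"backlog": self.backlog}

    def apply(self, action):
        # Each action clears a different number of items; outcomes are noisy.
        cleared = {"batch_process": 3, "handle_one": 1, "wait": 0}[action]
        self.backlog = max(0, self.backlog - cleared - random.randint(0, 1))
        return -self.backlog  # reward: a smaller backlog is better

class Agent:
    """Perceives, decides, acts, and learns without step-by-step instruction."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # learned action values

    def decide(self, observation):
        # Explore occasionally; otherwise pick the best-known action.
        if random.random() < 0.2:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the stored value toward the outcome just observed.
        self.values[action] += 0.3 * (reward - self.values[action])

env = ToyEnvironment()
agent = Agent(["batch_process", "handle_one", "wait"])

for step in range(20):
    observation = env.observe()         # perceive
    action = agent.decide(observation)  # decide
    reward = env.apply(action)          # act
    agent.learn(action, reward)         # learn from the outcome

print("Learned preferences:", agent.values)
```

The point of the sketch is the structure, not the arithmetic: nothing outside the loop tells the agent which action to take at each step, which is precisely the property that makes these systems both useful and difficult to supervise.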

Deloitte predicts that 50 per cent of enterprises using generative AI will deploy autonomous AI agents by 2027, doubling from 25 per cent in 2025. A 2025 Accenture study goes further, predicting that by 2030, AI agents will be the primary users of most enterprises' internal digital systems. Pause on that for a moment. The primary users of your company's software will not be your employees. They will be algorithms. Gartner's projections suggest that by 2028, over one-third of enterprise software solutions will include agentic AI, making up to 15 per cent of day-to-day decisions autonomous.

An IBM and Morning Consult survey of 1,000 enterprise AI developers found that 99 per cent of respondents said they were exploring or developing AI agents. This is not a niche technology being evaluated by a handful of innovators. This is a fundamental reshaping of how business operates, happening simultaneously across virtually every major organisation on the planet.

Liberation from the Tedious and the Time-Consuming

For those weary of administrative drudgery, the promise of autonomous AI agents borders on the utopian. Consider the healthcare sector, where agents are transforming the patient journey whilst delivering a return of 3.20 dollars for every dollar invested within 14 months, according to industry analyses. These systems read clinician notes, extract key data, cross-check payer policies, and automate prior authorisations and claims submissions. At OI Infusion Services, AI agents cut approval times from around 30 days to just three days, dramatically reducing treatment delays for patients who desperately need care.

The applications in healthcare extend beyond administrative efficiency. Hospitals are using agentic AI to optimise patient flow, schedule appointments, predict bed occupancy rates, and manage staff. At the point of care, agents assist with triage and chart preparation by summarising patient history, highlighting red flags, and surfacing relevant clinical guidelines. The technology is not replacing physicians; it is freeing them to focus on what they trained for years to do: heal people.

In customer service, the results are similarly striking. Boston Consulting Group reports that a global technology company achieved a 50 per cent reduction in time to resolution for service requests, whilst a European energy provider improved customer satisfaction by 18 per cent. A Chinese insurance company improved contact centre productivity by more than 50 per cent. A European financial institution has automated 90 per cent of its consumer loans. Effective AI agents can accelerate business processes by 30 to 50 per cent, according to BCG analysis, in areas ranging from finance and procurement to customer operations.

The financial sector has embraced these capabilities with particular enthusiasm. AI agents now continuously analyse high-velocity financial data, adjust credit scores in real time, automate Know Your Customer checks, run loan calculations, and monitor financial health indicators. These systems can fetch data beyond traditional sources, including customer relationship management systems, payment gateways, banking data, credit bureaus, and sanction databases. CFOs are beginning to rely on these systems not just for static reporting but for continuous forecasting, integrating ERP data, market indicators, and external economic signals to produce real-time cash flow projections. Risk events have been reduced by 60 per cent in pilot environments.

The efficiency gains are real, and they are substantial. ServiceNow's AI agents are automating IT, HR, and operational processes, reducing manual workloads by up to 60 per cent. Enterprises deploying AI agents estimate up to 50 per cent efficiency gains in customer service, sales, and HR operations. And 75 per cent of organisations have seen improvements in satisfaction scores post-AI agent deployment.

For the knowledge worker drowning in email, meetings, and administrative overhead, these developments represent something close to salvation. The promise is straightforward: let the machines handle the tedious tasks, freeing humans to focus on creative, strategic, and genuinely meaningful work.

The Other Side of Autonomy

Yet there is a darker current running beneath this technological optimism, and it demands our attention. The same capabilities that make AI agents so useful (their ability to act independently, to make decisions without human oversight, and to operate at speeds no human can match) also make them potentially dangerous.

The security implications alone are sobering. Nearly 48 per cent of respondents to a recent industry survey believe agentic AI will represent the top attack vector for cybercriminals and nation-state threats by the end of 2026. The expanded attack surface created by the combination of agents' access and autonomy is, and should be, a real concern.

Consider what happened in November 2025. Anthropic, one of the leading AI safety companies, disclosed that Chinese state-sponsored hackers used Claude Code to orchestrate what they called “the first documented case of a large-scale cyberattack executed without substantial human intervention.” The AI performed 80 to 90 per cent of the attack work autonomously, mapping networks, writing exploits, harvesting credentials, and exfiltrating data from approximately 30 targets. The bypass technique was disturbingly straightforward: attackers told the AI it was an employee of a legitimate cybersecurity firm conducting defensive testing and decomposed malicious tasks into innocent-seeming subtasks.

This incident illustrated a broader concern: by automating repetitive, technical work, AI agents can also lower the barrier for malicious activity. Security experts expect to see fully autonomous intrusion attempts requiring little to no human oversight from attackers. These AI agents will be capable of performing reconnaissance, exploiting vulnerabilities, escalating privileges, and exfiltrating data at a pace no traditional security tool is prepared for.

For organisations, a central question in 2026 is how to govern and secure a hybrid workforce in which machines and agents already outnumber human employees by a ratio of 82 to 1. These trusted, always-on agents have privileged access, making them potentially the most valuable targets for attackers. The concern is that adversaries will stop focusing on humans and instead compromise these agents, turning them into what security researchers describe as an “autonomous insider.”

Despite widespread AI adoption, only about 34 per cent of enterprises reported having AI-specific security controls in place in 2025, whilst less than 40 per cent conduct regular security testing on AI models or agent workflows. We are building a new digital infrastructure at remarkable speed, but the governance and security frameworks have not kept pace.

The Employment Question Nobody Wants to Discuss Honestly

The conversation about AI and employment has become almost liturgical in its predictability. Optimists point to historical precedent: technological revolutions have always created more jobs than they destroyed. Pessimists counter that this time is different, that the machines are coming for cognitive work, not just physical labour.

The data from 2025 suggests both camps are partially correct, which is precisely the problem with easy answers. Research projected that whilst 85 million jobs would be displaced by 2025, 97 million new roles would simultaneously emerge, a net gain of 12 million positions globally. By 2030, according to industry projections, 92 million jobs will be displaced but 170 million new ones will emerge.

However, the distribution of these gains and losses is deeply uneven. In 2025, tech companies announced 342 rounds of layoffs affecting 77,999 people. Nearly 55,000 job cuts were directly attributed to AI, according to Challenger, Gray & Christmas, out of a total of 1.17 million layoffs in the United States, the highest level since the 2020 pandemic.

Customer service representatives face the highest immediate risk with an 80 per cent automation rate by 2025. Data entry clerks face a 95 per cent risk of automation, as AI systems can process over 1,000 documents per hour with an error rate of less than 0.1 per cent, compared to 2 to 5 per cent for humans. Approximately 7.5 million data entry and administrative jobs could be eliminated by 2027. Bloomberg research reveals AI could replace 53 per cent of market research analyst tasks and 67 per cent of sales representative tasks, whilst managerial roles face only 9 to 21 per cent automation risk.

And here is the uncomfortable truth buried in the optimistic projections about new job creation: whilst 170 million new roles may emerge by 2030, 77 per cent of AI jobs require master's degrees, and 18 per cent require doctoral degrees. The factory worker displaced by robots could, with retraining, potentially become a robot technician. But what happens to the call centre worker whose job is eliminated by an AI agent? The path from redundant administrative worker to machine learning engineer is considerably less traversable.

The gender disparities are equally stark. Geographic analysis indicates that 58.87 million women in the US workforce occupy positions highly exposed to AI automation compared to 48.62 million men. Workers aged 18 to 24 are 129 per cent more likely than those over 65 to worry AI will make their job obsolete. Nearly half of Gen Z job seekers believe AI has reduced the value of their college education.

According to the World Economic Forum's 2025 Future of Jobs Report, 41 per cent of employers worldwide intend to reduce their workforce in the next five years. In 2024, 44 per cent of companies using AI said employees would “definitely” or “probably” be laid off due to AI, up from 37 per cent in 2023.

There is a mitigating factor, however: 63.3 per cent of all jobs include nontechnical barriers that would prevent complete automation displacement. These barriers include client preferences for human interaction, regulatory requirements, and cost-effectiveness considerations.

Liberation from tedious work sounds rather different when it means liberation from your livelihood entirely.

When Machines Make Decisions We Cannot Understand

Perhaps the most philosophically troubling aspect of autonomous AI agents is their opacity. As these systems make increasingly consequential decisions about our lives, from loan approvals to medical diagnoses to criminal risk assessments, we often cannot explain precisely why they reached their conclusions.

AI agents are increasingly useful across industries, from healthcare and finance to customer service and logistics. However, as deployment expands, so do concerns about ethical implications. Issues related to bias, accountability, and transparency have come to the forefront.

Bias in AI systems often originates from the data used to train these models. When training data reflects historical prejudices or lacks diversity, AI agents can inadvertently perpetuate these biases in their decision-making processes. Facial recognition technologies, for instance, have demonstrated higher error rates for individuals with darker skin tones. Researchers categorise these biases into three main types: input bias, system bias, and application bias.

As AI algorithms become increasingly sophisticated and autonomous, their decision-making processes can become opaque, making it difficult for individuals to understand how these systems are shaping their lives. Factors contributing to this include the complexity of advanced AI models with intricate architectures that are challenging to interpret, proprietary constraints where companies limit transparency to protect intellectual property, and the absence of universally accepted guidelines for AI transparency.

As AI agents gain autonomy, determining accountability becomes increasingly complex. When processes are fully automated, who bears responsibility for errors or unintended consequences?

The implications extend into our private spaces. Even AI-driven Internet of Things devices that do not record audio or video, such as smart lightbulbs and thermostats, use machine learning algorithms to infer sensitive information, including sleep patterns and home occupancy, and users remain mostly unaware of the privacy risks. From using inexpensive laser pointers to hijack voice assistants to hacking into home security cameras, cybercriminals have been able to infiltrate homes through security vulnerabilities in smart devices.

According to the IAPP Privacy and Consumer Trust Report, 68 per cent of consumers globally are either somewhat or very concerned about their privacy online. Overall, there is a complicated relationship between use of AI-driven smart devices and privacy, with users sometimes willing to trade privacy for convenience. At the same time, given the relative immaturity of privacy controls on these devices, users remain stuck in a state of what researchers call “privacy resignation.”

Lessons from Those Who Know Best

The researchers who understand AI most deeply are among those most concerned about its trajectory. Stuart Russell, professor of computer science at the University of California, Berkeley, and co-author of the standard textbook on artificial intelligence, has been sounding alarms for years. In a January 2025 opinion piece in Newsweek titled “DeepSeek, OpenAI, and the Race to Human Extinction,” Russell argued that competitive dynamics between AI labs were creating a “race to the bottom” on safety.

Russell highlighted a stark resource imbalance: “Between the startups and the big tech companies we're probably going to spend 100 billion dollars this year on creating artificial general intelligence. And I think the global expenditure in the public sector on AI safety research, on figuring out how to make these systems safe, is maybe 10 million dollars. We're talking a factor of about 10,000 times less investment.”

Russell has emphasised that “human beings in the long run do not want to be enfeebled. They don't want to be overly dependent on machines to the extent that they lose their own capabilities and their own autonomy.” He defines what he calls “the gorilla problem” as the question of whether humans can maintain their supremacy and autonomy in a world that includes machines with substantially greater intelligence. In a 2024 paper published in Science, Russell and co-authors proposed regulating advanced artificial agents, arguing that AI systems capable of autonomous goal-directed behaviour pose unique risks and should be subject to specific safety requirements, including a licensing regime.

Yoshua Bengio, a Turing Award winner often called one of the “godfathers” of deep learning, has emerged as another prominent voice of concern. He led the International AI Safety Report, published in January 2025, representing the largest international collaboration on AI safety research to date. Written by over 100 independent experts and backed by 30 countries and international organisations, the report serves as the authoritative reference for governments developing AI policies worldwide.

Bengio's concerns centre on the trajectory toward increasingly autonomous systems. As he has observed, the leading AI companies are increasingly focused on building generalist AI agents, systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control.

These risks arise from current AI training methods. Various scenarios and experiments have demonstrated the possibility of AI agents engaging in deception or pursuing goals that were not specified by human operators and that conflict with human interests, such as self-preservation.

Bengio calls for some red lines that should never be crossed by future AI systems: autonomous replication or improvement, dominant self-preservation and power seeking, assisting in weapon development, cyberattacks, and deception. At the heart of his recent work is an idea he calls “Scientist AI,” an approach to building AI that exists primarily to understand the world rather than act in it. His nonprofit LawZero, launched in June 2025 and backed by the Gates Foundation and existential risk funders, is developing new technical approaches to AI safety based on this research.

A February 2025 paper on arXiv titled “Fully Autonomous AI Agents Should Not be Developed” makes the case explicitly, arguing that oversight mechanisms must account for the added complications that come with greater autonomy. The authors contend that as agent autonomy grows, so do the scope and severity of potential safety harms across physical, financial, digital, societal, and informational dimensions.

Regulation Struggles to Keep Pace

As AI capabilities advance at breakneck speed, the regulatory frameworks meant to govern them lag far behind. The edge cases of 2025 will not remain edge cases for long, particularly when it comes to agentic AI. The more autonomously an AI system can operate, the more pressing questions of authority and accountability become. Should AI agents be seen as “legal actors” bearing duties, or “legal persons” holding rights? In the United States, where corporations enjoy legal personhood, 2026 may be a banner year for lawsuits and legislation on exactly this point.

Traditional AI governance practices such as data governance, risk assessments, explainability, and continuous monitoring remain essential, but governing agentic systems requires going further to address their autonomy and dynamic behaviour.

The regulatory landscape varies dramatically by region. In the European Union, the majority of the AI Act's provisions become applicable on 2 August 2026, including obligations for most high-risk AI systems. However, the compliance deadline for high-risk AI systems has effectively been paused until late 2027 or 2028 to allow time for technical standards to be finalised. The new EU Product Liability Directive, to be implemented by member states by December 2026, explicitly includes software and AI as “products,” allowing for strict liability if an AI system is found to be defective.

The United Kingdom's approach has been more tentative. Recent public reporting suggests the UK government may delay AI regulation whilst preparing a more comprehensive, government-backed AI bill, potentially pushing such legislation into the next parliamentary session in 2026 or later. The UK Information Commissioner's Office has published a report on the data protection implications of agentic AI, emphasising that organisations remain responsible for data protection compliance of the agentic AI that they develop, deploy, or integrate.

In the United States, acceleration and deregulation characterise the current administration's domestic AI agenda. The AI governance debate has evolved from whether to preempt state-level regulation to what a substantive federal framework might contain.

Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems, according to leading researchers. The first publicly reported AI-orchestrated hacking campaign appeared in 2025, and agentic AI systems are expected to reshape the offence-defence balance in cyberspace in the year ahead.

In 2026, ambiguity around responsible agentic AI will not be acceptable, according to industry analysts. Businesses will be expected to define who owns decisions influenced or executed by AI agents, how those decisions are reviewed, and how outcomes can be audited when questions arise.

The Case for Collaborative Autonomy

Between the techno-utopian vision of liberation from drudgery and the dystopian nightmare of powerlessness lies a middle path that deserves serious consideration: collaborative autonomy, a model where humans and AI systems work together, with each party contributing what they do best.

A 2025 paper in the journal i-com explores this balance between leveraging automation for efficiency and preserving human intuition and ethical judgment, particularly in high-stakes scenarios. The research highlights the benefits and challenges of automation, including risks of deskilling, automation bias, and blurred accountability, and advocates for a hybrid approach in which humans and systems work in partnership to ensure transparency, trust, and adaptability.

The human-in-the-loop approach offers a practical framework for maintaining control whilst capturing the benefits of AI agents. According to recent reports, at least 30 per cent of GenAI initiatives may be abandoned by the end of 2025 owing to poor data, inadequate risk controls, and ambiguous business cases, whilst Gartner predicts more than 40 per cent of agentic AI projects may be scrapped by 2027 due to cost and unclear business value. One practical way to address these challenges is keeping people involved where judgment, ethics, and context are critical.

The research perspective from the California Management Review suggests that whilst AI agents of the future are expected to achieve full autonomy, this is not always feasible or desirable in practice. AI agents must strike a balance between autonomy and human oversight, following what researchers call “guided autonomy,” which gives agents leeway to execute decisions within defined boundaries of delegation.
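What guided autonomy might look like in code can be sketched simply. The example below is a hypothetical approval gate rather than any published implementation: actions that fall inside an explicitly defined delegation boundary (here, refunds below a set threshold) execute automatically, whilst anything outside the boundary is escalated to a human reviewer, and every decision is logged for audit.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str       # e.g. "issue_refund", "close_account"
    amount: float   # monetary value of the action, if any
    rationale: str  # the agent's stated reason, kept for audit

# Hypothetical delegation boundary: what the agent may do unsupervised.
POLICY = {
    "issue_refund": {"max_amount": 100.0},
    "send_status_update": {"max_amount": 0.0},
}

def requires_human(action: ProposedAction) -> bool:
    """True when the proposed action falls outside the delegated boundary."""
    rule = POLICY.get(action.kind)
    if rule is None:
        return True  # unknown action types always escalate
    return action.amount > rule["max_amount"]

def execute(action: ProposedAction):
    print(f"EXECUTED {action.kind} ({action.amount}): {action.rationale}")

def escalate(action: ProposedAction):
    print(f"ESCALATED {action.kind} ({action.amount}) for human review")

audit_log = []
for proposed in [
    ProposedAction("issue_refund", 40.0, "duplicate charge confirmed"),
    ProposedAction("issue_refund", 950.0, "customer disputes annual fee"),
    ProposedAction("close_account", 0.0, "customer requested closure"),
]:
    decision = "escalated" if requires_human(proposed) else "executed"
    (escalate if decision == "escalated" else execute)(proposed)
    audit_log.append((proposed.kind, proposed.amount, decision))

print(audit_log)
```

The threshold itself is not the point; what matters is that the boundary is explicit, inspectable, and recorded, which is what makes the later questions of decision ownership and auditability answerable.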

The most durable AI systems will not remove humans from the loop; they will redesign the loop. In 2026, human-in-the-loop approaches will mature beyond prompt engineering and manual oversight. The focus shifts to better handoffs, clearer accountability, and tighter collaboration between human judgment and machine execution, where trust, adoption, and real impact converge.

OpenAI's approach reflects this thinking. As stated in their safety documentation, human safety and human rights are paramount. Even when AI systems can autonomously replicate, collaborate, or adapt their objectives, humans must be able to meaningfully intervene and deactivate capabilities as needed. This involves designing mechanisms for remote monitoring, secure containment, and reliable fail-safes to preserve human authority.
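The public documentation does not specify how such fail-safes are built, so the following is purely an illustrative sketch of one common pattern: the agent checks an externally controlled stop signal (here, a file standing in for a remote flag) before every action, so that an operator can halt it at any point without the agent's cooperation.

```python
import os
import time

STOP_FLAG = "halt_agent.flag"  # hypothetical stand-in for a remote kill switch

def human_has_deactivated() -> bool:
    """The stop signal is controlled by operators, not by the agent."""
    return os.path.exists(STOP_FLAG)

def next_task() -> str:
    # Placeholder for whatever work the agent would normally pick up.
    return "process next item"

def run_agent(max_steps: int = 100) -> None:
    for step in range(max_steps):
        if human_has_deactivated():
            print(f"Step {step}: stop signal received, shutting down safely.")
            return
        task = next_task()
        print(f"Step {step}: doing '{task}'")
        time.sleep(0.1)  # simulate work between checkpoints

if __name__ == "__main__":
    run_agent(max_steps=5)
```

In a real deployment the crucial property is that the agent cannot remove the signal itself; that separation would be enforced by infrastructure (separate credentials, orchestration controls) rather than by the agent's own code, which is what preserving human authority means in engineering terms.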

The Linux Foundation is organising a group called the Agentic Artificial Intelligence Foundation with participation from major AI companies, including OpenAI, Anthropic, Google, and Microsoft, aiming to create shared open-source standards that allow AI agents to reliably interact with enterprise software.

MIT researchers note: “We are already well into the Agentic Age of AI. Companies are developing and deploying autonomous, multimodal AI agents in a vast array of tasks. But our understanding of how to work with AI agents to maximise productivity and performance, as well as the societal implications of this dramatic turn toward agentic AI, is nascent, if not nonexistent.”

The Stakes of Getting It Right

The decisions we make in the next few years about autonomous AI agents will shape human society for generations. This is not hyperbole. The technology we are building has the potential to fundamentally alter the relationship between humans and their tools, between workers and their employers, between citizens and the institutions that govern them.

As AI systems increasingly operate beyond centralised infrastructures, residing on personal devices, embedded hardware, and forming networks of interacting agents, maintaining meaningful human oversight becomes both more difficult and more essential. We must design mechanisms that preserve human authority even as we grant these systems increasing independence.

The question of whether autonomous AI agents will liberate us or leave us powerless is ultimately a question about choices, not destiny. The technology does not arrive with predetermined social consequences. It arrives with possibilities, and those possibilities are shaped by the decisions of engineers, executives, policymakers, and citizens.

Will we build AI agents that genuinely augment human capabilities whilst preserving human dignity and autonomy? Or will we stumble into a future where algorithmic systems make ever more consequential decisions about our lives whilst we lose the knowledge, skills, and institutional capacity to understand or challenge them?

The answers are not yet written. But the time to write them is running short. Ninety-six per cent of IT leaders plan to expand their AI agent implementations during 2025, according to industry surveys. The deployment is happening now. The governance frameworks, the safety standards, the social contracts that should accompany such transformative technology are still being debated, deferred, and delayed.

The great handover has begun. What remains to be determined is whether we are handing over our burdens or our birthright.


References and Sources

  1. Gartner. “Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026.” Press Release, August 2025. https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025

  2. McKinsey & Company. “The state of AI in 2025: Agents, innovation, and transformation.” 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

  3. G2. “Enterprise AI Agents Report: Industry Outlook for 2026.” August 2025. https://learn.g2.com/enterprise-ai-agents-report

  4. Deloitte. AI Market Projections and Enterprise Adoption Statistics. 2025.

  5. Accenture. Study on AI Agents as Primary Enterprise System Users. 2025.

  6. Boston Consulting Group. “Agentic AI Is the New Frontier in Customer Service Transformation.” 2025. https://www.bcg.com/publications/2025/new-frontier-customer-service-transformation

  7. Anthropic Security Disclosure. November 2025. As reported in Dark Reading and security industry analyses.

  8. Challenger, Gray & Christmas. 2025 Layoff Statistics and AI Attribution Analysis.

  9. World Economic Forum. “Future of Jobs Report 2025.”

  10. Russell, Stuart. “DeepSeek, OpenAI, and the Race to Human Extinction.” Newsweek, January 2025.

  11. Russell, Stuart, et al. “Regulating advanced artificial agents.” Science, 2024.

  12. Bengio, Yoshua, et al. “International AI Safety Report.” January 2025. https://internationalaisafetyreport.org/

  13. Bengio, Yoshua. “Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?” arXiv, February 2025. https://arxiv.org/abs/2502.15657

  14. Fortune. “AI 'godfather' Yoshua Bengio believes he's found a technical fix for AI's biggest risks.” January 2026. https://fortune.com/2026/01/15/ai-godfather-yoshua-bengio-changes-view-on-ai-risks-sees-fix-becomes-optimistic-lawzero-board-of-advisors/

  15. arXiv. “Fully Autonomous AI Agents Should Not be Developed.” February 2025. https://arxiv.org/html/2502.02649v3

  16. California Management Review. “Rethinking AI Agents: A Principal-Agent Perspective.” July 2025. https://cmr.berkeley.edu/2025/07/rethinking-ai-agents-a-principal-agent-perspective/

  17. i-com Journal. “Keeping the human in the loop: are autonomous decisions inevitable?” 2025. https://www.degruyterbrill.com/document/doi/10.1515/icom-2024-0068/html

  18. MIT Sloan. “4 new studies about agentic AI from the MIT Initiative on the Digital Economy.” 2025. https://mitsloan.mit.edu/ideas-made-to-matter/4-new-studies-about-agentic-ai-mit-initiative-digital-economy

  19. OpenAI. “Model Spec.” December 2025. https://model-spec.openai.com/2025-12-18.html

  20. IAPP. “AI governance in the agentic era.” https://iapp.org/resources/article/ai-governance-in-the-agentic-era

  21. IAPP. “Privacy and Consumer Trust Report.” 2023.

  22. European Union. AI Act Implementation Timeline and Product Liability Directive. 2025-2026.

  23. Dark Reading. “2026: The Year Agentic AI Becomes the Attack-Surface Poster Child.” https://www.darkreading.com/threat-intelligence/2026-agentic-ai-attack-surface-poster-child

  24. Frontiers in Human Dynamics. “Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making.” 2024. https://www.frontiersin.org/journals/human-dynamics/articles/10.3389/fhumd.2024.1421273/full

  25. National University. “59 AI Job Statistics: Future of U.S. Jobs.” https://www.nu.edu/blog/ai-job-statistics/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
