Reskilling Will Not Save Us: The Agentic AI Labour Crisis

In February 2026, Mustafa Suleyman, the chief executive of Microsoft AI, told the Financial Times that artificial intelligence would achieve “human-level performance on most, if not all, professional tasks” within 12 to 18 months. Most tasks that involve sitting down at a computer, he said, would be fully automated. Accounting. Legal work. Marketing. Project management. All of it. The prediction was remarkable not because it was outlandish but because Suleyman runs one of the largest AI operations on the planet and was speaking with the calm certainty of someone describing next quarter's product roadmap rather than a civilisational rupture.
A few months earlier, Salesforce chief executive Marc Benioff had already demonstrated what that roadmap looks like in practice. On The Logan Bartlett Show in September 2025, Benioff revealed that AI agents now handle roughly half of all customer service interactions at Salesforce. The company's support workforce had been cut from 9,000 to approximately 5,000 employees since the beginning of that year. Not through attrition. Through replacement. The automation shift, he noted, had lowered support costs by 17 per cent. “I need less heads,” Benioff said, with the bluntness of a chief executive who had already made the calculation and moved on.
These are not hypotheticals from a futurist's slide deck. They are operational realities at two of the world's largest technology companies. And they hint at something that the mainstream conversation about artificial intelligence and employment has been reluctant to confront: the labour crisis that is now forming will not resemble the one we spent a decade preparing for. It will be faster, broader, and stranger. The familiar narrative of blue-collar workers displaced by robots, retrained through government programmes, and reabsorbed into a new knowledge economy is not merely optimistic. It may be entirely wrong.
When the Tool Becomes the Worker
For the better part of a decade, the dominant metaphor for artificial intelligence in the workplace was “copilot.” AI would augment human work. It would handle the tedious parts. It would make professionals faster, sharper, more productive. The human remained in the loop, in the driver's seat, in control. This framing was convenient for technology companies selling enterprise subscriptions and for governments drafting policy papers. It was also, for a time, reasonably accurate.
That time is ending.
The shift now underway is from AI as a tool to AI as an agent. Where a copilot assists with a task, an agent completes a workflow. The distinction is not semantic. It is structural. A copilot requires a human to initiate each step, review each output, and decide what comes next. An agent receives a goal and executes the entire chain of actions required to achieve it, making intermediate decisions autonomously, invoking tools, and adapting its approach based on the results it encounters along the way.
Consider what this looks like in practice. An agentic AI system does not summarise a legal document and wait for a lawyer to act on it. It reads the document, identifies the relevant clauses, cross-references them against a regulatory database, drafts a compliance memo, and routes it to the appropriate department. End to end. No human in the loop until the output arrives. KPMG's Q2 2025 survey found that 33 per cent of organisations had already deployed AI agents, a threefold increase from 11 per cent in the prior survey period. The velocity of adoption is itself part of the story.
This distinction matters enormously. Previous waves of automation targeted discrete, repetitive tasks: welding car chassis, scanning barcodes, sorting parcels. The displacement was real but contained. It affected specific roles in specific industries and could, at least in theory, be addressed through retraining. Agentic AI is different because it targets entire occupational workflows, and it does so across the information economy simultaneously. A recent paper published on arXiv in April 2026, analysing occupational exposure across five major US technology regions, found that 93.2 per cent of 236 analysed occupations in information-intensive sectors crossed the moderate-risk threshold for agentic AI displacement by 2030. Credit analysts, sustainability specialists, and even judges appeared at the high end of the exposure spectrum.
The market reflects this shift. The AI agent market is projected to grow from 7.84 billion dollars in 2025 to 52.62 billion dollars by 2030, a compound annual growth rate of 46.3 per cent. Sixty-two per cent of organisations surveyed by McKinsey in 2025 said they were at least experimenting with AI agents. Half of AI high performers intended to use AI to transform their businesses entirely, redesigning workflows from the ground up rather than bolting AI onto existing processes. Among organisations with extensive agentic AI adoption, 45 per cent expected reductions in middle management layers. The infrastructure for a workerless middle is being built in real time, and the companies building it are not shy about saying so.
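The arithmetic behind that projection is easy to check. A minimal sketch, using only the start and end figures cited above and a five-year horizon:

```python
# Verify the compound annual growth rate implied by the market figures cited
# above: $7.84bn in 2025 growing to $52.62bn by 2030, a five-year span.
def cagr(start_value, end_value, years):
    """Compound annual growth rate, returned as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(7.84, 52.62, 5)
print(f"{rate:.1%}")  # prints 46.3%, matching the projection quoted above
```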
The Reskilling Illusion
Whenever automation threatens jobs, the response from governments and industry bodies follows a predictable script. Workers will be reskilled. New jobs will emerge. The economy will adapt. This narrative has been the centrepiece of labour policy for decades, and it has a comforting internal logic: since every previous technological revolution eventually created more jobs than it destroyed, this one will too.
The problem is that the logic depends on a transition period that may no longer exist.
The World Economic Forum's Future of Jobs Report 2025 projected that global trends in technology, economy, demographics, and green transition would generate 170 million new jobs by 2030 while displacing 92 million others, yielding a net increase of 78 million positions. On paper, the maths works. But the report also noted that 41 per cent of employers intended to reduce their workforce due to AI by 2030, and that the 22 per cent churn rate in the global workforce meant roles were being eliminated and recreated faster than workers could realistically transition. The jobs destroyed and the jobs created are not interchangeable: they do not appear in the same places, demand the same skills, or involve the same people. A financial analyst made redundant in Manchester does not become a machine learning engineer in San Francisco. The geography, the skills, the timelines, and the economics of transition all point in different directions.
The reskilling infrastructure that is supposed to bridge this gap is, by most honest assessments, not fit for purpose. A 2026 survey found that 51 per cent of organisations reported widening skills gaps, with AI adoption outpacing reskilling efforts. Sixty-seven per cent of US workers said their organisation had not been proactive in training employees to work alongside AI. Only 17 per cent reported their company was doing anything meaningful to upskill workers in AI-impacted roles. The demand for AI fluency in job postings, meanwhile, has grown sevenfold in two years, from roughly 1 million workers in occupations requiring it in 2023 to about 7 million in 2025. The skills the market demands are racing ahead of the skills the workforce possesses, and the gap is widening, not narrowing.
The historical precedent is not encouraging either. The offshoring wave of the 1990s and 2000s generated similar promises about retraining displaced manufacturing workers. Those programmes were, by most economic assessments, deeply inadequate. Workers who lost factory jobs in the American Midwest or the industrial towns of northern England did not seamlessly transition into knowledge work. Many never recovered their previous earnings. The communities they lived in hollowed out. Decades later, the social and political consequences of that failure are still unfolding.
Now imagine that dynamic, but applied to the knowledge workers themselves.
Daron Acemoglu, the Nobel Prize-winning economist at MIT, has spent years developing a task-based framework for understanding automation's impact on labour markets. His central insight is that automation raises average productivity but does not necessarily increase, and may in fact reduce, worker marginal productivity. Over the past four decades, he argues, automation has multiplied corporate profits without producing shared prosperity. The crucial question about AI, in Acemoglu's framing, is whether it will take the form of “machine usefulness,” helping workers become more productive, or whether it will be aimed at mimicking general intelligence to replace human labour entirely. His concern is that the industry has overwhelmingly pursued the latter path: not providing new information to a biotechnologist but replacing a customer service worker with automated call-centre technology. The distinction determines whether AI becomes a force for broad-based prosperity or for further concentration of economic power.
The Vanishing First Rung
Perhaps the most underappreciated dimension of the emerging labour crisis is its impact on entry-level employment. The traditional career ladder in knowledge work has always begun with grunt work: junior lawyers reviewing documents, junior analysts building financial models, junior developers writing boilerplate code. These tasks were not merely busywork. They were the mechanism through which professionals developed expertise, judgement, and institutional knowledge. They were the learning curve itself.
Agentic AI is automating that learning curve.
A Stanford study published in 2025 found that hiring for entry-level, AI-impacted positions, such as junior accounting roles, fell by 16 per cent over approximately two years. In the United Kingdom, tech graduate roles fell by 46 per cent in 2024, with projections for a further 53 per cent decline by 2026. In the United States, entry-level postings in software development and data analysis have plummeted, with some estimates indicating a 67 per cent decrease in junior tech postings. The share of tech job postings requiring at least five years of experience jumped from 37 per cent to 42 per cent between mid-2022 and mid-2025, while the share open to candidates with two to four years of experience dropped from 46 per cent to 40 per cent over the same period.
The implications run deeper than unemployment statistics. If the entry-level rung of the career ladder is removed, the entire structure above it becomes unstable. Senior professionals retire. Mid-career workers advance. But the pipeline of replacements narrows to a trickle. Organisations that automate away their junior roles may find, within five to ten years, that they have a workforce with no depth: plenty of experienced employees nearing retirement, a handful of AI systems handling routine work, and virtually no one in between with the institutional knowledge and professional judgement that can only develop through years of hands-on practice.
This is not a speculative scenario. It is the logical consequence of decisions being made right now, and the people most affected can see it coming. A survey of the class of 2026 found that 89 per cent of graduates believed AI could replace entry-level roles, compared with 64 per cent just one year earlier. This is not irrational anxiety. It is a reasonable reading of the data. Employers project a marginal 1.6 per cent increase in hiring for the class of 2026 compared with the class of 2025, which, set against a growing cohort of graduates entering the labour market, amounts to a contraction in opportunity.
The question is what happens to an entire generation of workers who cannot find their way onto the first rung. And what happens to the professions that depend on that generation eventually climbing to the top.
The White-Collar Reckoning
The conversation about AI and employment has been distorted by a persistent assumption: that white-collar, knowledge-intensive work is somehow safe. That assumption was always fragile. It is now collapsing.
The occupations most immediately vulnerable to agentic AI are not the ones most people expect. They are not cashiers and truck drivers, the perennial examples in automation discourse. They are financial analysts, compliance officers, legal researchers, administrative coordinators, marketing strategists, and mid-level project managers. These are the roles that consist primarily of information processing, pattern recognition, and structured decision-making, precisely the tasks that large language models and agentic systems now perform with increasing competence.
Anthropic's Claude 3.7 Sonnet, released in early 2025, reliably completes tasks that would take a human approximately one hour. Current frontier models from OpenAI and Google achieve near-perfect success rates on tasks requiring less than four minutes of human effort, though success rates drop below 10 per cent for tasks exceeding four hours. The capability curve is steep and climbing. What took an hour last year takes minutes today. What takes four hours today will take minutes within a year or two, if the trajectory holds.
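The shape of that curve can be made concrete with a toy model. The one-hour starting point comes from the figure cited above; the seven-month doubling period is a hypothetical assumption chosen purely for illustration, not a measured value:

```python
# Toy projection of an AI system's "task horizon" (the longest human-task
# duration it can complete) under an assumed exponential doubling trend.
# The 7-month doubling period is a HYPOTHETICAL illustration, not a citation.
def task_horizon_minutes(start_minutes, months_elapsed, doubling_months):
    return start_minutes * 2 ** (months_elapsed / doubling_months)

# Starting from the roughly one-hour horizon described above:
for months in (0, 12, 24):
    horizon = task_horizon_minutes(60, months, 7)
    print(f"after {months:2d} months: ~{horizon / 60:.1f} hours")
```

Under any doubling period in this range, an hour-long horizon reaches multi-hour workflows within a year or two, which is the substance of the "if the trajectory holds" caveat.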
A 2025 Thomson Reuters report found that lawyers, accountants, and auditors were experimenting with AI for targeted tasks like document review and routine analysis, with productivity improvements that were real but marginal. The gap between marginal improvement and wholesale replacement is precisely where agentic AI operates. When the system can handle not just the document review but the entire compliance workflow, from intake to analysis to output, the calculus changes fundamentally.
AI adoption in accounting firms leapt from 9 per cent in 2024 to 41 per cent in 2025. Routine reconciliations, expense categorisation, audit preparation, and compliance documentation are increasingly handled by AI-powered platforms. Senior accountants remain essential for oversight and regulatory interpretation, but the pyramid of junior and mid-level staff that supports them is being compressed. Employment consultancy Challenger, Gray and Christmas reported that approximately 55,000 job cuts in 2025 were explicitly AI-related, with modelling-adjusted estimates placing actual AI-displaced or foregone positions at 200,000 to 300,000 across the US economy. Across the technology sector alone, more than 64,000 jobs were eliminated in 2025, with Microsoft announcing cuts of approximately 15,000 positions despite strong earnings.
The Klarna case offers a cautionary study in both directions. The Swedish fintech company replaced the work of 700 customer service employees with AI agents, and chief executive Sebastian Siemiatkowski initially celebrated the efficiency gains. But by early 2025, internal reviews and customer feedback revealed that the AI systems lacked the capacity for empathetic engagement and nuanced problem-solving that customer support requires. Service quality deteriorated. Customer complaints mounted. Klarna began rehiring human staff. The lesson was not that AI cannot do the work. It was that the work is more complex than it appears from the outside, and that automating it poorly carries its own costs. It was also, crucially, that the workers who were let go do not simply wait in suspended animation until a company decides it needs them again. They move on, retrain, relocate, or fall out of the workforce entirely. The damage of premature automation is not easily reversed.
The Global Asymmetry
The labour impact of agentic AI will not be distributed evenly across the world. For countries that built their economic development strategies around the offshoring of information services, the threat is existential.
India employs between 7.5 and 8 million people in its technology services sector and another 2 to 2.5 million in customer service and business process outsourcing. In February 2026, venture capitalist Vinod Khosla warned that India's IT services and BPO sectors could “almost completely disappear” within five years as AI systems outperform human expertise. The prediction may be hyperbolic, but the underlying dynamic is real. An estimated 1.65 million Indians working in voice support, data processing, and administrative BPO roles face direct displacement from AI agents, some of which can already handle up to 95 per cent of customer queries without human involvement.
In a worst-case scenario analysed by industry researchers, the headcount in India's tech services sector could decline from its current levels to approximately 6 million by 2031, while the customer service sector could shrink from between 2 and 2.5 million today to 1.8 million. For a country where technology outsourcing has been a primary engine of middle-class formation, these are not incremental adjustments. They are structural fractures in the economic model that lifted hundreds of millions out of poverty.
The Philippines, another major outsourcing hub, faces similar pressures. So do parts of Eastern Europe, North Africa, and Latin America, where call centres and back-office operations have provided employment pathways for millions. The International Labour Organisation estimated in 2025 that around a quarter of jobs worldwide, more than 600 million roles, are potentially exposed to the effects of generative AI. In Latin America specifically, between 26 and 38 per cent of jobs, roughly 88 million roles, could be affected.
The irony is sharp. For decades, the promise of globalisation was that developing countries could leapfrog industrialisation by building service economies. Now the very service tasks that powered that model are the ones most susceptible to automation. The global South was told to skip the factory and go straight to the call centre. It turns out the call centre was a waypoint, not a destination. And the next waypoint is not yet visible.
Policy in Slow Motion
The regulatory response to agentic AI's labour implications has been, with a few exceptions, strikingly inadequate. Most policy frameworks treat AI governance as a safety and ethics question, not a labour question. The European Union's AI Act, which began entering into force in stages from 2024, establishes important guardrails around high-risk AI systems, but it does not address the core challenge of structural job displacement. Regulating how AI systems are developed and deployed is fundamentally different from managing how societies absorb the economic dislocation those systems create.
Some European policy experts have recognised this gap. The European Policy Centre has called for a “European AI Social Compact,” tied to the European Social Fund, that would align technological progress with labour protections and targeted upskilling across all 27 member states. The German Institute for Employment Research projected that 1.6 million jobs could be reshaped by or lost to AI in Germany alone over the next fifteen years. More broadly, researchers estimate that 50.2 million Europeans, or 32 per cent of the working population, face the risk of displacement in their current roles.
In the United States, policy responses have been fragmented and modest. The Guaranteed Income Pilot Program Act of 2025, introduced by Representative Bonnie Watson Coleman, authorised 495 million dollars annually for five years to establish nationwide guaranteed income pilots. Representative Rashida Tlaib has proposed the BOOST Act, a 250-dollar monthly refundable tax credit. OpenAI chief executive Sam Altman has promoted the concept of an “American Equity Fund,” where large AI companies and landholders would contribute approximately 2.5 per cent of their value annually to a fund distributed to all citizens. California's Assembly Bill 661 mandates a feasibility study for a permanent statewide guaranteed income programme, explicitly acknowledging the threat automation poses to the state's tech-centric economy.
These are interesting experiments. They are not remotely proportional to the scale of the problem. A 250-dollar monthly payment does not replace a 60,000-dollar salary. A pilot programme does not constitute a safety net. And a feasibility study is, by definition, an acknowledgement that the actual policy does not yet exist.
The fundamental policy challenge is temporal. AI capabilities are advancing on a timeline measured in months. Labour markets adjust over years. Regulatory frameworks evolve over decades. The gap between the speed of technological disruption and the speed of institutional response is not closing. It is widening. And every month it widens further, the eventual adjustment becomes more painful.
What the Crisis Actually Looks Like
If the familiar automation narrative is wrong, what does the actual labour crisis look like?
It does not look like mass unemployment in the traditional sense. It looks like the gradual erosion of the occupational middle. The senior partners at law firms and accounting practices will be fine. The AI systems handling routine work will be fine. Everyone in between faces compression: fewer positions, lower wages, reduced autonomy, and a diminishing path from junior roles to senior ones.
It looks like a generation of graduates locked out of professions they trained for, holding degrees that assumed a labour market structure which no longer exists. It looks like the countries that built their development models around information services discovering that the rungs they climbed have been pulled up behind them.
It looks like corporate profits rising while median wages stagnate or decline, a pattern Acemoglu has documented across four decades of automation and one that agentic AI appears poised to accelerate. It looks like the political consequences of that divergence: the anger, the populism, the search for scapegoats that has already reshaped electoral politics across the Western world.
It looks, in other words, nothing like the orderly transition that government white papers and corporate social responsibility reports have been promising. And it looks precisely like the kind of crisis that societies tend not to prepare for, because preparing would require admitting that the current trajectory is unsustainable.
McKinsey's own research underscores the scale. Their study “Agents, Robots, and Us” found that currently demonstrated technologies could automate activities accounting for approximately 57 per cent of US work hours. Agents, defined as software systems that automate non-physical work, could perform tasks occupying 44 per cent of US work hours. Roles with the highest potential for automation make up approximately 40 per cent of total jobs, concentrated in legal and administrative services and in physically demanding roles such as drivers and machine operators.
The question is not whether work will change. It is whether the institutions meant to manage that change are capable of doing so at the required speed. The evidence, so far, suggests they are not.
Beyond the Productivity Promise
There is a deeper problem buried in the optimistic projections about AI and productivity, and it concerns what happens when the gains accrue almost entirely to capital rather than labour. Every previous technological revolution, from the steam engine to the personal computer, eventually produced broadly shared prosperity, but only after prolonged periods of dislocation, political conflict, and institutional reform. The factory system generated enormous wealth in 19th-century Britain, but it took decades of labour organising, legislative reform, and social upheaval before that wealth was distributed in a way that produced a functioning middle class.
The implicit assumption in today's AI discourse is that the same pattern will repeat, that displacement will be followed by adjustment, that new jobs will emerge, that the economy will find a new equilibrium. Perhaps. But the timescales that previous transitions required are worth noting. The Industrial Revolution's social adjustments took the better part of a century. The post-war economic settlement that created the modern welfare state was forged through two world wars and a global depression. The information technology revolution of the late 20th century produced three decades of wage stagnation for median workers before any broad-based gains materialised, and even those gains were unevenly distributed.
The agentic AI transition is occurring in a world with weaker unions, more fragile social safety nets, and governments already stretched thin by pandemic debt, climate adaptation costs, and geopolitical instability. The institutional capacity to manage a labour market shock of this magnitude is lower than at any point since the early Industrial Revolution.
A PricewaterhouseCoopers report found that 55 per cent of chief executives saw no measurable benefits from AI deployment, while a separate MIT study showed 95 per cent of enterprise uses of generative AI had no measurable impact on profit and loss. These findings might seem reassuring, but they cut both ways. If AI is not yet delivering transformative productivity gains for most organisations, then the job displacement that is already occurring is happening without the compensating economic growth that is supposed to fund new employment. The costs are arriving before the benefits. Workers are being replaced not because AI is producing extraordinary value but because it is producing adequate value at lower cost. That is a different economic story, and it has a different ending.
What Preparation Would Actually Require
Genuine preparation for the agentic AI transition would require a level of policy ambition that no major government has yet demonstrated. It would mean fundamentally rethinking education systems to emphasise adaptability and judgement over domain-specific knowledge that can be automated. It would mean building social insurance systems capable of supporting workers through multiple career transitions, not just one. It would mean confronting the distribution question directly: if AI-driven productivity gains flow primarily to the owners of AI systems, then some mechanism for redistribution is not a progressive wish but a structural necessity.
It would also mean being honest about what reskilling can and cannot do. Retraining a 45-year-old financial analyst to become an AI systems architect is not a realistic proposition for most people. The skills gap is not a gap that can be closed with a six-month certificate programme. And the jobs that are being created by the AI economy, the prompt engineers and machine learning specialists and AI safety researchers, represent a tiny fraction of the employment that is being automated away.
The Carnegie Endowment for International Peace argued in February 2026 that Europe's response to AI labour disruption must extend beyond regulating these systems into fiscal planning and institutional redesign. A credible response, their analysts wrote, rests on three pillars: establishing social protections for displaced workers, providing training infrastructure scaled for continuous transition, and building public trust. That framework is correct. Whether any government will implement it before the crisis arrives is another question entirely.
The pattern of the past decade suggests that policy will follow disaster rather than prevent it. We will get serious about the labour implications of agentic AI roughly 18 months after they become undeniable, which is to say, roughly 18 months too late.
The conversation about AI and jobs has been conducted in the language of the last disruption: automation of routine physical tasks, reskilling programmes, net job creation. That conversation is obsolete. What is coming is the automation of cognitive workflows at scale, the compression of occupational hierarchies, the elimination of the entry-level pipeline, and the concentration of economic gains in an ever-narrower segment of the labour market. It is not the crisis we were told to prepare for. And we are, by almost every meaningful measure, sleepwalking into it.
References and Sources
- Fortune, “Microsoft AI chief gives it 18 months for all white-collar work to be automated by AI,” February 2026.
- Fortune, “Salesforce CEO Marc Benioff says his company has cut 4,000 customer service jobs as AI steps in: 'I need less heads,'” September 2025.
- arXiv, “Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis of Emerging Labor Market Disruption,” April 2026.
- McKinsey and QuantumBlack, “The State of AI in 2025: Agents, Innovation, and Transformation,” 2025.
- McKinsey, “Insights on Today's Labor Market: Uncertainty, Agentic AI, and More,” 2025.
- World Economic Forum, “Future of Jobs Report 2025,” January 2025.
- Fast Company, “Reskilling Won't Save Us from AI. Here's What We Need to Do Instead,” 2025.
- Daron Acemoglu and Pascual Restrepo, “Automation and New Tasks: How Technology Displaces and Reinstates Labor,” Journal of Economic Perspectives, Vol. 33, No. 2, 2019.
- Daron Acemoglu, “The Simple Macroeconomics of AI,” MIT Economics Working Paper, 2024.
- IMF Finance and Development, “Rebalancing AI,” Daron Acemoglu and Simon Johnson, December 2023.
- Stanford University, study on AI impact on entry-level hiring, 2025. Reported via IEEE Spectrum, “AI Shifts Expectations for Entry Level Jobs,” 2025.
- IntuitionLabs, “AI's Impact on Graduate Jobs: A 2025 Data Analysis,” 2025.
- CNBC, “AI is not just ending entry-level jobs. It's the end of the career ladder as we know it,” September 2025.
- Thomson Reuters, report on AI adoption in legal and accounting services, 2025.
- Challenger, Gray and Christmas, AI-related job cut data, 2025.
- CNBC, “Klarna CEO says AI helped company shrink workforce by 40%,” May 2025.
- MLQ AI, “Klarna CEO admits AI job cuts went too far,” 2026.
- BusinessToday, “IT, BPO services will disappear in the next 5 yrs: Vinod Khosla,” February 2026.
- International Labour Organisation, generative AI exposure estimates, 2025.
- Ghost Research, “AI Impact on BPO: Transforming India's Workforce,” 2025.
- European Policy Centre, “AI's Impact on Europe's Job Market: A Call for a Social Compact,” 2025.
- Carnegie Endowment for International Peace, “How Europe Can Survive the AI Labor Transition,” February 2026.
- German Institute for Employment Research, AI job displacement projections for Germany, 2025.
- Congress.gov, Guaranteed Income Pilot Program Act of 2025 (H.R. 5830), introduced by Rep. Bonnie Watson Coleman, 2025.
- PricewaterhouseCoopers, report on CEO AI deployment outcomes, 2025.
- Deloitte, “The Agentic Reality Check: Preparing for a Silicon-Based Workforce,” Tech Trends 2026.
- PwC, “No More Pyramids: Rethinking Your Workforce for the Agentic AI Era,” 2025.
- HFS Research, “Prepare for Agentic AI to Shatter Corporate Workforces and Global Economies,” 2025.
- Equal Times, “Agentic AI and the Future of Work: Navigating Technological Promise and the Risk of Increased Automation,” 2025.
- AI2 Incubator, “Insights 15: The State of AI Agents in 2025: Balancing Optimism with Reality,” 2025.
- The Interview Guys, “89% of 2026 Grads Think AI Will Take Their Job Before They Even Get One,” 2026.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk