AI Will Not Take Your Job: It Will Hollow It Out

There is a particular kind of dread that does not show up in any labour market report. It is not the fear of being fired. It is the slow, creeping realisation that the thing you spent a decade learning to do well is now being done, competently enough, by a system that learned it in seconds. You still have your job. You still get paid. But something has shifted beneath you, something that the economists measuring unemployment rates and GDP growth have no instrument to detect.

In the March/April 2026 issue of the Harvard Business Review, researchers Erik Hermann of the European University Viadrina, Stefano Puntoni of the Wharton School at the University of Pennsylvania, and Carey K. Morewedge of Boston University's Questrom School of Business published a study that gave this dread a framework. Their paper, “Why Gen AI Feels So Threatening to Workers,” argued that the primary psychological threat of generative AI is not job displacement. It is something more intimate and harder to measure: the erosion of competence, autonomy, and relatedness, the three psychological needs that, according to decades of motivation research, make work feel meaningful in the first place. When those needs are satisfied, the authors found, employees embrace AI as a helpful tool. When they are frustrated, employees resist, disengage, and in some cases actively sabotage their organisation's AI initiatives.

The numbers are striking. A 2025 survey by Kyndryl, spanning 25 industries and eight countries, found that 45 per cent of CEOs report employees who are resistant or openly hostile to generative AI in the workplace. A separate cross-industry survey of 1,600 American knowledge workers found that 31 per cent admit to actively working against their company's AI strategy. Among Generation Z workers, that figure rises to 41 per cent. Meanwhile, according to a BCG survey published in 2025, 85 per cent of leaders and 78 per cent of managers regularly use generative AI, compared with only 51 per cent of frontline workers, a gap that reveals how differently the technology is experienced depending on where you sit in an organisation. This is not Luddism. This is something more psychologically complex: a workforce that senses, even if it cannot always articulate it, that the introduction of AI is not merely changing what they do but hollowing out why it mattered.

The Competence Trap

To understand why AI feels so destabilising, even to workers whose jobs are ostensibly secure, you need to understand what competence actually means in the context of professional identity.

Self-determination theory, the psychological framework underpinning the Harvard Business Review study, holds that human beings have three basic psychological needs: competence (the feeling of being effective and capable), autonomy (the feeling of being in control of one's actions), and relatedness (the feeling of having meaningful interpersonal connections). These are not luxuries. They are the bedrock of intrinsic motivation, the internal drive that makes people voluntarily invest effort, pursue mastery, and find satisfaction in their work. When these needs are met, people thrive. When they are frustrated, the consequences ripple outward into disengagement, anxiety, and what psychologists call “controlled motivation,” where people continue to work but only because they feel they have to rather than because they want to.

Generative AI strikes at all three needs simultaneously, but the blow to competence is perhaps the most disorienting. For most knowledge workers, professional identity is inseparable from professional skill. A lawyer's sense of self is bound up in their ability to parse a complex contract. A writer's identity is entangled with their capacity to find the right word. A financial analyst's confidence rests on their ability to spot patterns in messy data. These are not just tasks. They are the cognitive and creative activities through which people develop, demonstrate, and maintain their sense of being good at something.

When a generative AI system can draft that contract, write that paragraph, or analyse that dataset in a fraction of the time and at a fraction of the cost, something happens to the person who used to do it. They may still be employed. They may even be more productive. But the specific activities that gave them a feeling of mastery, the activities that made them feel like skilled professionals rather than warm bodies occupying desks, are being absorbed by a machine. The Harvard Business Review authors found that this dynamic is particularly acute for younger workers, whose entry-level tasks (document review, data compilation, first drafts) are precisely the tasks most susceptible to automation. These are the assignments that, while unglamorous, constitute the learning curve itself. Remove them, and you remove the mechanism through which junior professionals develop expertise.

The autonomy dimension cuts equally deep. Hermann, Puntoni, and Morewedge described how mandatory AI use creates what they call “algorithmic cages,” standardised procedures that limit task customisation and strip workers of agency over their own cognitive process. Workers find themselves held responsible for AI-generated output they did not truly author, cast in a supporting role to a technology rather than functioning as drivers of their own work. The Ivanti Tech at Work report found that 32 per cent of generative AI users keep their usage hidden from employers, with reasons ranging from wanting a “secret advantage” (36 per cent) to fear of being fired (30 per cent) to concerns about impostor syndrome (27 per cent). When a third of workers feel they must hide their relationship with the primary tool of their profession, something has gone badly wrong with how that tool is being introduced.

A Stanford study published in 2025 found that hiring for entry-level, AI-impacted positions such as junior accounting roles fell by 16 per cent over roughly two years. In the United Kingdom, technology graduate roles fell by 46 per cent in 2024. The share of technology job postings requiring at least five years of experience jumped from 37 per cent to 42 per cent between mid-2022 and mid-2025, while the share open to candidates with two to four years of experience dropped from 46 per cent to 40 per cent over the same period. The bottom rung of the career ladder is not merely being restructured. It is being removed.

When the Tool Becomes the Crutch

The competence problem extends beyond entry-level workers. There is growing evidence that even experienced professionals are losing skills as they increasingly delegate cognitive work to AI systems.

In August 2025, The Lancet Gastroenterology & Hepatology published a multicentre observational study examining what happened to endoscopists at four Polish clinics that had introduced AI-assisted colonoscopy as part of the ACCEPT trial. The AI system helped doctors detect adenomas, precancerous growths, with impressive accuracy. But when the AI assistance was later removed, the doctors' own detection rates had measurably declined. Average adenoma detection at non-AI-assisted colonoscopies fell from 28.4 per cent before AI exposure to 22.4 per cent after AI exposure, a 6 percentage point absolute reduction. The researchers attributed the decline to a natural human tendency to over-rely on the recommendations of decision support systems. The doctors had not become incompetent. They had simply stopped practising the skill, and, as with any unpractised skill, it had atrophied. This was, as the study's authors noted, the first research to suggest AI exposure might have a negative impact on patient-relevant endpoints in medicine.

This is not an isolated finding. Advait Sarkar, an AI and design researcher at Microsoft Research who delivered a TED talk at TEDAI Vienna in November 2025, coined a phrase that captures the dynamic with uncomfortable precision: when we outsource our reasoning to artificial intelligence, he argued, we reduce ourselves to “middle managers for our own thoughts.” Sarkar pointed to research showing that knowledge workers using AI assistants produce a narrower range of ideas than those working without it. People who rely on AI to write for them remember less of what they wrote. People who read AI-generated summaries remember less than if they had read the original document. The cognitive effects are measurable: fewer ideas, less critical examination of those ideas, weaker memory retention, and diminished capacity to perform the task independently.

A separate analysis published in the Harvard Gazette in November 2025, featuring perspectives from researchers at the Harvard Graduate School of Education and the Harvard Kennedy School, reinforced the concern. Tina Grotzer, a principal research scientist in education at Harvard, noted that overreliance on AI can reduce engagement with challenging mental skills, while users may avoid developing critical capacities like analysis and reflection. The researchers emphasised that the outcome depends entirely on how users engage with AI: as a thinking tool or as a cognitive shortcut. The evidence so far suggests most workplaces are optimising for the shortcut.

The philosopher Avigail Ferdman of the Technion-Israel Institute of Technology published a paper in the journal AI & Society in 2025 that frames this dynamic as a structural problem rather than an individual failing. Ferdman introduced the concept of “capacity-hostile environments” to describe conditions in which AI mediation actively impedes the cultivation of human capacities. The argument is philosophically precise: humans develop and exercise their epistemic, moral, social, and creative capacities through a long, gradual process of habituation. We get better at things by doing them repeatedly, by failing, by adjusting, by trying again. When AI absorbs those activities, the environment in which capacity development occurs is fundamentally altered. Deskilling, in Ferdman's framing, is harmful not merely because it reduces economic productivity but because it “diminishes us as human beings, undermining the epistemic, social, moral and creative capacities required for practical reason, self-worth, as well as mutual respect between persons.”

Critically, Ferdman argues that expecting individuals to simply resist deskilling through personal discipline is naive. To a large extent, she writes, we develop and exercise our capacities in response to our social and material environment. If that environment is structured to reward cognitive offloading and penalise the slower, messier process of independent thought, then deskilling is not a failure of individual willpower. It is the predictable result of structural conditions. This is not a problem that a training programme can fix.

The Illusion of Competence

Perhaps the most insidious dimension of AI-mediated deskilling is that its victims often do not recognise it is happening.

A 2025 study published in the International Journal of Research and Scientific Innovation by researchers at Mount Kenya University examined what they called the “illusion of competence,” a misleading perception of mastery created by AI-generated outputs that mask underlying cognitive deficits. The researchers found that as AI tools take over cognitive tasks, users develop an inflated sense of their own ability. They confuse their skill at operating the tool with genuine expertise in the underlying domain. A junior lawyer who uses an AI system to draft a motion may feel confident in the output without having developed the legal reasoning to evaluate whether the motion is actually sound. A financial analyst who relies on AI to build models may not notice when the model rests on flawed assumptions, because they never developed the intuition that comes from building hundreds of models by hand. The study identified specific risks including academic underperformance, reduced originality, erosion of self-efficacy, and the devaluation of human expertise across professional contexts.

The 2025 Microsoft New Future of Work report reinforced this finding, observing that knowledge workers reported generative AI made tasks seem cognitively easier while researchers found the workers were ceding problem-solving expertise to the system. The report noted that junior workers aged 22 to 25 in high-AI-exposure jobs have seen employment drop by approximately 13 per cent, and warned that organisations risk “eroding collaboration and mutual support if AI is used to replace social engagement.” The Microsoft report also found that 52 per cent of surveyed employees report moderate to high workplace loneliness, a finding that speaks directly to the relatedness dimension of the psychological threat identified by the Harvard Business Review authors.

This illusion of competence creates a dangerous feedback loop. Workers feel more capable because their AI-assisted output is better. Organisations see improved productivity metrics. Everyone appears to be benefiting. But beneath the surface, the actual human skill base is eroding. And the erosion only becomes visible when something goes wrong: when the AI system fails, when it hallucinates, when the situation requires precisely the kind of independent judgement that the worker no longer possesses because they stopped practising it years ago. The Wharton/GBK Collective annual survey captured this paradox neatly: 89 per cent of senior decision-makers say generative AI enhances employee skills, while 71 per cent simultaneously believe it will cause skill atrophy and job replacement. Both things, it turns out, can be true at the same time.

The Identity Crisis Nobody Measured

The psychological damage of competence erosion extends well beyond the workplace. For most adults in industrialised societies, professional identity is a core component of personal identity. What you do for a living is, for better or worse, a significant part of who you are. When the substance of that work is hollowed out, the identity built around it becomes unstable.

Maha Hosain Aziz, a professor in New York University's MA programme in International Relations and a risk and foresight adviser to the World Economic Forum, published an essay on the Forum's platform in August 2025 describing what she calls the “AI precariat,” borrowing the term coined by economist Guy Standing in 2011 to describe a class defined by insecurity, exclusion, and anxiety. Aziz's argument is that the AI version of this precariat will face not just economic hardship but an occupational identity crisis: “the loss of purpose, structure and social belonging that comes when work disappears.” She points to historical precedents from post-coal Britain to post-industrial American towns, where the disappearance of livelihoods led to deteriorating mental health, rising addiction, and fertile ground for political extremism. The AI wave, Aziz warns, could replicate those dynamics on a global scale and at a far faster pace. Her proposed solutions include “precariat labs,” cross-sector hubs where governments, companies, and civil society test interventions for at-risk workers, integrating mental health care, retraining, and community-building to preserve both livelihoods and identity.

The data on worker engagement suggests this identity crisis is already underway. According to Gallup's State of the Global Workplace reports, global employee engagement fell from 23 per cent to 21 per cent in 2025, the sharpest decline since the early days of the pandemic. Fewer than one in three employees feel strongly connected to their company's mission. Less than half of employees (47 per cent) strongly agree they know what is expected of them at work, which Gallup identifies as a foundational element of engagement. In 2026, 52 per cent of workers reported that burnout was dragging down their engagement, up from 34 per cent the previous year, with 83 per cent of workers experiencing some degree of burnout. These are broad trends with multiple causes, but the timing is difficult to separate from the rapid deployment of generative AI across knowledge work. When the tasks that gave work meaning are automated, and the remaining tasks feel like supervisory busywork, disengagement is not a mystery. It is a predictable consequence.

The ManpowerGroup's Global Talent Barometer 2026 captured this dynamic with unusual clarity: regular AI usage among workers jumped 13 per cent in 2025, while confidence in the technology's use plummeted 18 per cent. The confidence gap was most pronounced among older workers, with a 35 per cent decrease in confidence among baby boomers and a 25 per cent drop among Generation X workers. Nearly nine in ten workers (89 per cent) are confident they have the skills to succeed in their current roles, but 43 per cent fear automation may replace their job within the next two years. Workers are using AI more and trusting it less. They are becoming more productive by measures that appear on dashboards while feeling less capable and less purposeful by measures that do not. This is the gap that no employment statistic can capture.

The Organisational Blind Spot

Most organisations have responded to AI's disruption of work with a familiar playbook: skills training, upskilling programmes, change management initiatives. These are not inherently misguided, but they systematically miss the psychological dimension of the problem.

The Harvard Business Review study found that only 36 per cent of employees felt properly trained for generative AI tools. An Amazon Web Services survey found that 52 per cent of IT decision-makers did not understand their employees' training requirements. But training, even when well-executed, addresses only one dimension of the threat. It addresses competence in the narrow sense of knowing how to use the tool. It does not address the deeper issue: the feeling of being deskilled, the loss of autonomy over one's own cognitive process, the erosion of the interpersonal connections that emerge when people collaborate on intellectually demanding work. Only 44 per cent of business leaders involve workers in AI implementation decisions, according to the Harvard Business Review authors, a figure that reveals how little most organisations understand about what is actually at stake.

Hermann, Puntoni, and Morewedge proposed a framework they call AWARE: acknowledge employee concerns, watch for adaptive and maladaptive coping behaviours, align support systems with psychological needs, redesign workflows around human-AI synergies, and empower workers through transparency and inclusion. The framework is sensible. But it is also demanding, requiring a level of psychological literacy and organisational intentionality that most companies have not demonstrated.

The contrast between organisations that get this right and those that do not is instructive. Duolingo's CEO Luis von Ahn publicly shared a memo in April 2025 detailing an “AI-first” approach that included reducing reliance on contractors and a policy of hiring only when automation could not handle the work. The company had already cut around 10 per cent of its contractor workforce at the end of 2023, with further cuts in October 2024, replacing first translators and then writers with AI systems. The backlash to the memo was immediate and fierce, with users flooding the company's social media pages with criticism. Von Ahn later admitted the memo “did not give enough context” and clarified that no full-time employees would be laid off. The damage, however, was done. The message received by workers and the public was clear: human skill is a cost centre to be minimised.

Compare this with PwC, which created a dedicated AI “playground” for employees, ran “prompting parties” to build collective AI literacy, and designated peer “activators” to support adoption. Or BNY, which achieved 60 per cent employee adoption by emphasising universal access and encouraging 5,000 employees to build their own custom AI agents. Or Moderna, which merged its technology and human resources departments to design collaborative AI workflows from the ground up. These approaches treat workers as co-creators of the AI-augmented workplace rather than passive recipients of a technology imposed upon them.

The difference is not merely strategic. It is psychological. When workers participate in shaping how AI is integrated into their roles, their sense of autonomy is preserved. When they develop new skills alongside AI rather than watching AI absorb their existing skills, their sense of competence is maintained. When AI adoption is a collective endeavour rather than a top-down mandate, relatedness survives.

What Policymakers Cannot See

The policy conversation about AI and work remains overwhelmingly focused on employment numbers. Will AI create more jobs than it destroys? How fast will displacement occur? What retraining programmes should governments fund? These are important questions. But they are the wrong questions if the primary harm is not unemployment but the psychological hollowing out of work that continues to exist.

There is no government metric for “the feeling of being good at something.” There is no Bureau of Labor Statistics category for “work that still feels meaningful.” The entire apparatus of labour market policy is designed to measure and respond to job loss, not to the subtler and potentially more corrosive phenomenon of job degradation, where employment persists but its psychological substance is drained.

Aziz proposed the creation of an “AI Anxiety Index” to track how occupational displacement affects mental well-being across societies. The American Enterprise Institute published a 2025 report on deskilling the knowledge economy that argued the workers best positioned to thrive would be those combining legacy technical skills with AI literacy and broader capabilities such as critical thinking, communication, and adaptability. The AEI report noted that as AI platforms absorb routine tasks, entry-level and mid-level knowledge workers in finance, business services, government, and health care face growing vulnerability. These are useful contributions, but they remain at the margins of policy discourse. The dominant conversation is still about headcounts.

This is a structural failure of imagination. If AI's primary harm to workers is not economic but psychological, then the response cannot be purely economic. Policies that address only unemployment and retraining will miss the damage being done to workers who remain employed but whose professional identities are being systematically undermined. What is needed is a framework that recognises work as a source of meaning and not merely income, and that treats the erosion of that meaning as a harm worthy of policy attention.

Reclaiming Craft in an Age of Automation

The question, then, is whether it is possible to preserve the psychological substance of work in an era when the cognitive and creative tasks that gave work its substance are increasingly performed by machines.

The answer is not obvious, and anyone who tells you it is should be treated with suspicion. But there are starting points.

First, at the individual level, there is Sarkar's argument that AI should function as a “tool for thought” that challenges rather than obeys. The distinction matters. An AI system that generates a first draft and presents it as a finished product encourages cognitive offloading. An AI system that generates competing hypotheses, flags weaknesses in the user's reasoning, or refuses to provide an answer until the user has articulated their own position first encourages deeper engagement. The technology exists to build either kind of system. The question is which kind organisations choose to deploy.

Second, at the organisational level, the AWARE framework and similar approaches point toward a principle that should be obvious but apparently is not: the goal of AI integration should be to augment human capability, not merely to reduce headcount or increase throughput. This means deliberately preserving the tasks that build and maintain expertise, even when AI could perform them more efficiently. A law firm that automates all document review for junior associates may save money in the short term, but it will find itself, within a decade, with a generation of senior lawyers who never developed the foundational skills on which legal judgement depends. The short-term efficiency gain produces a long-term competence deficit.

Third, at the policy level, governments need to develop new metrics and new categories of harm. The Gallup engagement data, the ManpowerGroup confidence data, and the Harvard Business Review psychological needs framework all point toward measurable indicators of work quality that exist outside traditional employment statistics. Integrating these indicators into policy-making would at least begin to make visible the damage that current metrics cannot see. Aziz's proposed precariat labs offer a model for what this might look like in practice: cross-sector interventions that treat AI-driven disruption not merely as an employment problem but as a crisis of identity, mental health, and social cohesion.

Fourth, at the philosophical level, there is a conversation that the technology industry has been remarkably reluctant to have: about what work is for. The dominant framing treats work as a production function, an input-output equation in which the goal is to maximise output per unit of input. Within this framing, any technology that increases productivity is unambiguously good. But if work is also a site of human development, a context in which people cultivate skill, exercise judgement, and build identity, then a technology that increases output while eroding the human experience of producing it is not unambiguously good at all. It is, at best, a trade-off that deserves honest acknowledgement.

Ferdman's concept of “capacity-conducive environments” offers a useful compass here. The question to ask of any AI deployment is not simply “Does this increase productivity?” but “Does this create conditions in which human capacities can develop, or conditions in which they atrophy?” The answers will not always be comfortable. They will sometimes point toward deliberately choosing less efficient arrangements because those arrangements better serve the humans within them. But that discomfort is the price of taking seriously the idea that work is more than a transaction.

The Unasked Question

The conversation about AI and work has, for the better part of a decade, been dominated by a single question: will the robots take our jobs? It is the wrong question, or at least an incomplete one. The more urgent question, the one that the Harvard Business Review research and a growing body of psychological, philosophical, and medical evidence point toward, is this: what happens when the robots take the part of our jobs that made us who we are?

The employment statistics will not tell you. The productivity dashboards will not tell you. The quarterly earnings calls, with their triumphant announcements of AI-driven efficiency gains, will certainly not tell you. You will have to look elsewhere: at the endoscopist whose diagnostic eye has dulled, at the junior lawyer who never learned to think like a lawyer, at the writer who can no longer find the sentence without asking a machine for it first, at the 31 per cent of knowledge workers who are quietly sabotaging their company's AI strategy not because they are afraid of unemployment but because they sense, at some level beneath articulation, that something essential is being taken from them.

That something is competence. It is craft. It is the hard-won, slowly built, deeply personal experience of being good at something. And no algorithm, however sophisticated, has figured out how to give it back.

References

  1. Hermann, E., Puntoni, S., and Morewedge, C.K. “Why Gen AI Feels So Threatening to Workers.” Harvard Business Review, March/April 2026.
  2. Kyndryl. CEO Survey on AI Adoption and Employee Resistance, 2025. Spanning 25 industries and eight countries.
  3. Writer/Workplace Intelligence. Enterprise AI Adoption Survey: Knowledge Worker Resistance to AI Initiatives, 2025. Survey of 1,600 U.S. knowledge workers.
  4. BCG. “AI at Work 2025: Momentum Builds, but Gaps Remain.” Boston Consulting Group, 2025.
  5. Budzyn, K., Romanczyk, M., Kitala, D., Kolodziej, P., Bugajski, M., et al. “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.” The Lancet Gastroenterology & Hepatology, vol. 10, no. 10, October 2025, pp. 896-903.
  6. Sarkar, A. “How to stop AI from killing your critical thinking.” TED Talk, TEDAI Vienna, November 2025.
  7. Ferdman, A. “AI deskilling is a structural problem.” AI & Society, Springer Nature, 2025.
  8. Matueny, R.M. and Nyamai, J.J. “Illusion of Competence and Skill Degradation in Artificial Intelligence Dependency among Users.” International Journal of Research and Scientific Innovation, vol. 12, no. 5, 2025.
  9. Microsoft Research. New Future of Work Report 2025, published December 2025.
  10. Grotzer, T. et al. “Is AI dulling our minds?” Harvard Gazette, November 2025.
  11. Aziz, M.H. “The overlooked global risk of the AI precariat.” World Economic Forum, August 2025.
  12. Standing, G. The Precariat: The New Dangerous Class. Bloomsbury Academic, 2011.
  13. Gallup. State of the Global Workplace Report, 2025.
  14. ManpowerGroup. Global Talent Barometer, 2026.
  15. Amazon Web Services. Gen AI Adoption Index: Survey of IT Decision-Makers, 2025.
  16. Stanford University. Study on entry-level hiring declines in AI-impacted positions, 2025.
  17. American Enterprise Institute. “De-Skilling the Knowledge Economy.” AEI Report, 2025.
  18. Ivanti. Tech at Work Report: Survey on hidden AI usage among workers, 2025.
  19. Wharton/GBK Collective. Annual Survey on AI and Employee Skills, 2025.
  20. Duolingo. CEO Luis von Ahn's “AI-first” memo and subsequent clarification, April-August 2025. Reported by Fortune, CNBC, and HR Grapevine.
  21. Hermann, E., Puntoni, S., and Morewedge, C.K. “GenAI and the psychology of work.” Trends in Cognitive Sciences, 2025.


Tim Green
UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
