Your Boss Is Betting Your Job on AI: The Technology Is Not Ready Yet

In September 2025, Salesforce CEO Marc Benioff went on a podcast and said something that should have sent a chill through every office worker in the world. His company, he explained, had cut its customer support division from 9,000 employees to roughly 5,000 because AI agents were now handling 30 to 50 per cent of the work. “I need less heads,” he told host Logan Bartlett on The Logan Bartlett Show, with the casual confidence of a man who had just discovered a cheat code. Just two months earlier, in a Fortune interview, Benioff had publicly dismissed fears that AI would replace workers, insisting it only augmented them. The pivot was breathtaking in both its speed and its honesty.

But here is the thing about cheat codes: they do not always work the way you expect. Across the technology industry and well beyond it, companies are making enormous bets on artificial intelligence's ability to replace human workers. The trouble is that many of these bets are based not on what AI can actually do right now, but on what executives hope it will do someday. And workers are paying the price for that speculation.

The data paints a picture that is simultaneously reassuring and alarming. At the macroeconomic level, AI has not yet triggered the mass unemployment event that dominates headlines and anxious dinner-table conversations. But at the level of individual companies, individual careers, and individual communities, the decisions being made in boardrooms are already reshaping who works, who does not, and who gets to decide.

The Great Anticipatory Layoff

A landmark Harvard Business Review study published in January 2026 laid bare the speculative nature of corporate AI strategy. The study was authored by Thomas H. Davenport, the President's Distinguished Professor of Information Technology at Babson College and a visiting scholar at the MIT Initiative on the Digital Economy, alongside Laks Srinivasan, co-founder and CEO of the Return on AI Institute and former COO of Opera Solutions. Together, they surveyed 1,006 global executives in December 2025. The findings were striking.

Sixty per cent of organisations had already reduced headcount in anticipation of AI's future impact. Another 29 per cent had slowed hiring for the same reason. Yet only 2 per cent said they had made large layoffs tied to actual AI implementation that was already delivering measurable results.

Read that again. Six in ten companies were cutting staff based on what AI might be able to do, not what it was currently doing. Over 600 of the polled executives admitted to making layoffs in anticipation of future AI capabilities, treating their workforce like poker chips in a speculative bet on technology that had not yet proved itself in their own operations. The rest were reducing hiring pipelines, freezing positions, or restructuring departments around theoretical automation gains rather than demonstrated ones.

The scale of this is not trivial. According to Challenger, Gray and Christmas, the outplacement consultancy that has tracked layoff data for decades, AI was cited as a contributing factor in approximately 55,000 job cuts across the United States in 2025. That figure represents a thirteenfold increase from two years earlier, when the firm first began tracking AI as a reason for layoffs. Since 2023, AI has been cited in a total of 71,825 job cut announcements. The broader context makes the number even more unsettling: total US job cuts in 2025 reached 1.17 million, the highest level since the pandemic year of 2020, and planned hiring fell to just 507,647, the lowest figure since 2010.

Prominent companies leading this charge included Amazon, which announced 15,000 job cuts, and Workday, the cloud-based HR and finance platform, which slashed 1,750 positions (8.5 per cent of its workforce) explicitly to reallocate resources towards AI investments. Workday CEO Carl Eschenbach framed the decision as necessary for “durable growth,” even though the company had posted revenue growth of nearly 16 per cent and a 69 per cent profit increase in the preceding quarter. The cuts cost the company between 230 and 270 million dollars in severance and restructuring charges, raising the obvious question: if AI is delivering so much value, why is it so expensive to implement?

The Trust Deficit Nobody Can Afford to Ignore

While executives charge ahead with AI-fuelled restructuring, a growing body of evidence suggests that the people on the receiving end of these decisions have very good reasons to be sceptical. And this scepticism is not a soft problem. It is a business-critical crisis that threatens to undermine the very AI adoption that companies are betting on.

Deloitte's TrustID Index, a daily pulse measurement of customer and employee sentiment created by principal Ashley Reichheld, revealed a 31 per cent decline in trust in company-provided generative AI tools between May and July 2025. Even more striking, trust in agentic AI systems, those designed to act autonomously rather than merely make recommendations, collapsed by 89 per cent in the same period. Employees were growing deeply uneasy with technology assuming decisions that had previously been theirs to make. The Deloitte data also showed that employees' trust in their employers decreased by 139 per cent when employers introduced AI technologies to their workforce, a remarkable figure that suggests the mere act of deploying AI can actively damage the employer-employee relationship.

The Gartner research consultancy reported that only 26 per cent of job candidates trusted AI to evaluate them fairly, even though 52 per cent believed their applications were already being screened by automated systems. This gap between the perceived ubiquity of AI and the perceived fairness of AI creates a toxic dynamic in which workers feel surveilled but not supported.

Meanwhile, PwC's 2025 Global Workforce Hopes and Fears Survey, which polled 49,843 workers across 48 countries and 28 sectors, found that employees under financial pressure were significantly less trusting, less motivated, and less candid with their employers. In 2025, 55 per cent of the global workforce reported financial strain, up from 52 per cent the previous year, and just over a third of workers felt overwhelmed at least once a week, rising to 42 per cent among Generation Z. The conditions for a widespread trust crisis were firmly in place. Only 53 per cent of workers felt strongly optimistic about the future of their roles, with non-managers (43 per cent) trailing far behind executives (72 per cent).

The anxiety is not abstract. Worker concerns about job loss due to AI have skyrocketed from 28 per cent in 2024 to 40 per cent in 2026, according to preliminary findings from Mercer's Global Talent Trends report, which surveyed 12,000 people worldwide. A Reuters/Ipsos poll from August 2025 found that 71 per cent of Americans feared permanent job loss as a result of AI.

Deloitte's own research demonstrated why this matters commercially: high-trust companies are 2.6 times more likely to see successful AI adoption, and organisations with strong trust scores enjoy up to four times higher market value. Trust, it turns out, is not a warm and fuzzy HR metric. It is the infrastructure on which successful AI deployment depends.

The Stubborn Gap Between Narrative and Reality

Yet the data tells a more complicated story than either the corporate cheerleaders or the doomsayers suggest. The Yale Budget Lab, which has been tracking AI's impact on US employment since ChatGPT's release in November 2022, has consistently found that employment patterns have remained largely unchanged at the aggregate level. The proportion of workers in jobs with high, medium, and low AI exposure has stayed remarkably stable. Their November and December 2025 Current Population Survey updates showed no meaningful shift from earlier findings. The occupational mix is shifting, but largely along trajectories that were already well established before generative AI arrived.

A February 2026 Fortune report on the Yale Budget Lab research noted that while there has been enormous anxiety about AI's impact on jobs, “the data isn't showing it.” The researchers emphasised that even the most transformative technologies, from steam power to electricity to personal computers, took decades to generate large-scale economic effects. The expectation that AI would upend the labour market within 33 months of ChatGPT's release was always, in retrospect, somewhat fanciful.

Goldman Sachs Research further reinforced this view, finding no significant statistical correlation between AI exposure and a host of labour market measures, including job growth, unemployment rates, job finding rates, layoff rates, growth in weekly hours, or average hourly earnings growth.

But absence of evidence at the macro level is not evidence of absence at the individual level. And the company-by-company reality is far more unsettling than the aggregate numbers suggest.

When the Machines Fall Short

If the macroeconomic data suggests that AI has not yet caused the employment apocalypse that many fear, individual company experiences tell a more cautionary tale about what happens when you replace people with technology that is not ready.

The most instructive case study comes from Klarna, the Swedish fintech company. Between 2022 and 2024, Klarna eliminated approximately 700 positions, primarily in customer service, and replaced them with an AI assistant developed in partnership with OpenAI. The company's headcount dropped from over 5,500 to roughly 3,400. At its peak, Klarna claimed its AI systems were managing two-thirds to three-quarters of all customer interactions, and the company trumpeted savings of 10 million dollars in marketing expenses alone by assigning tasks such as translation, art creation, and data analysis to generative AI.

Then quality collapsed. Customers complained about robotic responses and inflexible scripts. They found themselves trapped in what one observer described as a Kafkaesque loop, repeating their problems to a human agent after the bot had failed to resolve them. Resolution times for complex issues increased. Customer satisfaction scores dropped. The pattern that every customer service professional could have predicted came to pass: AI was excellent at handling routine, well-structured queries, and terrible at everything else.

Klarna CEO Sebastian Siemiatkowski eventually acknowledged the mistake publicly. “Cost, unfortunately, seems to have been a too predominant evaluation factor when organising this,” he told Bloomberg. “What you end up having is lower quality.” In a separate statement, he was even more direct: “We went too far.”

Klarna reversed course, began rehiring human agents, and pivoted to a hybrid model in which AI handles basic enquiries while humans take over for issues requiring empathy, discretion, or escalation. The company is now recruiting remote support staff with flexible schedules, piloting what it calls an “Uber-style” workforce model and specifically targeting students, rural residents, and loyal Klarna users. The U-turn came just as Klarna completed its US initial public offering, with shares rising 30 per cent on their debut, giving the company a post-IPO valuation of 19.65 billion dollars. Apparently, investors valued the company more after it admitted its AI experiment had gone too far, not less.

Salesforce itself showed signs of a similar reckoning. Despite Benioff's bold claims about AI replacing customer support workers, internal reports later suggested the company had been “too confident” in AI's ability to replace human judgement, particularly for complex customer scenarios. Automated systems struggled with nuanced issues, escalations, and what the industry calls “long-tail” customer problems, those unusual edge cases that require genuine understanding rather than pattern matching. A Salesforce spokesperson later clarified that many of the 4,000 support staff who left had been “redeployed” into sales and other areas, a framing that clashed somewhat with Benioff's blunt “I need less heads” declaration.

Forecasting firm Forrester predicted that this pattern of laying off workers for AI that is not ready, then quietly hiring offshore replacements, would accelerate across industries throughout 2026.

The Corporate Fiction of AI Layoffs

Oxford Economics weighed in on this phenomenon with a research briefing published in January 2026 that was remarkably blunt. The firm argued that companies were not, in fact, replacing workers with AI on any significant scale. Instead, many appeared to be using AI as a convenient narrative to justify routine headcount reductions. “We suspect some firms are trying to dress up layoffs as a good news story rather than bad news, such as past over-hiring,” the report stated.

The logic is cynical but straightforward. Telling investors you are cutting staff because demand is soft, or because you hired too aggressively during the pandemic, is bad news. Telling them you are cutting staff because you are deploying cutting-edge AI is a growth story. It signals innovation. It excites shareholders. Deutsche Bank analysts warned bluntly that “AI redundancy washing will be a significant feature of 2026.”

Lisa Simon, chief economist at labour analytics firm Revelio Labs, expressed similar scepticism. “Companies want to get rid of departments that no longer serve them,” she told reporters. “For now, AI is a little bit of a front and an excuse.”

Oxford Economics pointed to a revealing piece of evidence: if AI were genuinely replacing labour at scale, productivity growth should be accelerating. It is not. Productivity measures across major economies have remained sluggish, and in some quarters have actually slowed compared to the period before generative AI emerged. The firm noted that productivity metrics “haven't really improved all that much since 2001,” recalling the famous productivity paradox identified by Nobel Prize-winning economist Robert Solow, who observed in 1987 that “you can see the computer age everywhere but in the productivity statistics.”

The numbers bear this out. While AI was cited as the reason for nearly 55,000 US job cuts in the first 11 months of 2025, that figure represented a mere 4.5 per cent of total reported job losses. By comparison, standard “market and economic conditions” accounted for roughly four times as many cuts, and DOGE-related federal workforce reductions were responsible for nearly six times more.

The Vanishing Entry-Level Job

While the aggregate labour market may look stable, a more targeted disruption is already underway, and it is hitting the workers who can least afford it: those just starting their careers.

Between 2018 and 2024, the share of jobs requiring three years of experience or less dropped sharply in fields most exposed to AI. In software development, entry-level positions fell from 43 per cent to 28 per cent. In data analysis, they declined from 35 per cent to 22 per cent. In consulting, the share fell from 41 per cent to 26 per cent. Senior-level hiring in these same fields held steady, indicating that companies were not shrinking overall but were instead raising the bar for who gets through the door.

According to labour analytics firm Revelio Labs, postings for entry-level jobs in the US declined approximately 35 per cent from January 2023 onwards, with AI playing a significant role. Venture capital firm SignalFire found a 50 per cent decline in new role starts by people with less than one year of post-graduate work experience between 2019 and 2024, a trend consistent across every major business function from sales to engineering to finance. Hiring of new graduates by the 15 largest technology companies has fallen by more than 50 per cent since 2019. Before the pandemic, new graduates represented 15 per cent of hires at major technology companies; that figure has collapsed to just 7 per cent.

The US Bureau of Labor Statistics data reveals the sharpness of the shift: overall programmer employment fell 27.5 per cent between 2023 and 2025. In San Francisco, more than 80 per cent of positions labelled “entry-level” now require at least two years of experience, creating a paradox where you need the job to get the job.

The result is a cruel irony. Companies are shutting out the very generation most capable of working with AI. PwC's survey found that Generation Z workers had the highest AI literacy scores, yet they faced the steepest barriers to employment. Nearly a third of entry-level workers said they were worried about AI's impact on their future, even as they were also the most curious (47 per cent) and optimistic (38 per cent) about the technology's long-term potential.

A Stanford working paper documented a 13 per cent relative employment drop for 22-to-25-year-olds in occupations with high AI exposure, after controlling for firm-specific factors. The declines came through layoffs and hiring freezes, not through reduced wages or hours, suggesting that young workers were simply being locked out rather than gradually displaced.

Six Million at the Sharp End

Not everyone is equally vulnerable to AI displacement, and the research is increasingly precise about who faces the greatest risk.

A joint study by the Centre for the Governance of AI (GovAI) and Brookings Metro, led by researcher Sam Manning and published as a National Bureau of Economic Research working paper, measured the adaptive capacity of American workers facing AI-driven job displacement. Of the 37.1 million US workers in the top quartile of occupational AI exposure, 26.5 million, roughly 70 per cent, also had above-median adaptive capacity, meaning they possessed the financial resources, transferable skills, and local opportunities to manage a job transition if necessary.

But 6.1 million workers, approximately 4.2 per cent of the workforce, faced both high AI exposure and low adaptive capacity. These workers were concentrated in clerical and administrative roles: office clerks (2.5 million workers), secretaries and administrative assistants (1.7 million), receptionists and information clerks (965,000), and medical secretaries (831,000). About 86 per cent of these vulnerable workers were women.

The study highlighted a stark disparity in adaptive capacity between roles with similar AI exposure levels. Financial analysts and office clerks, for instance, are equally exposed to AI. But financial analysts scored 99 per cent for adaptive capacity, while office clerks scored just 22 per cent. The difference comes down to savings, transferable skills, age, and the availability of alternative employment in their local labour markets. Geographically, the most vulnerable workers are concentrated in smaller metropolitan areas, particularly university towns and midsized markets in the Mountain West and Midwest, while concentrations of highly exposed but highly adaptive workers are greatest in technology hubs such as San Jose and Seattle.

As one of the researchers noted, “A complete laissez-faire approach to this might well be a recipe for dissatisfaction and agitation.”

Fighting Back Without Falling Behind

So how do workers protect themselves in a world where their employers are making decisions based on speculative AI capabilities, where trust in corporate AI deployment is plummeting, and where the most vulnerable stand to lose the most? The answer requires action on multiple fronts simultaneously.

Become the person who makes AI work, not the person AI replaces. PwC's survey data revealed a significant split between daily AI users and everyone else. Workers who used generative AI daily were far more likely to report productivity gains (92 per cent versus 58 per cent for infrequent users), improved job security (58 per cent versus 36 per cent), and higher salaries (52 per cent versus 32 per cent). Daily users were also substantially more optimistic about their roles over the next 12 months (69 per cent) compared to infrequent users (51 per cent) and non-users (44 per cent). Yet only 14 per cent of workers reported using generative AI daily, barely up from 12 per cent the previous year, and a mere 6 per cent were using agentic AI daily. The gap between AI adopters and AI avoiders is a chasm, and it is widening. Workers who engage deeply with AI tools rather than avoiding them are better positioned to survive restructuring, but the opportunity to get ahead of the curve remains wide open precisely because so few people have taken it.

Demand collective bargaining rights over AI deployment. The labour movement is waking up to AI's implications with increasing urgency. In January 2025, more than 200 trade union members and technologists gathered at a landmark conference in Sacramento to strategise about defending workers against AI-driven displacement. SAG-AFTRA executive director Duncan Crabtree-Ireland argued that AI underscores why workers must organise, because collective bargaining can force employers to negotiate their use of AI rather than unilaterally deciding to introduce it. AFL-CIO Tech Institute executive director Amanda Ballantyne emphasised that including AI in collective bargaining negotiations is essential given the breadth of AI's potential use cases across every industry.

The results of organised action are already visible. The International Longshoremen's Association secured a landmark six-year collective bargaining agreement in February 2025, ratified with nearly 99 per cent approval, that includes iron-clad protections against automation and semi-automation at ILA ports. The agreement also delivered a 62 per cent wage increase. ILA President Harold Daggett subsequently organised the first global "Anti-Automation Conference" in Lisbon in November 2025, where a thousand dockworker and maritime union leaders from around the world unanimously passed the Lisbon Summit Resolution opposing job-destroying port automation. The Writers Guild of America and the Culinary Workers Union have both secured agreements including severance and retraining provisions to counter AI displacement. The UC Berkeley Labor Center has documented provisions from more than 175 collective bargaining agreements addressing workplace technology.

Insist on transparency and regulatory protection. The California Privacy Protection Agency is drafting rules that would require businesses to inform job applicants and workers when AI is being used in decisions that affect them, and to allow employees to opt out of AI-driven data collection without penalty. California would become the first US state to enact such rules. The California Civil Rights Department is separately drafting rules to protect workers from AI that automates discrimination. Meanwhile, SAG-AFTRA has filed unfair labour practice charges before the National Labor Relations Board against companies that have used AI-generated content to replace bargaining unit work without providing notice or an opportunity to negotiate.

Recognise that retraining has limits, and plan accordingly. Brookings Institution research has been pointedly honest about the limitations of worker retraining programmes as a response to AI displacement. While retraining is important, the research notes that the potential for advanced machine learning to automate core human cognitive functions could spark extremely rapid labour substitution, making traditional retraining programmes inadequate on their own. The challenge is compounded by access inequality: PwC found that only 51 per cent of non-managers feel they have access to the learning and development opportunities they need, compared to 66 per cent of managers and 72 per cent of senior executives. Workers need to build financial resilience alongside new skills, diversifying their income sources where possible and building emergency reserves.

Push for shared productivity gains, not just shared pain. One of the most promising ideas to emerge from the AI productivity debate is the concept of the “time dividend.” Rather than converting AI-driven efficiency gains entirely into headcount reductions, companies could share those gains with workers through shortened working weeks. Research published in Nature Human Behaviour by Boston College's Wen Fan and colleagues, studying 141 companies across six countries and tracking more than 2,800 employees, found that workers on a four-day week saw 67 per cent reduced burnout, 41 per cent improved mental health, and 38 per cent fewer sleep issues, with no deterioration in key business metrics including revenue, absenteeism, and turnover. Companies such as Buffer have reported that productivity increased by 22 per cent and job applications rose by 88 per cent after adopting a four-day week. The question is not whether AI-driven productivity gains can support shorter working weeks. The question is whether employers will share those gains or simply pocket them.

Target roles that require human judgement, not just human labour. The Klarna and Salesforce experiences demonstrate that AI consistently struggles with tasks requiring empathy, contextual understanding, and nuanced decision-making. Roles that combine technical knowledge with interpersonal skills, creative thinking, or ethical judgement remain far more resistant to automation than those involving routine information processing, regardless of how cognitively complex that processing may appear. The US Bureau of Labor Statistics data confirms this pattern: while programmer employment fell dramatically, employment for software developers, a more design-oriented and judgement-intensive role, declined by only 0.3 per cent in the same period. Positions such as information security analyst and AI engineer are actively growing.

What Employers Owe Their Workers

The burden of adaptation should not fall entirely on employees. Companies that are making workforce decisions based on AI's potential rather than its performance owe their workers more than a redundancy package and a vague promise about “upskilling opportunities.”

The HBR study by Davenport and Srinivasan concluded that to realise AI's potential, companies need to invest in human employees and their training to help them make the best use of new technologies, rather than simply replacing workers outright. PwC's survey found that employees who trusted their direct manager the most were 72 per cent more motivated than those who trusted them the least. Workers who understood their organisation's strategic direction saw a 78 per cent rise in motivation. Only 64 per cent of employees surveyed said they understood their organisation's goals, and among non-managers and Generation Z workers, that figure was considerably lower. The lesson is straightforward: transparency is not just ethical; it is profitable.

The Brookings research offered concrete policy recommendations: governments should expand tax credits for businesses that retrain workers displaced by AI. Paid apprenticeships and AI-assisted training roles could help bridge the gap between entry-level workers and the increasingly demanding requirements of the AI-augmented workplace. Policymakers must ensure that the impact of AI-related job losses does not fall disproportionately on those least able to retrain, find new work, or relocate, as this would guarantee disparate impacts on already marginalised populations.

The Honest Reckoning Ahead

The uncomfortable truth that emerges from the data is that the AI employment crisis of 2025 and 2026 is not primarily a technology story. It is a trust story, a governance story, and a power story. Companies are making consequential decisions about people's livelihoods based on speculative technology capabilities, often using AI as a convenient label for cuts driven by entirely conventional business pressures. Workers, meanwhile, are watching their trust in employers erode as they recognise the gap between corporate rhetoric about AI augmentation and the reality of AI-justified layoffs.

The Oxford Economics report put it well: the shifts unfolding in the labour market are likely to be “evolutionary rather than revolutionary.” But evolutionary change can still be devastating for the individuals caught in its path, particularly the 6.1 million workers who lack the financial cushion, transferable skills, or local opportunities to adapt.

The workers who will navigate this transition most successfully are those who refuse to be passive participants in their own displacement. That means engaging with AI tools rather than fearing them, demanding a seat at the table where deployment decisions are made, insisting on transparency about how AI is being used to evaluate and replace workers, and building coalitions with other workers facing similar pressures.

It also means holding employers accountable for a basic standard of honesty. If you are cutting my job because demand is soft or because you over-hired during the pandemic, say so. Do not dress it up as an AI transformation story to impress your shareholders. And if you are genuinely deploying AI to replace human workers, prove that the technology actually works before you show people the door.

Klarna learned that lesson the hard way. Salesforce is learning it now. The question is whether the rest of the corporate world will learn it before millions more workers pay the price for their employers' speculative bets on a technology that, for all its genuine promise, has not yet earned the right to replace anyone.


References and Sources

  1. Davenport, T.H. and Srinivasan, L. (2026) “Companies Are Laying Off Workers Because of AI's Potential, Not Its Performance,” Harvard Business Review, January 2026. Available at: https://hbr.org/2026/01/companies-are-laying-off-workers-because-of-ais-potential-not-its-performance

  2. Challenger, Gray and Christmas (2025) “2025 Year-End Challenger Report: Highest Q4 Layoffs Since 2008; Lowest YTD Hiring Since 2010.” Available at: https://www.challengergray.com/blog/2025-year-end-challenger-report-highest-q4-layoffs-since-2008-lowest-ytd-hiring-since-2010/

  3. Deloitte (2025) “Trust Emerges as Main Barrier to Agentic AI Adoption.” TrustID Index data, May-July 2025. Available at: https://www.deloitte.com/us/en/about/press-room/trust-main-barrier-to-agentic-ai-adoption-in-finance-and-accounting.html

  4. PwC (2025) “Global Workforce Hopes and Fears Survey 2025.” 49,843 respondents across 48 countries. Available at: https://www.pwc.com/gx/en/issues/workforce/hopes-and-fears.html

  5. Gartner (2025) “Survey Shows Just 26% of Job Applicants Trust AI Will Fairly Evaluate Them.” Available at: https://www.gartner.com/en/newsroom/press-releases/2025-07-31-gartner-survey-shows-just-26-percent-of-job-applicants-trust-ai-will-fairly-evaluate-them

  6. Oxford Economics (2026) “Evidence of an AI-driven shakeup of job markets is patchy.” Available at: https://www.oxfordeconomics.com/resource/evidence-of-an-ai-driven-shakeup-of-job-markets-is-patchy/

  7. Yale Budget Lab (2025) “Evaluating the Impact of AI on the Labor Market: Current State of Affairs.” Available at: https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs

  8. Yale Budget Lab (2025) “Evaluating the Impact of AI on the Labor Market: November/December CPS Update.” Available at: https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-novemberdecember-cps-update

  9. Brookings Metro and GovAI (2025) “Measuring US Workers' Capacity to Adapt to AI-Driven Job Displacement.” Lead author: Sam Manning, GovAI. Also published as NBER Working Paper No. 34705. Available at: https://www.brookings.edu/articles/measuring-us-workers-capacity-to-adapt-to-ai-driven-job-displacement/

  10. Brookings Institution (2025) “AI Labor Displacement and the Limits of Worker Retraining.” Available at: https://www.brookings.edu/articles/ai-labor-displacement-and-the-limits-of-worker-retraining/

  11. CNBC (2025) “Salesforce CEO confirms 4,000 layoffs 'because I need less heads' with AI,” 2 September 2025. Available at: https://www.cnbc.com/2025/09/02/salesforce-ceo-confirms-4000-layoffs-because-i-need-less-heads-with-ai.html

  12. Fortune (2026) “AI layoffs are looking more and more like corporate fiction that's masking a darker reality, Oxford Economics suggests,” 7 January 2026. Available at: https://fortune.com/2026/01/07/ai-layoffs-convenient-corporate-fiction-true-false-oxford-economics-productivity/

  13. Reworked (2025) “Klarna Claimed AI Was Doing the Work of 700 People. Now It's Rehiring.” Based on Bloomberg interviews with CEO Sebastian Siemiatkowski. Available at: https://www.reworked.co/employee-experience/klarna-claimed-ai-was-doing-the-work-of-700-people-now-its-rehiring/

  14. CalMatters (2025) “Fearing AI will take their jobs, California workers plan a long battle against tech,” January 2025. Available at: https://calmatters.org/economy/technology/2025/01/unions-plot-ai-strategy/

  15. UC Berkeley Labor Center (2025) “A First Look at Labor's AI Values” and “Negotiating Tech” searchable inventory. Available at: https://laborcenter.berkeley.edu/a-first-look-at-labors-ai-values/

  16. Goldman Sachs Research (2025) “How Will AI Affect the Global Workforce?” Available at: https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce

  17. ILA Union (2025) “Rank-and-File Members Overwhelmingly Ratify Provisions of New Six-Year Master Contract,” 25 February 2025. Available at: https://ilaunion.org/rank-and-file-members-of-international-longshoremens-association-at-atlantic-and-gulf-coast-ports-overwhelmingly-ratify-provisions-of-new-six-year-master-contract/

  18. Fan, W. et al. (2024) “Four-day workweek and well-being,” Nature Human Behaviour. Study of 141 companies across six countries, 2,800+ employees. Boston College.

  19. Fortune (2025) “Salesforce CEO Marc Benioff says AI cut customer service jobs,” 2 September 2025. Available at: https://fortune.com/2025/09/02/salesforce-ceo-billionaire-marc-benioff-ai-agents-jobs-layoffs-customer-service-sales/

  20. Channel Futures (2025) “Workday Layoffs of 1,750 to Support AI Investment,” February 2025. Available at: https://www.channelfutures.com/cloud/workday-layoffs-1750-support-ai-investment

  21. IEEE Spectrum (2025) “AI Shifts Expectations for Entry Level Jobs.” Available at: https://spectrum.ieee.org/ai-effect-entry-level-jobs

  22. CNBC (2025) “AI was behind over 50,000 layoffs in 2025,” 21 December 2025. Available at: https://www.cnbc.com/2025/12/21/ai-job-cuts-amazon-microsoft-and-more-cite-ai-for-2025-layoffs.html

  23. Fortune (2026) “If AI is roiling the job market, the data isn't showing it, Yale Budget Lab report says,” 2 February 2026. Available at: https://fortune.com/2026/02/02/ai-labor-market-yale-budget-lab-ai-washing/

  24. HBR (2025) “Workers Don't Trust AI. Here's How Companies Can Change That,” November 2025. Available at: https://hbr.org/2025/11/workers-dont-trust-ai-heres-how-companies-can-change-that

  25. Mercer (2026) “Global Talent Trends 2026.” Preliminary findings, 12,000 respondents worldwide.

  26. Reuters/Ipsos (2025) Poll on American attitudes toward AI and employment, August 2025.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk