The Apprenticeship Severance: How AI Is Breaking the Expertise Pipeline

In a Davos meeting room in January 2026, a panel of chief executives, labour economists, and education ministers sat with a slide that raised a question nobody seemed entirely able to answer. It was a simple chart drawn from the World Economic Forum's recent labour data: the traditional corporate ladder, with its familiar pyramid geometry of firms feeding juniors through years of progressively more demanding work, was losing its lowest rungs. The session had been meant to reassure executives that the 170 million jobs the Forum projected would be created by 2030 would more than offset the 92 million expected to vanish. Instead, it produced one of the week's most uncomfortable discussions, because the numbers at the bottom of the ladder had started to tell a different story from the numbers at the top.

In the most AI-exposed occupations in the United States, employment among workers aged 22 to 25 had fallen by 13 per cent since late 2022, according to a Stanford Digital Economy Lab study by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, updated in November 2025 and titled, with studied understatement, “Canaries in the Coal Mine?”. Older workers in the same occupations had seen their employment hold steady or grow. Entry-level work, the Forum's own March 2026 analysis noted bluntly, was not being redistributed. It was being redefined out of existence. The message the Davos panel kept circling was that the foundation for the next generation of senior experts was being removed at the same pace as the routine work through which that foundation had historically been built.

Three weeks later, on 13 February 2026, the Guardian published an investigation by Lucy Knight with additional reporting by Sumaiya Motara titled “The big AI job swap: why white-collar workers are ditching their careers”. It ran a sequence of portraits: Jacqueline Bowman, a 30-year-old Californian freelance writer whose work “kind of dried up” in 2024; Janet Feenstra, a 52-year-old academic editor in Malmö who had left a decade at Malmö University to retrain as a baker; Richard, 39, a chartered occupational health and safety professional in Northampton who had taken “a huge cut” to retrain as an electrical engineer; Paola Adeitan, 31, who had abandoned her plans to become a solicitor despite a law degree and a master's. Angela Joyce, chief executive of Capital City College in London, confirmed “steady growth in students of all ages” enrolling in engineering, culinary, and childcare programmes. A 2023 UK Department for Education report found finance, law, and business management among the most AI-exposed occupations; a King's College London study from October 2025 identified software engineering and management consultancy as facing the steepest AI-driven declines.

What made the Guardian piece difficult to classify was that the people in it had not, for the most part, been made redundant. They had looked at a future they could not see the bottom of and decided to jump. Carl Benedikt Frey of the Oxford Internet Institute, whose 2013 paper with Michael Osborne launched the entire modern genre of automation-panic statistics, made an almost embarrassed concession to the Guardian: manual work “is going to be harder to automate”, yes, but career decisions driven by hypotheticals rather than evidence might produce their own harms. Dr Bouke Klein Teeselink of King's College offered advice of a different kind: “becoming really good at working with AI is probably going to be a skill that will pay off”.

By early 2026, a number of writers and researchers had converged on a framing that the economic vocabulary of displacement could not quite capture. The most influential of these, circulating widely on Substack, in newsletters, and across professional networks, was built around a single phrase: the apprenticeship severance. The argument was that what was being lost was not principally jobs, or wages, or even the first rung of a ladder. What was being severed was the mechanism through which one generation of professionals had historically transmitted tacit knowledge, professional judgement, and domain expertise to the next. The loss, in other words, was epistemic before it was economic. If the mechanism disappeared before a replacement existed, the consequences would not land for five years, or ten, but would surface as a slow subsidence in the quality of senior expertise two decades out, when today's missing juniors were supposed to be tomorrow's partners, principals, and surgeons.

This is an argument worth taking seriously on its own terms, because it says something the standard productivity-and-displacement debate cannot.

The Thing That Cannot Be Written Down

The Hungarian-British philosopher Michael Polanyi spent the second half of his life worrying about what he called the tacit dimension of knowledge. In his 1966 book of the same name, he offered the formulation that would define the field: “we can know more than we can tell”. His examples were ordinary and devastating. We recognise a familiar face without being able to list the features that identify it. A driver cannot be produced by reading the theory of the motorcar. A swimmer does not swim by consulting the physics of buoyancy. There is, Polanyi argued, a whole order of human capability that resists articulation, and it is transmitted not by instruction but by contact: the apprentice watches the master, absorbs rhythms, imitates, fails, adjusts, and eventually acquires the same unarticulated competence.

The sociologist Harry Collins spent decades refining this idea. In his 2010 book Tacit and Explicit Knowledge, Collins broke the concept down into three types. Relational tacit knowledge is the sort that could, in principle, be written down, but in practice is not, because the effort of articulation is too great or the social context too specific. Somatic tacit knowledge is what the body knows: balance, coordination, the grip of a surgeon's hand. Collective tacit knowledge, in Collins's view, is the only truly irreducible form, the kind that exists not in any individual at all but in the fabric of a social group, and which can only be acquired by long immersion in that group's practices.

What all three types share is a resistance to codification. You do not learn them by reading a document. You learn them by being placed, awkwardly and often inefficiently, alongside somebody who already has them, and by spending enough time in that proximity that something percolates. In professional contexts, that structured proximity has a name: apprenticeship. The junior associate buried in a document review is not, from the firm's perspective, primarily performing document review. They are developing a sense of what cases look like, what contracts signal, how partners think, when to push back, when to shut up. Document review is the pretext. The product is the slow accretion of professional judgement.

This is the core of the epistemic argument. If you automate away the pretext, you do not thereby eliminate the need for the product. You just eliminate one of the primary mechanisms by which the product was ever produced.

The Surgical Analogy Nobody Wants

The person who has done more than anyone to document what happens when this kind of severance occurs in real working environments is Matt Beane, an assistant professor in the Technology Management Program at UC Santa Barbara. His 2019 paper in Administrative Science Quarterly, “Shadow Learning: Building Robotic Surgical Skill When Approved Means Fail”, remains the clearest field study of the phenomenon, and it concerns not lawyers or consultants but surgeons.

Beane's work began with a puzzle. American hospitals had rapidly adopted robotic surgical systems, and the formal curriculum for residents had been updated to accommodate them. Residents rotated through robotic cases, accumulated hours, and received their certifications on schedule. On paper, the training pipeline was intact. In the operating room, something else was happening. Beane's two-year ethnographic study across multiple sites, combined with blinded interviews at thirteen top-tier teaching hospitals, found that residents trained on robotic systems were receiving roughly one-tenth to one-twentieth of the hands-on practice their predecessors had received with traditional techniques. The robot, by automating the fine motor work and concentrating decisions in the hands of the attending surgeon, had quietly removed most of the intermediate positions from which a resident used to learn. The mentor was no longer close enough, literally, to guide in real time. Residents were graduating licensed to operate but missing the tacit competencies their predecessors had acquired almost invisibly.

The residents who did manage to develop expertise, Beane found, were doing so through what he called “shadow learning”: prematurely specialising, rehearsing in simulators without proper supervision, and engaging in “undersupervised struggle” near the edge of their capacity. They were acquiring skill in ways that violated the formal training model, learning by proximity and repetition, as Polanyi described, but jury-rigging the proximity themselves, often outside their supervisors' knowledge. The skill-building had simply gone underground.

Beane's subsequent work, including his 2024 book The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, generalises the finding. Across professions where intelligent systems are rapidly displacing the routine work that once constituted the learning phase, he argues, organisations are systematically prioritising short-term productivity at the expense of the long-term capability of their own workforces. The robot gets faster. The partner gets more output per associate. The associate gets less practice.

The question the apprenticeship severance argument poses is whether the same pattern is now unfolding across the entire knowledge economy, only with generative AI playing the role of the surgical robot and with no equivalent of “shadow learning” yet visible in the data.

What the Numbers Actually Say

The empirical picture is partial but pointed. Using granular ADP payroll data covering millions of workers at thousands of US firms, Brynjolfsson, Chandar, and Chen documented a sharp divergence from late 2022. Employment for workers aged 22 to 25 in the most AI-exposed occupations fell 6 per cent in absolute terms between late 2022 and mid-2025, and 13 per cent relative to less-exposed sectors. Employment for older workers in the same occupations either held steady or grew. In software engineering and customer service, entry-level employment fell close to 20 per cent. The effect was concentrated where AI automates tasks rather than augmenting them; augmentative fields showed no equivalent decline.

Around those findings, a scaffolding of smaller studies has accumulated. An IESE Insight analysis of AI-exposed firms found starting wages fell 4.5 per cent after ChatGPT's launch, with a 6.3 per cent drop for junior positions and stable or rising pay for senior hires. Between 2018 and 2024, the share of jobs in AI-exposed fields requiring three years of experience or less fell sharply: software development from 43 to 28 per cent, data analysis from 35 to 22 per cent, consulting from 41 to 26 per cent. In law, trade press and firm-level reporting confirm that automated document review has reduced the tasks first-year associates used to perform. Above the Law reported in March 2026 that at one BigLaw firm, AI training had been made mandatory for associates but would not count as billable hours, a tidy illustration of how firms treat the developmental cost of new tools.

The counter-evidence is real and worth stating fairly. McKinsey announced in late 2025 that it would increase North American hiring by 12 per cent in 2026, arguing that deploying AI strategically requires more creative problem-solvers, not fewer. Several law firms, including Ropes & Gray, have built substantial AI training programmes that treat junior associates' experimentation as a firm-wide investment, reportedly allowing first-years to spend up to 400 hours of their annual 1,900 billable-hour target on AI work. The WEF's March 2026 analysis argued that entry-level roles are not disappearing so much as being reshaped: from task execution toward judgement-based work, from drafting toward reviewing, from producing outputs toward triaging the outputs of machines.

This is the terrain on which reasonable disagreement sits. Not whether AI is changing entry-level work, which is not in dispute, but whether the change is structurally compatible with the transmission of expertise or structurally corrosive to it.

The Reviewer's Trap

The most seductive framing of the current moment, the one that dominates corporate training decks and consultancy white papers, is that juniors will move “up the value chain”. Instead of drafting, they will review. Instead of producing raw outputs, they will edit, critique, and direct AI systems that produce them. This is often presented as a promotion: the machine does the tedious bit, the human does the interesting bit, and juniors get to spend their early careers on judgement rather than grunt work.

There is a specific problem with this framing, which Beane has been among the sharpest to articulate. Reviewing is not the same skill as producing; in most professional domains it is derivative, presupposing the producer's craft rather than replacing it. A senior editor can improve a draft because she has written drafts for years and knows, in her body, what a draft looks like when it is working. Ask her to review a draft she could not herself have written, and the quality of her review degrades sharply. The same is true of surgery, code, legal argumentation, and financial modelling. Judgement is not a free-standing capability. It is the residue of having done the work often enough to develop instincts about it, and the instincts will not form if the work is never done.

Research on how junior developers use AI coding assistants supports the worry. A study of 52 junior engineers reported in InfoQ in February 2026, drawing on Anthropic-sponsored research into skill formation, found a stark divide between those who used AI for conceptual questions (scoring 65 per cent or higher on subsequent assessments) and those who delegated code generation to AI (scoring below 40 per cent). A separate data point suggested that 78 per cent of junior engineers reported high trust in AI-generated output, compared with 39 per cent of seniors. The junior's confidence, in other words, scales inversely with their capacity to evaluate the output. They cannot yet tell when they are being deceived. Seniors can, but only because they paid the price of the uncodified learning in their own earlier careers.

This is the reviewer's trap. If you redefine junior work as review, you have not simplified the developmental path. You have inverted it. Review-first workflows ask people to do the hard thing before they have done the easy thing, without noticing that the easy thing was never really easy; it was just where the hard thing was silently being learnt.

The Economic Argument Is Not the Whole Argument

There is a version of this debate that treats the apprenticeship severance as essentially a labour-market problem to be solved by re-aggregating work, subsidising training, or reconfiguring career ladders. The argument in the widely shared early-2026 analyses was that this framing concedes too much ground to the language of displacement. Even if every junior role eliminated by AI were replaced, dollar for dollar and hour for hour, the epistemic problem would remain. The concern is not aggregate employment, or aggregate wages, or even aggregate hours. It is the specific quality of the experience an individual professional accumulates on their way to expertise, and the mechanism by which that experience was transmitted.

The professions most exposed (law, finance, consulting, the creative fields) are precisely the ones in which senior practitioners have historically insisted that what they do cannot be taught from a textbook. Partners talk constantly about judgement, about a feel for the case or the deal or the client. They say these things because they are true. Their expertise is not a stored library of facts; it is a trained intuition, shaped over thousands of low-stakes decisions that were actually quite high-stakes for their formation. The junior who drafts a memo a partner tears apart is being taught something, but what they are being taught is not contained in the partner's edits. It is diffused across years of such edits, accumulating into a capacity to anticipate the tear-apart before it happens.

If that process is interrupted, even gently, the cost does not register immediately. It registers at the moment when the former junior is herself asked to be the partner, and finds she has not developed the instinct the role requires. The signal will be that the partner, when asked a question, gives an answer that is fluent and plausible and wrong in ways she cannot detect. Multiply this across a profession and across a generation, and you have something worse than a talent shortage. You have an expertise shortage masquerading as a talent surplus, because the people nominally qualified to hold senior positions will in fact hold them, only with less of the unarticulated judgement the positions were designed to deploy.

This is what the early-2026 analyses meant by epistemic severance. Not that the professions would stop functioning, but that their internal quality would subside over a long enough timeline that the subsidence would be difficult to attribute.

The Counter-Arguments Worth Taking Seriously

The sharpest critique of the thesis is that it presupposes a stable past that may never have existed. Every previous wave of professional automation, from dictation machines and typing pools to spreadsheets and document management systems, was greeted with the same set of anxieties, and the professions adapted. Senior lawyers in the 1990s worried that junior associates who had not spent their early careers on manual research in dusty volumes would be missing some crucial forensic sensibility. They were wrong, or at least mostly wrong. Spreadsheets did not hollow out financial analysis; they redefined what analysis was. Electronic discovery did not empty out junior legal practice; it shifted it. Perhaps generative AI is the same pattern at a larger scale.

There is force to this argument, and it should not be dismissed. But the analogy breaks down on the question of what, precisely, the new tools replace. Spreadsheets replaced the specific cognitive task of arithmetic; they left intact the interpretive, relational, and strategic work that constituted the junior analyst's actual development. Electronic discovery replaced the manual labour of sifting boxes of documents; it left intact the junior associate's exposure to the substantive law and the partner's reasoning. Generative AI, uniquely, is being applied directly to the cognitive and interpretive work itself. It does not merely automate the chore and leave the apprentice to do the thinking. It often does the first-pass thinking, leaving the apprentice to sign off on it. The replacement is categorically different from previous waves.

A second serious counter-argument is that the apprenticeship framing romanticises a learning system that worked poorly for many of the people in it. The old junior roles were exhausting, exclusionary, and often abusive. They selected for endurance and pedigree rather than for talent. If AI eliminates the worst of them, the argument runs, good riddance; design something better. This is a fair point, and it is entirely compatible with taking the epistemic concern seriously. The question is not whether the old system was optimal. It is whether what is replacing it has been designed with the transmission of expertise in mind, or whether it has been designed principally to reduce headcount, and whether the developmental function is a casualty of that redesign rather than an intentional part of it. At the moment, the evidence for design intent is thin.

A third argument, favoured by some AI optimists, is that the tools themselves will come to function as tutors and mentors. If an AI can produce a legal memo, it can also explain it; if it can generate code, it can walk a junior through the architecture. In principle this is possible; in practice, current systems are poorly suited, because they do not know what the learner does not know, and because the tacit dimension is almost by definition the dimension they cannot articulate. Beane himself has suggested AI could be part of the solution, coaching learners, teaching coaches when to mentor, connecting the two in smart ways. The ingredients exist. The question is whether anyone is building with them at scale, as opposed to selling productivity.

What AI-Assisted Work That Preserves the Developmental Function Could Look Like

It is worth spending some time on the constructive question, because the destructive one is easier to describe. If generative AI is genuinely inescapable, and if the transmission of expertise still has to happen, what would a workflow that preserved the developmental function of early-career experience actually look like?

The first and most obvious shift is toward what might be called productive struggle by design. In the Beane framework, skill is built through proximate, near-the-edge work under light supervision. An AI-assisted workflow preserving this would not hand juniors finished outputs to review; it would hand them problems to solve, with AI available as a resource they can consult selectively rather than as a default producer. The principle is closer to the way a well-run graduate seminar operates than to the way a consulting pyramid traditionally operates. The junior does the work. The AI is not the competitor for the work; it is a reference consulted when the junior chooses. The senior reviews the work, but reviews it as a piece of the junior's developing capability, not as a piece of the firm's billable output.

A second shift is toward what a number of firms have begun calling visible reasoning. In a pure AI-augmented workflow, the junior's contribution often looks like a prompt, followed by a generated output, followed by edits. The reasoning is hidden inside the prompt and the edits. A developmental workflow would require the junior to make their reasoning explicit: to document what they asked the AI, why they asked it that way, what they kept, what they rejected, and why. This is not busywork. It is the externalisation of the tacit dimension, forced by the workflow itself, so that both the junior and the senior have something to review beyond the final product.
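To make the idea concrete, a visible-reasoning log could be as simple as a small data structure attached to each piece of work. The sketch below is purely illustrative: the class and field names are assumptions of mine, not any firm's actual system, but they show the minimum a mentor would need to review the junior's judgement rather than only the final product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReasoningEntry:
    """One documented AI interaction: what was asked, why, and what survived review."""
    prompt: str                   # what the junior asked the AI
    framing_rationale: str        # why the question was framed that way
    kept: list = field(default_factory=list)      # outputs retained, each with a stated reason
    rejected: list = field(default_factory=list)  # outputs discarded, each with a stated reason
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class ReasoningLog:
    """A per-task log a mentor can review alongside the finished product."""
    task: str
    entries: list = field(default_factory=list)

    def record(self, entry: ReasoningEntry) -> None:
        self.entries.append(entry)

    def review_summary(self) -> dict:
        """What the senior sees: the shape of the junior's judgement, not just the output."""
        kept = sum(len(e.kept) for e in self.entries)
        rejected = sum(len(e.rejected) for e in self.entries)
        total = kept + rejected
        return {
            "task": self.task,
            "interactions": len(self.entries),
            "kept": kept,
            "rejected": rejected,
            # A junior who rejects nothing may not yet be evaluating at all.
            "rejection_rate": rejected / total if total else 0.0,
        }


# Hypothetical usage: one logged interaction on a first-pass legal memo.
log = ReasoningLog(task="First-pass memo on limitation periods")
log.record(ReasoningEntry(
    prompt="Summarise the limitation rules for contract claims",
    framing_rationale="Wanted an orientation before reading the authorities myself",
    kept=["General six-year rule (verified against the statute)"],
    rejected=["A cited case I could not find in any reporter"],
))
summary = log.review_summary()
```

The point of the `rejection_rate` line is the developmental signal, not the metric itself: a log full of uncritically kept outputs tells a mentor something a polished final memo cannot.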

A third shift is a recovery of the master-apprentice relationship as an institutional priority rather than an informal luxury. In many professional environments, mentorship has for two decades been treated as something that happens around the edges of billable work, when the partner has time. The apprenticeship severance thesis implies that this is no longer survivable. If the developmental function has historically been embedded in the routine work, and that work is now being automated, then the developmental function needs to be relocated, explicitly, into structured relationships that are part of the firm's core design. This means paid mentoring time, mentor training, and developmental metrics that do not show up on the quarterly P&L. It is expensive. It is, in most industries, unusual.

A fourth, more speculative shift is the construction of domain-specific AI tools that model the tacit dimension rather than flatten it. The current generation of general-purpose assistants is engineered for confident plausibility. A developmental AI would be engineered for calibrated uncertainty, designed to say “I do not know”, to flag where senior judgement is required, to offer multiple framings rather than a single answer, and to build over time a model of what the specific junior user does and does not yet understand. Some of this is technically hard. Some is merely unfashionable, because the market for confident plausibility is much larger than the market for calibrated uncertainty.
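The design principle here can be sketched in a few lines. No current assistant exposes an interface like this; every name below is a hypothetical of mine, meant only to show what it would mean to engineer the response type itself for calibrated uncertainty: multiple framings, an honest unknown state, and an explicit flag for senior judgement.

```python
from dataclasses import dataclass
from enum import Enum


class Confidence(Enum):
    HIGH = "high"
    LOW = "low"
    UNKNOWN = "unknown"   # the honest "I do not know"


@dataclass
class Framing:
    """One of several candidate readings, never presented as the single answer."""
    answer: str
    caveat: str


@dataclass
class DevelopmentalResponse:
    """A response shape built for calibrated uncertainty rather than confident plausibility."""
    framings: list                 # multiple framings, not a single oracle output
    confidence: Confidence
    needs_senior_review: bool      # explicit flag: judgement required beyond the tool

    def render(self) -> str:
        # The unknown state is a first-class outcome, not a failure to be papered over.
        if self.confidence is Confidence.UNKNOWN:
            return "I do not know. This needs a person who does."
        lines = [f"- {f.answer} (caveat: {f.caveat})" for f in self.framings]
        if self.needs_senior_review:
            lines.append("! Flagged for senior review: a judgement call, not a lookup.")
        return "\n".join(lines)
```

The design choice worth noticing is that uncertainty lives in the type, not in the prose of the answer, so a workflow built on top of it cannot silently discard the flag the way a reader can skim past a hedging sentence.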

None of these shifts is going to happen by accident. They will happen if they are prioritised and funded by the people who run firms and educational institutions, and they will fail to happen if the dominant logic of AI deployment continues to be headcount reduction. The evidence from early 2026 is mixed. Some firms are investing seriously. Many more are deploying AI as a substitution for the bottom rungs of the ladder without any plan for where the top rungs will come from in 2040.

The Long Horizon

The apprenticeship severance argument is difficult to win politically because its costs are invisible on any timeline a quarterly-driven organisation can see. A firm that cuts its junior headcount in 2026 will show improved operating margins in 2027. The cost, in missing expertise, arrives in 2040, when the cohort that was supposed to fill senior roles cannot fill them with the depth the roles require. By then, the executives who made the 2026 decisions will have retired; the foreshortened careers will have been foreshortened too long ago for anyone to connect the dots; the profession will settle into a new, subtly degraded normal, and most of its members will experience that normal as simply how things are.

This is the epistemic dimension the Guardian investigation could not quite reach, because the Guardian was reporting on a labour market, and labour markets do not have a vocabulary for this kind of intergenerational loss. It is the dimension the WEF session in January gestured at without naming. It is the dimension the early-2026 analyses tried, not always cleanly, to articulate: that the severance is not of workers from their jobs, though that is happening too, but of one generation of professionals from the accumulated tacit competence of the generation before, with no institutional arrangement yet in place to re-establish the link.

Whether the link can be re-established is an open question. It will depend on whether the version of AI-assisted work that preserves developmental function turns out to be a plausible design object, or merely a plausible essay. It will depend on whether firms are willing to treat the transmission of expertise as a first-order obligation rather than a nice-to-have. It will depend on whether regulators and professional bodies, who in theory exist to maintain standards across generations, decide the standards include the pathway, not just the endpoint.

It will also depend on whether the people at the bottom of the ladder, currently retraining as bakers, electricians, and therapists, are willing to persist in their professions long enough for new pathways to form, or whether the flight the Guardian documented accelerates, hollowing out the base of the knowledge economy from the other side. One quieter finding in the Guardian piece was how many of its subjects had chosen manual trades specifically because they perceived them as AI-resistant. If enough talent follows that logic, the problem is no longer theoretical. The senior experts of 2045 will simply not exist, because the juniors of 2026 decided that the gamble of becoming them was not worth taking.

The people who keep saying that every previous wave of automation produced worse predictions than it justified may turn out to be correct. Generative AI may reshape entry-level work into something more developmentally rich than it ever was, and the junior cohort of 2030 may look back at the panics of 2026 with the same mild condescension that today's analysts reserve for the automation anxieties of the 1990s. That outcome is possible. It is not, on current evidence, being actively engineered. The difference between outcomes that arrive by good fortune and outcomes that arrive by design is, in the end, the difference between a profession that keeps its expertise and one that spends a generation rediscovering what it has lost.

The question posed at Davos, and in the Guardian's pages, and in the analyses that followed, was not principally about jobs. It was about whether any institution currently operating at scale has decided that the next generation of senior experts is worth building on purpose. The answer that emerges from the spring of 2026 is, in most places, not yet. Which is to say: not ruled out, but not being worked on either, and the work, if it is going to happen, is going to have to start from the recognition that the old system's developmental function was never visible on anyone's balance sheet, that its replacement will not be either, and that the absence of a line item has never, in any field, been a reliable argument for the absence of a cost.

References and Sources

  1. World Economic Forum, “Davos: What to know about jobs and skills transformation”, January 2026. https://www.weforum.org/stories/2026/01/davos-here-s-what-to-know-about-jobs-and-skills-transformation/
  2. World Economic Forum, “How AI is changing the nature of entry level work”, March 2026. https://www.weforum.org/stories/2026/03/how-ai-is-changing-the-nature-of-entry-level-work/
  3. World Economic Forum, “Four ways AI and talent trends could reshape jobs by 2030”, January 2026. https://www.weforum.org/stories/2026/01/here-are-four-ways-ais-impact-on-job-markets-might-take-shape/
  4. Lucy Knight with Sumaiya Motara, “The big AI job swap: why white-collar workers are ditching their careers”, The Guardian, 13 February 2026. Archived via Portside: https://portside.org/2026-02-13/big-ai-job-swap-why-white-collar-workers-are-ditching-their-careers
  5. Erik Brynjolfsson, Bharat Chandar, Ruyu Chen, “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence”, Stanford Digital Economy Lab, August 2025 (updated November 2025). https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/
  6. Michael Polanyi, The Tacit Dimension, University of Chicago Press, 1966.
  7. Harry Collins, Tacit and Explicit Knowledge, University of Chicago Press, 2010.
  8. Matthew Beane, “Shadow Learning: Building Robotic Surgical Skill When Approved Means Fail”, Administrative Science Quarterly, 2019.
  9. Matthew Beane, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, 2024.
  10. Stanford Digital Economy Lab, “Canaries, Interest Rates, and Timing: More on the Recent Drivers of Employment Changes for Young Workers”, 2025. https://digitaleconomy.stanford.edu/news/canaries-interest-rates-and-timinga-more-on-recent-drivers-of-employment-changes-for-young-workers/
  11. Harvard Business Review, “AI and the Entry-Level Job”, March 2026. https://hbr.org/2026/03/ai-and-the-entry-level-job
  12. IESE Insight, “How AI is depressing entry-level wages and hiring”. https://www.iese.edu/insight/articles/artificial-intelligence-junior-employees-wages/
  13. Above the Law, “AI Training Is A Must At This Biglaw Firm, But Lawyers Won't Receive Any Billable Hours For It”, March 2026. https://abovethelaw.com/2026/03/ai-training-is-a-must-at-this-biglaw-firm-but-lawyers-wont-receive-any-billable-hours-for-it/
  14. InfoQ, “Anthropic Study: AI Coding Assistance Reduces Developer Skill Mastery by 17%”, February 2026. https://www.infoq.com/news/2026/02/ai-coding-skill-formation/
  15. Judy Hanwen Shen and Alex Tamkin, “How AI Impacts Skill Formation”, arXiv, February 2026. https://arxiv.org/pdf/2601.20245
  16. Fortune, “First-of-its-kind Stanford study says AI is starting to have a 'significant and disproportionate impact' on entry-level workers”, August 2025. https://fortune.com/2025/08/26/stanford-ai-entry-level-jobs-gen-z-erik-brynjolfsson/
  17. Time, “Who's Losing Jobs to AI? New Stanford Analysis Breaks It Down”, 2025. https://time.com/7312205/ai-jobs-stanford/
  18. CNBC, “AI is not just ending entry-level jobs. It's the end of the career ladder as we know it”, September 2025. https://www.cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html
  19. UK Department for Education, report on AI impacts on UK occupations, 2023.
  20. King's College London, study on AI exposure in UK employment markets, October 2025.

Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk