AI Shot The Analyst: But It Did Not Shoot The Plumber

In the gleaming offices of Google DeepMind's headquarters, researchers recently celebrated a remarkable achievement: their latest language model could process two million tokens of context—more than enough to digest the entire Lord of the Rings trilogy in a single gulp. Yet down the street, a master electrician named James Harrison was crawling through a Victorian-era building's ceiling cavity, navigating a maze of outdated wiring, asbestos insulation, and unexpected water damage that no training manual had ever described. The irony wasn't lost on him when his apprentice mentioned the AI breakthrough during their lunch break. “Two million tokens?” Harrison laughed. “I'd like to see it figure out why this 1960s junction box is somehow connected to the neighbour's doorbell.”

This disconnect between AI's expanding capabilities and the stubborn complexity of real-world work reveals a fundamental truth about the automation revolution: context isn't just data—it's the invisible scaffolding of human expertise. And whilst AI systems are becoming increasingly sophisticated at processing information, they're hitting a wall that technologists are calling the “context constraint.”

The Great Context Arms Race

The numbers are staggering. Since mid-2023, the longest context windows in large language models have grown by approximately thirty times per year. OpenAI's GPT-4 initially offered 32,000 tokens (about 24,000 words), whilst Anthropic's Claude Enterprise plan now boasts a 500,000-token window. Google's Gemini 1.5 Pro pushes the envelope further with up to two million tokens—enough to analyse an entire codebase or thousands of pages of documentation in a single pass. IBM has scaled its open-source Granite models to 128,000 tokens, roughly a 250-page book, establishing what many consider the new industry standard.

But here's the rub: these astronomical numbers come with equally astronomical costs. Naive self-attention compares every token with every other token, so computational requirements scale quadratically with context length: quadrupling a context from 1,024 to 4,096 tokens multiplies the attention computation sixteenfold. For enterprises paying by the token, summarising a lengthy annual report or maintaining context across a long customer service conversation can quickly become prohibitively expensive.
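
A back-of-the-envelope sketch makes that scaling concrete. Production systems soften the curve with optimisations such as windowed or sparse attention, so absolute costs vary, but the quadratic growth of the naive case is the point:

```python
# Back-of-the-envelope: naive self-attention compares every token with
# every other token, so compute grows with the square of context length.

def relative_attention_cost(tokens: int, baseline: int = 1024) -> float:
    """Attention compute relative to a 1,024-token baseline."""
    return (tokens / baseline) ** 2

for n in (1_024, 4_096, 32_000, 128_000, 2_000_000):
    print(f"{n:>9,} tokens -> ~{relative_attention_cost(n):,.0f}x the baseline cost")
```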

More troubling still is what researchers call the “lost in the middle” problem. A landmark 2023 study revealed that language models don't “robustly make use of information in long input contexts.” They perform best when crucial information appears at the beginning or end of their context window, but struggle to retrieve details buried in the middle—rather like a student who remembers only the introduction and conclusion of a lengthy textbook chapter.
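
Researchers typically measure this effect with “needle in a haystack” probes: bury a single fact at varying depths in filler text and check whether the model can retrieve it. Below is a minimal sketch of such a probe; the `complete` function is a stand-in for whatever model API is being tested, and the filler and needle are invented for illustration rather than taken from the original study.

```python
# Minimal sketch of a "lost in the middle" probe, in the spirit of the
# 2023 study. `complete` is a placeholder for a real model call; the
# filler text and needle below are invented for illustration.

FILLER = "The quick brown fox jumps over the lazy dog. " * 400
NEEDLE = "The access code for the archive room is 7421."
QUESTION = "What is the access code for the archive room?"

def build_prompt(depth: float) -> str:
    """Bury the needle at a relative depth (0.0 = start, 1.0 = end)."""
    cut = int(len(FILLER) * depth)
    return FILLER[:cut] + NEEDLE + " " + FILLER[cut:] + "\n\n" + QUESTION

def probe(complete) -> dict:
    """Map each depth to whether the model recovered the needle."""
    return {depth: "7421" in complete(build_prompt(depth))
            for depth in (0.0, 0.25, 0.5, 0.75, 1.0)}
```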

Marina Danilevsky, an IBM researcher specialising in retrieval-augmented generation (RAG), puts it bluntly: “Scanning thousands of documents for each user query is cost inefficient. It would be much better to save up-to-date responses for frequently asked questions, much as we do in traditional search.”
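
Her suggestion translates into a familiar engineering pattern: put a cache in front of the expensive retrieval step. A minimal sketch follows, assuming a placeholder `retrieve_and_generate` function standing in for a real RAG pipeline:

```python
# Minimal sketch of the caching pattern: answer repeat questions from a
# cache instead of re-running retrieval. `retrieve_and_generate` is a
# placeholder for a real RAG pipeline, not a specific library call.

import hashlib

_cache: dict = {}

def _normalise(query: str) -> str:
    # Crude normalisation; production systems often match paraphrases
    # with embedding similarity rather than exact strings.
    return " ".join(query.lower().split())

def answer(query: str, retrieve_and_generate) -> str:
    key = hashlib.sha256(_normalise(query).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = retrieve_and_generate(query)  # the expensive path
    return _cache[key]
```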

Polanyi's Ghost in the Machine

Back in 1966, philosopher Michael Polanyi articulated a paradox that would haunt the dreams of AI researchers for decades to come: “We can know more than we can tell.” This simple observation—that humans possess vast reserves of tacit knowledge they cannot explicitly articulate—has proved to be AI's Achilles heel.

Consider a seasoned surgeon performing a complex operation. Years of training have taught them to recognise the subtle difference in tissue texture that signals the edge of a tumour, to adjust their grip based on barely perceptible resistance, to sense when something is “off” even when all the monitors show normal readings. They know these things, but they cannot fully explain them—certainly not in a way that could be programmed into a machine.

This tacit dimension extends far beyond medicine. MIT economist David Autor argues that Polanyi's paradox explains why the digital revolution hasn't produced the expected surge in labour productivity. “Human tasks that have proved most amenable to computerisation are those that follow explicit, codifiable procedures,” Autor notes. “Tasks that have proved most vexing to automate are those that demand flexibility, judgement and common sense.”

The success of AlphaGo in defeating world champion Lee Sedol in 2016 might seem to contradict this principle. After all, Go strategy relies heavily on intuition and pattern recognition that masters struggle to articulate. But AlphaGo's victory required millions of training games, vast computational resources, and a highly constrained environment with fixed rules. The moment you step outside that pristine digital board into the messy physical world, the context requirements explode exponentially.

The Plumber's Advantage

Geoffrey Hinton, the Nobel Prize-winning “Godfather of AI,” recently offered career advice that raised eyebrows in Silicon Valley: “I'd say it's going to be a long time before AI is as good at physical manipulation. So a good bet would be to be a plumber.”

The data backs him up. Whilst tech workers fret about their job security, applications to plumbing and electrical programmes have surged by 30 per cent amongst Gen Z graduates. The Tony Blair Institute's 2024 report specifically notes that “manual jobs in construction and skilled trades are less likely to be exposed to AI-driven time savings.”

Why? Because every plumbing job is a unique puzzle wrapped in decades of architectural history. A skilled plumber arriving at a job site must instantly process an overwhelming array of contextual factors: the age and style of the building (Victorian terraces have different pipe layouts than 1960s tower blocks), local water pressure variations, the likelihood of lead pipes or asbestos lagging, the homeowner's budget constraints, upcoming construction that might affect the system, and countless other variables that no training manual could fully capture.

“AI can write reports and screen CVs, but it can't rewire a building,” one electrician told researchers. The physical world refuses to be tokenised. When an electrician encounters a junction box where someone has “creatively” combined three different wiring standards from different decades, they're drawing on a vast reservoir of experience that includes not just technical knowledge but also an understanding of how different generations of tradespeople worked, what shortcuts they might have taken, and what materials were available at different times.

The Bureau of Labor Statistics projects roughly 80,000 job openings for electricians each year through 2033, with 11 per cent employment growth—significantly above the average for all occupations. Plumbers face similar demand, with around 73,700 openings projected annually through 2032. Meanwhile, over 140,000 vacancies remain unfilled in construction, with forecasts indicating more than one million additional workers will be needed by 2032.

Healthcare's Context Paradox

The medical field presents a fascinating paradox in AI adoption. On one hand, diagnostic AI systems can now identify certain cancers with accuracy matching or exceeding human radiologists. IBM's Watson can process millions of medical papers in seconds. Yet walk into any hospital, and you'll find human doctors and nurses still firmly in charge of patient care.

The reason lies in what researchers call the “contextual health elements” that resist digitisation. Patient data might seem objective and quantifiable, but it represents only a fraction of the information needed for effective healthcare. A patient's tone of voice when describing pain, their reluctance to mention certain symptoms, the way they interact with family members, their cultural background's influence on treatment compliance—all these contextual factors profoundly impact diagnosis and treatment but resist capture in electronic health records.

California's Senate Bill 1120, adopted in 2024, codifies this reality into law. The legislation mandates that whilst AI can assist in making coverage determinations—predicting potential length of stay or treatment outcomes—a qualified human must review all medical necessity decisions. The Centers for Medicare and Medicaid Services reinforced this principle, stating that healthcare plans “cannot rely solely upon AI for making a determination of medical necessity.”

Dr. Sarah Mitchell, chief medical officer at a London teaching hospital, explains the challenge: “Patient care involves understanding not just symptoms but life circumstances. When an elderly patient presents with recurring infections, AI might recommend antibiotics. But a good clinician asks different questions: Are they managing their diabetes properly? Can they afford healthy food? Do they have support at home? Are they taking their medications correctly? These aren't just data points—they're complex, interrelated factors that require human understanding.”

The contextual demands multiply in specialised fields. A paediatric oncologist must not only treat cancer but also navigate family dynamics, assess a child's developmental stage, coordinate with schools, and make decisions that balance immediate medical needs with long-term quality of life. Each case brings unique ethical considerations that no algorithm can fully address.

The Investigative Reporter's Edge

Journalism offers another compelling case study in context resistance. Whilst AI can generate basic news reports from structured data—financial earnings, sports scores, weather updates—investigative journalism remains stubbornly human.

The Columbia Journalism Review's 2024 Tow Report notes that three-quarters of news organisations have adopted some form of AI, but primarily for routine tasks. When it comes to investigation, AI serves as an assistant rather than a replacement. Language models can scan thousands of documents for patterns, but they cannot cultivate sources, build trust with whistleblowers, or recognise when someone's carefully chosen words hint at a larger story.

“The relationship between a journalist and AI is not unlike the process of developing sources or cultivating fixers,” the report observes. “As with human sources, artificial intelligences may be knowledgeable, but they are not free of subjectivity in their design—they also need to be contextualised and qualified.”

Consider the Panama Papers investigation, which involved 2.6 terabytes of data—11.5 million documents. Whilst AI tools helped identify patterns and connections, the story required hundreds of journalists working for months to provide context: understanding local laws in different countries, recognising significant names, knowing which connections mattered and why. No AI system could have navigated the cultural, legal, and political nuances across dozens of jurisdictions.

The New York Times, in its May 2024 AI guidance, emphasised that whilst generative AI serves as a tool, it requires “human guidance and review.” The publication insists that editors explain how work was created and what steps were taken to “mitigate risk, bias and inaccuracy.”

The Law's Unwritten Rules

The legal profession exemplifies how contextual requirements create natural barriers to automation. Whilst AI can search case law and draft standard contracts faster than any human, the practice of law involves navigating a maze of written rules, unwritten norms, local customs, and human relationships that resist digitisation.

A trial lawyer must simultaneously process multiple layers of context: the letter of the law, precedent interpretations, the judge's known preferences, jury psychology, opposing counsel's tactics, witness credibility, and countless subtle courtroom dynamics. They must adapt their strategy in real-time based on facial expressions, unexpected testimony, and the indefinable “feeling” in the room.

“There is a human factor involved when it comes down to considering all the various aspects of a trial and taking a final decision that could turn into years in prison,” notes one legal researcher. The stakes are too high, and the variables too complex, for algorithmic justice.

Contract negotiation provides another example. Whilst AI can identify standard terms and flag potential issues, successful negotiation requires understanding the human dynamics at play: What does each party really want? What are they willing to sacrifice? How can creative structuring satisfy both sides' unstated needs? These negotiations often hinge on reading between the lines, understanding industry relationships, and knowing when to push and when to compromise.

The Anthropologist's Irreplaceable Eye

Perhaps no field better illustrates the context constraint than anthropology and ethnography. These disciplines are built entirely on understanding context—the subtle, interconnected web of culture, meaning, and human experience that shapes behaviour.

Recent attempts at “automated digital ethnography” reveal both the potential and limitations of AI in qualitative research. Whilst AI can transcribe interviews, identify patterns in field notes, and even analyse visual data, it cannot perform the core ethnographic task: participant observation that builds trust and reveals hidden meanings.

An ethnographer studying workplace culture doesn't just record what people say in interviews; they notice who eats lunch together, how space is used, what jokes people tell, which rules are bent and why. They participate in daily life, building relationships that reveal truths no survey could capture. This “committed fieldwork,” as researchers call it, often requires months or years of embedded observation.

Dr. Rebecca Chen at MIT's Anthropology Department puts it succinctly: “AI can help us process data at scale, but ethnography is about understanding meaning, not just identifying patterns. When I study how people use technology, I'm not just documenting behaviour—I'm understanding why that behaviour makes sense within their cultural context.”

The Creative Context Challenge

Creative fields present a unique paradox for AI automation. Whilst AI can generate images, write poetry, and compose music, it struggles with the deeper contextual understanding that makes art meaningful. A graphic designer doesn't just create visually appealing images; they solve communication problems within specific cultural, commercial, and aesthetic contexts.

Consider brand identity design. An AI can generate thousands of logo variations, but selecting the right one requires understanding the company's history, market position, competitive landscape, cultural sensitivities, and future aspirations. It requires knowing why certain colours evoke specific emotions in different cultures, how design trends reflect broader social movements, and what visual languages resonate with particular audiences.

Film editing provides another example. Whilst AI can perform basic cuts and transitions, a skilled editor shapes narrative rhythm, builds emotional arcs, and creates meaning through juxtaposition. They understand not just the technical rules but when to break them for effect. They bring cultural knowledge, emotional intelligence, and artistic sensibility that emerges from years of watching, analysing, and creating.

The Education Imperative

Teaching represents perhaps the ultimate context-heavy profession. A teacher facing thirty students must simultaneously track individual learning styles, emotional states, social dynamics, and academic progress whilst adapting their approach in real-time. They must recognise when a student's poor performance stems from lack of understanding, problems at home, bullying, learning disabilities, or simple boredom.

The best teachers don't just transmit information; they inspire, mentor, and guide. They know when to push and when to support, when to maintain standards and when to show flexibility. They understand how cultural backgrounds influence learning, how peer relationships affect motivation, and how to create classroom environments that foster growth.

Recent experiments with AI tutoring systems show promise for personalised learning and homework help. But no algorithm can replace the human teacher who notices that a usually cheerful student seems withdrawn, investigates sensitively, and provides appropriate support, nor can it inspire through personal example or offer the kind of mentorship that shapes lives.

The Network Effect of Context

What makes context particularly challenging for AI is its networked nature. Context isn't just information; it's the relationship between pieces of information, shaped by culture, history, and human meaning-making. Each additional variable doesn't just add complexity linearly—it multiplies it.
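
The arithmetic is unforgiving. With n interacting variables there are n(n-1)/2 pairwise relationships to track and, if each variable can take just two states, 2^n joint configurations. A toy illustration:

```python
# Toy illustration: each added variable multiplies, rather than adds,
# the relationships and joint states a decision-maker must weigh.

from math import comb

for n in (5, 10, 20, 30):
    pairs = comb(n, 2)   # pairwise interactions: n(n-1)/2
    states = 2 ** n      # joint configurations if each variable is binary
    print(f"{n:>2} variables: {pairs:>3} pairwise links, {states:,} joint states")
```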

Consider a restaurant manager's daily decisions. They must balance inventory levels, staff schedules, customer preferences, seasonal variations, local events, supplier relationships, health regulations, and countless other factors. But these aren't independent variables. A local festival affects not just customer traffic but also staff availability, supply deliveries, and optimal menu offerings. A key employee calling in sick doesn't just create a staffing gap; it affects team dynamics, service quality, and the manager's ability to handle other issues.

This interconnectedness means that whilst AI might optimise individual components, it struggles with the holistic judgement required for effective management. The context isn't just vast—it's dynamic, interconnected, and often contradictory.

The Organisational Memory Problem

Large organisations face a particular challenge with context preservation. As employees leave, they take with them years of tacit knowledge about why decisions were made, how systems really work, and what approaches have failed before. This “organisational amnesia” creates opportunities for AI to serve as institutional memory, but also reveals its limitations.

A seasoned procurement officer knows not just the official vendor selection criteria but also the unofficial realities: which suppliers deliver on time despite their promises, which contracts have hidden pitfalls, how different departments really use products, and what past failures to avoid. They understand the political dynamics of stakeholder buy-in and the unwritten rules of successful negotiation.

Attempts to capture this knowledge in AI systems face the fundamental problem Polanyi identified: experts often cannot articulate what they know. The procurement officer might not consciously realise they always order extra supplies before certain holidays because experience has taught them about predictable delays. They might not be able to explain why they trust one sales representative over another.

The Small Business Advantage

Paradoxically, small businesses might be better positioned to weather the AI revolution than large corporations. Their operations often depend on local knowledge, personal relationships, and contextual understanding that resists automation.

The neighbourhood café owner who knows customers' names and preferences, adjusts offerings based on local events, and creates a community gathering space provides value that no AI-powered chain can replicate. The local accountant who understands family businesses' unique challenges, provides informal business advice, and navigates personality conflicts in partnership disputes offers services beyond number-crunching.

These businesses thrive on what economists call “relationship capital”—the accumulated trust, understanding, and mutual benefit built over time. This capital exists entirely in context, in the countless small interactions and shared experiences that create lasting business relationships.

The Governance Challenge

As AI systems become more prevalent, governance and compliance roles are emerging as surprisingly automation-resistant. These positions require not just understanding regulations but interpreting them within specific organisational contexts, anticipating regulatory changes, and managing the human dynamics of compliance.

A chief compliance officer must understand not just what the rules say but how regulators interpret them, what triggers scrutiny, and how to build credibility with oversight bodies. They must navigate the tension between business objectives and regulatory requirements, finding creative solutions that satisfy both. They must also understand organisational culture well enough to implement effective controls without destroying productivity.

The contextual demands multiply in international operations, where compliance officers must reconcile conflicting regulations, cultural differences in business practices, and varying enforcement approaches. They must know not just the letter of the law but its spirit, application, and evolution.

The Mental Health Frontier

Mental health services provide perhaps the starkest example of context's importance. Whilst AI chatbots can provide basic cognitive behavioural therapy exercises and mood tracking, effective mental health treatment requires deep contextual understanding.

A therapist must understand not just symptoms but their meaning within a person's life story. Depression might stem from job loss, relationship problems, trauma, chemical imbalance, or complex combinations. Treatment must consider cultural attitudes toward mental health, family dynamics, economic constraints, and individual values.

The therapeutic relationship itself—built on trust, empathy, and human connection—cannot be replicated by AI. The subtle art of knowing when to challenge and when to support, when to speak and when to listen, emerges from human experience and emotional intelligence that no algorithm can match.

The Innovation Paradox

Ironically, the jobs most focused on innovation might be most resistant to AI replacement. Innovation requires not just generating new ideas but understanding which ideas will work within specific contexts. It requires knowing not just what's technically possible but what's culturally acceptable, economically viable, and organisationally achievable.

A product manager launching a new feature must understand not just user needs but organisational capabilities, competitive dynamics, technical constraints, and market timing. They must navigate stakeholder interests, build consensus, and adapt plans based on shifting contexts. They must possess what one executive called “organisational intelligence”—knowing how to get things done within specific corporate cultures.

Context as Competitive Advantage

As AI capabilities expand, the ability to navigate complex contexts becomes increasingly valuable. The most secure careers will be those that require not just processing information but understanding its meaning within specific human contexts.

This doesn't mean AI won't transform these professions. Doctors will use AI diagnostic tools but remain essential for contextual interpretation. Lawyers will leverage AI for research but remain crucial for strategy and negotiation. Teachers will employ AI for personalised learning but remain vital for inspiration and mentorship.

The key skill for future workers isn't competing with AI's information processing capabilities but complementing them with contextual intelligence. This includes cultural fluency, emotional intelligence, creative problem-solving, and the ability to navigate ambiguity—skills that emerge from lived experience rather than training data.

Preparing for the Context Economy

Educational institutions are beginning to recognise this shift. Leading universities are redesigning curricula to emphasise critical thinking, cultural competence, and interdisciplinary understanding. Professional schools are adding courses on ethics, communication, and systems thinking.

Trade schools are experiencing unprecedented demand as young people recognise the value of embodied skills. Apprenticeship programmes are expanding, recognising that certain knowledge can only be transmitted through hands-on experience and mentorship.

Companies are also adapting, investing in programmes that develop employees' contextual intelligence. They're recognising that whilst AI can handle routine tasks, human judgement remains essential for complex decisions. They're creating new roles that bridge AI capabilities and human understanding—positions that require both technical knowledge and deep contextual awareness.

The Regulatory Response

Governments worldwide are grappling with AI's implications for employment and beginning to recognise context's importance. The European Union's AI Act includes provisions for human oversight in high-stakes decisions. California's healthcare legislation mandates human review of AI medical determinations. These regulations reflect growing awareness that certain decisions require human contextual understanding.

Labour unions are also adapting their strategies, focusing on protecting jobs that require contextual intelligence whilst accepting AI automation of routine tasks. They're pushing for retraining programmes that develop workers' uniquely human capabilities rather than trying to compete with machines on their terms.

The Context Constraint's Silver Lining

The context constraint might ultimately prove beneficial for both workers and society. By automating routine tasks whilst preserving human judgement for complex decisions, we might achieve a more humane division of labour. Workers could focus on meaningful, creative, and interpersonal aspects of their jobs whilst AI handles repetitive drudgery.

This transition won't be seamless. Many workers will need support in developing contextual intelligence and adapting to new roles. But the context constraint provides a natural brake on automation's pace, giving society time to adapt.

Moreover, preserving human involvement in contextual decisions maintains accountability and ethical oversight. When AI systems make mistakes processing information, those errors are usually correctable. When humans make mistakes in contextual judgement, we at least understand why and can learn from them.

The Economic Implications of Context

The context constraint has profound implications for economic policy and workforce development. Economists are beginning to recognise that traditional models of automation—which assume a straightforward substitution of capital for labour—fail to account for the contextual complexity of many jobs.

Research from the International Monetary Fund suggests that over 40 per cent of workers will require significant upskilling by 2030, with emphasis on skills that complement rather than compete with AI capabilities. But this isn't just about learning new technical skills. It's about developing what researchers call “meta-contextual abilities”—the capacity to understand and navigate multiple overlapping contexts simultaneously.

Consider the role of a supply chain manager during a global disruption. They must simultaneously track shipping delays, geopolitical tensions, currency fluctuations, labour disputes, weather patterns, and consumer sentiment shifts. Each factor affects the others in complex, non-linear ways. An AI might optimise for cost or speed, but the human manager understands that maintaining relationships with suppliers during difficult times might be worth short-term losses for long-term stability.

The financial services sector provides another illuminating example. Whilst algorithmic trading dominates high-frequency transactions, wealth management for high-net-worth individuals remains stubbornly human. These advisers don't just allocate assets; they navigate family dynamics, understand personal values, anticipate life changes, and provide emotional support during market volatility. They know that a client's stated risk tolerance might change dramatically when their child is diagnosed with a serious illness or when they're going through a divorce.

The Cultural Dimension of Context

Perhaps nowhere is the context constraint more evident than in cross-cultural business operations. AI translation tools have become remarkably sophisticated, capable of converting text between languages with impressive accuracy. But translation is just the surface layer of cross-cultural communication.

A business development manager working across cultures must understand not just language but context: why direct communication is valued in Germany but considered rude in Japan, why a handshake means one thing in London and another in Mumbai, why silence in a negotiation might signal contemplation in one culture and disagreement in another. They must read between the lines of polite refusals, understand the significance of who attends meetings, and know when business discussions actually happen—sometimes over formal presentations, sometimes over informal dinners, sometimes on the golf course.

These cultural contexts layer upon professional contexts in complex ways. A Japanese automotive engineer and a German automotive engineer share technical knowledge but operate within different organisational cultures, decision-making processes, and quality philosophies. Successfully managing international technical teams requires understanding both the universal language of engineering and the particular contexts in which that engineering happens.

The Irreducible Human Element

As I finish writing this article, it's worth noting that whilst AI could have generated a superficial treatment of this topic, understanding its true implications required human insight. I drew on years of observing technological change, understanding cultural anxieties about automation, and recognising patterns across disparate fields. This synthesis—connecting plumbing to anthropology, surgery to journalism—emerges from distinctly human contextual intelligence.

The context constraint isn't just a temporary technical limitation waiting for the next breakthrough. It reflects something fundamental about knowledge, experience, and human society. We are contextual beings, shaped by culture, relationships, and meaning-making in ways that resist reduction to tokens and parameters.

This doesn't mean we should be complacent. AI will continue advancing, and many jobs will transform or disappear. But understanding the context constraint helps us focus on developing genuinely irreplaceable human capabilities. It suggests that our value lies not in processing information faster but in understanding what that information means within the rich, complex, irreducibly human contexts of our lives.

The master electrician crawling through that Victorian ceiling cavity possesses something no AI system can replicate: embodied knowledge gained through years of experience, cultural understanding of how buildings evolve, and intuitive grasp of physical systems. His apprentice, initially awed by AI's expanding capabilities, is beginning to understand that their trade offers something equally remarkable—the ability to navigate the messy, contextual reality where humans actually live and work.

In the end, the context constraint reveals that the most profound aspects of human work—understanding, meaning-making, and connection—remain beyond AI's reach. Not because our machines aren't sophisticated enough, but because these capabilities emerge from being human in a human world. And that, perhaps, is the most reassuring context of all.


References and Further Information

  1. IBM Research Blog. “Why larger LLM context windows are all the rage.” IBM Research, 2024.

  2. Epoch AI. “LLMs now accept longer inputs, and the best models can use them more effectively.” Epoch AI Research, 2024.

  3. Google Research. “Chain of Agents: Large language models collaborating on long-context tasks.” NeurIPS 2024 Conference Paper.

  4. Tony Blair Institute. “AI Impact on Employment: Manual Jobs and Skilled Trades Analysis.” Tony Blair Institute for Global Change, 2024.

  5. Bureau of Labor Statistics. “Occupational Outlook Handbook: Electricians and Plumbers.” U.S. Department of Labor, 2024.

  6. Columbia Journalism Review. “Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena.” Tow Center Report, 2024.

  7. California State Legislature. “Senate Bill 1120: AI Regulation in Healthcare Utilization Management.” California Legislative Information, 2024.

  8. Centers for Medicare and Medicaid Services. “2023 MA Policy Rule: Guidance on AI Use in Coverage Determinations.” CMS.gov, 2024.

  9. Nature Humanities and Social Sciences Communications. “Key points for an ethnography of AI: an approach towards crucial data.” Nature Publishing Group, 2024.

  10. Polanyi, Michael. “The Tacit Dimension.” University of Chicago Press, 1966.

  11. Autor, David. “Polanyi's Paradox and the Shape of Employment Growth.” NBER Working Paper No. 20485, 2014.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
