The Treaty of Detroit for AI: How Workers Tamed Automation Before

The nightmares have evolved. Once, workers feared the factory floor going silent as machines hummed to life. Today, the anxiety haunts conference rooms and home offices, where knowledge workers refresh job boards compulsively and wonder if their expertise will survive the next quarterly earnings call. The statistics paint a stark picture: around 37 per cent of employees now worry about automation threatening their jobs, a marked increase from just a decade ago.
This isn't unfounded paranoia. Anthropic CEO Dario Amodei recently predicted that AI could eliminate half of all entry-level white-collar jobs within five years. Meanwhile, 14 per cent of all workers have already been displaced by AI, though public perception inflates this dramatically. Those not yet affected believe 29 per cent have lost their jobs to automation, whilst those who have experienced displacement estimate the rate at 47 per cent. The gap between perception and reality reveals something crucial: the fear itself has become as economically significant as the displacement.
But history offers an unexpected comfort. We've navigated technological upheaval before, and certain policy interventions have demonstrably worked. The question isn't whether automation will reshape knowledge work (it will), but which protections can transform this transition from a zero-sum catastrophe into a managed evolution that preserves human dignity whilst unlocking genuine productivity gains.
The Ghost of Industrial Automation Past
To understand what might work for today's knowledge workers, we need to examine what actually worked for yesterday's factory workers. The 1950s through 1970s witnessed extraordinary automation across manufacturing. The term “automation” itself was coined in the 1940s at the Ford Motor Company, initially applied to automatic handling of parts in metalworking processes.
When Unions Made Automation Work
What made this transition manageable wasn't market magic or technological gradualism. It was policy, particularly the muscular collective bargaining agreements that characterised the post-war period. By the 1950s, more than a third of the American workforce belonged to a union, a density that helped build the American middle class.
The so-called “Treaty of Detroit” between General Motors and the United Auto Workers in 1950 established a framework that would characterise US labour relations through the 1980s. In exchange for improved wages and benefits (including cost-of-living adjustments, pensions beginning at 125 dollars per month, and health care provisions), the company retained all managerial prerogatives. The compromise was explicit: workers would accept automation's march in exchange for sharing its productivity gains.
But the Treaty represented more than a simple exchange. It embodied a fundamentally different understanding of technological progress—one where automation's bounty wasn't hoarded by shareholders but distributed across the economic system. When General Motors installed transfer machines that could automatically move engine blocks through 500 machining operations, UAW members didn't riot. They negotiated. The company's profit margins soared, but so did workers' purchasing power. A factory worker in 1955 could afford a house, a car, healthcare, and college for their children. That wasn't market equilibrium—it was conscious policy design.
The Golden Age of Shared Prosperity
The numbers tell an extraordinary story. Collective bargaining performed impressively after the Second World War, more than tripling weekly earnings in manufacturing between 1945 and 1970 and winning union workers an unprecedented measure of security against old age, illness and unemployment. Real wages for production workers rose 75 per cent between 1947 and 1973, even as automation eliminated millions of manual tasks. The productivity gains from automation flowed downward, not just upward.
The system worked because multiple protections operated simultaneously. The Wagner Act of 1935 bolstered unions, whilst minimum wage laws mediated automation's displacing effects by securing wage floors and benefits. By the mid-1950s, the UAW fought for a guaranteed annual wage, a demand met in 1956 through Supplemental Unemployment Benefits funded by automotive companies.
These mechanisms mattered because automation didn't arrive gradually. Between 1950 and 1960, the automobile industry's output per worker-hour increased by 60 per cent. Entire categories of work vanished—pattern makers, foundry workers, assembly line positions that had employed thousands. Yet unemployment in Detroit remained manageable because displaced workers received benefits, retraining and alternative placement. The social compact held.
The Unravelling
Yet this system contained the seeds of its own decline. The National Labor Relations Act enshrined the right to unionise, but the system it created required unions to organise each new factory individually rather than bargaining by industry. In many European countries, collective bargaining agreements extended automatically to other firms in the same industry, but in the United States, they usually reached no further than a plant's gates.
This structural weakness became catastrophic when globalisation arrived. Companies could simply build new factories in right-to-work states or overseas, beyond the reach of existing agreements. The institutional infrastructure that had made automation manageable began fragmenting. Between 1975 and 1985, union membership fell by 5 million. By the end of the 1980s, less than 17 per cent of American workers were organised, half the proportion of the early 1950s. The climax came when President Ronald Reagan broke the illegal Professional Air Traffic Controllers Organisation strike in 1981, dealing a major blow to unions.
What followed was predictable. As union density collapsed, productivity and wages decoupled. Between 1973 and 2014, productivity increased by 72.2 per cent whilst median compensation rose only 8.7 per cent. The automation that had once enriched workers now enriched only shareholders. The social compact shattered.
The lesson from this history isn't that industrial automation succeeded. Rather, it's that automation's harms were mitigated when workers possessed genuine structural power, and those harms accelerated when that power eroded. Union decline occurred across the entire private sector, not just manufacturing. When the institutional mechanisms that had distributed automation's gains disappeared, so did automation's promise.
The Knowledge Worker Predicament
Today's knowledge workers face automation without the institutional infrastructure that cushioned industrial workers. A Forbes Advisor survey conducted in 2023 found that 77 per cent of respondents were “concerned” that AI will cause job loss within the next 12 months, with 44 per cent “very concerned”. A Reuters/Ipsos poll in 2025 found that 71 per cent of US adults fear AI could permanently displace workers. The World Economic Forum's 2025 Future of Jobs Report indicates that 41 per cent of employers worldwide intend to reduce their workforce in the next five years due to AI automation.
The Anxiety Is Visceral and Immediate
The fear permeates every corner of knowledge work. Copywriters watch ChatGPT produce adequate marketing copy in seconds. Paralegals see document review, once the work of entire teams, handled by algorithms. Junior financial analysts discover that AI can generate investment reports indistinguishable from human work. Customer service representatives receive termination notices as conversational AI systems assume their roles. The anxiety isn't abstract. It's visceral and immediate.
Goldman Sachs predicted in 2023 that the equivalent of 300 million full-time jobs worldwide could be lost or degraded as a result of AI adoption. McKinsey projects that 30 per cent of work hours could be automated by 2030, with 70 per cent of job skills changing over the same period.
Importantly, AI agents automate tasks, not jobs. Knowledge-work positions are combinations of tasks (some focused on creativity, context and relationships, whilst others are repetitive). Agents can automate repetitive tasks but struggle with tasks requiring judgement, deep domain knowledge or human empathy. If businesses capture all productivity gains from AI without sharing, workers may only produce more for the same pay, perpetuating inequality.
The Pipeline Is Constricting
Research from SignalFire shows Big Tech companies reduced new graduate hiring by 25 per cent in 2024 compared to 2023. The pipeline that once fed young talent into knowledge work careers has begun constricting. Entry-level positions that provided training and advancement now disappear entirely, replaced by AI systems supervised by a skeleton crew of senior employees. The ladder's bottom rungs are being sawn off.
Within specific industries, anxiety correlates with exposure: 81.6 per cent of digital marketers worry that content writers will lose their jobs to AI. The International Monetary Fund found that 79 per cent of employed women in the US work in jobs at high risk of automation, compared to 58 per cent of men. The automation wave doesn't strike evenly; it targets the most vulnerable first.
The Institutional Vacuum
Yet knowledge workers lack the collective bargaining infrastructure that once protected industrial workers. Private sector union density in the United States hovers around 6 per cent. The structural power that enabled the Treaty of Detroit has largely evaporated. When a software engineer receives a redundancy notice, there's no union representative negotiating severance packages or alternative placement. There's no supplemental unemployment benefit fund. There's an outdated résumé and a LinkedIn profile that suddenly needs updating.
The contrast with industrial automation couldn't be starker. When automation arrived at GM's factories, workers had mechanisms to negotiate their futures. When automation arrives at today's corporations, workers have non-disclosure agreements and non-compete clauses. The institutional vacuum is nearly total.
This absence creates a particular cruelty. Knowledge workers invested heavily in their human capital—university degrees, professional certifications, years of skill development. They followed the social script: educate yourself, develop expertise, secure middle-class stability. Now that expertise faces obsolescence at a pace that makes retraining feel futile. A paralegal who spent three years mastering document review discovers their skillset has a half-life measured in months, not decades.
Three Policy Pillars That Actually Work
Despite this bleak landscape, certain policy interventions have demonstrated genuine effectiveness in managing technological transitions.
Re-skilling Guarantees
The least effective approach to worker displacement is the one that dominates American policy discourse: underfunded, voluntary training programmes. The Trade Adjustment Assistance programme, designed to help US workers displaced by trade liberalisation, offers a cautionary tale.
Why American Retraining Fails
Research from Mathematica Policy Research found that TAA was not effective at increasing employability. Participation significantly increased receipt of reemployment services and education, but the impacts on productive activity were small. Labour market outcomes for participants were significantly worse during the first two years than for their matched comparison group, and in the final year, TAA participants earned about 3,300 dollars less than their comparisons.
The failures run deeper than poor outcomes. The programme operated on a fundamentally flawed assumption: that workers displaced by economic forces could retrain themselves whilst managing mortgage payments, childcare costs and medical bills. The cognitive load of financial precarity makes focused learning nearly impossible. When you're worried about keeping the lights on, mastering Python becomes exponentially harder.
Coverage proved equally problematic. Researchers found that the TAA accounted for only 6 per cent of the government assistance provided to workers laid off due to increased Chinese import competition from 1990 to 2007. Of the 88,001 workers eligible in 2019, only 32 per cent received benefits and services. The programme helped a sliver of those who needed it, leaving the vast majority to navigate displacement alone.
Singapore's Blueprint for Success
Effective reskilling requires a fundamentally different architecture. The most successful models share several characteristics: universal coverage, immediate intervention, substantial funding, employer co-investment and ongoing income support.
Singapore's SkillsFuture programme demonstrates what comprehensive reskilling can achieve. In 2024, 260,000 Singaporeans used their SkillsFuture Credit, a 35 per cent increase from 192,000 in 2023. Singaporeans aged 40 and above receive a SkillsFuture Credit top-up of 4,000 Singapore dollars that will not expire. This is in addition to the Mid-Career Enhanced Subsidy, which offers subsidies of up to 90 per cent of course fees.
The genius of SkillsFuture lies in its elimination of friction. Workers don't navigate byzantine application processes or prove eligibility through exhaustive documentation. The credit exists in their accounts, immediately available. Training providers compete for learners, creating a market dynamic that ensures quality and relevance. The government absorbs the financial risk, freeing workers to focus on learning rather than budgeting.
The programme measures outcomes rigorously. The Training Quality and Outcomes Measurement survey is administered at course completion and six months later. The results speak for themselves. The number of Singaporeans taking up courses designed with employment objectives increased by approximately 20 per cent, from 95,000 in 2023 to 112,000 in 2024. SkillsFuture Singapore-supported learners taking IT-related courses surged from 34,000 in 2023 to 96,000 in 2024. About 1.05 million Singaporeans, or 37 per cent of all Singaporeans, have used their SkillsFuture Credit since 2016.
These aren't workers languishing in training programmes that lead nowhere. They're making strategic career pivots backed by state support, transitioning from declining industries into emerging ones with their economic security intact.
Denmark's Safety Net for Learning
Denmark's flexicurity model offers another instructive example. The Danish system combines high job mobility with a comprehensive income safety net and active labour market policy. Unemployment benefit is accessible for two years, with compensation rates reaching up to 90 per cent of previous earnings for lower-paid workers.
The Danish approach recognises a truth that American policy ignores: people can't retrain effectively whilst terrified of homelessness. The generous unemployment benefits create psychological space for genuine skill development. A worker displaced from a manufacturing role can take eighteen months to retrain as a software developer without choosing between education and feeding their family.
Denmark achieves this in combination with low inequality, low unemployment and high income security. However, flexicurity alone is insufficient. The policy also needs comprehensive active labour market programmes with compulsory participation for unemployment compensation recipients. Denmark spends more on active labour market programmes than any other OECD country.
Success stems from tailor-made initiatives to individual displaced workers and stronger coordination between local level actors. The Danish government runs education and retraining programmes and provides counselling services, in collaboration with unions and employers. Unemployed workers get career counselling and paid courses, promoting job mobility over fixed-position security.
This coordination matters enormously. A displaced worker doesn't face competing bureaucracies with conflicting requirements. There's a single pathway from displacement to reemployment, with multiple institutions working in concert rather than at cross-purposes. The system treats worker transition as a collective responsibility, not an individual failing.
France's Cautionary Tale
France's Compte Personnel de Formation provides another model, though with mixed results. Implemented in 2015, the CPF is the only example internationally of an individual learning account in which training rights accumulate over time. However, in 2023, 1,335,900 training courses were taken under the CPF, down 28 per cent from 2022. The decline was most marked among users with less than a baccalauréat qualification.
The French experience reveals a critical design flaw. Individual learning accounts without adequate support services often benefit those who need them least. Highly educated workers already possess the cultural capital to navigate training systems, identify quality programmes and negotiate with employers. Less educated workers face information asymmetries and status barriers that individual accounts can't overcome alone.
The divergence in outcomes reveals a critical insight: reskilling guarantees only work when they're adequately funded, easily accessible, immediately available and integrated with income support. Programmes that require workers to navigate bureaucratic mazes whilst their savings evaporate tend to serve those who need them least.
Collective Bargaining Clauses
The second pillar draws directly from industrial automation's most successful intervention: collective bargaining that gives workers genuine voice in how automation is deployed.
Hollywood's Blueprint
The most prominent recent example comes from Hollywood. In autumn 2023, the Writers Guild of America ratified a new agreement with the Alliance of Motion Picture and Television Producers after a five-month strike. The contract may be the first major union-management agreement regulating artificial intelligence across an industry.
The WGA agreement establishes several crucial principles. Neither traditional AI nor generative AI is a writer, so no AI-produced material can be considered literary material. If a company provides generative AI content to a writer as the basis for a script, the AI content is not considered “assigned materials” or “source material” and would not disqualify the writer from eligibility for separated rights. This means the writer will be credited as the first writer, affecting writing credit, residuals and compensation.
These provisions might seem technical, but they address something fundamental: who owns the value created through human-AI collaboration? In the absence of such agreements, studios could have generated AI scripts and paid writers minimally to polish them, transforming high-skill creative work into low-paid editing. The WGA prevented this future by establishing that human creativity remains primary.
Worker Agency in AI Deployment
Critically, the agreement gives writers genuine agency. A producing company cannot require writers to use AI software. A writer can choose to use generative AI, provided the company consents and the writer follows company policies. The company must disclose if any materials given to the writer were AI-generated.
This disclosure requirement matters enormously. Without it, writers might unknowingly build upon AI-generated foundations, only to discover later that their work's legal status is compromised. Transparency creates the foundation for genuine choice.
The WGA reserved the right to assert that exploitation of writers' material to train AI is prohibited. In addition, companies agreed to meet with the Guild to discuss their use of AI. These ongoing consultation mechanisms prevent AI deployment from becoming a unilateral management decision imposed on workers after the fact.
As NewsGuild president Jon Schleuss noted, “The Writers Guild contract helps level up an area that previously no one really has dealt with in a union contract. It's a really good first step in what's probably going to be a decade-long battle to protect creative individuals from having their talent being misused or replaced by generative AI.”
European Innovations in Worker Protection
Denmark provides another model through the Hilfr2 agreement concluded in 2024 between cleaning platform Hilfr and trade union 3F. The agreement explicitly addresses concerns arising from AI use, including transparency, accountability and workers' rights. Platform workers—often excluded from traditional labour protections—gained concrete safeguards through collective action.
The Teamsters agreement with UPS in 2023 curtails surveillance in trucks and prevents potential replacement of workers with automated technology. The contract doesn't prohibit automation, but establishes that management cannot deploy it unilaterally. Before implementing driver-assistance systems or route optimisation algorithms, UPS must negotiate impacts with the union. Workers get advance notice, training and reassignment rights.
These agreements share a common structure: they don't prohibit automation, but establish clear guardrails around its deployment and ensure workers share in productivity gains. They transform automation from something done to workers into something negotiated with them.
Regulatory Frameworks Create Leverage
In Europe, broader regulatory frameworks support collective bargaining on AI. The EU's AI Act entered into force in August 2024, classifying AI in “employment, work management and access to self-employment” as a high-risk AI system. This classification triggers stringent requirements around risk management, data governance, transparency and human oversight.
The regulatory designation creates legal leverage for unions. When AI in employment contexts is classified as high-risk, unions can demand documentation about how systems operate, what data they consume and what impacts they produce. The information asymmetry that typically favours management narrows substantially.
In March 2024, UNI Europa and Friedrich-Ebert-Stiftung created a database of collective agreement clauses regarding AI and algorithmic management negotiation. The database catalogues approaches from across Europe, allowing unions to learn from each other's innovations. A clause that worked in German manufacturing might adapt to French telecommunications or Spanish logistics.
At the end of 2023, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) and Microsoft announced a partnership to discuss how AI should address workers' needs and include their voices in its development. This represents the first agreement focused on AI between a labour organisation and a technology company.
The Microsoft-AFL-CIO partnership remains more aspirational than binding, but it signals recognition from a major technology firm that AI deployment requires social license. Microsoft gains legitimacy; unions gain influence over AI development trajectories. Whether this partnership produces concrete worker protections remains uncertain, but it acknowledges that AI isn't purely a technical question—it's a labour question.
Germany's Institutional Worker Voice
Germany's Works Constitution Act demonstrates how institutional mechanisms can give workers voice in automation decisions. Works councils have participation rights in decisions about working conditions or dismissals. Proposals to alter production techniques by introducing automation must pass through worker representatives who evaluate impacts on workers.
If a company intends to implement AI-based software, it must consult with the works council and find agreement prior to going live, under Section 87 of the German Works Constitution Act. According to Section 102, the works council must be consulted before any dismissal. A notice of termination given without the works council being heard is invalid.
These aren't advisory consultations that management can ignore. They're legally binding processes that give workers substantive veto power over automation decisions. A German manufacturer cannot simply announce that AI will replace customer service roles. The works council must approve, and if approval isn't forthcoming, the company must modify its plans.
Sweden's Transition Success Story
Sweden's Job Security Councils offer perhaps the most comprehensive model of social partner collaboration on displacement. The councils are bi-partite social partner bodies in charge of transition agreements, career guidance and training services under strict criteria set in collective agreements, without government involvement. About 90 per cent of workers who receive help from the councils find new jobs within six months to two years.
Trygghetsfonden covers blue-collar workers, whilst TRR Trygghetsrådet covers 850,000 white-collar employees. According to TRR, in 2016, 88 per cent of redundant employees using TRR services found new jobs. As of 2019, 9 out of 10 active job-seeking clients found new jobs, studies or became self-employed within seven months. Among the clients, 68 per cent have equal or higher salaries than the jobs they were forced to leave.
These outcomes dwarf anything achieved by market-based approaches. Swedish workers displaced by automation don't compete individually for scarce positions. They receive coordinated support from institutions designed explicitly to facilitate transitions. The councils work because they intervene immediately after layoffs and have financial resources that public re-employment offices cannot provide. Joint ownership by unions and employers lends the councils high legitimacy. They cooperate with other institutions and can offer education, training, career counselling and financial aid, always tailored to individual needs.
The Swedish model reveals something crucial: when labour and capital jointly manage displacement, outcomes improve dramatically for both. Companies gain workforce flexibility without social backlash. Workers gain security without employment rigidity. It's precisely the bargain that made the Treaty of Detroit function.
AI Usage Covenants
The third pillar involves establishing clear contractual and regulatory frameworks governing how AI is deployed in employment contexts.
US Federal Contractor Guidance
On 29 April 2024, the Department of Labor's Office of Federal Contract Compliance Programs released guidance to federal contractors regarding AI use in employment practices. The guidance reminds contractors of existing legal obligations and the potentially harmful effects of AI on employment decisions if used improperly.
The guidance informs federal contractors that using automated systems, including AI, does not prevent them from violating federal equal employment opportunity and non-discrimination obligations. Recognising that “AI has the potential to embed bias and discrimination into employment decision-making processes,” the guidance advises contractors to ensure AI systems are designed and implemented properly to prevent and mitigate inequalities.
This represents a significant shift in regulatory posture. For decades, employment discrimination law focused on intentional bias or demonstrable disparate impact. AI systems introduce a new challenge: discrimination that emerges from training data or algorithmic design choices, often invisible to the employers deploying the systems. The Department of Labor's guidance establishes that ignorance provides no defence; contractors remain liable for discriminatory outcomes even when AI produces them.
Europe's Comprehensive AI Act
The EU's AI Act, which entered into force on 1 August 2024, takes a more comprehensive approach. Developers of AI technologies are subject to stringent risk management, data governance, transparency and human oversight obligations. The Act classifies AI in employment as a high-risk AI system, triggering extensive compliance requirements.
These requirements aren't trivial. Developers must conduct conformity assessments, maintain technical documentation, implement quality management systems and register their systems in an EU database. Deployers must conduct fundamental rights impact assessments, ensure human oversight and maintain logs of system operations. The regulatory burden creates incentives to design AI systems with worker protections embedded from inception.
State-Level Innovation in America
Colorado's Anti-Discrimination in AI Law imposes different obligations on developers and deployers of AI systems. Developers and deployers using AI in high-risk use cases are subject to higher standards, with high-risk areas including consequential decisions in education, employment, financial services, healthcare, housing and insurance.
Colorado's law introduces another innovation: an obligation to conduct impact assessments before deploying AI in high-risk contexts. These assessments must evaluate potential discrimination, establish mitigation strategies and document decision-making processes. The law creates an audit trail that regulators can examine when discrimination claims emerge.
The California Privacy Protection Agency issued draft regulations governing automated decision-making technology under the California Consumer Privacy Act. The draft regulations propose granting consumers (including employees) the right to receive pre-use notice regarding automated decision-making technology and to opt out of certain activities.
The opt-out provision potentially transforms AI deployment in employment. If workers can refuse algorithmic management, employers must maintain parallel human-centred processes. This requirement prevents total algorithmic domination whilst creating pressure to design AI systems that workers actually trust.
Building Corporate Governance Structures
Organisations should implement governance structures assigning responsibility for AI oversight and compliance, develop AI policies with clear guidelines, train staff on AI capabilities and limitations, establish audit procedures to test AI systems for bias, and plan for human oversight of significant AI-generated decisions.
These governance structures work best when they include worker representation. An AI ethics committee populated entirely by executives and technologists will miss impacts that workers experience daily. Including union representatives or worker council members in AI governance creates feedback loops that surface problems before they metastasise.
More than 200 AI-related laws have been introduced in state legislatures across the United States. The proliferation creates a patchwork that can be difficult to navigate, but it also represents genuine experimentation with different approaches to AI governance. California's focus on transparency, Colorado's emphasis on impact assessments, and Illinois's regulations around AI in hiring each test different mechanisms for protecting workers. Eventually, successful approaches will influence federal legislation.
What Actually Mitigates the Fear
Having examined the evidence, we can now answer the question posed at the outset: which policies best mitigate existential fears among knowledge workers whilst enabling responsible automation?
Piecemeal Interventions Don't Work
The data points to an uncomfortable truth: piecemeal interventions don't work. Voluntary training programmes with poor funding fail. Individual employment contracts without collective bargaining power fail. Regulatory frameworks without enforcement mechanisms fail. What works is a comprehensive system operating on multiple levels simultaneously.
The most effective systems share several characteristics. First, they provide genuine income security during transitions. Danish flexicurity and Swedish Job Security Councils demonstrate that workers can accept automation when they won't face destitution whilst retraining. The psychological difference between retraining with a safety net and retraining whilst terrified of poverty cannot be overstated. Fear shrinks cognitive capacity, making learning exponentially harder.
Procedural Justice Matters
Second, they ensure workers have voice in automation decisions through collective bargaining or worker councils. The WGA contract and German works councils show that procedural justice matters as much as outcomes. Workers can accept significant workplace changes when they've participated in shaping those changes. Unilateral management decisions breed resentment and resistance even when objectively reasonable.
Third, they make reskilling accessible, immediate and employer-sponsored. Singapore's SkillsFuture demonstrates that when training is free, immediate and tied to labour market needs, workers actually use it. Programmes that require workers to research training providers, evaluate programme quality, arrange financing and coordinate schedules fail because they demand resources that displaced workers lack.
Legal Frameworks Prevent the Worst Abuses
Fourth, they establish clear legal frameworks around AI deployment in employment contexts. The EU AI Act and various US state laws create baseline standards that prevent the worst abuses. Without such frameworks, AI deployment becomes a race to the bottom, with companies competing on how aggressively they can eliminate labour costs.
Fifth, and perhaps most importantly, they ensure workers share in productivity gains. If businesses capture all of AI's productivity gains, workers simply produce more value for the same pay. The Treaty of Detroit's core bargain (accept automation in exchange for sharing gains) remains as relevant today as it was in 1950.
Workers Need Stake in Automation's Upside
This final point deserves emphasis. When automation increases productivity by 40 per cent but wages remain flat, workers experience automation as pure extraction. They produce more value whilst receiving identical compensation—a transfer of wealth from labour to capital. No amount of retraining programmes or worker councils will make this palatable. Workers need actual stake in automation's upside.
The good news is that 74 per cent of workers say they're willing to learn new skills or retrain for future jobs. Nine in 10 companies planning to use AI in 2024 stated they were likely to hire more workers as a result, with 96 per cent favouring candidates demonstrating hands-on experience with AI. The demand for AI-literate workers exists; what's missing is the infrastructure to create them.
The Implementation Gap
Yet a 2024 Boston Consulting Group study demonstrates the difficulties: whilst 89 per cent of respondents said their workforce needs improved AI skills, only 6 per cent said they had begun upskilling in “a meaningful way.” The gap between intention and implementation remains vast.
Why the disconnect? Because corporate reskilling requires investment, coordination and patience—all scarce resources in shareholder-driven firms obsessed with quarterly earnings. Training workers for AI-augmented roles might generate returns in three years, but executives face performance reviews in three months. The structural incentives misalign catastrophically.
Corporate Programmes Aren't Enough
Corporate reskilling programmes provide some hope. PwC has implemented a 3 billion dollar programme for upskilling and reskilling. Amazon launched an optional upskilling programme investing over 1.2 billion dollars. AT&T's partnership with universities has retrained hundreds of thousands of employees. Siemens' digital factory training programmes combine conventional manufacturing knowledge with AI and robotics expertise.
These initiatives matter, but they're insufficient, and relying solely on voluntary corporate programmes recreates the inequality that characterised industrial automation's decline. They reach workers at large, prosperous firms with margins sufficient to fund extensive training. Workers at small and medium enterprises, in declining industries or in precarious employment receive nothing. The pattern replicates the racial and geographic exclusions that limited the Treaty of Detroit's benefits to a privileged subset.
The Two-Tier System
We're creating a two-tier system: knowledge workers at elite firms who surf the automation wave successfully, and everyone else who drowns. This isn't just unjust—it's economically destructive. An economy where automation benefits only a narrow elite will face consumption crises as the mass market hollows out.
Building the Infrastructure of Managed Transition
Today's knowledge workers face challenges that industrial workers never encountered. The pace of technological change is faster. The geographic dispersion of work is greater. The decline of institutional labour power is more advanced. Yet the fundamental policy challenge remains the same: how do we share the gains from technological progress whilst protecting human dignity during transitions?
Multi-Scale Infrastructure
The answer requires building institutional infrastructure that currently doesn't exist. This infrastructure must operate at multiple scales simultaneously—individual, organisational, sectoral and national.
At the individual level, workers need portable benefits that travel with them regardless of employer. Health insurance, retirement savings and training credits should follow workers through career transitions rather than evaporating at each displacement. Singapore's SkillsFuture Credit provides one model; several US states have experimented with portable benefit platforms that function regardless of employment status.
At the organisational level, companies need frameworks for responsible AI deployment. These frameworks should include impact assessments before implementing AI in employment contexts, genuine worker participation in automation decisions, and profit-sharing mechanisms that distribute productivity gains. The WGA contract demonstrates what such frameworks might contain; Germany's Works Constitution Act shows how to institutionalise them.
Sectoral and National Solutions
At the sectoral level, industries need collective bargaining structures that span employers. The Treaty of Detroit protected auto workers at General Motors, but it didn't extend to auto parts suppliers or dealerships. Today's knowledge work increasingly occurs across firm boundaries—freelancers, contractors, gig workers, temporary employees. Protecting these workers requires sectoral bargaining that covers everyone in an industry regardless of employment classification.
At the national level, countries need comprehensive active labour market policies that treat displacement as a collective responsibility. Denmark and Sweden demonstrate what's possible when societies commit resources to managing transitions. These systems aren't cheap (Denmark spends more on active labour market programmes than any other OECD nation), but they're investments that generate returns through social stability and economic dynamism.
Concrete Policy Proposals
Policymakers could consider extending unemployment insurance for AI-displaced workers, allowing sufficient time to acquire new certifications. The current 26-week maximum in most US states barely covers job searching, let alone substantial retraining. Extending benefits to 18 or 24 months for workers pursuing recognised training programmes would create space for genuine skill development.
Wage insurance, especially for workers aged 50 and older, could support workers where reskilling isn't viable. A 58-year-old mid-level manager displaced by AI might reasonably conclude that retraining as a data scientist isn't practical. Wage insurance that covers a portion of earnings differences when taking a lower-paid position acknowledges this reality whilst keeping workers attached to the labour force.
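The mechanics of such a scheme are simple enough to sketch. The 50 per cent replacement rate and the annual payout cap below are hypothetical policy parameters chosen for illustration, not figures from any enacted programme.

```python
# A minimal sketch of how a wage-insurance payout might be computed.
# The replacement rate and annual cap are hypothetical policy parameters,
# not drawn from any enacted scheme.

def wage_insurance_payout(old_annual_wage, new_annual_wage,
                          replacement_rate=0.5, annual_cap=10_000):
    """Annual payout covering part of the drop from old wage to new wage."""
    shortfall = max(old_annual_wage - new_annual_wage, 0)
    return min(shortfall * replacement_rate, annual_cap)

# A displaced manager moving from an 80,000 role to a 56,000 role:
# the scheme covers half the 24,000 shortfall (12,000), capped at 10,000.
print(wage_insurance_payout(80_000, 56_000))  # prints 10000
```

The cap keeps fiscal exposure bounded, while the replacement rate preserves the incentive to seek the best-paying available position rather than the first one offered.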
An “AI Adjustment Assistance” programme would establish clear eligibility criteria for workers displaced by AI. This would mirror the Trade Adjustment Assistance programme for trade displacement but with the design failures corrected: universal coverage for all AI-displaced workers, immediate benefits without complex eligibility determinations, generous income support during retraining, and employer co-investment requirements.
Apprenticeships and Legal Protections
AI response legislation could encourage registered apprenticeships that lead to good jobs; they appear to be the strategy best positioned to train workers for new AI roles. South Carolina's simplified 1,000 dollar per apprentice per year tax incentive has helped boost apprenticeships with potential for national scale. Expanding this model nationally whilst ensuring apprenticeships lead to family-sustaining wages would create pathways from displacement to reemployment.
The No Robot Bosses Act, proposed in the United States, would prohibit employers from relying exclusively on automated decision-making systems in employment decisions such as hiring or firing. The bill would require testing and oversight of decision-making systems to ensure they do not have discriminatory impact on workers. This legislation addresses a crucial gap: current anti-discrimination law struggles with algorithmic bias because traditional doctrines assume human decision-makers.
Enforcement Must Have Teeth
Critically, these policies must include enforcement mechanisms with real teeth. Regulations without enforcement become suggestions. The EU AI Act creates substantial penalties for non-compliance—up to 7 per cent of global revenue for the most serious violations. These penalties matter because they change corporate calculus. A fine large enough to affect quarterly earnings forces executives to take compliance seriously.
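A rough calculation shows why revenue-linked penalties bite harder than fixed fines. Only the 7 per cent ceiling below comes from the Act; the firm and its revenue are hypothetical, and the Act's actual penalty structure (the higher of a fixed euro amount or a revenue percentage, varying by violation type) is simplified here.

```python
# A minimal illustration of revenue-linked penalty exposure. The 7 per cent
# ceiling reflects the EU AI Act's stated maximum for the most serious
# violations; the firm and revenue figure are hypothetical, and the Act's
# full penalty structure is simplified to a single percentage.

def max_revenue_linked_fine(global_revenue, ceiling=0.07):
    """Upper bound of a fine expressed as a share of global revenue."""
    return global_revenue * ceiling

# A hypothetical firm with 10 billion in global annual revenue faces
# exposure of up to 700 million, dwarfing a typical fixed-sum fine.
print(f"{max_revenue_linked_fine(10_000_000_000):,.0f}")  # 700,000,000
```

Because exposure scales with firm size, the penalty cannot be absorbed as a routine cost of doing business the way a flat fine can.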
The World Economic Forum estimates that by 2025, 50 per cent of all employees will need reskilling as new technology is adopted. The Society for Human Resource Management's 2025 research estimates that 19.2 million US jobs face high or very high risk of automation displacement. The scale of the challenge demands policy responses commensurate with its magnitude.
The Growing Anxiety-Policy Gap
Yet current policy remains woefully inadequate. A 2024 Gallup poll found that nearly 25 per cent of workers worry that their jobs could become obsolete because of AI, up from 15 per cent in 2021. In the same study, over 70 per cent of chief human resources officers predicted AI would replace jobs within the next three years. The gap between worker anxiety and policy response yawns wider daily.
A New Social Compact
What's needed is nothing short of a new social compact for the age of AI. This compact must recognise that automation isn't inevitable in its current form; it's a choice shaped by policy, power and institutional design. The Treaty of Detroit wasn't a natural market outcome; it was the product of sustained organising, political struggle and institutional innovation. Today's knowledge workers need similar infrastructure.
This infrastructure must include universal reskilling guarantees that don't require workers to bankrupt themselves whilst retraining. It must include collective bargaining rights that give workers genuine voice in how AI is deployed. It must include AI usage covenants that establish clear legal frameworks around employment decisions. And it must include mechanisms to ensure workers share in the productivity gains that automation generates.
Political Will Over Economic Analysis
The pathway forward requires political courage. Extending unemployment benefits costs money. Supporting comprehensive reskilling costs money. Enforcing AI regulations costs money. These investments compete with other priorities in constrained budgets. Yet the alternative—allowing automation to proceed without institutional guardrails—costs far more through social instability, wasted human potential and economic inequality that undermines market functionality.
The existential fear that haunts today's knowledge workers isn't irrational. It's a rational response to a system that currently distributes automation's costs to workers whilst concentrating its benefits with capital. The question isn't whether we can design better policies; we demonstrably can, as the evidence from Singapore, Denmark, Sweden and even Hollywood shows. The question is whether we possess the political will to implement them before the fear itself becomes as economically destructive as the displacement it anticipates.
The Unavoidable First Step
History suggests the answer depends less on economic analysis than on political struggle. The Treaty of Detroit emerged not from enlightened management but from workers who shut down production until their demands were met. The WGA contract came after five months of picket lines, not conference room consensus. The Danish flexicurity model reflects decades of social democratic institution-building, not technocratic optimisation.
Knowledge workers today face a choice: organise collectively to demand managed transition, or negotiate individually from positions of weakness. The policies that work share a common prerequisite: workers powerful enough to demand them. Building that power remains the unavoidable first step toward taming automation's storm. Everything else is commentary.
References & Sources
AIPRM. (2024). “50+ AI Replacing Jobs Statistics 2024.” https://www.aiprm.com/ai-replacing-jobs-statistics/
Center for Labor and a Just Economy at Harvard Law School. (2024). “Worker Power and the Voice in the AI Response Report.” https://clje.law.harvard.edu/app/uploads/2024/01/Worker-Power-and-the-Voice-in-the-AI-Response-Report.pdf
Computer.org. (2024). “Reskilling for the Future: Strategies for an Automated World.” https://www.computer.org/publications/tech-news/trends/reskilling-strategies
CORE-ECON. “Application: Employment security and labour market flexibility in Denmark.” https://books.core-econ.org/the-economy/macroeconomics/02-unemployment-wages-inequality-10-application-labour-market-denmark.html
Emerging Tech Brew. (2023). “The WGA contract could be a blueprint for workers fighting for AI rules.” https://www.emergingtechbrew.com/stories/2023/10/06/wga-contract-ai-unions
Encyclopedia.com. “General Motors-United Auto Workers Landmark Contracts.” https://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/general-motors-united-auto-workers-landmark-contracts
Equal Times. (2024). “Trade union strategies on artificial intelligence and collective bargaining on algorithms.” https://www.equaltimes.org/trade-union-strategies-on?lang=en
Eurofound. (2024). “Collective bargaining on artificial intelligence at work.” https://www.eurofound.europa.eu/en/publications/all/collective-bargaining-on-artificial-intelligence-at-work
European Parliament. (2024). “Addressing AI risks in the workplace.” https://www.europarl.europa.eu/RegData/etudes/BRIE/2024/762323/EPRS_BRI(2024)762323_EN.pdf
Final Round AI. (2025). “AI Job Displacement 2025: Which Jobs Are At Risk?” https://www.finalroundai.com/blog/ai-replacing-jobs-2025
Growthspace. (2024). “Upskilling and Reskilling in 2024.” https://www.growthspace.com/post/future-of-work-upskilling-and-reskilling
National University. (2024). “59 AI Job Statistics: Future of U.S. Jobs.” https://www.nu.edu/blog/ai-job-statistics/
OECD. (2015). “Back to Work Sweden: Improving the Re-employment Prospects of Displaced Workers.” https://www.oecd.org/content/dam/oecd/en/publications/reports/2015/12/back-to-work-sweden_g1g5efbd/9789264246812-en.pdf
OECD. (2024). “Individualising training access schemes: France – the Compte Personnel de Formation.” https://www.oecd.org/en/publications/individualising-training-access-schemes-france-the-compte-personnel-de-formation-personal-training-account-cpf_301041f1-en.html
SEO.ai. (2025). “AI Replacing Jobs Statistics: The Impact on Employment in 2025.” https://seo.ai/blog/ai-replacing-jobs-statistics
SkillsFuture Singapore. (2024). “SkillsFuture Year-In-Review 2024.” https://www.ssg.gov.sg/newsroom/skillsfuture-year-in-review-2024/
TeamStage. (2024). “Jobs Lost to Automation Statistics in 2024.” https://teamstage.io/jobs-lost-to-automation-statistics/
TUAC. (2024). “The Swedish Job Security Councils – A case study on social partners' led transitions.” https://tuac.org/news/the-swedish-job-security-councils-a-case-study-on-social-partners-led-transitions/
U.S. Department of Labor. “Chapter 3: Labor in the Industrial Era.” https://www.dol.gov/general/aboutdol/history/chapter3
U.S. Government Accountability Office. (2001). “Trade Adjustment Assistance: Trends, Outcomes, and Management Issues.” https://www.gao.gov/products/gao-01-59
U.S. Government Accountability Office. (2012). “Trade Adjustment Assistance: Changes to the Workers Program.” https://www.gao.gov/products/gao-12-953
Urban Institute. (2024). “How Government Can Embrace AI and Workers.” https://www.urban.org/urban-wire/how-government-can-embrace-ai-and-workers
Writers Guild of America. (2023). “Artificial Intelligence.” https://www.wga.org/contracts/know-your-rights/artificial-intelligence
Writers Guild of America. (2023). “Summary of the 2023 WGA MBA.” https://www.wgacontract2023.org/the-campaign/summary-of-the-2023-wga-mba
Center for American Progress. (2024). “Unions Give Workers a Voice Over How AI Affects Their Jobs.” https://www.americanprogress.org/article/unions-give-workers-a-voice-over-how-ai-affects-their-jobs/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk