Silicon Valley Heard 81,000 People: It Chose Not to Listen

More than eighty thousand people walked into a room, metaphorically speaking, and told one of the world's most prominent artificial intelligence companies exactly what frightens them. The question now is whether anyone on the other side of the screen was genuinely listening.

In December 2025, Anthropic opened its Claude chatbot to a sweeping conversational experiment. Over one week, 80,508 users across 159 countries and 70 languages sat down with an AI-powered interviewer and answered open-ended questions about what they wanted from artificial intelligence, and what kept them awake at night. The result is what Anthropic calls the largest multilingual qualitative study on AI aspirations ever conducted. It is also, depending on how you read the data, either a roadmap for the industry or a warning siren.

The findings landed with a paradox at their centre. The features that draw people to AI are the same features that terrify them. Productivity gains? Yes, please, said the 32% of respondents who reported AI had already helped them work faster. But 22.2% named job displacement and economic anxiety as a primary fear, while 21.9% worried about losing their autonomy and agency. Perhaps most striking was the 16% who expressed concern about losing the ability to think critically, a fear of cognitive atrophy that suggests people are not merely worried about their livelihoods, but about their minds.

This is not an abstract policy debate. It is a massive, real-time expression of ambivalence from the very people who are already using the technology. And it lands at a moment when the gap between what AI companies say and what the public feels has never been wider.

The Light, the Shade, and the Space Between

Anthropic branded the study “Light and Shade,” a title that captures the contradictory landscape the data reveals. On the light side, 67% of respondents held a broadly positive view of AI. The top three aspirations, professional excellence at 18.8%, personal transformation at 13.7%, and life management at 13.5%, accounted for 46% of all responses. People were not asking AI to do their jobs. They wanted it to handle the repetitive, soul-draining tasks so they could focus on strategy, creativity, and, quite simply, leaving work on time. Time freedom itself ranked as the fourth most cited aspiration at 11.1%, followed by financial independence, societal transformation, and entrepreneurship.

But the shade is thick. Unreliability topped the list of concerns at 26.7%, ahead of both job fears and autonomy worries. The fifth major concern, cited by 15% of respondents, was the absence of adequate regulation and unclear accountability when things go wrong. On average, each respondent voiced 2.3 distinct concerns. Only 11% said they had zero fears about AI. The remaining 89% carried a mixture of hope and dread that defies the neat narratives preferred by corporate communications departments.

Regional differences added further complexity. Users in Sub-Saharan Africa and Latin America expressed 10 to 12% lower rates of negative sentiment compared with those in Western Europe and North America. In emerging economies, AI is framed less as a threat and more as a “capital bypass mechanism,” a way to start businesses without the traditional infrastructure of funding, hiring, and physical premises. The vision of AI for entrepreneurship resonated most strongly in Africa, South and Central Asia, the Middle East, and Latin America, where respondents described AI as a way to circumvent the capital barriers that have historically prevented economic participation. In East Asian markets, by contrast, the fear of cognitive degradation ran notably higher, with 18% expressing concern about cognitive atrophy and 13% worried about loss of meaning, a culturally distinct set of anxieties compared with the West's emphasis on regulatory concerns.

When asked whether AI had already taken steps towards their goals, 81% of respondents said yes. Productivity gains came first at 32%, but unmet expectations came second at 18.9%, ahead of cognitive partnership at 17.2%, learning support at 9.9%, and emotional support at 6.1%. That nearly one in five respondents reported that AI had failed to meet their expectations is itself a data point worth pausing on. The technology's most enthusiastic adopters are already encountering its limits, and that experience is shaping their anxieties about the future.

The study has limitations that deserve acknowledgement. Its 80,508 respondents were all existing Claude users, not a random cross-section of humanity. Self-selection bias is real. But the sheer scale, the linguistic diversity, and the open-ended methodology give it a weight that smaller, more structured surveys often lack. And its findings are remarkably consistent with independent research from institutions with no commercial stake in the outcome.

A Perception Gap Wide Enough to Drive a Data Centre Through

If Anthropic's study tells us what users feel, a constellation of other research tells us how dramatically those feelings diverge from the boardroom consensus.

In late 2025, nonprofit organisation JUST Capital, in partnership with The Harris Poll and the Robin Hood Foundation, surveyed corporate executives, institutional investors, and the American public about AI. The results exposed a chasm. Roughly 93% of corporate leaders and 80% of investors said they believed AI would have a net positive impact on society within five years. Among the general public, that figure dropped to 58%. On productivity, the gap was even starker: 98% of corporate leaders believed AI would boost worker productivity, compared with 47% of the public.

Nearly half of Americans surveyed by JUST Capital expected AI to replace workers and eliminate jobs outright. Only 20% of executives shared that expectation. Flip the lens: 64% of executives said AI would help workers be more productive in their current roles. Just 23% of the public agreed. On the question of how AI profits should be distributed, the public favoured spreading gains across lower prices for customers, workforce support for displaced workers, and investments in safety and security. Investors, predictably, believed the majority of gains should flow to shareholders.

The safety spending divide was equally revealing. Roughly 60% of investors and half of the public said companies should spend more than 5% of their total AI investment on safety. Meanwhile, 59% of corporate leaders said spending should be capped at 5%. When the people building AI want to spend less on safety than the people using it, the trust implications are difficult to overstate.

Pew Research Center has been tracking American sentiment on AI with growing urgency. In a June 2025 survey, 50% of US adults said the increased use of AI in daily life made them feel more concerned than excited, up from 37% in 2021, a 13-percentage-point increase in roughly four years. Only 10% said they were more excited than concerned. More than half, 53%, said AI would worsen people's ability to think creatively. Fifty per cent said the same about forming meaningful relationships. Fifty-six per cent of the public expressed extreme or very high concern about AI eliminating jobs, more than double the 25% of AI experts who shared that level of worry. On the question of whether they trusted the US government to regulate AI effectively, Americans were nearly evenly split: 44% expressed some trust, while 47% had little to none.

The partisan dimension is worth noting. Pew found that nearly identical shares of Republicans and Democrats, 50% and 51% respectively, said they were more concerned than excited about AI's growing use in daily life. This bipartisan unease represents a notable shift; in previous years, Republicans had been consistently more concerned. The convergence suggests that AI anxiety has transcended the familiar left-right divides of American politics.

The 2025 Edelman Trust Barometer added an international dimension. Trust in AI ranged from 87% in China and 67% in Brazil down to 39% in Germany, 36% in the United Kingdom, and just 32% in the United States. Three times as many Americans rejected the growing use of AI (49%) as embraced it (17%). In the UK, 71% of the bottom income quartile felt they would be left behind rather than realise any advantages from generative AI. Two-thirds of respondents in developed nations believed business leaders would not be fully honest with employees about the impact of AI on jobs. Edelman also found a significant class divide within the workplace: only one in four non-managers regularly used AI, compared with nearly two-thirds of managers, suggesting that the benefits of AI are accruing unevenly even within organisations.

The Stanford Institute for Human-Centered Artificial Intelligence's 2025 AI Index Report confirmed a global trust paradox: countries with the highest AI investment and the most advanced AI ecosystems expressed the most scepticism about AI products and services. In the United States, only 39% of people surveyed believed AI products were more beneficial than harmful, compared with 80% in Indonesia and 83% in China. Confidence that AI companies protect personal data fell globally from 50% in 2023 to 47% in 2024.

These are not marginal findings from obscure polls. They represent the most comprehensive body of public opinion data on artificial intelligence ever assembled, and they all point in the same direction: the public is significantly more worried about AI than the people building it believe them to be.

Warnings from Within the Cathedral

What makes this moment unusual is that some of the loudest warnings are coming from inside the industry itself. Anthropic's chief executive, Dario Amodei, has been remarkably blunt for the head of a company whose AI technology has earned it a valuation in the tens of billions of dollars. In May 2025, Amodei warned that rapid advances in AI could eliminate up to 50% of all entry-level white-collar jobs within five years, potentially pushing unemployment to 10 to 20%, the highest rates since the Great Depression.

“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” Amodei told CNN. “I don't think this is on people's radar.” He proposed a “token tax” requiring AI companies to contribute 3% of revenues to government redistribution programmes to compensate displaced workers, a suggestion that, as he freely acknowledged, ran against his own economic interest. By September 2025, Amodei had doubled down on his warnings, telling CNN that AI was advancing “very quickly” and had already begun replacing jobs. He noted that Anthropic tracks how people use its AI models, currently about 60% for augmentation and 40% for automation, with the latter growing.

Microsoft AI chief Mustafa Suleyman went further in early 2026, telling the Financial Times that AI would automate most professional tasks within 12 to 18 months, including work performed by lawyers, accountants, marketers, and project managers. “I think that we're going to have a human-level performance on most, if not all, professional tasks,” he said, specifically referring to work where people are “sitting down at a computer.” He pointed to software engineering as evidence the shift was already underway, noting that many software engineers were now using AI-assisted coding for the vast majority of their code production.

Not everyone in the industry agrees. At VivaTech 2025 in Paris, Nvidia chief executive Jensen Huang offered a sharp rebuttal to Amodei's predictions. “I pretty much disagree with almost everything” Amodei says, Huang told the audience. His argument rested on historical precedent: “Whenever companies are more productive, they hire more people.” Huang also took a pointed swipe at Anthropic's positioning: “One, he believes that AI is so scary that only they should do it. Two, that AI is so expensive, nobody else should do it. And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it.”

The clash between Huang and Amodei captures the industry's internal schism with unusual clarity. One camp insists AI will create more jobs than it destroys, citing historical patterns of technological change. The other argues that the speed and scale of AI advancement make historical analogies unreliable, that this time genuinely is different. Both positions carry real consequences for how the public's concerns are addressed, or dismissed. And as one commentator observed of the broader dynamic, “the people making the most aggressive predictions about AI wiping out white-collar work are the same people selling the tools to do it.” That does not make them wrong, but it does raise questions about the line between warning and marketing.

The Layoff Ledger

The debate might feel more academic if it were not for the numbers already appearing in employment data. According to outplacement firm Challenger, Gray & Christmas, nearly 55,000 of the job cuts announced in 2025 were directly attributed to AI, out of a total of 1.17 million layoffs, the highest annual total since the pandemic year of 2020.

In the first two months of 2026, the pace accelerated. Artificial intelligence was cited in 12,304 US job cuts announced between January and February, representing 8% of the layoff total during that period. A March 2026 working paper from the National Bureau of Economic Research, based on the Duke CFO Survey of 750 US chief financial officers, found that 44% of firms planned AI-related job cuts this year. When extrapolated across the broader economy, that amounts to approximately 502,000 roles, roughly a ninefold increase from 2025.

The headline layoffs tell their own story. In February 2026, Jack Dorsey's fintech company Block announced it was cutting approximately 4,000 employees, roughly 40% of its workforce, explicitly citing AI. “Intelligence tools have changed what it means to build and run a company,” Dorsey wrote to shareholders. “A significantly smaller team, using the tools we're building, can do more and do it better.” Block's share price surged as much as 24% on the news. The market's reaction was instructive: investors celebrated the human cost of AI-driven efficiency with the same enthusiasm they might greet a new product launch.

Amazon eliminated 16,000 corporate roles, with leadership explicitly citing AI and automation as drivers. Atlassian cut 10% of its workforce. Meta was reportedly planning to cut 20% of jobs. These are not struggling companies desperately cutting costs. They are among the most profitable technology enterprises in history, and they are telling the world that AI allows them to do more with fewer people.

The impact falls disproportionately on the young. Workers aged 22 to 25 in the most AI-exposed roles saw a 6% drop in employment from late 2022 to September 2025. Software developers in that age bracket experienced an almost 20% decline from their late-2022 peak. Among 20 to 30-year-olds in tech-exposed roles more broadly, unemployment has risen by nearly three percentage points since early 2025. Workers aged 18 to 24 are 129% more likely than older workers to fear AI could make their jobs obsolete, and 49% of Generation Z job seekers believe AI has already diminished the value of their university education.

The Duke CFO Survey's co-author, John Graham, cautioned against catastrophic interpretations. The projected 502,000 job losses represent just 0.4% of approximately 125 million US roles, “not the doomsday job scenario that you might sometimes see in the headlines,” he told Fortune. But for the workers in that 0.4%, particularly those at the beginning of their careers, the statistics offer cold comfort. And as a February 2026 Fortune report noted, thousands of chief executives admitted that AI had produced no measurable impact on employment or productivity at their firms, resurrecting the productivity paradox that economist Robert Solow identified forty years ago: organisations can see AI everywhere except in the productivity statistics.

The Reskilling Promise and Its Discontents

The standard corporate response to AI displacement anxiety follows a well-rehearsed script: we will retrain workers for the jobs of tomorrow. OpenAI published its “AI at Work: Workforce Blueprint” in October 2025 and convened labour leaders in Washington, DC to discuss the technology's impact on jobs and skills. Chief executive Sam Altman, speaking in Chennai in February 2026, called for “policies that help people adapt to these changes, including lifelong learning and reskilling programs.” The company is reportedly developing a jobs platform and certification programme, with secondary reporting suggesting a goal of certifying up to 10 million Americans by 2030. OpenAI is also collaborating with North America's Building Trades Unions to accelerate data centre construction, committing funding to union training and recruitment initiatives.

The rhetoric is appealing. The execution is another matter entirely. A 2025 PwC survey found that 74% of workers were willing to learn new skills or retrain entirely to remain employable, but access to affordable training remains a barrier, particularly in developing economies. PwC's Global AI Jobs Barometer found that workers with advanced AI skills earn 56% more than peers in the same roles without those skills, creating a powerful incentive to upskill, but also a widening gap between those who can access training and those who cannot.

Deloitte's 2026 State of AI in the Enterprise survey found that the most common organisational response to AI talent strategy was educating the broader workforce to raise AI fluency, cited by 53% of companies, followed by designing and implementing reskilling strategies at 48%. But as workforce researchers have repeatedly observed, most enterprise reskilling programmes fail to deliver because they treat learning as something separate from work. When employees must choose between doing their job and doing their training, the job wins every time. The reskilling programmes that actually work start with a task-level skills assessment, understanding exactly which tasks are being automated, which are being elevated, and which entirely new categories are emerging.

The structural problem runs deeper still. Harvard researcher Rachel Lipson has noted that workforce development in the United States remains “chronically underfunded compared to peer nations,” despite no shortage of innovative training models or motivated workers. The gap between corporate reskilling promises and government investment in workforce infrastructure suggests that the burden of adaptation is being quietly shifted onto the workers least equipped to bear it.

There is also a fundamental tension in the reskilling narrative. If AI can automate entry-level tasks, and the industry's own leaders say it will do so within one to five years, then retraining workers for AI-adjacent roles only works if those roles exist in sufficient numbers and remain resistant to further automation. The World Economic Forum's Future of Jobs Report 2025, which drew on surveys of more than 1,000 leading global employers, projected 170 million new roles created and 92 million displaced between 2025 and 2030, a net gain of 78 million jobs. The Information Technology and Innovation Foundation's December 2025 analysis offered a more optimistic assessment, finding that through 2024, AI's job creation effects were outpacing its displacement effects, primarily because the AI boom generated significant employment in data centre construction, hardware manufacturing, and AI development itself. Construction jobs exposed to the data centre build-out increased by 216,000 since 2022. Whether this infrastructure-driven job creation can absorb the white-collar workers being displaced remains the central uncertainty of the decade.

Governance, Regulation, and the Question of Who Decides

The European Union's AI Act represents the most ambitious attempt yet to regulate artificial intelligence comprehensively. Its phased enforcement timeline began with prohibited AI practices taking effect in February 2025, followed by general-purpose AI transparency requirements in August 2025, with the bulk of remaining obligations due by 2 August 2026. Penalties for non-compliance are severe: up to 35 million euros or 7% of global annual turnover, whichever is higher, for the most serious violations.

But regulation alone cannot bridge the trust deficit revealed by the survey data. The Edelman Trust Barometer found that people place greater confidence in business than in government to use AI responsibly; across five markets surveyed, only 34% of respondents were comfortable with government's use of AI, compared with 46% for business overall and 56% for their own employer. Employees are 2.5 times more motivated to embrace AI when they feel their job security is increasing rather than decreasing. In the United Kingdom and the United States, two in three AI distrusters feel the technology is being forced upon them.

The JUST Capital survey found that 56% of the American public did not think companies should determine AI standards on their own, with majorities favouring co-regulation involving government, industry, universities, and civil society. In the United States, 73.7% of local policymakers agreed that AI should be regulated, up from 55.7% in 2022, according to the Stanford HAI AI Index. Support was stronger among Democrats (79.2%) than Republicans (55.5%), though both registered notable increases. The strongest backing was for stricter data privacy rules (80.4%), retraining for the unemployed (76.2%), and AI deployment regulations (72.5%).

What the public appears to want is not a choice between corporate self-governance and heavy-handed state regulation, but a model in which multiple stakeholders share responsibility. The EU AI Act, with its requirement that each member state establish at least one AI regulatory sandbox by August 2026, gestures toward this approach. Whether it will prove sufficient remains deeply uncertain, particularly given that the European standardisation bodies CEN and CENELEC have been unable to develop the required technical standards within the original timeline.

The Listening Deficit

Return to the original question: are the companies building AI actually listening? The evidence suggests a complicated answer.

Anthropic's decision to conduct the 81,000-person study in the first place represents a form of listening that few competitors have matched. The company's willingness to publish findings that include substantial criticism of AI, including fears about dependency, cognitive degradation, and economic displacement, suggests a genuine interest in understanding user sentiment, not merely managing it. Amodei's repeated public warnings about job displacement, however self-serving critics may find them, place Anthropic in the unusual position of sounding the alarm about the very product it sells.

But listening and acting are different things. Anthropic continues to develop increasingly capable AI models, including systems that can work independently for nearly seven hours. The company tracks usage patterns showing a gradual shift from augmentation, where AI assists human workers, to automation, where AI replaces them. Currently, approximately 60% of Claude usage falls under augmentation and 40% under automation, but the latter is growing. Acknowledging a problem and accelerating the technology that causes it is a particular kind of cognitive dissonance.

The broader industry picture is less encouraging. The JUST Capital data showing that 98% of corporate leaders believe AI will boost productivity, against 47% of the public, suggests not a listening problem but a hearing problem: executives receive the information and discount it. Harvard Business Review reported in November 2025 that leaders assume employees are excited about AI, and they are wrong. The Edelman finding that “someone like me” is on average twice as trusted as a chief executive or government leader to tell the truth about AI suggests that top-down corporate communications about AI's benefits are falling on increasingly deaf ears. Employees want to feel that their embrace of AI is voluntary, not mandatory, yet, as the Edelman data shows, many feel it is being forced upon them.

There is also the matter of incentive structures. Block's share price soaring 24% after announcing AI-driven layoffs of 4,000 people sends an unmistakable signal to every public company: the market rewards efficiency gains, regardless of human cost. When Goldman Sachs economist Joseph Briggs says “the big story in 2026 in labor will be AI,” and projects that 6 to 7% of workers could be displaced over a decade-long adoption cycle, the framing remains fundamentally economic. The 81,000 voices in Anthropic's study were talking about something different. They were talking about meaning, agency, cognitive independence, and the fear that the tools designed to liberate them might instead diminish them.

What Real Listening Would Look Like

If the industry were genuinely responsive to the concerns raised by its own users and the broader public, several things would need to change.

First, companies would need to move beyond the rhetoric of reskilling and invest directly in workforce transition infrastructure, not as a public relations exercise, but as a core business obligation. Amodei's proposed token tax of 3% of AI revenues directed toward displaced worker support represents one model. Whether implemented as a voluntary industry fund or a mandatory levy, the principle of producers bearing responsibility for displacement costs has precedent in industries from mining to pharmaceuticals.

Second, transparency about automation rates would need to become standard practice, not an occasional research publication. If companies know how much of their AI usage is augmenting human work versus replacing it, that data should be disclosed regularly, with the same rigour applied to financial reporting. Anthropic's 60/40 augmentation-to-automation figure is valuable precisely because such disclosure is rare. Making it routine would give workers, policymakers, and the public the information they need to prepare.

Third, governance structures would need to include genuine public representation, not merely expert advisory boards populated by academics and industry insiders. The JUST Capital finding that the public wants AI profits distributed across lower prices, workforce support, and safety investment, rather than concentrated in shareholder returns, represents a fundamentally different vision of AI's purpose than the one currently driving corporate strategy.

Fourth, the industry would need to take the fear of cognitive dependency seriously, not as a communications challenge to be managed, but as a design challenge to be solved. The 16% of Anthropic's respondents who worried about losing the ability to think critically were articulating something profound: a suspicion that convenience and capability come at a cost that has not been honestly accounted for. Building AI systems that explicitly preserve and strengthen human cognitive skills, rather than gradually replacing them, would require a different approach to product design, one that prioritises human flourishing over engagement metrics.

None of these changes would be easy. None of them are inevitable. And therein lies the deeper lesson of the 81,000-voice study. The public is not anti-AI. Sixty-seven per cent of Anthropic's respondents viewed the technology positively. They are using it, benefiting from it, and simultaneously afraid of where it is heading. They are, in the study's own framing, living in the light and the shade at once.

The question is whether the companies that have collected this extraordinary data will treat it as a genuine mandate for change, or as another data point in a quarterly report. If the industry's response to 81,000 voices expressing fear about dependency, displacement, and diminished cognition is to build faster, automate more, and promise reskilling programmes that chronically underfunded governments cannot deliver, then the answer to the original question is clear. They heard the words. They simply chose not to listen.


References and Sources

  1. Anthropic, “What 81,000 People Want and Don't Want from AI,” published March 2026. Available at: https://www.anthropic.com/81k-interviews

  2. JUST Capital, in partnership with The Harris Poll and Robin Hood Foundation, “AI Sentiment Survey,” published December 2025. Reported by CNBC, 9 December 2025.

  3. Pew Research Center, “How Americans View AI and Its Impact on Human Abilities, Society,” published September 2025. Available at: https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/

  4. Pew Research Center, “What the Data Says About Americans' Views of Artificial Intelligence,” published March 2026. Available at: https://www.pewresearch.org/short-reads/2026/03/12/key-findings-about-how-americans-view-artificial-intelligence/

  5. Pew Research Center, “Republicans, Democrats Now Equally Concerned About AI in Daily Life,” published November 2025. Available at: https://www.pewresearch.org/short-reads/2025/11/06/republicans-democrats-now-equally-concerned-about-ai-in-daily-life-but-views-on-regulation-differ/

  6. Edelman, “2025 Trust Barometer Flash Poll: Trust and Artificial Intelligence at a Crossroads,” published November 2025. Available at: https://www.edelman.com/trust/2025/trust-barometer/flash-poll-trust-artifical-intelligence

  7. Stanford Institute for Human-Centered Artificial Intelligence, “AI Index Report 2025: Public Opinion Chapter.” Available at: https://hai.stanford.edu/ai-index/2025-ai-index-report/public-opinion

  8. World Economic Forum, “Future of Jobs Report 2025,” published January 2025.

  9. Fortune, “CFOs Admit Privately That AI Layoffs Will Be 9x Higher This Year,” published 24 March 2026. Reporting on NBER working paper based on Duke CFO Survey.

  10. CNN Business, “Why This Leading AI CEO Is Warning the Tech Could Cause Mass Unemployment,” Dario Amodei interview, published May 2025.

  11. CNN Business, “Anthropic CEO: AI Is Advancing 'Very Quickly,' Could Soon Replace More Jobs,” published September 2025.

  12. Fortune, “Microsoft AI Chief Gives It 18 Months for All White-Collar Work to Be Automated by AI,” Mustafa Suleyman interview, published February 2026.

  13. Fortune, “Nvidia's Jensen Huang Says He Disagrees with Almost Everything Anthropic CEO Dario Amodei Says,” VivaTech 2025 coverage, published June 2025.

  14. CNN Business, “Block Lays Off Nearly Half Its Staff Because of AI,” published February 2026.

  15. Fortune, “Thousands of CEOs Just Admitted AI Had No Impact on Employment or Productivity,” published February 2026.

  16. Challenger, Gray & Christmas, AI-related layoff data for 2025 and early 2026, reported across multiple outlets.

  17. OpenAI, “AI at Work: Workforce Blueprint,” published October 2025. Available at: https://cdn.openai.com/global-affairs/f319686f-cf21-4b8e-b8bc-84dd9bbfb999/oai-workforce-blueprint-oct-2025.pdf

  18. PwC, “Global AI Jobs Barometer 2025.”

  19. Deloitte, “State of AI in the Enterprise Survey 2026.”

  20. Harvard Business Review, “Leaders Assume Employees Are Excited About AI. They're Wrong,” published November 2025.

  21. European Commission, “AI Act: Regulatory Framework for Artificial Intelligence.” Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  22. Lloyd's Register Foundation and Gallup, “World Risk Poll 2024: Resilience in a Changing World.”

  23. Ipsos, global AI sentiment surveys conducted in 2022 and 2024, as reported in the Stanford HAI AI Index 2025.

  24. Information Technology and Innovation Foundation, AI job creation analysis, published December 2025.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
