The AI Hype Crisis: Why Public Trust Is Collapsing Fast

In June 2024, Goldman Sachs published a research note that rattled Silicon Valley's most cherished assumptions. The report posed what it called the “$600 billion question”: would the staggering investment in artificial intelligence infrastructure ever generate proportional returns? The note featured analysis from MIT economist Daron Acemoglu, who had recently calculated that AI would produce no more than a 0.93 to 1.16 percent increase in US GDP over the next decade, a figure dramatically lower than the techno-utopian projections circulating through investor presentations and conference keynotes. “Much of what we hear from the industry now is exaggeration,” Acemoglu stated plainly. That October, he was awarded the 2024 Nobel Memorial Prize in Economic Sciences, alongside his MIT colleague Simon Johnson and University of Chicago economist James Robinson, for research on the relationship between political institutions and economic growth.

That gap between what AI is promised to deliver and what it actually does is no longer an abstract concern for economists and technologists. It is reshaping public attitudes toward technology at a speed that should alarm anyone who cares about the long-term relationship between innovation and democratic society. When governments deploy algorithmic systems to deny healthcare coverage or detect welfare fraud, when corporations invest billions in pilots that deliver no measurable return 95 percent of the time, and when the public is told repeatedly that superintelligence is just around the corner while chatbots still fabricate legal citations, something fundamental breaks in the social contract around technological progress.

The question is not whether AI is useful. It plainly is, in specific, well-defined applications. The question is what happens when an entire civilisation makes strategic decisions based on capabilities that do not yet exist and may never materialise in the form being sold.

The Great Correction Arrives

By late 2025, the AI industry had entered what Gartner's analysts formally classified as the “Trough of Disillusionment.” Generative AI, which had been perched at the Peak of Inflated Expectations just one year earlier, had slid into the territory where early adopters report performance issues, low return on investment, and a growing sense that the technology's capabilities had been systematically overstated. The positioning reflected difficulties organisations face when attempting to move generative AI from pilot projects to production systems. Integration with existing infrastructure presented technical obstacles, while concerns about data security caused some companies to limit deployment entirely.

The numbers told a damning story. According to MIT's “The GenAI Divide: State of AI in Business 2025” report, published in July 2025 and based on 52 executive interviews, surveys of 153 leaders, and analysis of 300 public AI deployments, 95 percent of generative AI pilot projects delivered no measurable profit-and-loss impact. American enterprises had spent an estimated $40 billion on artificial intelligence systems in 2024, yet the vast majority saw zero measurable bottom-line returns. Only five percent of integrated systems created significant value.

The study's authors, from MIT's NANDA initiative, identified what they termed the “GenAI Divide”: a widening split between high adoption and low transformation. Companies were enthusiastically purchasing and deploying AI tools, but almost none were achieving the business results that had been promised. “The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide,” the report stated. The core barrier, the authors concluded, was not infrastructure, regulation, or talent. It was that most generative AI systems “do not retain feedback, adapt to context, or improve over time,” making them fundamentally ill-suited for the enterprise environments into which they were being thrust.

This was not an outlier finding. A 2024 NTT DATA analysis concluded that between 70 and 85 percent of generative AI deployment efforts were failing to meet their desired return on investment. The Autodesk State of Design & Make 2025 report found that sentiment toward AI had dropped significantly year over year, with just 69 percent of business leaders saying AI would enhance their industry, representing a 12 percent decline from the previous year. Only 40 percent of leaders said they were approaching or had achieved their AI goals, a 16-point decrease that represented a 29 percent drop. S&P Global data revealed that 42 percent of companies scrapped most of their AI initiatives in 2025, up sharply from 17 percent the year before.

The infrastructure spending, meanwhile, continued accelerating even as returns failed to materialise. Meta, Microsoft, Amazon, and Google collectively committed over $250 billion to AI infrastructure during 2025. Amazon alone planned $125 billion in capital expenditure, up from $77 billion in 2024, a 62 percent increase. Goldman Sachs CEO David Solomon publicly acknowledged that he expected “a lot of capital that was deployed that doesn't deliver returns.” Amazon founder Jeff Bezos called the environment “kind of an industrial bubble.” Even OpenAI CEO Sam Altman conceded that “people will overinvest and lose money.”

Trust in Freefall

The gap between AI's promises and its performance is not occurring in a vacuum. It is landing on a public already growing sceptical of the technology industry's claims, and it is accelerating a decline in trust that carries profound implications for democratic governance.

The 2025 Edelman Trust Barometer, based on 30-minute online interviews conducted between October and November 2024, revealed a stark picture. Globally, only 49 percent of respondents trusted artificial intelligence as a technology. In the United States, that figure dropped to just 32 percent. Three times as many Americans rejected the growing use of AI (49 percent) as embraced it (17 percent). In the United Kingdom, trust stood at just 36 percent. In Germany, 39 percent. The Chinese public, by contrast, reported 72 percent trust in AI, a 40-point gap that reflects not just different regulatory environments but fundamentally different cultural relationships with technology and state authority.

These figures represent a significant deterioration. A decade ago, 73 percent of Americans trusted technology companies. By 2025, that number had fallen to 63 percent. Technology, which eight years earlier had been the most trusted sector in 90 percent of the countries Edelman surveys, now held that position in only half of them. The barometer also found that 59 percent of global employees feared job displacement due to automation, and nearly one in two were sceptical of business use of artificial intelligence.

The Pew Research Center's findings painted an even more granular picture of public anxiety. In an April 2025 report examining how the US public and AI experts view artificial intelligence, Pew found that 50 percent of American adults said they were more concerned than excited about the increased use of AI in daily life, up from 37 percent in 2021. More than half (57 percent) rated the societal risks of AI as high, compared with only 25 percent who said the benefits were high. Over half of US adults (53 percent) believed AI did more harm than good in protecting personal privacy, and 53 percent said AI would worsen people's ability to think creatively.

Perhaps most revealing was the chasm between expert optimism and public unease. While 56 percent of AI experts believed AI would have a positive effect on the United States over the next 20 years, only 17 percent of the general public agreed. While 47 percent of experts said they were more excited than concerned, only 11 percent of ordinary citizens felt the same. And despite their divergent levels of optimism, both groups shared a common scepticism about institutional competence: roughly 60 percent of both experts and the public said they lacked confidence that US companies would develop AI responsibly.

The Stanford HAI AI Index 2025 Report reinforced these trends globally. Across 26 nations surveyed by Ipsos, confidence that AI companies protect personal data fell from 50 percent in 2023 to 47 percent in 2024. Fewer people believed AI systems were unbiased and free from discrimination compared to the previous year. While 18 of 26 nations saw an increase in the proportion of people who believed AI products offered more benefits than drawbacks, the optimism was concentrated in countries like China (83 percent), Indonesia (80 percent), and Thailand (77 percent), while the United States (39 percent), Canada (40 percent), and the Netherlands (36 percent) remained deeply sceptical.

When Algorithms Replace Judgement

The erosion of public trust in AI would be concerning enough if it were merely a matter of consumer sentiment. But the stakes become existential when governments and corporations use overestimated AI capabilities to make decisions that fundamentally alter people's lives, and when those decisions carry consequences that cannot be undone.

Consider healthcare. In November 2023, a class action lawsuit was filed against UnitedHealth Group and its subsidiary naviHealth, alleging that the company illegally used an AI algorithm called nH Predict to deny rehabilitation care to seriously ill elderly patients enrolled in Medicare Advantage plans. The algorithm, originally developed by a company called Senior Metrics and operated by naviHealth, which UnitedHealth's Optum subsidiary acquired in 2020, was designed to predict how long patients would need post-acute care. According to the lawsuit, UnitedHealth deployed the algorithm knowing it had a 90 percent error rate on appeals, meaning that nine times out of ten, when a human reviewed one of the AI's denials, the denial was overturned. UnitedHealth also allegedly knew that only 0.2 percent of denied patients would file appeals, making the error rate commercially inconsequential for the insurer despite being medically devastating for patients.
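
The arithmetic behind that claim is worth spelling out. The sketch below is a back-of-the-envelope illustration in Python, using the rates alleged in the complaint and the simplifying assumption, not established in the filings, that unappealed denials would be overturned at the same rate as appealed ones.

```python
# Illustrative arithmetic only. The rates are those alleged in the lawsuit;
# assuming unappealed denials would be overturned at the same 90% rate is a
# simplification made for the sake of the example.

denials = 10_000        # hypothetical number of algorithmic coverage denials
appeal_rate = 0.002     # ~0.2% of denied patients file appeals (alleged)
overturn_rate = 0.90    # ~90% of appealed denials are overturned (alleged)

appealed = denials * appeal_rate              # 20 appeals
corrected = appealed * overturn_rate          # ~18 denials reversed
wrongful = denials * overturn_rate            # ~9,000 if the rate generalises

print(f"appealed: {appealed:.0f}, corrected on appeal: {corrected:.0f}")
print(f"wrongful denials never corrected: {wrongful - corrected:.0f}")
```

Under those assumptions, roughly two in every thousand wrongful denials are ever reversed: the cost of the error rate falls almost entirely on patients rather than on the insurer.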

The human cost was documented in court filings. Gene Lokken, a 91-year-old Wisconsin resident named in the lawsuit, fractured his leg and ankle in May 2022. After his doctor approved physical therapy, UnitedHealth paid for only 19 days before the algorithm determined he was safe to go home. His doctors appealed, noting his muscles were “paralysed and weak,” but the insurer denied further coverage. His family paid approximately $150,000 over the following year until he died in July 2023. In February 2025, a federal court allowed the case to proceed, denying UnitedHealth's attempt to dismiss the claims and waiving the exhaustion of administrative remedies requirement, noting that patients faced irreparable harm.

The STAT investigative series “Denied by AI,” which broke the UnitedHealth story, was a 2024 Pulitzer Prize finalist in investigative reporting. A US Senate report released in October 2024 found that UnitedHealthcare's prior authorisation denial rate for post-acute care had jumped to 22.7 percent in 2022 from 10.9 percent in 2020. The healthcare AI problem extends far beyond a single insurer. ECRI, a patient safety organisation, ranked insufficient governance of artificial intelligence as the number two patient safety threat in 2025, warning that medical errors generated by AI could compromise patient safety through misdiagnoses and inappropriate treatment decisions. Yet only about 16 percent of hospital executives surveyed said they had a systemwide governance policy for AI use and data access.

The pattern repeats across domains where algorithmic systems are deployed to process vulnerable populations. In the Netherlands, the childcare benefits scandal stands as perhaps the most devastating example of what happens when governments trust flawed algorithms with life-altering decisions. The Dutch Tax and Customs Administration deployed a machine learning model to detect welfare fraud, a model that illegally used dual nationality as a risk characteristic. The system falsely accused over 20,000 parents of fraud, resulting in benefits termination and forced repayments. Families were driven into bankruptcy. Children were removed from their homes. Mental health crises proliferated. Seventy percent of those affected had a migration background, and half were single-parent households, most of them headed by mothers. In January 2021, the Dutch government was forced to resign after a parliamentary investigation concluded that the government had violated the foundational principles of the rule of law.

The related SyRI (System Risk Indication) system, which cross-referenced citizens' employment, benefits, and tax data to flag “unlikely citizen profiles”, was deployed exclusively in neighbourhoods with high numbers of low-income households and a disproportionately high share of residents from immigrant backgrounds. In February 2020, the Hague court ordered SyRI's immediate halt, ruling it violated Article 8 of the European Convention on Human Rights. Amnesty International described the system's targeting criteria as “xenophobic machines.” Yet investigations by Lighthouse Reports later confirmed that similar algorithmic surveillance practices continued even after the ban, with the government having “silently continued to deploy a slightly adapted SyRI in some of the country's most vulnerable neighbourhoods.”

The Stochastic Parrot Problem

Understanding why AI hype is so dangerous requires understanding what these systems actually do, as opposed to what their makers claim they do.

Emily Bender, a linguistics professor at the University of Washington who was included in the inaugural TIME100 AI list of most influential people in artificial intelligence in 2023, co-authored a now-famous paper arguing that large language models are fundamentally “stochastic parrots.” They do not understand language in any meaningful sense. They draw on training data to predict which sequence of tokens is most likely to follow a given prompt. The result is an illusion of comprehension, a pattern-matching exercise that produces outputs resembling intelligent thought without any of the underlying cognition.
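
To see how thin the underlying mechanism is, consider a deliberately tiny sketch of next-token prediction: a toy bigram model in Python that generates fluent-looking text purely from co-occurrence counts. Production language models use vastly larger neural networks trained on far more data, but the generation loop is the same in spirit, and nothing in it models meaning.

```python
import random
from collections import defaultdict

# Count which word tends to follow which in a tiny "training corpus".
corpus = ("the court held that the contract was void because the court "
          "found that the contract was never signed").split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(seed: str, length: int = 10) -> str:
    """Repeatedly pick a statistically plausible next word; nothing here
    understands contracts, courts, or anything else."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # sample by observed frequency
    return " ".join(words)

print(generate("the"))
# e.g. "the contract was void because the court found that the contract"
```

The output reads like legal prose precisely because it is stitched together from fragments of legal prose, which is also why systems built this way will cheerfully stitch together citations that do not exist.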

In 2025, Bender and sociologist Alex Hanna, director of research at the Distributed AI Research Institute and a former Google employee, published “The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.” The book argues that AI hype serves as a mask for Big Tech's drive for profit, with the breathless promotion of AI capabilities benefiting technology companies far more than users or society. “Who benefits from this technology, who is harmed, and what recourse do they have?” Bender and Hanna ask, framing these as the essential questions that the hype deliberately obscures. Library Journal called the book “a thorough, witty, and accessible argument against AI that meets the moment.”

The stochastic parrot problem has real-world consequences that compound the trust deficit. When AI systems fabricate information with perfect confidence, they undermine the epistemic foundations that societies rely on for decision-making. Legal scholar Damien Charlotin, who tracks AI hallucinations in court filings through his database, had documented at least 206 instances of lawyers submitting AI-generated fabricated case citations by mid-2025. Stanford University's RegLab found that even premium legal AI tools hallucinated at alarming rates: Westlaw's AI-Assisted Research produced hallucinated or incorrect information 33 percent of the time, providing accurate responses to only 42 percent of queries. LexisNexis's Lexis+ AI hallucinated 17 percent of the time. A 2025 study published in Nature Machine Intelligence found that large language models cannot reliably distinguish between belief and knowledge, or between opinions and facts, noting that “failure to make such distinctions can mislead diagnoses, distort judicial judgements and amplify misinformation.”

If the tools marketed as the most reliable in their field fabricate information on roughly one in six to one in three queries, what does this mean for the countless lower-stakes applications where AI outputs are accepted without verification?

The AI Washing Economy

The gap between marketing claims and actual capabilities has grown so pronounced that regulators have begun treating AI exaggeration as a form of securities fraud.

In March 2024, the US Securities and Exchange Commission brought its first “AI washing” enforcement actions, simultaneously charging two investment advisory firms, Delphia and Global Predictions, with making false and misleading statements about their use of AI. Delphia paid $225,000 and Global Predictions paid $175,000 in civil penalties. Neither firm was entirely without AI capability, but both had overstated what their systems could do, crossing the line from marketing enthusiasm into regulatory violation.

The enforcement actions escalated rapidly. In January 2025, the SEC charged Presto Automation, a formerly Nasdaq-listed company, in the first AI washing action against a public company. Presto had claimed its AI voice system eliminated the need for human drive-through order-taking at fast food restaurants, but the SEC alleged the vast majority of orders still required human intervention and that the AI speech recognition technology was owned and operated by a third party. In April 2025, the SEC and Department of Justice charged the founder of Nate Inc. with fraudulently raising over $42 million by claiming the company's shopping app used AI to process transactions, when in reality manual workers completed the purchases. The claimed automation rate was above 90 percent; the actual rate was essentially zero.

Securities class actions targeting alleged AI misrepresentations increased by 100 percent between 2023 and 2024. In February 2025, the SEC announced the creation of a dedicated Cyber and Emerging Technologies Unit, tasked with combating technology-related misconduct, and flagged AI washing as a top examination priority.

The pattern is instructive. When a technology is overhyped, the incentive to exaggerate capabilities becomes irresistible. Companies that accurately describe their modest AI implementations risk being punished by investors who have been conditioned to expect transformative breakthroughs. The honest actors are penalised while the exaggerators attract capital, creating a market dynamic that systematically rewards deception.

Echoes of Previous Bubbles

The AI hype cycle is not without historical precedent, and the parallels offer both warnings and qualified reassurance.

During the dot-com era, telecommunications companies laid more than 80 million miles of fibre optic cables across the United States, driven by wildly inflated claims about internet traffic growth. Companies like Global Crossing, Level 3, and Qwest raced to build massive networks. The result was catastrophic overcapacity: even four years after the bubble burst, 85 to 95 percent of the fibre laid remained unused, earning the nickname “dark fibre.” The Nasdaq composite rose nearly 400 percent between 1995 and March 2000, then crashed 78 percent by October 2002.

The parallels to today's AI infrastructure buildout are unmistakable. Meta CEO Mark Zuckerberg announced plans for an AI data centre “so large it could cover a significant part of Manhattan.” The Stargate Project aims to develop a $500 billion nationwide network of AI data centres. Goldman Sachs analysts found that hyperscaler companies had taken on $121 billion in debt over the past year, representing a more than 300 percent increase from typical industry debt levels. AI-related stocks had accounted for 75 percent of S&P 500 returns, 80 percent of earnings growth, and 90 percent of capital spending growth since ChatGPT launched in November 2022.

Yet there are important differences. Unlike many dot-com companies that had no revenue, major AI players are generating substantial income. Microsoft's Azure cloud service grew 39 percent year over year to an $86 billion run rate. OpenAI projects $20 billion in annualised revenue. The Nasdaq's forward price-to-earnings ratio was approximately 26 times in November 2023, compared to approximately 60 times at the dot-com peak.

The more useful lesson from the dot-com era is not about whether the bubble will burst, but about what happens to public trust and institutional decision-making in the aftermath. The internet survived the dot-com crash and eventually fulfilled many of its early promises. But the crash destroyed trillions in wealth, wiped out retirement savings, and created a lasting scepticism toward technology claims that took years to overcome. The institutions and individuals who made decisions based on dot-com hype, from pension funds that invested in companies with no path to profitability to governments that restructured services around technologies that did not yet work, bore costs that were never fully recovered.

Algorithmic Bias and the Feedback Loop of Injustice

Perhaps the most consequential long-term risk of the AI hype gap is its intersection with systemic inequality. When policymakers deploy AI systems in criminal justice, welfare administration, and public services based on inflated claims of accuracy and objectivity, the consequences fall disproportionately on communities that are already marginalised.

Predictive policing offers a stark illustration. The Chicago Police Department's “Strategic Subject List,” implemented in 2012 to identify individuals at higher risk of gun violence, disproportionately targeted young Black and Latino men, leading to intensified surveillance and police interactions in those communities. The system created a feedback loop: more police dispatched to certain neighbourhoods resulted in more recorded crime, which the algorithm interpreted as confirmation that those neighbourhoods were indeed high-risk, which led to even more policing. The NAACP has called on state legislators to evaluate and regulate the use of predictive policing, noting mounting evidence that these tools increase racial biases and citing the lack of transparency inherent in proprietary algorithms that do not allow for public scrutiny.
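
The loop is easy to reproduce in a toy model. The Python sketch below uses entirely invented numbers and is not a reconstruction of any real department's system: two districts have identical underlying offence rates, but a small historical imbalance in recorded incidents steers discretionary patrols, and recorded crime then scales with patrol presence rather than with actual offending.

```python
# Toy feedback-loop model with invented numbers. Both districts have the SAME
# true offence rate; only the historical record differs slightly at the start.

true_offences = 100          # actual offences per year, identical in both
history = [510, 490]         # historically recorded incidents per district
base, extra = 20, 60         # fixed patrols vs patrols steered by the "model"
detect_per_patrol = 0.002    # share of offences each patrol happens to record

for year in range(10):
    hot = 0 if history[0] >= history[1] else 1   # the model's "high-risk" pick
    patrols = [base + (extra if i == hot else 0) for i in range(2)]
    for i in range(2):
        # Recorded crime scales with patrol presence, not with true offending.
        history[i] += true_offences * patrols[i] * detect_per_patrol

print("Recorded totals after 10 years:", [round(h) for h in history])
# Prints [670, 530]: the initially 'hot' district pulls further ahead each
# year, "confirming" the prediction even though offending never differed.
```

After a decade the gap in recorded incidents has grown sevenfold, even though actual offending never changed; to the model, the data looks like independent confirmation of its original prediction.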

The COMPAS recidivism prediction tool, widely used in US criminal justice and trained on historical data saturated with racial bias, was found to produce biased predictions against Black defendants compared with white defendants. An audit by the LAPD inspector general found “significant inconsistencies” in how officers entered data into a predictive policing programme, further fuelling biased predictions. These are not edge cases or implementation failures. They are the predictable consequences of deploying pattern-recognition systems trained on data that reflects centuries of structural discrimination.

In welfare administration, the pattern is equally troubling. The Dutch childcare benefits scandal demonstrated how algorithmic systems can automate inequality at scale. The municipality of Rotterdam used a discriminatory algorithm to profile residents and “predict” social welfare fraud for three years, disproportionately targeting young single mothers with limited knowledge of Dutch. In the United Kingdom, the Department for Work and Pensions admitted, in documents released under the Freedom of Information Act, to finding bias in an AI tool used to detect fraud in universal credit claims. The tool's initial iteration correctly matched conditions only 35 percent of the time, and by the DWP's own admission, “chronic fatigue was translated into chronic renal failure” and “partially amputation of foot was translated into partially sighted.”

These failures share a common thread. The AI systems were deployed based on claims of objectivity and accuracy that did not withstand scrutiny. Policymakers, influenced by industry hype about AI's capabilities, trusted algorithmic outputs over human judgement, and the people who paid the price were those least equipped to challenge the decisions being made about their lives.

What Sustained Disillusionment Means for Innovation

The long-term consequences of the AI hype gap extend beyond immediate harms to individual victims. They threaten to reshape the relationship between society and technological innovation in ways that could prove difficult to reverse.

First, there is the problem of misallocated resources. The MIT study found that more than half of generative AI budgets were devoted to sales and marketing tools, despite evidence that the best returns came from back-office automation, eliminating business process outsourcing, cutting external agency costs, and streamlining operations. When organisations chase the use cases that sound most impressive rather than those most likely to deliver value, they waste capital that could have funded genuinely productive innovation. The study also revealed a striking shadow economy: while only 40 percent of companies had official large language model subscriptions, 90 percent of workers surveyed reported daily use of personal AI tools for job tasks, suggesting that the gap between corporate AI strategy and actual AI utility is even wider than the headline figures suggest.

Second, the trust deficit creates regulatory feedback loops that can stifle beneficial applications. As public concern about AI grows, so does political pressure for restrictive regulation. The 2025 Stanford HAI report found that references to AI in draft legislation across 75 countries increased by 21.3 percent, continuing a ninefold increase since 2016. In the United States, 73.7 percent of local policymakers agreed that AI should be regulated, up from 55.7 percent in 2022. This regulatory momentum is a direct response to the trust deficit, and while some regulation is necessary and overdue, poorly designed rules driven by public fear rather than technical understanding risk constraining beneficial applications alongside harmful ones. Colorado became the first US state to enact legislation addressing algorithmic bias in 2024, with California and New York following with their own targeted measures.

Third, the hype cycle creates a talent and attention problem. When AI is presented as a solution to every conceivable challenge, researchers and engineers are pulled toward fashionable applications rather than areas of genuine need. Acemoglu has argued that “we currently have the wrong direction for AI. We're using it too much for automation and not enough for providing expertise and information to workers.” The hype incentivises building systems that replace human judgement rather than augmenting it, directing talent and investment away from applications that could produce the greatest social benefit.

Finally, and perhaps most critically, the erosion of public trust in AI threatens to become self-reinforcing. Each failed deployment, each exaggerated claim exposed, each algorithmic system found to be biased or inaccurate further deepens public scepticism. Meredith Whittaker, president of Signal, has warned about the security and privacy risks of granting AI agents extensive access to sensitive data, describing a future where the “magic genie bot” becomes a nightmare if security and privacy are not prioritised. When public trust in AI erodes, even beneficial and well-designed systems face adoption resistance, creating a vicious cycle where good technology is tainted by association with bad marketing.

Rebuilding on Honest Foundations

The AI hype gap is not merely a marketing problem or an investment risk. It is a structural challenge to the relationship between technological innovation and public trust that has been building for years and is now reaching a critical inflection point.

The 2025 Edelman Trust Barometer found that the most powerful drivers of AI enthusiasm are trust and information, with hesitation rooted more in unfamiliarity than negative experiences. This finding suggests a path that does not require abandoning AI, but demands abandoning the hype. As people use AI more and experience its ability to help them learn, work, and solve problems, their confidence rises. The obstacle is not the technology itself but the inflated expectations that set users up for disappointment.

Gartner's placement of generative AI in the Trough of Disillusionment is, paradoxically, encouraging. As the firm's analysts note, the trough does not represent failure. It represents the transition from wild experimentation to rigorous engineering, from breathless promises to honest assessment of what works and what does not. The companies and institutions that emerge successfully from this phase will be those that measured their claims against reality rather than against their competitors' marketing materials.

The lesson from previous technology cycles is clear but routinely ignored. The dot-com bubble popped, but the internet did not disappear. What disappeared were the companies and institutions that confused hype with strategy. The same pattern will likely repeat with AI. The technology will mature, find its genuine applications, and deliver real value. But the path from here to there runs through a period of reckoning that demands honesty about what AI can and cannot do, transparency about the limitations of algorithmic decision-making, and accountability for the real harms caused by deploying immature systems in high-stakes contexts.

As Bender and Hanna urge, the starting point must be asking basic but important questions: who benefits, who is harmed, and what recourse do they have? As Acemoglu wrote in his analysis for “Economic Policy” in 2024, “Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing.” The potential is real. But potential is not performance, and treating it as such has consequences that a $600 billion question only begins to capture.


References and Sources

  1. Acemoglu, D. (2024). “The Simple Macroeconomics of AI.” Economic Policy. Massachusetts Institute of Technology. https://economics.mit.edu/sites/default/files/2024-04/The%20Simple%20Macroeconomics%20of%20AI.pdf

  2. Amnesty International. (2021). “Xenophobic Machines: Dutch Child Benefit Scandal.” Retrieved from https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/

  3. Bender, E. M. & Hanna, A. (2025). The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. Penguin/HarperCollins.

  4. CBS News. (2023). “UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage, lawsuit claims.” Retrieved from https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/

  5. Challapally, A., Pease, C., Raskar, R. & Chari, P. (2025). “The GenAI Divide: State of AI in Business 2025.” MIT NANDA Initiative. As reported by Fortune, 18 August 2025. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

  6. Edelman. (2025). “2025 Edelman Trust Barometer.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer

  7. Edelman. (2025). “Flash Poll: Trust and Artificial Intelligence at a Crossroads.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer/flash-poll-trust-artifical-intelligence

  8. Edelman. (2025). “The AI Trust Imperative: Navigating the Future with Confidence.” Retrieved from https://www.edelman.com/trust/2025/trust-barometer/report-tech-sector

  9. Gartner. (2025). “Hype Cycle for Artificial Intelligence, 2025.” Retrieved from https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence

  10. Goldman Sachs. (2024). “Top of Mind: AI: in a bubble?” Goldman Sachs Research. Retrieved from https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble

  11. Healthcare Finance News. (2025). “Class action lawsuit against UnitedHealth's AI claim denials advances.” Retrieved from https://www.healthcarefinancenews.com/news/class-action-lawsuit-against-unitedhealths-ai-claim-denials-advances

  12. Lighthouse Reports. (2023). “The Algorithm Addiction.” Retrieved from https://www.lighthousereports.com/investigation/the-algorithm-addiction/

  13. Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C. D. & Ho, D. E. (2025). “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools.” Journal of Empirical Legal Studies, 0:1-27. https://doi.org/10.1111/jels.12413

  14. MIT Technology Review. (2025). “The great AI hype correction of 2025.” Retrieved from https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/

  15. NAACP. (2024). “Artificial Intelligence in Predictive Policing Issue Brief.” Retrieved from https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief

  16. Nature Machine Intelligence. (2025). “Language models cannot reliably distinguish belief from knowledge and fact.” https://doi.org/10.1038/s42256-025-01113-8

  17. Novara Media. (2025). “How Labour Is Using Biased AI to Determine Benefit Claims.” Retrieved from https://novaramedia.com/2025/04/15/how-the-labour-party-is-using-biased-ai-to-determine-benefit-claims/

  18. NTT DATA. (2024). “Between 70-85% of GenAI deployment efforts are failing to meet their desired ROI.” Retrieved from https://www.nttdata.com/global/en/insights/focus/2024/between-70-85p-of-genai-deployment-efforts-are-failing

  19. Pew Research Center. (2025). “How the US Public and AI Experts View Artificial Intelligence.” Retrieved from https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/

  20. Radiologybusiness.com. (2025). “'Insufficient governance of AI' is the No. 2 patient safety threat in 2025.” Retrieved from https://radiologybusiness.com/topics/artificial-intelligence/insufficient-governance-ai-no-2-patient-safety-threat-2025

  21. SEC. (2024). “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence.” Press Release 2024-36. Retrieved from https://www.sec.gov/newsroom/press-releases/2024-36

  22. Stanford HAI. (2025). “The 2025 AI Index Report.” Stanford University Human-Centered Artificial Intelligence. Retrieved from https://hai.stanford.edu/ai-index/2025-ai-index-report

  23. STAT News. (2023). “UnitedHealth faces class action lawsuit over algorithmic care denials in Medicare Advantage plans.” Retrieved from https://www.statnews.com/2023/11/14/unitedhealth-class-action-lawsuit-algorithm-medicare-advantage/

  24. Wikipedia. “Dutch childcare benefits scandal.” Retrieved from https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal

  25. Washington Post. (2024). “Big Tech is spending billions on AI. Some on Wall Street see a bubble.” Retrieved from https://www.washingtonpost.com/technology/2024/07/24/ai-bubble-big-tech-stocks-goldman-sachs/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk