The Great AI Confidence Trick: How to Spot the Fakes in a World Drowning in Hype

The shopping app Nate promised something irresistible: buy anything from any online store with a single tap, powered entirely by artificial intelligence. Neural networks that “understand HTML and transact on websites in the same way consumers do,” founder Albert Saniger told investors. The pitch worked spectacularly. Between 2019 and 2021, Nate raised approximately $42 million from venture capitalists hungry for the next AI breakthrough.

There was just one problem. The actual automation rate of Nate's supposedly intelligent system was, according to federal prosecutors, effectively zero. Behind the sleek interface, hundreds of human workers in call centres in the Philippines and Romania were manually completing every purchase. When a deadly tropical storm struck the Philippines in October 2021, Nate scrambled to open a new call centre in Romania to handle the backlog. Saniger allegedly concealed the manual processing from investors and employees, restricting access to internal dashboards and describing automation rates as trade secrets. During product demonstrations, Nate engineers worked behind the scenes to manually process orders, making it falsely appear that the app was completing purchases automatically.

In April 2025, the US Department of Justice charged Saniger with securities fraud and wire fraud, each carrying a maximum sentence of twenty years in prison, while the Securities and Exchange Commission filed parallel civil charges. Nate had run out of money in January 2023, leaving its investors with what prosecutors described as “near total” losses. Saniger had personally profited, selling approximately $3 million of his own Nate shares to a Series A investor in June 2021.

This is not an outlier. It is a symptom. As artificial intelligence becomes the most potent marketing buzzword since “disruption,” a growing number of companies are engaged in what regulators, investors, and technologists now call “AI washing,” the practice of making false, misleading, or wildly exaggerated claims about AI capabilities to attract customers, investors, and talent. The phenomenon mirrors greenwashing, where companies overstate their environmental credentials, but the stakes may be even higher. With the global AI market projected to reach approximately $250 billion by the end of 2025, and with venture capital firms pouring a record $202.3 billion into AI startups in 2025 alone (a 75 per cent increase from 2024, according to Crunchbase data), the financial incentives to slap an “AI-powered” label onto virtually anything have never been greater.

The question is no longer whether AI washing exists. It clearly does, and at scale. The real question is what consumers, investors, and regulators should do about it.

The Scale of the Deception

The first systematic attempt to measure AI washing came in 2019, when London-based venture capital firm MMC Ventures published “The State of AI 2019: Divergence,” a report produced in association with Barclays. The researchers individually reviewed 2,830 European startups across thirteen countries that claimed to use AI. Their finding was stark: in approximately 40 per cent of cases, there was no evidence that artificial intelligence was material to the company's value proposition. These firms were not necessarily lying outright. Many had been classified as “AI companies” by third-party analytics platforms, and as David Kelnar, partner and head of research at MMC Ventures, noted at the time, startups had little incentive to correct the misclassification. Companies labelled as AI-driven were raising between 15 and 50 per cent more capital than traditional software firms. The UK alone accounted for nearly 500 AI startups, a third of Europe's total and twice as many as any other country, making the scale of potential misrepresentation significant.

Six years later, the problem has not improved. A February 2025 survey by MMC Ventures of 1,200 fintech startups found that 40 per cent of companies branding themselves “AI-first” had zero machine-learning code in production. A quarter were simply piping third-party APIs, such as those offered by OpenAI, through a new user interface. Only 12 per cent trained proprietary models on unique datasets. Yet funding rounds that mentioned “generative AI” commanded median valuations 2.3 times higher than those that did not. The financial logic is brutally simple: pitch decks with AI buzzwords close faster and raise larger sums.
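The “thin wrapper” pattern the survey describes is easy to picture. The sketch below is purely illustrative: the product and class names are invented, and the third-party call is stubbed out so the example runs offline, where in practice it would be a request to a hosted vendor model.

```python
# Illustrative sketch of an "AI-first" product that is only a thin wrapper.
# All names here are hypothetical; the third-party call is stubbed so the
# example runs offline, but in the pattern MMC describes it would be a
# billed request to an external vendor's hosted model.

def third_party_model(prompt: str) -> str:
    """Stand-in for an external vendor's hosted model endpoint."""
    return f"[vendor completion for: {prompt}]"

class ProprietaryInsightEngine:
    """What the pitch deck calls 'our proprietary AI'."""

    def __init__(self, brand: str = "InsightAI"):
        self.brand = brand

    def analyse(self, text: str) -> str:
        # No model, no training data, no in-house research:
        # the "engine" rebrands a forwarded request.
        return f"{self.brand} says: {third_party_model(text)}"

engine = ProprietaryInsightEngine()
print(engine.analyse("Q3 churn risk"))
```

There is nothing improper about building on a vendor API; the misrepresentation arises only when the wrapper is sold to investors or customers as in-house technology.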

The pattern repeats across sectors. Amazon's “Just Walk Out” grocery technology, deployed across its Fresh stores, was marketed as a fully autonomous AI-powered checkout system. Customers could enter, pick up items, and leave without scanning anything. In April 2024, The Information reported that approximately 700 out of every 1,000 Just Walk Out transactions in 2022 required human review by a team of roughly 1,000 workers in India, far exceeding Amazon's internal target of 50 reviews per 1,000 transactions. Customers frequently received their receipts hours after leaving the store, the delay caused by reviewers checking camera footage to verify each transaction. Amazon disputed the characterisation, stating that its “Machine Learning data associates” were annotating data to improve the underlying model. Dilip Kumar, Vice President of AWS Applications, wrote that “the erroneous reports that Just Walk Out technology relies on human reviewers watching from afar is untrue.” Nevertheless, the company subsequently removed Just Walk Out from most Fresh stores, replacing it with simpler “Dash Carts,” and laid off US-based staff who had worked on the technology.

Then there is DoNotPay, which marketed itself as “the world's first robot lawyer.” Founded in 2015 to help people contest parking tickets, the company expanded into broader legal services, claiming its AI could substitute for a human lawyer. The Federal Trade Commission investigated and found that DoNotPay's technology merely recognised statistical relationships between words, used chatbot software to interact with users, and connected to ChatGPT through an API. None of it had been trained on a comprehensive database of laws, regulations, or judicial decisions. The company had never even tested whether its “AI lawyer” performed at the level of a human lawyer. In February 2025, the FTC finalised an order requiring DoNotPay to pay $193,000 in refunds and to notify consumers who had subscribed between 2021 and 2023. The order prohibits the company from claiming its service performs like a real lawyer without adequate evidence. FTC Chair Lina M. Khan stated plainly: “Using AI tools to trick, mislead, or defraud people is illegal. The FTC's enforcement actions make clear that there is no AI exemption from the laws on the books.”

When the SEC Came Knocking

The enforcement reckoning arrived in earnest in March 2024, when the SEC announced its first-ever AI washing enforcement actions. The targets were two investment advisory firms: Delphia (USA) Inc. and Global Predictions Inc. Delphia, a Toronto-based firm, had claimed in SEC filings, press releases, and on its website that it used AI and machine learning to guide investment decisions. When the SEC examined Delphia in 2021, the firm admitted it did not actually possess such an algorithm, yet it subsequently made further false claims about its use of algorithms in investment processes. Global Predictions, based in San Francisco, marketed itself as the “first regulated AI financial advisor,” claiming to produce “expert AI driven forecasts.” SEC Chair Gary Gensler was blunt: “We find that Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not.” He drew a direct parallel to greenwashing, cautioning that “when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies.” Delphia paid a $225,000 civil penalty. Global Predictions paid $175,000.

These penalties were modest, almost symbolic. The cases that followed were not.

In January 2025, the SEC charged Presto Automation Inc., a formerly Nasdaq-listed restaurant technology company, marking the first AI washing enforcement action against a public company. Presto had promoted its “Presto Voice” product as a breakthrough AI system capable of automating drive-through order-taking at fast food restaurants. In its SEC filings between 2021 and 2023, including Forms 8-K, 10-K, and S-4, the company referred to Presto Voice as internally developed technology and claimed that the system “eliminates human order taking.” The SEC's investigation found that the speech recognition technology was actually owned and operated by a third party, and that the system relied heavily on human employees in foreign countries to complete orders.

In April 2025, the DOJ and SEC jointly charged Nate's founder with fraud, the most aggressive AI washing prosecution to date. The parallel criminal and civil actions sent an unmistakable signal: AI washing was no longer a regulatory grey area. It was fraud.

By mid-2025, the SEC had established a dedicated Cybersecurity and Emerging Technologies Unit (CETU) specifically to pursue AI-related misconduct. At the Securities Enforcement Forum West in May 2025, senior SEC officials confirmed that “rooting out” AI washing fraud was an immediate enforcement priority. Existing securities laws provided ample authority to prosecute misleading AI claims, and the Commission would not wait for new legislation.

The private litigation followed. Apple became the highest-profile target when shareholders filed a securities fraud class action in June 2025, alleging that the company had misrepresented the capabilities and timeline of “Apple Intelligence,” its ambitious AI initiative unveiled in June 2024. The complaint, filed by plaintiff Eric Tucker, alleged that Apple lacked a functional prototype of Siri's advanced AI features and misrepresented the time needed to deliver them. When Apple announced in March 2025 that it was indefinitely delaying several AI-based Siri features, the stock dropped $11.59 per share, nearly 5 per cent, in a single trading session. Internal sources, including Siri director Robby Walker, later admitted the company had promoted enhancements “before they were ready,” calling the delay “ugly and embarrassing.” By April 2025, Apple's stock had lost nearly a quarter of its value, approximately $900 billion in market capitalisation. The case, Tucker v. Apple Inc., No. 5:25-cv-05197, remains pending in the US District Court for the Northern District of California.

The Anatomy of an AI Washing Claim

Understanding how AI washing works requires understanding what companies are actually doing when they claim to use “artificial intelligence.” The term itself is part of the problem. There is no universally accepted definition of AI, and the phrase has become so elastic that it can encompass everything from genuinely sophisticated deep learning systems to simple rule-based automation that has existed for decades. As a legal analysis published by CMS Law-Now in July 2025 noted, “AI-washing can constitute misleading advertising” and represents an unfair competitive practice, yet companies continue to exploit the vagueness of the terminology.

The most common forms of AI washing fall into several recognisable categories. First, there is relabelling: companies take existing software, algorithms, or automated processes and rebrand them as “AI-powered” without any meaningful change in functionality. A recommendation engine that uses basic collaborative filtering becomes “our proprietary AI.” A chatbot built on decision trees becomes “our intelligent assistant.” Second, there is API pass-through: companies integrate a third-party AI service, typically from OpenAI, Google, or Anthropic, wrap it in a custom interface, and present it as their own technology. Third, there is capability inflation: companies describe aspirational features as current capabilities, presenting what they hope to build as what already exists. Fourth, and most egregiously, there is the human-behind-the-curtain model, where supposed AI systems rely primarily on manual human labour, as in the cases of Nate and, arguably, Amazon's Just Walk Out technology.
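To see how little machinery can sit behind a relabelled product, consider the first category in miniature: a recommendation “engine” built on plain user-based collaborative filtering with cosine similarity, a statistical technique that predates the current AI wave by decades. The data and names below are invented for illustration.

```python
import math

# A minimal user-based collaborative filter: the kind of decades-old
# statistical technique that relabelling dresses up as "proprietary AI".
# The ratings data and item names are invented for illustration.

ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 4, "film_b": 3, "film_d": 5},
    "carol": {"film_b": 2, "film_c": 5, "film_d": 1},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user: str) -> list[str]:
    """Suggest items the most similar other user rated but this user hasn't."""
    others = {name: r for name, r in ratings.items() if name != user}
    nearest = max(others, key=lambda name: cosine(ratings[user], others[name]))
    seen = set(ratings[user])
    return sorted(i for i in ratings[nearest] if i not in seen)

print(recommend("alice"))  # → ['film_d']: borrowed from alice's nearest neighbour
```

Roughly twenty lines of arithmetic, no training, no neural network. Whether such a system may honestly be marketed as “AI” is exactly the definitional gap that regulators are now probing.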

The phenomenon is not confined to startups. As University of Pennsylvania professor Benjamin Shestakofsky has observed, there exists a grey area in artificial intelligence “filled with millions of humans who work in secret,” often hired to train algorithms but who end up performing much of the work instead. This usually involves “human labour that is outsourced to other countries, because those are places where they can get access to labour in places with lower prevailing wages.” The practice of disguising human labour as artificial intelligence has a long history in the technology industry, but the current wave of AI hype has turbocharged it.

The California Management Review published an analysis in December 2024 examining the cultural traps that lead to AI exaggeration within organisations. The study found that one of the most pervasive issues was “the lack of technical literacy among senior leadership. While many are accomplished business leaders, they often lack a nuanced understanding of AI's capabilities and limitations, creating a significant knowledge gap at the top.” This gap allows marketing teams to make claims that engineering teams know are unsupported, while executives lack the technical fluency to challenge them.

Building a Consumer Defence

So how should an ordinary person navigate this landscape? The answer begins with developing what researchers call “AI literacy,” a term that has rapidly moved from academic obscurity to mainstream urgency. Long and Magerko's widely cited academic definition describes AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.” The Organisation for Economic Co-operation and Development published its AI Literacy Framework in May 2025, designed for primary and secondary education but with principles applicable to anyone. The framework emphasises that AI literacy is not about learning to code or understanding neural network architectures. It is about developing the critical thinking skills to evaluate AI claims, understand limitations, and make informed decisions. The World Economic Forum now classifies AI literacy as a civic skill essential for participating in democratic processes; without it, people remain vulnerable to misinformation, biased systems, and decisions made by opaque algorithms.

The OECD framework identifies a core principle: “Practicing critical thinking in an AI context involves verifying whether the information provided by an AI system is accurate, relevant, and fair, because AI systems can generate convincing but incorrect outputs.” This applies equally to evaluating AI products themselves. Consumers need to ask not just what an AI system can do, but what it should do, and for whom. The framework also asks users to consider the environmental costs of AI systems, which require significant amounts of energy, materials, and water while contributing to global carbon emissions.

Several practical frameworks have emerged to help consumers and professionals evaluate AI claims. The ROBOT checklist, developed by Ulster University's library guides for evaluating AI tools, begins with the most fundamental question: reliability. How transparent is the company about its technology? What information does it share about when the tool was created, when it was last updated, what data trained it, and how user data is handled?

Ohio University's research, published in November 2025, identifies four integrative domains of AI literacy: effective practices (understanding what different AI platforms can and cannot do), ethical considerations (recognising biases, privacy risks, and power consumption), rhetorical awareness (understanding how AI marketing shapes perception), and subject matter knowledge (having enough domain expertise to evaluate AI outputs critically). These domains are not discrete skills that can be taught independently but rather co-exist and co-inform one another.

Drawing on these frameworks and the enforcement record, consumers can develop a practical toolkit for spotting AI washing. The first question to ask is specificity: does the company explain precisely what its AI does, or does it rely on vague buzzwords? Genuine AI companies tend to be specific about their models, training data, and capabilities. Companies engaged in AI washing tend to use phrases like “powered by AI” or “AI-driven insights” without explaining the underlying technology. The second question is transparency: does the company publish technical documentation, model cards, or performance benchmarks? Reputable AI firms increasingly publish this information voluntarily. The third question concerns provenance: did the company develop its own AI, or is it using a third-party service? There is nothing inherently wrong with building on existing AI platforms, but consumers deserve to know what they are actually paying for. The fourth question is about limitations: does the company acknowledge what its AI cannot do? Every legitimate AI system has significant limitations, and any company that presents its AI as infallible or universally capable is almost certainly overstating its case.

Perhaps the most important principle is the simplest: if a company's AI claims sound too good to be true, they probably are. The technology is advancing rapidly, but it is not magic, and the gap between what AI can actually deliver today and what marketing departments promise remains enormous.

The Regulatory Patchwork

The regulatory response to AI washing is gaining momentum, but it remains fragmented across jurisdictions and agencies, each with different powers, priorities, and approaches.

In the United States, enforcement has proceeded primarily through existing legal frameworks rather than new AI-specific legislation. The SEC has used securities fraud statutes. The FTC has relied on its longstanding authority to police unfair and deceptive trade practices. In September 2024, the FTC launched “Operation AI Comply,” a coordinated enforcement sweep targeting five companies for deceptive AI claims. The agency also brought an action against Ascend, a suite of businesses operated by William Basta and Kenneth Leung that allegedly defrauded consumers of more than $25 million by falsely claiming its AI tools could generate passive income. A proposed settlement in June 2025 imposed a partially suspended $25 million monetary judgement. In August 2025, the FTC filed a complaint against Air AI over its marketing of a conversational AI tool, with some customers allegedly left facing losses of up to $250,000.

The Department of Justice has maintained enforcement continuity across administrations. Despite broader deregulatory shifts under the Trump administration, the DOJ has not rescinded AI enforcement initiatives begun under the Biden administration. It brought a new criminal AI washing case in April 2025, the prosecution of Nate's founder, suggesting bipartisan consensus that fraudulent AI claims merit criminal prosecution.

At the state level, over 1,000 AI-related bills have been introduced in state legislatures since January 2025. Colorado's AI Act, enacted in May 2024, requires developers and deployers of high-risk AI systems to exercise “reasonable care” to avoid algorithmic discrimination. California's proposed SB 1047, though vetoed by Governor Gavin Newsom in September 2024, sparked intense debate about strict liability for AI harms.

The European Union has taken the most comprehensive legislative approach with the EU AI Act (Regulation (EU) 2024/1689), published in the Official Journal of the European Union, which began phased implementation in 2025. The Act takes a risk-based approach spanning 180 recitals and 113 articles. Prohibitions on AI systems posing unacceptable risks took effect on 2 February 2025. Transparency obligations for general-purpose AI systems follow on a twelve-month timeline. The penalties for non-compliance are severe: up to 35 million euros or 7 per cent of worldwide annual turnover, whichever is higher. While the Act was not explicitly designed to combat AI washing, its strict definitions of what constitutes an AI system and its transparency requirements create an environment where false or exaggerated claims carry substantial legal risk. A pending case before the Court of Justice of the European Union is already testing the boundaries of the Act's AI definition. As legal analysts have noted, the regulatory clarity is exerting a “Brussels effect,” shaping expectations for AI governance from Brazil to Canada.

In the United Kingdom, the regulatory approach has been characteristically more principles-based. The Financial Conduct Authority confirmed in September 2025 that it will not introduce AI-specific regulations, citing the technology's rapid evolution “every three to six months.” Instead, FCA Chief Executive Nikhil Rathi announced that the regulator will rely on existing frameworks, specifically the Consumer Duty and the Senior Managers and Certification Regime, to address AI-related harms. The FCA launched an AI Lab in September 2025 enabling firms to develop and deploy AI systems under regulatory supervision, and its Mills Review is expected to report recommendations on AI in retail financial services in summer 2026.

The more significant development for AI washing in the UK may be the Digital Markets, Competition and Consumers Act 2024, which received Royal Assent on 24 May 2024. The Act grants the Competition and Markets Authority sweeping new direct enforcement powers. For the first time, the CMA can investigate and determine breaches of consumer protection law without court proceedings, and impose fines of up to 10 per cent of global annual turnover. While the Act does not contain AI-specific provisions, its broad prohibition on misleading actions and omissions clearly covers exaggerated AI claims. CMA Chief Executive Sarah Cardell has described the legislation as a “watershed moment” in consumer protection. The CMA stated it would focus initial enforcement on “more egregious breaches,” including information given to consumers that is “objectively false.”

The Investment Dimension

AI washing is not merely a consumer protection issue. It is increasingly a systemic risk to financial markets. Goldman Sachs has acknowledged that AI bubble concerns are “back, and arguably more intense than ever, amid a significant rise in the valuations of many AI-exposed companies, continued massive investments in the AI buildout, and the increasing circularity of the AI ecosystem.” The firm's analysis noted that “past innovation-driven booms, like the 1920s and in the 1990s, have led the market to overpay for future profits even though the underlying innovations were real.”

The numbers are staggering. Hyperscaler capital expenditure on AI infrastructure is projected to reach $1.15 trillion from 2025 through 2027, more than double the $477 billion spent from 2022 through 2024. What began as a $250 billion estimate for AI-related capital expenditure in 2025 has swollen to above $405 billion. Goldman Sachs CEO David Solomon has said he expects “a lot of capital that was deployed that doesn't deliver returns.” Amazon founder Jeff Bezos has called the current environment “kind of an industrial bubble.” Even OpenAI CEO Sam Altman has warned that “people will overinvest and lose money.”

When the capital flowing into an industry reaches these proportions, the incentive to overstate AI capabilities becomes almost irresistible. Companies that cannot demonstrate genuine AI differentiation risk losing funding to competitors who can, or who at least claim they can. This creates a vicious cycle: exaggerated claims raise valuations, which attract more capital, which creates more pressure to exaggerate, which distorts the market signals that investors rely on to allocate resources efficiently.

JP Morgan Asset Management's Michael Cembalest has observed that “AI-related stocks have accounted for 75 per cent of S&P 500 returns, 80 per cent of earnings growth and 90 per cent of capital spending growth since ChatGPT launched in November 2022.” When that much market value depends on a technology whose real-world returns remain uncertain, the consequences of widespread AI washing extend far beyond individual consumer harm. They become a matter of market integrity.

What Genuinely Intelligent Regulation Looks Like

The current regulatory patchwork has achieved some notable successes, particularly the SEC's enforcement actions and the FTC's Operation AI Comply. But addressing AI washing at scale requires more than case-by-case prosecution. It requires structural reforms that create incentives for honesty and penalties for deception.

Several principles should guide this effort. First, mandatory technical disclosure. Companies that market products as “AI-powered” should be required to disclose, in plain language, what specific AI technology they use, whether it was developed in-house or licensed from a third party, what data trained it, and what its documented performance metrics are. This is not an unreasonable burden. The pharmaceutical industry must disclose the composition and clinical trial results of every drug it sells. The financial services industry must disclose the risks associated with every investment product. AI companies should face equivalent obligations.

Second, standardised definitions. The absence of a universally accepted definition of “artificial intelligence” has allowed companies to stretch the term beyond recognition. Regulators should work with technical standards bodies to establish clear thresholds for when a product can legitimately be described as “AI-powered,” much as the term “organic” is regulated in food labelling.

Third, third-party auditing. Just as financial statements require independent audits, AI claims should be subject to independent technical verification. The EU AI Act's requirements for conformity assessments of high-risk AI systems point in this direction, but the principle should extend to marketing claims about AI capabilities more broadly.

Fourth, proportionate penalties. The $225,000 fine imposed on Delphia and the $175,000 fine on Global Predictions were gestures, not deterrents. When companies can raise tens of millions through fraudulent AI claims, penalties must be calibrated to remove the financial incentive for deception. The EU AI Act's penalties of up to 7 per cent of global turnover and the UK CMA's new power to fine up to 10 per cent of global turnover represent the right order of magnitude.

Fifth, consumer education at scale. Regulatory enforcement alone cannot protect consumers from AI washing. Governments should invest in public AI literacy programmes, drawing on the frameworks developed by the OECD, UNESCO, and academic institutions. Microsoft's 2025 AI in Education Report found that 66 per cent of organisational leaders said they would not hire someone without AI literacy skills, indicating that the market itself is beginning to demand this competency. Public investment in AI literacy should be treated with the same urgency as digital literacy campaigns were in the early 2000s.

The Honest Middle Ground

None of this is to suggest that artificial intelligence is merely hype. The technology is real, its capabilities are advancing rapidly, and its potential applications are genuinely transformative. The problem is not AI itself but the gap between what AI can actually do and what companies claim it can do. That gap is where AI washing thrives, and closing it requires honesty from companies, scepticism from consumers, and vigilance from regulators.

The enforcement actions of 2024 and 2025 represent a turning point. For the first time, companies face meaningful legal consequences for overstating their AI capabilities. The SEC, FTC, DOJ, EU regulators, and the UK's CMA are all converging on the same message: existing laws already prohibit fraudulent and misleading claims, and the “AI” label does not provide immunity.

But enforcement is reactive by nature. It catches the worst offenders after the damage is done. Building a world where consumers can trust AI claims requires something more fundamental: a culture of transparency, a standard of proof, and a population literate enough to ask the right questions. The technology itself is neither the hero nor the villain of this story. It is simply a tool, and like all tools, its value depends entirely on the honesty of those who wield it.


References and Sources

  1. US Department of Justice, Southern District of New York. (2025). “Indictment: United States of America v. Albert Saniger.” April 2025. https://www.justice.gov/usao-sdny/media/1396131/dl

  2. Securities and Exchange Commission. (2024). “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence.” Press Release 2024-36, March 2024. https://www.sec.gov/newsroom/press-releases/2024-36

  3. MMC Ventures and Barclays. (2019). “The State of AI 2019: Divergence.” March 2019. Reported by CNBC: https://www.cnbc.com/2019/03/06/40-percent-of-ai-start-ups-in-europe-not-related-to-ai-mmc-report.html

  4. MIT Technology Review. (2019). “About 40% of Europe's AI companies don't use any AI at all.” March 2019. https://www.technologyreview.com/2019/03/05/65990/about-40-of-europes-ai-companies-dont-actually-use-any-ai-at-all/

  5. The Information. (2024). Report on Amazon Just Walk Out technology human review rates. April 2024. Reported by Washington Times: https://www.washingtontimes.com/news/2024/apr/4/amazons-just-walk-out-stores-relied-on-1000-people/

  6. Federal Trade Commission. (2025). “FTC Finalizes Order with DoNotPay That Prohibits Deceptive 'AI Lawyer' Claims.” February 2025. https://www.ftc.gov/news-events/news/press-releases/2025/02/ftc-finalizes-order-donotpay-prohibits-deceptive-ai-lawyer-claims-imposes-monetary-relief-requires

  7. Securities and Exchange Commission. (2025). Presto Automation Inc. enforcement action. January 2025. Reported by White & Case: https://www.whitecase.com/insight-alert/new-settlements-demonstrate-secs-ongoing-efforts-hold-companies-accountable-ai

  8. DLA Piper. (2025). “SEC emphasizes focus on 'AI washing' despite perceived enforcement slowdown.” https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/sec-emphasizes-focus-on-ai-washing

  9. DLA Piper. (2025). “DOJ and SEC send warning on 'AI washing' with charges against technology startup founder.” April 2025. https://www.dlapiper.com/en/insights/publications/2025/04/doj-and-sec-send-warning-against-ai-washing-with-charges-against-technology-startup-founder

  10. Tucker v. Apple Inc., et al., No. 5:25-cv-05197. Filed June 2025. Reported by Bloomberg Law: https://news.bloomberglaw.com/litigation/apple-ai-washing-cases-signal-new-line-of-deception-litigation

  11. Federal Trade Commission. (2024). “FTC Announces Crackdown on Deceptive AI Claims and Schemes.” September 2024. https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes

  12. European Parliament. (2024). “EU AI Act: first regulation on artificial intelligence.” https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

  13. Financial Conduct Authority. (2025). “AI and the FCA: our approach.” September 2025. https://www.fca.org.uk/firms/innovation/ai-approach

  14. Digital Markets, Competition and Consumers Act 2024. UK Parliament. https://bills.parliament.uk/bills/3453

  15. CMS Law-Now. (2025). “Avoiding AI-washing: Legally compliant advertising with artificial intelligence.” July 2025. https://cms-lawnow.com/en/ealerts/2025/07/avoiding-ai-washing-legally-compliant-advertising-with-artificial-intelligence

  16. California Management Review. (2024). “AI Washing: The Cultural Traps That Lead to Exaggeration and How CEOs Can Stop Them.” December 2024. https://cmr.berkeley.edu/2024/12/ai-washing-the-cultural-traps-that-lead-to-exaggeration-and-how-ceos-can-stop-them/

  17. Goldman Sachs. (2025). “Top of Mind: AI: in a bubble?” https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble

  18. OECD. (2025). “Empowering Learners for the Age of AI: An AI Literacy Framework.” Review Draft, May 2025. https://ailiteracyframework.org/wp-content/uploads/2025/05/AILitFramework_ReviewDraft.pdf

  19. TechCrunch. (2025). “Fintech founder charged with fraud after 'AI' shopping app found to be powered by humans in the Philippines.” April 2025. https://techcrunch.com/2025/04/10/fintech-founder-charged-with-fraud-after-ai-shopping-app-found-to-be-powered-by-humans-in-the-philippines/

  20. Fortune. (2025). “A tech CEO has been charged with fraud for saying his e-commerce startup was powered by AI.” April 2025. https://fortune.com/2025/04/11/albert-saniger-nate-shopping-app-fraud-ai-justice-department/

  21. DWF Group. (2025). “AI washing: Understanding the risks.” April 2025. https://dwfgroup.com/en/news-and-insights/insights/2025/4/ai-washing-understanding-the-risks

  22. Clyde & Co. (2025). “The fine print of AI hype: The legal risks of AI washing.” May 2025. https://www.clydeco.com/en/insights/2025/05/the-fine-print-of-ai-hype-the-legal-risks-of-ai-wa

  23. Darrow. (2025). “AI Washing Sparks Investor Suits and SEC Scrutiny.” https://www.darrow.ai/resources/ai-washing

  24. Crunchbase. (2025). AI sector funding data for 2025.

  25. Ulster University Library Guides. (2025). “AI Literacy: ROBOT Checklist.” https://guides.library.ulster.ac.uk/c.php?g=728295&p=5303990

  26. Ohio University. (2025). “A framework for considering AI literacy.” November 2025. https://www.ohio.edu/news/2025/11/framework-considering-ai-literacy

  27. Long, D. and Magerko, B. (2020). “What is AI Literacy? Competencies and Design Considerations.” CHI Conference on Human Factors in Computing Systems.

  28. Financial Conduct Authority. (2025). “Mills Review to consider how AI will reshape retail financial services.” https://www.fca.org.uk/news/press-releases/mills-review-consider-how-ai-will-reshape-retail-financial-services

  29. Womble Bond Dickinson. (2024). “Digital Markets, Competition and Consumers Act 2024 explained.” https://www.womblebonddickinson.com/uk/insights/articles-and-briefings/digital-markets-competition-and-consumers-act-2024-explained-cmas


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
