The One-Word Catastrophe: How a Single Word Can Bring AI Down

In August 2025, researchers at MIT's Laboratory for Information and Decision Systems published findings that should terrify anyone who trusts artificial intelligence to make important decisions. Kalyan Veeramachaneni and his team discovered something devastatingly simple: most of the time, it takes just a single word to fool the AI text classifiers that financial institutions, healthcare systems, and content moderation platforms rely on to distinguish truth from fiction, safety from danger, legitimacy from fraud.

“Most of the time, this was just a one-word change,” Veeramachaneni, a principal research scientist at MIT, explained in the research published in the journal Expert Systems. Even more alarming, the team found that one-tenth of 1% of the 30,000 words in their test vocabulary could account for almost half of all successful attacks that reversed a classifier's judgement. Think about that for a moment. In a vast ocean of language, roughly 30 carefully chosen words possessed the power to systematically deceive systems we've entrusted with billions of pounds in transactions, life-or-death medical decisions, and the integrity of public discourse itself.

This isn't a theoretical vulnerability buried in academic journals. It's a present reality with consequences that have already destroyed lives, toppled governments, and cost institutions billions. The Dutch government's childcare benefits algorithm wrongfully accused more than 35,000 families of fraud, forcing them to repay tens of thousands of euros, separating 2,000 children from their parents, and ultimately causing some victims to die by suicide. The scandal grew so catastrophic that it brought down the entire Dutch government in 2021. IBM's Watson for Oncology, trained on synthetic patient data rather than real cases, recommended a treatment carrying an explicit warning against use in patients with severe bleeding to a 65-year-old lung cancer patient suffering from exactly that condition. Zillow's AI-powered home valuation system overestimated property values so dramatically that the company purchased homes at inflated prices, incurred hundreds of millions in losses, laid off 25% of its workforce, and shuttered its entire Zillow Offers division.

These aren't glitches or anomalies. They're symptoms of a fundamental fragility at the heart of machine learning systems, a vulnerability so severe that it calls into question whether we should be deploying these technologies in critical decision-making contexts at all. And now, MIT has released the very tools that expose these weaknesses as open-source software, freely available for anyone to download and deploy.

The question isn't whether these systems can be broken. They demonstrably can. The question is what happens next.

The Architecture of Deception

To understand why AI text classifiers are so vulnerable, you need to understand how they actually work. Unlike humans who comprehend meaning through context, culture, and lived experience, these systems rely on mathematical patterns in high-dimensional vector spaces. They convert words into numerical representations called embeddings, then use statistical models to predict classifications based on patterns they've observed in training data.
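
To make that pipeline concrete, here is a minimal sketch in Python: a toy classifier that turns text into numeric features and fits a statistical model on them. It uses TF-IDF features as a stand-in for learned embeddings and invented training examples; it is illustrative only, not the MIT system or any production classifier.

```python
# Minimal, illustrative sketch: map raw text to numeric features and fit a
# statistical model on them. TF-IDF features stand in for learned embeddings;
# the data is invented. Not the MIT system or a production classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: 1 = flagged as fraudulent, 0 = legitimate
texts = [
    "urgent wire transfer required to unlock your account",
    "your monthly statement is now available online",
    "claim your prize by sending a processing fee today",
    "thank you for your recent payment, no action needed",
]
labels = [1, 0, 1, 0]

# Vectorise the text, then fit a linear model on those numeric patterns
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The model predicts from statistical correlations, not understanding;
# a careful wording change can push an input across the decision boundary.
print(classifier.predict(["kindly arrange the wire transfer we discussed"]))
```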

This approach works remarkably well, until it doesn't. The problem lies in what researchers call the “adversarial example,” a carefully crafted input designed to exploit the mathematical quirks in how neural networks process information. In computer vision, adversarial examples might add imperceptible noise to an image of a panda, causing a classifier to identify it as a gibbon with 99% confidence. In natural language processing, the attacks are even more insidious because text is discrete rather than continuous. You can't simply add a tiny amount of noise; you must replace entire words or characters whilst maintaining semantic meaning to a human reader.
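
For the continuous image case, the canonical construction is the fast gradient sign method of Goodfellow and colleagues, which nudges every pixel a tiny amount in the direction that most increases the model's loss. It is shown here only for context; discrete text denies attackers exactly this gradient-following shortcut, which is why word-level substitutions are needed instead.

```latex
% Fast gradient sign method: perturb the input x in the direction that
% most increases the loss L, scaled by a small budget epsilon.
x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\!\left(\nabla_{x} L(\theta, x, y)\right)
```

Here x is the input image, y its true label, L the model's loss, theta the model's parameters, and epsilon a perturbation budget small enough to be imperceptible to a human viewer.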

The MIT team's approach, detailed in their SP-Attack and SP-Defense tools, leverages large language models to generate adversarial sentences that fool classifiers whilst preserving meaning. Here's how it works: the system takes an original sentence, uses an LLM to paraphrase it, then checks whether the classifier assigns a different label to the semantically identical text. If the LLM confirms that two sentences convey the same meaning but the classifier labels them differently, that discrepancy is an adversarial example, and it reveals a fundamental vulnerability.
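
A minimal sketch of that paraphrase-and-compare loop appears below. The three callables are placeholders for an LLM paraphraser, a semantic-equivalence judge, and the black-box classifier under test; this illustrates the idea as described, not the released SP-Attack code.

```python
# Sketch of the paraphrase-and-compare loop described above. The callables
# are hypothetical placeholders for an LLM paraphraser, a semantic-equivalence
# judge, and the black-box classifier under test; not the SP-Attack code.
from typing import Callable, List

def find_adversarial_paraphrases(
    sentence: str,
    paraphrase_with_llm: Callable[[str], List[str]],
    means_the_same: Callable[[str, str], bool],
    classify: Callable[[str], str],
) -> List[str]:
    """Return paraphrases that keep the meaning but flip the classifier's label."""
    original_label = classify(sentence)
    adversarial = []
    for candidate in paraphrase_with_llm(sentence):
        # Only count candidates the judge agrees are semantically identical
        if not means_the_same(sentence, candidate):
            continue
        # Same meaning, different label: an adversarial example
        if classify(candidate) != original_label:
            adversarial.append(candidate)
    return adversarial
```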

What makes this particularly devastating is its simplicity. Earlier adversarial attack methods required complex optimisation algorithms and white-box access to model internals. MIT's approach works as a black-box attack, requiring no knowledge of the target model's architecture or parameters. An attacker needs only to query the system and observe its responses, the same capability any legitimate user possesses.

The team tested their methods across multiple datasets and found that competing defence approaches allowed adversarial attacks to succeed 66% of the time. Their SP-Defense system, which generates adversarial examples and uses them to retrain models, cut that success rate nearly in half to 33.7%. That's significant progress, but it still means that one-third of attacks succeed even against the most advanced defences available. In contexts where millions of transactions or medical decisions occur daily, a 33.7% vulnerability rate translates to hundreds of thousands of potential failures.
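
The defensive half of the loop can be sketched the same way: find meaning-preserving paraphrases that flip the model's label, add them back to the training set with their original labels, and refit. The generate_adversarial helper and the scikit-learn-style fit interface below are assumptions for illustration, not the published SP-Defense implementation.

```python
# Sketch of defensive retraining in the spirit of SP-Defense as described:
# generate label-flipping paraphrases, add them back with the correct label,
# and refit. Helper names and the fit() interface are illustrative assumptions.
def adversarially_retrain(model, train_texts, train_labels,
                          generate_adversarial, max_rounds=3):
    texts, labels = list(train_texts), list(train_labels)
    for _ in range(max_rounds):
        new_texts, new_labels = [], []
        for text, label in zip(texts, labels):
            for adversarial in generate_adversarial(text, model):
                # The paraphrase kept its meaning, so it keeps its original label
                new_texts.append(adversarial)
                new_labels.append(label)
        if not new_texts:
            break  # no successful attacks found this round
        texts += new_texts
        labels += new_labels
        model.fit(texts, labels)  # retrain on the augmented data
    return model
```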

When Classifiers Guard the Gates

The real horror isn't the technical vulnerability itself. It's where we've chosen to deploy these fragile systems.

In financial services, AI classifiers make split-second decisions about fraud detection, creditworthiness, and transaction legitimacy. Banks and fintech companies have embraced machine learning because it can process volumes of data that would overwhelm human analysts, identifying suspicious patterns in microseconds. A 2024 survey by BioCatch found that 74% of financial institutions already use AI for financial crime detection and 73% for fraud detection, with all respondents expecting both financial crime and fraud activity to increase. Deloitte's Centre for Financial Services estimates that banks will suffer £32 billion in losses from generative AI-enabled fraud by 2027, up from £9.8 billion in 2023.

But adversarial attacks on these systems aren't theoretical exercises. Fraudsters actively manipulate transaction data to evade detection, a cat-and-mouse game that requires continuous model updates. The dynamic nature of fraud, combined with the evolving tactics of cybercriminals, creates what researchers describe as “a constant arms race between AI developers and attackers.” When adversarial attacks succeed, they don't just cause financial losses. They undermine trust in the entire financial system, erode consumer confidence, and create regulatory nightmares as institutions struggle to explain how their supposedly sophisticated AI systems failed to detect obvious fraud.

Healthcare applications present even graver risks. The IBM Watson for Oncology debacle illustrates what happens when AI systems make life-or-death recommendations based on flawed training. Internal IBM documents revealed that the system made “unsafe and incorrect” cancer treatment recommendations during its promotional period. The software was trained on synthetic cancer cases, hypothetical patients rather than real medical data, and based its recommendations on the expertise of a handful of specialists rather than evidence-based guidelines or peer-reviewed research. Around 50 partnerships were announced between IBM Watson and healthcare organisations, yet none produced usable tools or applications as of 2019. The company poured billions into Watson Health before ultimately discontinuing the solution, a failure that represents not just wasted investment but potentially compromised patient care at the 230 hospitals worldwide that deployed the system.

Babylon Health's AI symptom checker, which triaged patients and diagnosed illnesses via chatbot, gave unsafe recommendations and sometimes missed serious conditions. The company went from a £1.6 billion valuation serving millions of NHS patients to insolvency by mid-2023, with its UK assets sold for just £496,000. These aren't edge cases. They're harbingers of a future where we've delegated medical decision-making to systems that lack the contextual understanding, clinical judgement, and ethical reasoning that human clinicians develop through years of training and practice.

In public discourse, the stakes are equally high, albeit along different dimensions. Content moderation AI deployed by social media platforms struggles with context, satire, and cultural nuance. During the COVID-19 pandemic, YouTube's increased reliance on automated moderation produced a surge in false positives, with educational and news-related content about the virus removed after being misclassified as misinformation. The system couldn't distinguish medical disinformation from legitimate public health information, a failure that hampered the spread of accurate information during a global health crisis.

Platforms like Facebook and Twitter struggle even more with moderating content in languages such as Burmese, Amharic, Sinhala, and Tamil, allowing misinformation and hate speech to go unchecked. In Sudan, AI-generated content has filled the communicative voids left by collapsing media infrastructure and disrupted public discourse. The proliferation of AI-generated misinformation distorts user perceptions and undermines their ability to make informed decisions, particularly in the absence of comprehensive governance frameworks.

xAI's Grok chatbot generated antisemitic posts praising Hitler in July 2025, drawing sustained media coverage before the company scrubbed the offending content. These failures aren't just embarrassing; they contribute to polarisation, enable harassment, and degrade the information ecosystem that democracies depend upon.

The Transparency Dilemma

Here's where things get truly complicated. MIT didn't just discover these vulnerabilities; they published the methodology and released the tools as open-source software. The SP-Attack and SP-Defense packages are freely available for download, complete with documentation and examples. Any researcher, security professional, or bad actor can now access sophisticated adversarial attack capabilities that previously required deep expertise in machine learning and natural language processing.

This decision embodies one of the most contentious debates in computer security: should vulnerabilities be disclosed publicly, or should they be reported privately to affected parties? The tension between transparency and security has divided researchers, practitioners, and policymakers for decades.

Proponents of open disclosure argue that transparency fosters trust, accountability, and collective progress. When algorithms and data are open to examination, it becomes easier to identify biases, unfair practices, and unethical behaviour embedded in AI systems. OpenAI believes coordinated vulnerability disclosure will become a necessary practice as AI systems become increasingly capable of finding and patching security vulnerabilities. Their systems have already uncovered zero-day vulnerabilities in third-party and open-source software, demonstrating that AI can play a role in both attack and defence. Open-source AI ecosystems thrive on the principle that many eyes make bugs shallow; the community can identify vulnerabilities and suggest improvements through public bug bounty programmes or forums for ethical discussions.

But open-source machine learning models' transparency and accessibility also make them vulnerable to attacks. Key threats include model inversion, membership inference, data leakage, and backdoor attacks, which could expose sensitive data or compromise system integrity. Open-source AI ecosystems are more susceptible to cybersecurity risks like data poisoning and adversarial attacks because their lack of controlled access and centralised oversight can hinder vulnerability identification.

Critics of full disclosure worry that publishing attack methodologies provides a blueprint for malicious actors. Responsible disclosure has traditionally meant that security researchers alert the affected company or vendor, with the expectation that it will investigate, develop security updates, and release patches before an agreed deadline. Full disclosure, where vulnerabilities are immediately made public upon discovery, can place organisations at a disadvantage in the race against time to fix publicised flaws.

For AI systems, this debate takes on additional complexity. A 2025 study found that only 64% of 264 AI vendors provide a disclosure channel, and just 18% explicitly acknowledge AI-specific vulnerabilities, revealing significant gaps in the AI security ecosystem. The lack of coordinated discovery and disclosure processes, combined with the closed-source nature of many AI systems, means users remain unaware of problems until they surface. Reactive reporting by harmed parties makes accountability an exception rather than the norm for machine learning systems.

Security researchers advocate for adapting the Coordinated Vulnerability Disclosure process into a dedicated Coordinated Flaw Disclosure framework tailored to machine learning's distinctive properties. This would formalise the recognition of valid issues in ML models through an adjudication process and provide legal protections for independent ML issue researchers, akin to protections for good-faith security research.

Anthropic fully supports researchers' right to publicly disclose vulnerabilities they discover, asking only to coordinate on the timing of such disclosures to prevent potential harm to services, customers, and other parties. It's a delicate balance: transparency enables progress and accountability, but it also arms potential attackers with knowledge they might not otherwise possess.

The MIT release of SP-Attack and SP-Defense embodies this tension. By making these tools available, the researchers have enabled defenders to test and harden their systems. But they've also ensured that every fraudster, disinformation operative, and malicious actor now has access to state-of-the-art adversarial attack capabilities. The optimistic view holds that this will spur a race toward greater security as organisations scramble to patch vulnerabilities and develop more robust systems. The pessimistic view suggests it simply provides a blueprint for more sophisticated attacks, lowering the barrier to entry for adversarial manipulation.

Which interpretation proves correct may depend less on the technology itself and more on the institutional responses it provokes.

The Liability Labyrinth

When an AI classifier fails and causes harm, who bears responsibility? This seemingly straightforward question opens a Pandora's box of legal, ethical, and practical challenges.

Existing frameworks struggle to address it.

Traditional tort law relies on concepts like negligence, strict liability, and products liability, doctrines developed for a world of tangible products and human decisions. AI systems upend these frameworks because responsibility is distributed across multiple stakeholders: developers who created the model, data providers who supplied training data, users who deployed the system, and entities that maintain and update it. This distribution of responsibility dilutes accountability, making it difficult for injured parties to seek redress.

The negligence-based approach focuses on assigning fault to human conduct. In the AI context, a liability regime based on negligence examines whether creators of AI-based systems have been careful enough in the design, testing, deployment, and maintenance of those systems. But what constitutes “careful enough” for a machine learning model? Should developers be held liable if their model performs well in testing but fails catastrophically when confronted with adversarial examples? How much robustness testing is sufficient? Current legal frameworks provide little guidance.

Strict liability and products liability offer alternative approaches that don't require proving fault. The European Union has taken the lead here with significant developments in 2024. The revised Product Liability Directive now includes software and AI within its scope, irrespective of the mode of supply or usage, whether embedded in hardware or distributed independently. This strict liability regime means that victims of AI-related damage don't need to prove negligence; they need only demonstrate that the product was defective and caused harm.

The proposed AI Liability Directive addresses non-contractual, fault-based claims for damage caused by an AI system's output, or by its failure to produce one, which would include failures in text classifiers and other AI systems. Under this framework, a provider or user can be ordered to disclose evidence relating to a specific high-risk AI system suspected of causing damage. Perhaps most significantly, a presumption of causation exists between the defendant's fault and the AI system's output or failure to produce an output where the claimant has demonstrated that the output or failure gave rise to damage.

These provisions attempt to address the “black box” problem inherent in many AI systems. The complexity, autonomous behaviour, and lack of predictability in machine learning models make traditional concepts like breach, defect, and causation difficult to apply. By creating presumptions and shifting burdens of proof, the EU framework aims to level the playing field between injured parties and the organisations deploying AI systems.

However, doubt has recently been cast on whether the AI Liability Directive is even necessary, with the EU Parliament's legal affairs committee commissioning a study on whether a legal gap exists that the AILD would fill. The legislative process remains incomplete, and the directive's future is uncertain.

Across the Atlantic, the picture blurs still further.

In the United States, the National Telecommunications and Information Administration has examined liability rules and standards for AI systems, but comprehensive federal legislation remains elusive. Some scholars propose a proportional liability model where responsibility is distributed among AI developers, deployers, and users based on their level of control over the system. This approach acknowledges that no single party exercises complete control whilst ensuring that victims have pathways to compensation.

Proposed mitigation measures include AI auditing mechanisms, explainability requirements, and insurance schemes to ensure liability protection whilst maintaining business viability. The challenge is crafting requirements that are stringent enough to protect the public without stifling innovation or imposing impossible burdens on developers.

The Watson for Oncology case illustrates these challenges. Who should be liable when the system recommends an unsafe treatment? IBM, which developed the software? The hospitals that deployed it? The oncologists who relied on its recommendations? The training data providers who supplied synthetic rather than real patient data? Or should liability be shared proportionally based on each party's role?

And how do we account for the fact that the system's failures emerged not from a single defect but from fundamental flaws in the training methodology and validation approach?

The Dutch childcare benefits scandal raises similar questions with an algorithmic discrimination dimension. The Dutch data protection authority fined the tax administration €2.75 million for the unlawful, discriminatory, and improper manner in which they processed data on dual nationality. But that fine represents a tiny fraction of the harm caused to more than 35,000 families. Victims are still seeking compensation years after the scandal emerged, navigating a legal system ill-equipped to handle algorithmic harm at scale.

For adversarial attacks on text classifiers specifically, liability questions become even thornier. If a fraudster uses adversarial manipulation to evade a bank's fraud detection system, should the bank bear liability for deploying a vulnerable classifier? What if the bank used industry-standard models and followed best practices for testing and validation? Should the model developer be liable even if the attack methodology wasn't known at the time of deployment? And what happens when open-source tools make adversarial attacks accessible to anyone with modest technical skills?

These aren't hypothetical scenarios. They're questions that courts, regulators, and institutions are grappling with right now, often with inadequate frameworks and precedents.

The Detection Arms Race

Whilst MIT researchers work on general-purpose adversarial robustness, a parallel battle unfolds in AI-generated text detection, a domain where any single misclassification is less catastrophic than in fraud or medicine but where the aggregate stakes are just as high. The race to detect AI-generated text matters for academic integrity, content authenticity, and distinguishing human creativity from machine output. But the adversarial dynamics mirror those in other domains, and the vulnerabilities reveal similar fundamental weaknesses.

GPTZero, created by Princeton student Edward Tian, became one of the most prominent AI text detection tools. It analyses text based on two key metrics: perplexity and burstiness. Perplexity measures how predictable the text is to a language model; lower perplexity indicates more predictable, likely AI-generated text because language models choose high-probability words. Burstiness assesses variability in sentence structures; humans tend to vary their writing patterns throughout a document whilst AI systems often maintain more consistent patterns.
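
As a rough illustration of those two signals, the sketch below estimates perplexity with GPT-2 via the Hugging Face transformers library and approximates burstiness as variation in sentence length. Both choices are assumptions made for demonstration; GPTZero's actual models and thresholds are not public.

```python
# Rough illustration of the two signals described above. Perplexity is
# estimated with GPT-2 via Hugging Face transformers (a tooling assumption,
# not GPTZero's actual model); burstiness is approximated as variation in
# sentence length.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower values mean the text is more predictable to the language model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Crude proxy: how much sentence lengths vary across the passage."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return (sum((n - mean) ** 2 for n in lengths) / len(lengths)) ** 0.5

sample = "The cat sat on the mat. It was a sunny day. Everything felt calm."
print(perplexity(sample), burstiness(sample))
```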

These metrics work reasonably well against naive AI-generated text, but they crumble against adversarial techniques. A method called the GPTZero By-passer modified essay text by replacing key letters with Cyrillic characters that look identical to humans but appear completely different to the machine, a classic homoglyph attack. GPTZero patched this vulnerability within days and maintains an updated greylist of bypass methods, but the arms race continues.
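
The trick itself takes only a few lines to reproduce, which is part of the problem. The sketch below swaps a handful of Latin letters for visually identical Cyrillic ones; it is illustrative, not the original By-passer.

```python
# Sketch of the homoglyph trick described above: swap Latin letters for
# visually identical Cyrillic ones so the text looks unchanged to a reader
# but tokenises completely differently for a detector. Illustrative only.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic а
    "e": "\u0435",  # Cyrillic е
    "o": "\u043e",  # Cyrillic о
    "c": "\u0441",  # Cyrillic с
    "p": "\u0440",  # Cyrillic р
}

def homoglyph_attack(text: str) -> str:
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "The economic outlook remains uncertain."
attacked = homoglyph_attack(original)
print(original == attacked)   # False: the strings differ under the hood
print(attacked)               # ...yet it looks identical on screen
```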

DIPPER, an 11-billion parameter paraphrase generation model capable of paraphrasing text whilst considering context and lexical heterogeneity, successfully bypassed GPTZero and other detectors. Adversarial attacks in NLP involve altering text with slight perturbations including deliberate misspelling, rephrasing and synonym usage, insertion of homographs and homonyms, and back translation. Many bypass services apply paraphrasing tools such as the open-source T5 model for rewriting text, though research has demonstrated that paraphrasing detection is possible. Some applications apply simple workarounds such as injection attacks, which involve adding random spaces to text.
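
The space-injection workaround is even cruder. The sketch below randomly doubles existing spaces; other variants insert invisible zero-width characters, but the principle of disturbing the detector's token statistics without meaningfully changing what a reader sees is the same. It is illustrative only.

```python
# Sketch of the "injection attack" mentioned above: scatter extra spaces
# through the text so a detector's token statistics shift while the text
# still reads normally. Illustrative only.
import random

def inject_spaces(text: str, rate: float = 0.3, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for ch in text:
        out.append(ch)
        # Occasionally double an existing space
        if ch == " " and rng.random() < rate:
            out.append(" ")
    return "".join(out)

print(repr(inject_spaces("This essay was written entirely by hand.")))
```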

OpenAI's own AI text classifier, released then quickly deprecated, accurately identified only 26% of AI-generated text whilst incorrectly labelling human prose as AI-generated 9% of the time. These error rates made the tool effectively useless for high-stakes applications. The company ultimately withdrew it, acknowledging that current detection methods simply aren't reliable enough.

The fundamental problem mirrors the challenge in other classifier domains: adversarial examples exploit the gap between how models represent concepts mathematically and how humans understand meaning. A detector might flag text with low perplexity and low burstiness as AI-generated, but an attacker can simply instruct their language model to “write with high perplexity and high burstiness,” producing text that fools the detector whilst remaining coherent to human readers.

Research has shown that current detection models can be compromised in as little as 10 seconds, leading to the misclassification of machine-generated text as human-written content. The growing reliance on large language models underscores the urgent need for effective detection mechanisms, which are critical to mitigating misuse and safeguarding domains like artistic expression and social networks. But if detection is fundamentally unreliable, what's the alternative?

Rethinking Machine Learning's Role

The accumulation of evidence points toward an uncomfortable conclusion: AI text classifiers, as currently implemented, may be fundamentally unsuited for critical decision-making contexts. Not because the technology will never improve, but because the adversarial vulnerability is intrinsic to how these systems learn and generalise.

Every machine learning model operates by finding patterns in training data and extrapolating to new examples. This works when test data resembles training data and when all parties act in good faith. But adversarial settings violate both assumptions. Attackers actively search for inputs that exploit edge cases, and the distribution of adversarial examples differs systematically from training data. The model has learned to classify based on statistical correlations that hold in normal cases but break down under adversarial manipulation.

Some researchers argue that adversarial robustness and standard accuracy exist in fundamental tension. Making a model more robust to adversarial perturbations can reduce its accuracy on normal examples, and vice versa. The mathematics of high-dimensional spaces suggests that adversarial examples may be unavoidable; in complex models with millions or billions of parameters, there will always be input combinations that produce unexpected outputs. We can push vulnerabilities to more obscure corners of the input space, but we may never eliminate them entirely.

This doesn't mean abandoning machine learning. It means rethinking where and how we deploy it. Some applications suit these systems well: recommender systems, language translation, image enhancement, and other contexts where occasional errors cause minor inconvenience rather than catastrophic harm. The cost-benefit calculus shifts dramatically when we consider fraud detection, medical diagnosis, content moderation, and benefits administration.

For these critical applications, several principles should guide deployment:

Human oversight remains essential. AI systems should augment human decision-making, not replace it. A classifier can flag suspicious transactions for human review, but it shouldn't automatically freeze accounts or deny legitimate transactions. Watson for Oncology might have succeeded if positioned as a research tool for oncologists to consult rather than an authoritative recommendation engine. The Dutch benefits scandal might have been averted if algorithm outputs were treated as preliminary flags requiring human investigation rather than definitive determinations of fraud.

Transparency and explainability must be prioritised. Black-box models that even their creators don't fully understand shouldn't make decisions that profoundly affect people's lives. Explainable AI approaches, which provide insight into why a model made a particular decision, enable human reviewers to assess whether the reasoning makes sense. If a fraud detection system flags a transaction, the review should reveal which features triggered the alert, allowing a human analyst to determine if those features actually indicate fraud or if the model has latched onto spurious correlations. A brief sketch of this kind of feature-level explanation appears after these principles.

Adversarial robustness must be tested continuously. Deploying a model shouldn't be a one-time event but an ongoing process of monitoring, testing, and updating. Tools like MIT's SP-Attack provide mechanisms for proactive robustness testing. Organisations should employ red teams that actively attempt to fool their classifiers, identifying vulnerabilities before attackers do. When new attack methodologies emerge, systems should be retested and updated accordingly.

Regulatory frameworks must evolve. The EU's approach to AI liability represents important progress, but gaps remain. Comprehensive frameworks should address not just who bears liability when systems fail but also what minimum standards systems must meet before deployment in critical contexts. Should high-risk AI systems require independent auditing and certification? Should organisations be required to maintain insurance to cover potential harms? Should certain applications be prohibited entirely until robustness reaches acceptable levels?

Diversity of approaches reduces systemic risk. When every institution uses the same model or relies on the same vendor, a vulnerability in that system becomes a systemic risk. Encouraging diversity in AI approaches, even if individual systems are somewhat less accurate, reduces the chance that a single attack methodology can compromise the entire ecosystem. This principle mirrors the biological concept of monoculture vulnerability; genetic diversity protects populations from diseases that might otherwise spread unchecked.
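
Returning to the explainability principle above: for a simple linear classifier, the kind of feature-level explanation described there can be read straight from the model's weights. The sketch below assumes a TF-IDF plus logistic regression pipeline like the earlier toy example, with the step names that scikit-learn's make_pipeline generates; real deployments would need richer attribution methods.

```python
# Sketch of a feature-level explanation for a linear text classifier, in the
# spirit of the explainability principle above. Assumes the TF-IDF + logistic
# regression pipeline from the earlier sketch; step names are the ones
# scikit-learn's make_pipeline generates.
import numpy as np

def explain_prediction(pipeline, text, top_k=5):
    vectoriser = pipeline.named_steps["tfidfvectorizer"]
    linear_model = pipeline.named_steps["logisticregression"]
    features = vectoriser.transform([text]).toarray()[0]
    # Each present word's contribution = its learned weight x its TF-IDF value
    contributions = features * linear_model.coef_[0]
    vocabulary = vectoriser.get_feature_names_out()
    ranked = np.argsort(-np.abs(contributions))[:top_k]
    return [(vocabulary[i], round(float(contributions[i]), 3))
            for i in ranked if contributions[i] != 0]

# e.g. explain_prediction(classifier, "urgent wire transfer fee required")
# would list the words pushing this text toward the "fraudulent" label.
```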

The Path Forward

The one-word vulnerability that MIT researchers discovered isn't just a technical challenge. It's a mirror reflecting our relationship with technology and our willingness to delegate consequential decisions to systems we don't fully understand or control.

We've rushed to deploy AI classifiers because they offer scaling advantages that human decision-making can't match. A bank can't employ enough fraud analysts to review millions of daily transactions. A social media platform can't hire enough moderators to review billions of posts. Healthcare systems face shortages of specialists in critical fields. The promise of AI is that it can bridge these gaps, providing intelligent decision support at scales humans can't achieve.

This is the trade we made.

But scale without robustness creates scale of failure. The Dutch benefits algorithm didn't wrongly accuse a few families; it wrongly accused tens of thousands. When AI-powered fraud detection fails, it doesn't miss individual fraudulent transactions; it potentially exposes entire institutions to systematic exploitation.

The choice isn't between AI and human decision-making; it's about how we combine both in ways that leverage the strengths of each whilst mitigating their weaknesses.

MIT's decision to release adversarial attack tools as open source forces this reckoning. We can no longer pretend these vulnerabilities are theoretical or that security through obscurity provides adequate protection. The tools are public, the methodologies are published, and anyone with modest technical skills can now probe AI classifiers for weaknesses. This transparency is uncomfortable, perhaps even frightening, but it may be necessary to spur the systemic changes required.

History offers instructive parallels. When cryptographic vulnerabilities emerge, the security community debates disclosure timelines but ultimately shares information because that's how systems improve. The alternative, allowing known vulnerabilities to persist in systems billions of people depend upon, creates far greater long-term risk.

Similarly, adversarial robustness in AI will improve only through rigorous testing, public scrutiny, and pressure on developers and deployers to prioritise robustness alongside accuracy.

The question of liability remains unresolved, but its importance cannot be overstated. Clear liability frameworks create incentives for responsible development and deployment. If organisations know they'll bear consequences for deploying vulnerable systems in critical contexts, they'll invest more in robustness testing, maintain human oversight, and think more carefully about where AI is appropriate. Without such frameworks, the incentive structure encourages moving fast and breaking things, externalising risks onto users and society whilst capturing benefits privately.

We're at an inflection point.

The next few years will determine whether AI classifier vulnerabilities spur a productive race toward greater security or whether they're exploited faster than they can be patched, leading to catastrophic failures that erode public trust in AI systems generally. The outcome depends on choices we make now about transparency, accountability, regulation, and the appropriate role of AI in consequential decisions.

The one-word catastrophe isn't a prediction. It's a present reality we must grapple with honestly if we're to build a future where artificial intelligence serves humanity rather than undermines the systems we depend upon for justice, health, and truth.


Sources and References

  1. MIT News. “A new way to test how well AI systems classify text.” Massachusetts Institute of Technology, 13 August 2025. https://news.mit.edu/2025/new-way-test-how-well-ai-systems-classify-text-0813

  2. Xu, Lei, Sarah Alnegheimish, Laure Berti-Equille, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. “Single Word Change Is All You Need: Using LLMs to Create Synthetic Training Examples for Text Classifiers.” Expert Systems, 7 July 2025. https://onlinelibrary.wiley.com/doi/10.1111/exsy.70079

  3. Wikipedia. “Dutch childcare benefits scandal.” Accessed 20 October 2025. https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal

  4. Dolfing, Henrico. “Case Study 20: The $4 Billion AI Failure of IBM Watson for Oncology.” 2024. https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html

  5. STAT News. “IBM's Watson supercomputer recommended 'unsafe and incorrect' cancer treatments, internal documents show.” 25 July 2018. https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/

  6. BioCatch. “2024 AI Fraud Financial Crime Survey.” 2024. https://www.biocatch.com/ai-fraud-financial-crime-survey

  7. Deloitte Centre for Financial Services. “Generative AI is expected to magnify the risk of deepfakes and other fraud in banking.” 2024. https://www2.deloitte.com/us/en/insights/industry/financial-services/financial-services-industry-predictions/2024/deepfake-banking-fraud-risk-on-the-rise.html

  8. Morris, John X., Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. “TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP.” Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.

  9. European Parliament. “EU AI Act: first regulation on artificial intelligence.” 2024. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

  10. OpenAI. “Scaling security with responsible disclosure.” 2025. https://openai.com/index/scaling-coordinated-vulnerability-disclosure/

  11. Anthropic. “Responsible Disclosure Policy.” Accessed 20 October 2025. https://www.anthropic.com/responsible-disclosure-policy

  12. GPTZero. “What is perplexity & burstiness for AI detection?” Accessed 20 October 2025. https://gptzero.me/news/perplexity-and-burstiness-what-is-it/

  13. The Daily Princetonian. “Edward Tian '23 creates GPTZero, software to detect plagiarism from AI bot ChatGPT.” January 2023. https://www.dailyprincetonian.com/article/2023/01/edward-tian-gptzero-chatgpt-ai-software-princeton-plagiarism

  14. TechCrunch. “The fall of Babylon: Failed telehealth startup once valued at $2B goes bankrupt, sold for parts.” 31 August 2023. https://techcrunch.com/2023/08/31/the-fall-of-babylon-failed-tele-health-startup-once-valued-at-nearly-2b-goes-bankrupt-and-sold-for-parts/

  15. Consumer Financial Protection Bureau. “CFPB Takes Action Against Hello Digit for Lying to Consumers About Its Automated Savings Algorithm.” August 2022. https://www.consumerfinance.gov/about-us/newsroom/cfpb-takes-action-against-hello-digit-for-lying-to-consumers-about-its-automated-savings-algorithm/

  16. CNBC. “Zillow says it's closing home-buying business, reports Q3 results.” 2 November 2021. https://www.cnbc.com/2021/11/02/zillow-shares-plunge-after-announcing-it-will-close-home-buying-business.html

  17. PBS News. “Musk's AI company scrubs posts after Grok chatbot makes comments praising Hitler.” July 2025. https://www.pbs.org/newshour/nation/musks-ai-company-scrubs-posts-after-grok-chatbot-makes-comments-praising-hitler

  18. Future of Life Institute. “2025 AI Safety Index.” Summer 2025. https://futureoflife.org/ai-safety-index-summer-2025/

  19. Norton Rose Fulbright. “Artificial intelligence and liability: Key takeaways from recent EU legislative initiatives.” 2024. https://www.nortonrosefulbright.com/en/knowledge/publications/7052eff6/artificial-intelligence-and-liability

  20. Computer Weekly. “The one problem with AI content moderation? It doesn't work.” Accessed 20 October 2025. https://www.computerweekly.com/feature/The-one-problem-with-AI-content-moderation-It-doesnt-work


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
