AI Lies in Court: Can Lawyers Verify What Systems Invent?

Brandon Monk knew something had gone terribly wrong the moment the judge called his hearing. The Texas attorney had submitted what he thought was a solid legal brief, supported by relevant case law and persuasive quotations. There was just one problem: the cases didn't exist. The quotations were fabricated. And the AI tool he'd used, Claude, had generated the entire fiction with perfect confidence.
In November 2024, Judge Marcia Crone of the U.S. District Court for the Eastern District of Texas sanctioned Monk $2,000, ordered him to complete continuing legal education on artificial intelligence, and required him to inform his client of the debacle. The case, Gauthier v. Goodyear Tire & Rubber Co., joined a rapidly expanding catalogue of similar disasters. By mid-2025, legal scholar Damien Charlotin, who tracks AI hallucinations in court filings through his database, had documented at least 206 instances of lawyers submitting AI-generated hallucinations to courts, with new cases materialising daily.
This isn't merely an epidemic of professional carelessness. It represents something far more consequential: the collision between statistical pattern-matching and the reasoned argumentation that defines legal thinking. As agentic AI systems promise to autonomously conduct legal research, draft documents, and make strategic recommendations, they simultaneously demonstrate a persistent tendency to fabricate case law, presented with such confidence that even experienced lawyers struggle to distinguish truth from fiction.
The question facing the legal profession isn't whether AI will transform legal practice. That transformation is already underway. The question is whether meaningful verification frameworks can preserve both the efficiency gains AI promises and the fundamental duty of accuracy that underpins public trust in the justice system. The answer may determine not just the future of legal practice, but whether artificial intelligence and the rule of law are fundamentally compatible.
The Confidence of Fabrication
On 22 June 2023, Judge P. Kevin Castel of the U.S. District Court for the Southern District of New York imposed sanctions of $5,000 on attorneys Steven Schwartz and Peter LoDuca. Schwartz had used ChatGPT to research legal precedents for a personal injury case against Avianca Airlines. The AI generated six compelling cases, complete with detailed citations, procedural histories, and relevant quotations. All six were entirely fictitious.
“It just never occurred to me that it would be making up cases,” Schwartz testified. A practising lawyer since 1991, he had assumed the technology operated like traditional legal databases: retrieving real information rather than generating plausible fictions. When opposing counsel questioned the citations, Schwartz asked ChatGPT to verify them. The AI helpfully provided what appeared to be full-text versions of the cases, complete with judicial opinions and citation histories. All fabricated.
“Many harms flow from the submission of fake opinions,” Judge Castel wrote in his decision. “The opposing party wastes time and money in exposing the deception. The Court's time is taken from other important endeavours. The client may be deprived of arguments based on authentic judicial precedents.”
What makes these incidents particularly unsettling isn't that AI makes mistakes. Traditional legal research tools contain errors too. What distinguishes these hallucinations is their epistemological character: the AI doesn't fail to find relevant cases. It actively generates plausible but entirely fictional legal authorities, presenting them with the same confidence it presents actual case law.
The scale of the problem became quantifiable in 2024, when researchers Varun Magesh and Faiz Surani at Stanford University's RegLab conducted the first preregistered empirical evaluation of AI-driven legal research tools. Their findings, published in the Journal of Empirical Legal Studies, revealed that even specialised legal AI systems hallucinate at alarming rates. Westlaw's AI-Assisted Research produced hallucinated or incorrect information 33 per cent of the time, providing accurate responses to only 42 per cent of queries. LexisNexis's Lexis+ AI performed better but still hallucinated 17 per cent of the time. Thomson Reuters' Ask Practical Law AI hallucinated more than 17 per cent of the time and provided accurate responses to only 18 per cent of queries.
These aren't experimental systems or consumer-grade chatbots. They're premium legal research platforms, developed by the industry's leading publishers, trained on vast corpora of actual case law, and marketed specifically to legal professionals who depend on accuracy. Yet they routinely fabricate cases, misattribute quotations, and generate citations to nonexistent authorities with unwavering confidence.
The Epistemology Problem
The hallucination crisis reveals a deeper tension between how large language models operate and how legal reasoning functions. Understanding this tension requires examining what these systems actually do when they “think.”
Large language models don't contain databases of facts that they retrieve when queried. They're prediction engines, trained on vast amounts of text to identify statistical patterns in how words relate to one another. When you ask ChatGPT or Claude about legal precedent, it doesn't search a library of cases. It generates text that statistically resembles the patterns it learned during training. If legal citations in its training data tend to follow certain formats, contain particular types of language, and reference specific courts, the model will generate new citations that match those patterns, regardless of whether the cases exist.
This isn't a bug in the system. It's how the system works.
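To make the mechanism concrete, here is a deliberately crude Python analogy. Real language models learn these patterns statistically from billions of tokens rather than from hand-written templates, and every name and format below is invented for illustration, but the essential property is the same: nothing in the generation step ever consults a reporter, docket, or database to check that the output refers to a real case.

```python
import random

# Invented party names, reporters, and courts, assembled purely because the
# combination looks like case law. Nothing here checks whether any such case
# exists; that absence is the point of the analogy.
PARTIES = ["Martinez", "Okafor", "Delacroix", "Whitfield"]
REPORTERS = ["F.3d", "F. Supp. 2d", "S.W.3d"]
COURTS = ["5th Cir.", "S.D.N.Y.", "Tex. App."]

def plausible_citation() -> str:
    return (
        f"{random.choice(PARTIES)} v. {random.choice(PARTIES)}, "
        f"{random.randint(100, 999)} {random.choice(REPORTERS)} "
        f"{random.randint(1, 1500)} ({random.choice(COURTS)} {random.randint(1995, 2022)})"
    )

print(plausible_citation())  # well-formed, confidently formatted, and fictitious
```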
Recent research has exposed fundamental limitations in how these models handle knowledge. A 2025 study published in Nature Machine Intelligence found that large language models cannot reliably distinguish between belief and knowledge, or between opinions and facts. Using the KaBLE benchmark of 13,000 questions across 13 epistemic tasks, researchers discovered that most models fail to grasp the factive nature of knowledge: the basic principle that knowledge must correspond to reality and therefore must be true.
“In contexts where decisions based on correct knowledge can sway outcomes, ranging from medical diagnoses to legal judgements, the inadequacies of the models underline a pressing need for improvements,” the researchers warned. “Failure to make such distinctions can mislead diagnoses, distort judicial judgements and amplify misinformation.”
From an epistemological perspective, law operates as a normative system, interpreting and applying legal statements within a shared framework of precedent, statutory interpretation, and constitutional principles. Legal reasoning requires distinguishing between binding and persuasive authority, understanding jurisdictional hierarchies, recognising when cases have been overruled or limited, and applying rules to novel factual circumstances. It's a process fundamentally rooted in the relationship between propositions and truth.
Statistical pattern-matching, by contrast, operates on correlations rather than causation, probability rather than truth-value, and resemblance rather than reasoning. When a large language model generates a legal citation, it's not making a claim about what the law is. It's producing text that resembles what legal citations typically look like in its training data.
This raises a provocative question: do AI hallucinations in legal contexts reveal merely a technical limitation requiring better training data, or an inherent epistemological incompatibility between statistical pattern-matching and reasoned argumentation?
The Stanford researchers frame the challenge in terms of “retrieval-augmented generation” (RAG), the technical approach used by legal AI tools to ground their outputs in real documents. RAG systems first retrieve relevant cases from actual databases, then use language models to synthesise that information into responses. In theory, this should prevent hallucinations by anchoring the model's outputs in verified sources. In practice, the Magesh-Surani study found that “while RAG appears to improve the performance of language models in answering legal queries, the hallucination problem persists at significant levels.”
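A minimal sketch of that two-step structure may help clarify where the risk re-enters. The `search_case_database` and `llm_generate` functions below are hypothetical stand-ins, not any vendor's actual API; the point is architectural. Retrieval is grounded in real documents, but the final text still comes out of the same pattern-matching step.

```python
from dataclasses import dataclass

@dataclass
class Case:
    citation: str
    excerpt: str

def search_case_database(query: str) -> list[Case]:
    """Hypothetical retriever over an index of real opinions (keyword or vector search)."""
    return []  # stand-in only

def llm_generate(prompt: str) -> str:
    """Hypothetical language-model call."""
    return "[model output]"  # stand-in only

def rag_answer(question: str) -> str:
    # Step 1: retrieval. These documents actually exist in the index.
    sources = search_case_database(question)
    context = "\n\n".join(f"{c.citation}: {c.excerpt}" for c in sources)

    # Step 2: generation. The model is only prompted to stay within the
    # retrieved context; nothing in the architecture prevents it from blending
    # sources, misquoting them, or inventing an authority that fits the pattern.
    prompt = (
        "Answer using only the sources below and cite them precisely.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm_generate(prompt)
```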
The persistence of hallucinations despite retrieval augmentation suggests something more fundamental than inadequate training data. Language models appear to lack what philosophers of mind call “epistemic access”: genuine awareness of whether their outputs correspond to reality. They can't distinguish between accurate retrieval and plausible fabrication because they don't possess the conceptual framework to make such distinctions.
Some researchers argue that large language models might be capable of building internal representations of the world based on textual data and patterns, suggesting the possibility of genuine epistemic capabilities. But even if true, this doesn't resolve the verification problem. A model that constructs an internal representation of legal precedent by correlating patterns in training data will generate outputs that reflect those correlations, including systematic biases, outdated information, and patterns that happen to recur frequently in the training corpus regardless of their legal validity.
The Birth of a New Negligence
The legal profession's response to AI hallucinations has been reactive and punitive, but it's beginning to coalesce into something more systematic: a new category of professional negligence centred not on substantive legal knowledge but on the ability to identify the failure modes of autonomous systems.
Courts have been unanimous in holding lawyers responsible for AI-generated errors. The sanctions follow a familiar logic: attorneys have a duty to verify the accuracy of their submissions. Using AI doesn't excuse that duty; it merely changes the verification methods required. Federal Rule of Civil Procedure 11(b)(2) requires attorneys to certify that legal contentions are “warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law.” Fabricated cases violate that rule, regardless of how they were generated.
But as judges impose sanctions and bar associations issue guidance, a more fundamental transformation is underway. The skills required to practise law competently are changing. Lawyers must now develop expertise in:
Prompt engineering: crafting queries that minimise hallucination risk by providing clear context and constraints.
Output verification: systematically checking AI-generated citations against primary sources rather than trusting the AI's own confirmations (a sketch of this check follows this list).
Failure mode recognition: understanding how particular AI systems tend to fail and designing workflows that catch errors before submission.
System limitation assessment: evaluating which tasks are appropriate for AI assistance and which require traditional research methods.
Adversarial testing: deliberately attempting to make AI tools produce errors to understand their reliability boundaries.
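Here is what the output-verification habit might look like in code, under stated assumptions: `lookup_citation` is a hypothetical query against a primary source such as an official reporter or docket system, and the data structures are invented for illustration. The operative rule is that the drafting model's own confirmations never count.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CitationCheck:
    citation: str
    found_in_primary_source: bool
    quotation_matches: bool

def lookup_citation(citation: str) -> Optional[str]:
    """Hypothetical primary-source lookup; returns the opinion text if the
    cited case exists, otherwise None."""
    return None  # stand-in only

def verify_brief(citations_to_quotes: dict[str, str]) -> list[CitationCheck]:
    """Check every cited case and the quotation attributed to it."""
    results = []
    for citation, quote in citations_to_quotes.items():
        opinion = lookup_citation(citation)
        exists = opinion is not None
        # The quotation must appear in the real opinion, not in the AI's
        # "full text" of a case it invented.
        quote_ok = exists and quote in opinion
        results.append(CitationCheck(citation, exists, quote_ok))
    return results

# Policy: nothing with found_in_primary_source == False goes anywhere near a filing.
```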
These skills represent an entirely new domain of professional knowledge. Traditional legal education trains lawyers to analyse statutes, interpret precedents, construct arguments, and apply reasoning to novel situations. It doesn't prepare them to function as quality assurance specialists for statistical language models.
Law schools are scrambling to adapt. A survey of 29 American law school deans and faculty members conducted in early 2024 found that 55 per cent offered classes dedicated to teaching students about AI, and 83 per cent provided curricular opportunities where students could learn to use AI tools effectively. Georgetown Law now offers at least 17 courses addressing different aspects of AI. Yale Law School trains students to detect hallucinated content by having them build and test language models, exposing the systems' limitations through hands-on experience.
But educational adaptation isn't keeping pace with technological deployment. Students graduating today will enter a profession where AI tools are already integrated into legal research platforms, document assembly systems, and practice management software. Many will work for firms that have invested heavily in AI capabilities and expect associates to leverage those tools efficiently. They'll face pressure to work faster while simultaneously bearing personal responsibility for catching the hallucinations those systems generate.
The emerging doctrine of AI verification negligence will likely consider several factors:
Foreseeability: After hundreds of documented hallucination incidents, lawyers can no longer plausibly claim ignorance that AI tools fabricate citations.
Industry standards: As verification protocols become standard practice, failing to follow them constitutes negligence.
Reasonable reliance: What constitutes reasonable reliance on AI output will depend on the specific tool, the context, and the stakes involved.
Proportionality: More significant matters may require more rigorous verification.
Technological competence: Lawyers must maintain baseline understanding of the AI tools they use, including their known failure modes.
Some commentators argue this emerging doctrine creates perverse incentives. If lawyers bear full responsibility for AI errors, why use AI at all? The promised efficiency gains evaporate if every output requires manual verification comparable to traditional research. Others contend the negligence framework is too generous to AI developers, who market systems with known, significant error rates to professionals in high-stakes contexts.
The profession faces a deeper question: is the required level of verification even possible? In the Gauthier case, Brandon Monk testified that he attempted to verify Claude's output using Lexis AI's validation feature, which “failed to flag the issues.” He used one AI system to check another and both failed. If even specialised legal AI tools can't reliably detect hallucinations generated by other AI systems, how can human lawyers be expected to catch every fabrication?
The Autonomy Paradox
The rise of agentic AI sharply intensifies these tensions. Unlike the relatively passive systems that have caused problems so far, agentic AI systems are designed to operate autonomously: making decisions, conducting multi-step research, drafting documents, and executing complex legal workflows without continuous human direction.
Several legal technology companies now offer or are developing agentic capabilities. These systems promise to handle routine legal work independently, from contract review to discovery analysis to legal research synthesis. The appeal is obvious: instead of generating a single document that a lawyer must review, an agentic system could manage an entire matter, autonomously determining what research is needed, what documents to draft, and what strategic recommendations to make.
But if current AI systems hallucinate despite retrieval augmentation and human oversight, what happens when those systems operate autonomously?
The epistemological problems don't disappear with greater autonomy. They intensify. An agentic system conducting multi-step legal research might build later steps on the foundation of earlier hallucinations, compounding errors in ways that become increasingly difficult to detect. If the system fabricates a key precedent in step one, then structures its entire research strategy around that fabrication, by step ten the entire work product may be irretrievably compromised, yet internally coherent enough to evade casual review.
Professional responsibility doctrines haven't adapted to genuine autonomy. The supervising lawyer typically remains responsible under current rules, but what does “supervision” mean when AI operates autonomously? If a lawyer must review every step of the AI's reasoning, the efficiency gains vanish. If the lawyer reviews only outputs without examining the process, how can they detect sophisticated errors that might be buried in the system's chain of reasoning?
Some propose a “supervisory AI agent” approach: using other AI systems to continuously monitor the primary system's operations, flagging potential hallucinations and deferring to human judgment when uncertainty exceeds acceptable thresholds. Stanford researchers advocate this model as a way to maintain oversight without sacrificing efficiency.
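A rough sketch of how that supervisory loop might be wired, with hypothetical `primary_agent` and `supervisor_score` functions and an assumed escalation threshold; it is not a description of the Stanford proposal's actual implementation.

```python
def primary_agent(task: str) -> str:
    """Hypothetical agentic system doing the substantive legal work."""
    return "[draft research memo]"  # stand-in only

def supervisor_score(task: str, output: str) -> float:
    """Hypothetical second model estimating the risk that the output is
    unsupported (0.0 = clearly grounded, 1.0 = clearly fabricated)."""
    return 0.5  # stand-in only

ESCALATION_THRESHOLD = 0.2  # assumed value; in practice a policy choice

def supervised_run(task: str) -> tuple[str, bool]:
    output = primary_agent(task)
    risk = supervisor_score(task, output)
    # Defer to a human whenever the supervisor is not confident the work is
    # grounded. If the supervisor itself misjudges the risk, this check passes
    # silently.
    return output, risk > ESCALATION_THRESHOLD
```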
But this creates its own problems. Who verifies the supervisor? If the supervisory AI itself hallucinates or fails to detect primary-system errors, liability consequences remain unclear. The Monk case demonstrated that using one AI to verify another provides no reliable safeguard.
The alternative is more fundamental: accepting that certain forms of legal work may be incompatible with autonomous AI systems, at least given current capabilities. This would require developing a taxonomy of legal tasks, distinguishing between those where hallucination risks are manageable (perhaps template-based document assembly with strictly constrained outputs) and those where they're not (novel legal research requiring synthesis of multiple authorities).
Such a taxonomy would frustrate AI developers and firms that have invested heavily in legal AI capabilities. It would also raise difficult questions about how to enforce boundaries. If a system is marketed as capable of autonomous legal research, but professional standards prohibit autonomous legal research, who bears responsibility when lawyers inevitably use the system as marketed?
Verification Frameworks
If legal AI is to fulfil its promise without destroying the profession's foundations, meaningful verification frameworks are essential. But what would such frameworks actually look like?
Several approaches have emerged, each with significant limitations:
Parallel workflow validation: Running AI systems alongside traditional research methods and comparing outputs. This works for validation but eliminates efficiency gains, effectively requiring double work.
Citation verification protocols: Systematically checking every AI-generated citation against primary sources. Feasible for briefs with limited citations, but impractical for large-scale research projects that might involve hundreds of authorities.
Confidence thresholds: Using AI systems' own confidence metrics to flag uncertain outputs for additional review. The problem: hallucinations often come with high confidence scores. Models that fabricate cases typically do so with apparent certainty.
Human-in-the-loop workflows: Requiring explicit human approval at key decision points. This preserves accuracy but constrains autonomy, making the system less “agentic.”
Adversarial validation: Using competing AI systems to challenge each other's outputs. Promising in theory, but the Monk case suggests this may not work reliably in practice.
Retrieval-first architectures: Designing systems that retrieve actual documents before generating any text, with strict constraints preventing output that isn't directly supported by retrieved sources (one such constraint is sketched below). Reduces hallucinations but also constrains the AI's ability to synthesise information or draw novel connections.
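As a small illustration of the retrieval-first constraint, the check below refuses to pass any citation that was not actually retrieved. The regex is a crude placeholder for proper citation parsing, and even a correct implementation of this idea cannot catch misreadings of cases that do exist.

```python
import re

# Crude citation matcher: a number, a reporter-like token, another number.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.\s]*?\d{1,4}\b")

def enforce_retrieval_first(draft: str, retrieved_citations: set[str]) -> list[str]:
    """Return the citations in the draft that were never retrieved."""
    found = {c.strip() for c in CITATION_PATTERN.findall(draft)}
    return sorted(found - retrieved_citations)

unsupported = enforce_retrieval_first(
    draft="See 123 F.3d 456 and 999 F. Supp. 2d 1 for support.",
    retrieved_citations={"123 F.3d 456"},
)
print(unsupported)  # ['999 F. Supp. 2d 1']  blocked before it reaches a filing
```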
None of these approaches solves the fundamental problem: they're all verification methods applied after the fact, catching errors rather than preventing them. They address the symptoms rather than the underlying epistemological incompatibility.
Some researchers advocate for fundamental architectural changes: developing AI systems that maintain explicit representations of uncertainty, flag when they're extrapolating beyond their training data, and refuse to generate outputs when confidence falls below specified thresholds. Such systems would be less fluent and more hesitant than current models, frequently admitting “I don't know” rather than generating plausible-sounding fabrications.
This approach has obvious appeal for legal applications, where “I don't know” is vastly preferable to confident fabrication. But it's unclear whether such systems are achievable given current architectural approaches. Large language models are fundamentally designed to generate plausible text. Modifying them to generate less when uncertain might require different architectures entirely.
Another possibility: abandoning the goal of autonomous legal reasoning and instead focusing on AI as a powerful but limited tool requiring expert oversight. This would treat legal AI like highly sophisticated calculators: useful for specific tasks, requiring human judgment to interpret outputs, and never trusted to operate autonomously on matters of consequence.
This is essentially the model courts have already mandated through their sanctions. But it's a deeply unsatisfying resolution. It means accepting that the promised transformation of legal practice through AI autonomy was fundamentally misconceived, at least given current technological capabilities. Firms that invested millions in AI capabilities expecting revolutionary efficiency gains would face a reality of modest incremental improvements requiring substantial ongoing human oversight.
The Trust Equation
Underlying all these technical and procedural questions is a more fundamental issue: trust. The legal system rests on public confidence that lawyers are competent, judges are impartial, and outcomes are grounded in accurate application of established law. AI hallucinations threaten that foundation.
When Brandon Monk submitted fabricated citations to Judge Crone, the immediate harm was to Monk's client, who received inadequate representation, and to Goodyear's counsel, who wasted time debunking nonexistent cases. But the broader harm was to the system's legitimacy. If litigants can't trust that cited cases are real, if judges must independently verify every citation rather than relying on professional norms, the entire apparatus of legal practice becomes exponentially more expensive and slower.
This is why courts have responded to AI hallucinations with unusual severity. The sanctions send a message: technological change cannot come at the expense of basic accuracy. Lawyers who use AI tools bear absolute responsibility for their outputs. There are no excuses, no learning curves, no transition periods. The duty of accuracy is non-negotiable.
But this absolutist stance, while understandable, may be unsustainable. The technology exists. It's increasingly integrated into legal research platforms and practice management systems. Firms that can leverage it effectively while managing hallucination risks will gain significant competitive advantages over those that avoid it entirely. Younger lawyers entering practice have grown up with AI tools and will expect to use them. Clients increasingly demand the efficiency gains AI promises.
The profession faces a dilemma: AI tools as currently constituted pose unacceptable risks, but avoiding them entirely may be neither practical nor wise. The question becomes how to harness the technology's genuine capabilities while developing safeguards against its failures.
One possibility is the emergence of a tiered system of AI reliability, analogous to evidential standards in different legal contexts. Just as “beyond reasonable doubt” applies in criminal cases while “preponderance of evidence” suffices in civil matters, perhaps different verification standards could apply depending on the stakes and context. Routine contract review might accept higher error rates than appellate briefing. Initial research might tolerate some hallucinations that would be unacceptable in court filings.
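If such a tiered system ever took hold, a firm's internal policy might encode it along these lines. The tiers, names, and rules below are invented purely for illustration; any real standard would be set by courts, regulators, or bar guidance, not by a code sketch.

```python
from enum import Enum

class VerificationTier(Enum):
    # Illustrative tiers only.
    COURT_FILING = "every citation and quotation checked against primary sources"
    CLIENT_ADVICE = "all load-bearing authorities checked against primary sources"
    INTERNAL_DRAFT = "spot checks, with full verification before any promotion"

def required_tier(court_filing: bool, relied_on_by_client: bool) -> VerificationTier:
    # Higher stakes, stricter verification; nothing is exempt entirely.
    if court_filing:
        return VerificationTier.COURT_FILING
    if relied_on_by_client:
        return VerificationTier.CLIENT_ADVICE
    return VerificationTier.INTERNAL_DRAFT
```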
This sounds pragmatic, but it risks normalising errors and gradually eroding standards. If some hallucinations are acceptable in some contexts, how do we ensure the boundaries hold? How do we prevent scope creep, where “routine” matters receiving less rigorous verification turn out to have significant consequences?
Managing the Pattern-Matching Paradox
The legal profession's confrontation with AI hallucinations offers lessons that extend far beyond law. Medicine, journalism, scientific research, financial analysis, and countless other fields face similar challenges as AI systems become capable of autonomous operation in high-stakes domains.
The fundamental question is whether statistical pattern-matching can ever be trusted to perform tasks that require epistemic reliability: genuine correspondence between claims and reality. Current evidence suggests significant limitations. Language models don't “know” things in any meaningful sense. They generate plausible text based on statistical patterns. Sometimes that text happens to be accurate; sometimes it's confident fabrication. The models themselves can't distinguish between these cases.
This doesn't mean AI has no role in legal practice. It means we need to stop imagining AI as an autonomous reasoner and instead treat it as what it is: a powerful pattern-matching tool that can assist human reasoning but cannot replace it.
For legal practice specifically, several principles should guide development of verification frameworks:
Explicit uncertainty: AI systems should acknowledge when they're uncertain, rather than generating confident fabrications.
Transparent reasoning: Systems should expose their reasoning processes, not just final outputs, allowing human reviewers to identify where errors might have occurred.
Constrained autonomy: AI should operate autonomously only within carefully defined boundaries, with automatic escalation to human review when those boundaries are exceeded.
Mandatory verification: All AI-generated citations, quotations, and factual claims should be verified against primary sources before submission to courts or reliance in legal advice (combined with constrained autonomy in the sketch after this list).
Continuous monitoring: Ongoing assessment of AI system performance, with transparent reporting of error rates and failure modes.
Professional education: Legal education must adapt to include not just substantive law but also the capabilities and limitations of AI systems.
Proportional use: More sophisticated or high-stakes matters should involve more rigorous verification and more limited reliance on AI outputs.
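As a combined sketch of the mandatory-verification and constrained-autonomy principles, a pre-filing gate might look something like this. `verify_against_primary_source` is hypothetical, and the 0.9 confidence threshold is an assumed value rather than any standard; as noted earlier, model confidence is a weak signal on its own.

```python
def verify_against_primary_source(citation: str) -> bool:
    """Hypothetical: True only if the citation resolves in an official reporter
    or docket system."""
    return False  # stand-in only

def prefiling_gate(citations: list[str], model_confidence: float) -> dict:
    # Mandatory verification: every cited authority must resolve in a primary source.
    unverified = [c for c in citations if not verify_against_primary_source(c)]
    # Constrained autonomy: unverified citations or low self-reported confidence
    # force escalation to a human reviewer instead of filing.
    escalate = bool(unverified) or model_confidence < 0.9  # assumed threshold
    return {
        "ok_to_file": not escalate,
        "escalate_to_human": escalate,
        "unverified_citations": unverified,
    }
```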
These principles won't eliminate hallucinations. They will, however, create frameworks for managing them, ensuring that efficiency gains don't come at the expense of accuracy and that professional responsibility evolves to address new technological realities without compromising fundamental duties.
The alternative is a continued cycle of technological overreach followed by punitive sanctions, gradually eroding both professional standards and public trust. Every hallucination that reaches a court damages not just the individual lawyer involved but the profession's collective credibility.
The Question of Compatibility
Steven Schwartz, Brandon Monk, and the growing list of other lawyers sanctioned for AI hallucinations made mistakes. But they're also test cases in a larger experiment: whether autonomous AI systems can be integrated into professional practices that require epistemic reliability without fundamentally transforming what those practices mean.
The evidence so far suggests deep tensions. Systems that operate through statistical pattern-matching struggle with tasks that require truth-tracking. The more autonomous these systems become, the harder it is to verify their outputs without sacrificing the efficiency gains that justified their adoption. The more we rely on AI for legal reasoning, the more we risk eroding the distinction between genuine legal analysis and plausible fabrication.
This doesn't necessarily mean AI and law are incompatible. It does mean that the current trajectory, where systems of increasing autonomy and declining accuracy are deployed in high-stakes contexts, is unsustainable. Something has to change: either the technology must develop genuine epistemic capabilities, or professional practices must adapt to accommodate AI's limitations, or the vision of autonomous AI handling legal work must be abandoned in favour of more modest goals.
The hallucination crisis forces these questions into the open. It demonstrates that accuracy and efficiency aren't always complementary goals, that technological capability doesn't automatically translate to professional reliability, and that some forms of automation may be fundamentally incompatible with professional responsibilities.
As courts continue sanctioning lawyers who fail to detect AI fabrications, they're not merely enforcing professional standards. They're articulating a baseline principle: the duty of accuracy cannot be delegated to systems that cannot distinguish truth from plausible fiction. That principle will determine whether AI transforms legal practice into something more efficient and accessible, or undermines the foundations on which legal legitimacy rests.
The answer isn't yet clear. What is clear is that the question matters, the stakes are high, and the legal profession's struggle with AI hallucinations offers a crucial test case for how society will navigate the collision between statistical pattern-matching and domains that require genuine knowledge.
The algorithms will keep generating text that resembles legal reasoning. The question is whether we can build systems that distinguish resemblance from reality, or whether the gap between pattern-matching and knowledge-tracking will prove unbridgeable. For the legal profession, for clients who depend on accurate legal advice, and for a justice system built on truth-seeking, the answer will be consequential.
Sources and References
American Bar Association. (2025). “Lawyer Sanctioned for Failure to Catch AI 'Hallucination.'” ABA Litigation News. Retrieved from https://www.americanbar.org/groups/litigation/resources/litigation-news/2025/lawyer-sanctioned-failure-catch-ai-hallucination/
Baker Botts LLP. (2024, December). “Trust, But Verify: Avoiding the Perils of AI Hallucinations in Court.” Thought Leadership Publications. Retrieved from https://www.bakerbotts.com/thought-leadership/publications/2024/december/trust-but-verify-avoiding-the-perils-of-ai-hallucinations-in-court
Bloomberg Law. (2024). “Lawyer Sanctioned Over AI-Hallucinated Case Cites, Quotations.” Retrieved from https://news.bloomberglaw.com/litigation/lawyer-sanctioned-over-ai-hallucinated-case-cites-quotations
Cambridge University Press. (2024). “Examining epistemological challenges of large language models in law.” Cambridge Forum on AI: Law and Governance. Retrieved from https://www.cambridge.org/core/journals/cambridge-forum-on-ai-law-and-governance/article/examining-epistemological-challenges-of-large-language-models-in-law/66E7E100CF80163854AF261192D6151D
Charlotin, D. (2025). “AI Hallucination Cases Database.” Pelekan Data Consulting. Retrieved from https://www.damiencharlotin.com/hallucinations/
Courthouse News Service. (2023, June 22). “Sanctions ordered for lawyers who relied on ChatGPT artificial intelligence to prepare court brief.” Retrieved from https://www.courthousenews.com/sanctions-ordered-for-lawyers-who-relied-on-chatgpt-artificial-intelligence-to-prepare-court-brief/
Gauthier v. Goodyear Tire & Rubber Co., Case No. 1:23-CV-00281, U.S. District Court for the Eastern District of Texas (November 25, 2024).
Georgetown University Law Center. (2024). “AI & the Law… & what it means for legal education & lawyers.” Retrieved from https://www.law.georgetown.edu/news/ai-the-law-what-it-means-for-legal-education-lawyers/
Legal Dive. (2024). “Another lawyer in hot water for citing fake GenAI cases.” Retrieved from https://www.legaldive.com/news/another-lawyer-in-hot-water-citing-fake-genai-cases-brandon-monk-marcia-crone-texas/734159/
Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C. D., & Ho, D. E. (2025). “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools.” Journal of Empirical Legal Studies, 0:1-27. https://doi.org/10.1111/jels.12413
Mata v. Avianca, Inc., Case No. 1:22-cv-01461, U.S. District Court for the Southern District of New York (June 22, 2023).
Nature Machine Intelligence. (2025). “Language models cannot reliably distinguish belief from knowledge and fact.” https://doi.org/10.1038/s42256-025-01113-8
NPR. (2025, July 10). “A recent high-profile case of AI hallucination serves as a stark warning.” Retrieved from https://www.npr.org/2025/07/10/nx-s1-5463512/ai-courts-lawyers-mypillow-fines
Stanford Human-Centered Artificial Intelligence. (2024). “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.” Retrieved from https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
Stanford Law School. (2024, January 25). “A Supervisory AI Agent Approach to Responsible Use of GenAI in the Legal Profession.” CodeX Center for Legal Informatics. Retrieved from https://law.stanford.edu/2024/01/25/a-supervisory-ai-agents-approach-to-responsible-use-of-genai-in-the-legal-profession/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk