The Trust Tax: How Hidden AI Is Breaking Professional Fiduciary Duty

On an ordinary afternoon in July 2025, a man named Saucedo went to a Sharp Rees-Stealy clinic in southern California for a routine physical. He answered the usual questions and left. Weeks later, scrolling through his patient portal, he found a line in his record stating that he had been advised his visit would be recorded and had consented. He had not been asked. He had not consented. The sentence had been generated, it turned out, by an ambient artificial intelligence scribe quietly running on a clinician's microphone-enabled device, transcribing the consultation in real time, piping the audio to a third-party vendor's cloud, and, in an almost baroque loop, drafting its own false record of having been authorised to do all of it. By late 2025, Saucedo was the named plaintiff in a class action alleging that more than a hundred thousand patients had been recorded the same way. The complaint, filed in November 2025 and currently winding through the California courts, describes the scribe as doing two things at once: documenting the patient, and documenting its own permission to document the patient. It is an almost perfect small allegory for where the professions have arrived.

The invisible professional has become the defining ethical question of the 2026 services economy, and the reason is that the technology works. Ambient AI scribes now listen to tens of millions of consultations a year. Large language models draft legal briefs, compliance memos, and financial planning letters faster than any human could; they are marketed to professionals explicitly as productivity multipliers, the oxygen of a squeezed industry. The models have become good enough that, in a great many cases, the professional using them does not feel they are doing anything different from what they have always done. The patient, client, or consumer sitting across the desk, however, is in an entirely different reality. They are talking to a human they believe is listening, weighing, judging. They do not know there is a second presence in the room.

The moral and legal question that Reuters put on the table in a widely circulated investigation in January 2026, and that a Reddit thread full of anxious parents turned into a consumer-facing issue the same month, is whether the old professional duty of trust survives the arrival of this second presence. When the note your doctor signs is drafted by software, when the brief your lawyer files was partly written by a model that has no idea what the law says, when the plan your adviser sends you was generated by an algorithm that nobody can quite explain, has the fiduciary relationship quietly slipped its mooring? And if the profession will not tell you, does it matter?

The Ambient Listener in the Consultation Room

The Reuters reporting in January 2026 framed the ambient scribe market as one of the fastest-growing tools in healthcare and named the frontier it had opened: patient consent, data ownership, clinical accuracy. The frontier is not theoretical. It is sitting on millions of examination-room desks. Industry analysts estimate that ambient documentation tools, sold under brand names like Abridge, Nuance DAX, Suki, and a crowded long tail of startups, have been adopted by six-figure populations of clinicians across the United States, Canada, the United Kingdom, and Australia in the space of roughly eighteen months. NHS England issued fresh guidance on their safe use in April 2026. The same month, the American Hospital Association published a list of six large health systems already embedding them into care delivery. The speed of adoption is, for the healthcare sector, astonishing.

The appeal is not obscure. Clinicians document for hours after every clinic day; burnout is epidemic; a tool that eats the paperwork gives back the most precious commodity of a working life: the time to look a patient in the eye. Randomised trial data from the United States shows meaningful reductions in documentation time and modest improvements in reported clinician wellbeing. The evidence on note quality is more mixed, with accuracy figures clustering in the 95 to 98 per cent range and a hallucination rate that, on the most pessimistic estimates published in the trade press in early 2026, leaves around seven per cent of finished notes containing at least one fabricated element. One in fourteen. That is the number that stops clinicians in their tracks when it is put to them plainly.
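
To put the figure in working terms: if the pessimistic seven-per-cent estimate holds, the arithmetic compounds quickly over a clinic day. What follows is a minimal back-of-envelope sketch, assuming that figure and treating errors as independent across notes; the twenty-notes-per-day workload is an illustrative assumption, not a reported statistic.

```python
# Back-of-envelope arithmetic only: assumes the trade-press figure of ~7% of
# finished notes containing at least one fabricated element, and treats notes
# as independent. The 20-notes-per-day workload is a hypothetical illustration.

per_note_rate = 0.07      # ~1 in 14 notes contains at least one fabrication
notes_per_day = 20        # assumed clinic-day volume (illustrative)

expected_bad_notes = per_note_rate * notes_per_day
p_at_least_one = 1 - (1 - per_note_rate) ** notes_per_day

print(f"Expected notes with a fabrication per day: {expected_bad_notes:.1f}")
print(f"Chance of signing at least one such note in a day: {p_at_least_one:.0%}")
# Under these assumptions: roughly 1.4 affected notes per day, and about a 77%
# chance that any given clinic day produces at least one.
```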

The structural problem Reuters identified, and that the American Bar Association's health law section expanded on in its own early-2026 analysis, is not that the tools are bad. The tools are, in many ways, extraordinary. It is that the professional relationship sitting underneath them has never been renegotiated for their presence. Patient consent frameworks in most jurisdictions were built around two parties in a room: the clinician and the patient. The ambient scribe is a third party. It listens; it records; it ships audio to a cloud; it hands the audio to a vendor who may or may not retain it, may or may not train models on it, may or may not be based in a jurisdiction whose data protection regime resembles the patient's own. State wiretapping laws in California, Illinois, and Florida may criminalise the recording if the patient has not consented in the manner local statute requires. General treatment consent, the blanket paperwork signed at the front desk, was not designed to cover a microphone with a commercial afterlife.

The Saucedo litigation is the sharp end of this problem, but it is not unique. Further class actions were filed in early 2026 against Sutter Health and MemorialCare on similar theories. In February 2026, a federal court in Illinois dismissed the wiretapping claims in one scribe suit under the “ordinary course of business” exception, but let other claims stand. In Florida, where wiretapping is a felony, the trade press has begun to warn clinicians that recording a consultation without explicit two-party consent could expose them, personally, to criminal liability. The invisible professional is, slowly, becoming visible in the worst possible way: in court filings and state prosecutors' inboxes.

What the Paediatric Reddit Thread Actually Revealed

If the litigation captures the legal frontier, a messier picture of the moral frontier turned up on Reddit in early 2026, when a parent in a family medicine community posted that their paediatric practice had started asking for consent for an AI note-taking tool. The thread, and others like it across r/medicine, r/FamilyMedicine, and parenting forums, did something interesting. It did not split along the predictable lines of AI optimism and AI pessimism. It split along the lines of what consent actually means.

Parents described being handed a one-page form at check-in. Some had read it; most had skimmed it; a few had not even realised, until asked, that they had signed anything. The form typically said that an AI assistant would help with note-taking, that the recording would not be retained, that the practice would not use it for any other purpose. Parents in the thread started asking the questions the form did not answer. What does “help with note-taking” mean? Where does the audio go in the meantime? Who owns the transcript? What happens if the vendor is acquired, or goes bankrupt, or changes its terms of service? If the note is wrong, who notices? If the note is wrong in six months when a specialist reads it, who is liable? And, most pointedly: what happens if I say no?

That last question is the one that matters. Several parents reported being told that opting out was fine in principle but that it might mean their clinician had to spend longer typing, which in a short appointment meant less time with their child. Others said the practice did not have an alternative workflow. The consent, in other words, was shaped like a choice and functioned like a fait accompli. It was not the genuine right of refusal that Nuremberg, or Salgo, or Montgomery had contemplated. It was a soft refusal, one in which the patient could technically say no but would pay a price in care for doing so.

This is where the historical weight of informed consent starts to bear. The Nuremberg Code, drafted in August 1947 in the shadow of the Doctors' Trial, made voluntary consent its very first principle, not as a bureaucratic nicety but as a bulwark against the worst thing a medical system could do. The Salgo v. Leland Stanford decision in California in 1957 gave the doctrine its name, when a patient awoke paralysed from a procedure whose risks had never been explained to him. The UK Supreme Court's decision in Montgomery v. Lanarkshire Health Board in 2015 brought the doctrine forward, rejecting the old paternalist test and holding that a doctor is under a duty to ensure that a patient is aware of any material risks in a proposed treatment and of any reasonable alternative options. Montgomery is a judgment about adult autonomy, about the patient as decision-maker rather than recipient of expertise.

An ambient scribe sitting quietly under a clinician's desk is not, in the classic sense, a material risk. It does not increase the probability of a punctured artery. But it is a reasonable-alternative-options problem, because the alternative, the consultation without a third-party recorder, is the one most patients believed they were getting. If Montgomery means anything in 2026, it probably means that the patient gets to choose. The Reddit thread's quiet insight was that the profession had made the choice first and was asking for consent afterwards.

The Canadian Pipeline Nobody Mentioned

The question of what happens to the data once the scribe has finished listening is, in one sense, the real story. And here, some of the most uncomfortable reporting of the last eighteen months has come out of Canada.

A qualitative investigation published in a Canadian Medical Association journal in 2022 and updated by follow-on work through 2025 mapped the Canadian primary care medical record industry in unusual detail. It found at least two commercial data brokers, each claiming access to between one and two million primary care patient records, operating on a business model that allowed third parties to access those records without any meaningful patient involvement in how they had been collected or were being used. Because Canadian privacy legislation designates physicians, not patients, as the data custodians for medical records, the consent that mattered was the physician's. Patients, in most practical senses, were not in the loop.

By early 2026, the Canadian situation had sharpened further, because the commercial data in question was increasingly feeding AI development. Primary care records, scrubbed of obvious identifiers but often still disturbingly rich in context, were being channelled into training datasets and product pipelines for commercial AI systems without patients ever being told their notes were en route. A Policy Options analysis in April 2026 argued that this was producing a structural problem the Canadian privacy regime was not built to handle: it could regulate the initial collection of health information, but it struggled to regulate the secondary uses that AI development now made possible.

The Alberta privacy commissioner's earlier investigations into Telus Health's Babylon app, which produced 31 findings and 20 recommendations, had already exposed a similar pattern at a different scale. The app had used facial recognition for identity verification without proper notification or consent; it had shared personal health information with third-party service providers in the United States and Ireland without disclosing this to patients; it had retained audio and video consultations beyond what the commissioner considered justifiable. The investigations read, in retrospect, as a dry run for the ambient scribe era.

Then, in an incident that briefly made headlines in Canadian health IT trade press, an AI scribe bot at one Ontario institution autonomously recorded a group of physicians discussing seven patients and emailed the transcript to 65 people. Nobody had asked it to. Nobody had told the patients. The bot had made a perfectly reasonable inference about its task, acted on it, and only the scale of the resulting distribution brought the incident to anybody's attention. The Canadian story is not that patients are being deliberately deceived. It is that the architecture of professional trust, in which the physician is the trusted intermediary, has been overlaid with a commercial and technological architecture in which the physician is one of many actors and no longer the custodian the law assumes them to be.

The New York Bill and the Drawing of Lines

In March 2026, a bill sitting on the New York Senate calendar moved the conversation from healthcare consent into something wider. Senate Bill S7263, introduced by Senator Kristen Gonzalez in April 2025, had cleared the Internet and Technology Committee on a 6-0 vote on 25 February 2026 and was positioned for a full floor vote. Its operative idea was sharp: if a chatbot provides substantive responses or advice that, if given by a human, would constitute the unauthorised practice of law, medicine, dentistry, nursing, engineering, or any of the other licensed professions governed by the state's Education Law and Judiciary Law, the chatbot's proprietor is on the hook. The bill would create a private right of action for damages and, in cases of wilful violation, attorneys' fees.

Two details in S7263 did the real work. The first was that a disclaimer was explicitly not a defence. A popup telling the user they were talking to an AI and should not rely on its advice would not, under the bill, shield the operator from liability if the bot was in fact giving professional advice. The second was that the bill was indifferent to how the chatbot presented itself. It did not matter whether the bot claimed to be a lawyer, a non-lawyer, or nothing at all. What mattered was the substantive character of the output.

Legal commentary in the trade press was predictably mixed. Holland & Knight's analysis in March 2026 noted that the bill had drafting problems that could expose operators to liability for outputs that were merely informational rather than advisory. A Burrell Law analysis flagged four specific drafting issues the legislature would need to address. But the direction of travel was clear, and it sat alongside a slew of other state-level AI legislation that had taken effect on 1 January 2026. New York was staking out the position that professional practice has a perimeter, that the perimeter is defined by state licensing law, and that a chatbot crossing the perimeter is subject to the same liability regime as a human practitioner.

The bill's significance for the invisible professional question is indirect but important. S7263 is written for the case where a consumer interacts with a chatbot directly. But its logic, the idea that a machine cannot quietly do licensed professional work without the accountability that follows licensed professional work, has obvious implications for the case where a machine is doing the licensed professional work while a human signs the output. If the chatbot cannot practise law anonymously, can a lawyer quietly practise as a relay for a chatbot without telling the client? The bill does not answer that question, but it asks it.

Hallucinated Law and the Collapse of Plausible Signing

Lawyers have been answering that question in court, painfully, ever since a now-famous filing in Mata v. Avianca in 2023. Two lawyers in the Southern District of New York had submitted a brief citing six cases that did not exist. The brief had been generated, in relevant part, by ChatGPT, which had produced plausible-looking citations with plausible-looking quotations from plausible-looking judges. Judge P. Kevin Castel fined the lawyers five thousand dollars, called their conduct an act of subjective bad faith, and wrote an opinion that became an instant staple of continuing legal education.

What Mata started, nearly three years of follow-on cases have extended. The French researcher Damien Charlotin has been maintaining a public database of AI-hallucination incidents in court filings; by mid-2025 it had catalogued over 230 matters worldwide in which fabricated citations had surfaced. The pattern is grimly consistent. A lawyer, often under time pressure, often junior, often working outside their field, uses a model to help draft. The model produces an output that looks right. The lawyer checks cursorily, or not at all. The brief goes in. A judge or opposing counsel notices the citation does not exist. Sanctions follow.

In July 2025, the U.S. District Court for the Northern District of Alabama handed down the decision that many court watchers treat as the new high-water mark for severity. Johnson v. Dunn involved lawyers at a large and respected firm submitting hallucinated citations in a motion. Instead of fining the firm, the court disqualified the offending attorneys from representing the client for the remainder of the case, ordered the opinion published in the Federal Supplement, and directed the clerk to inform bar regulators in every state where the lawyers were licensed. The signal was that a monetary penalty was no longer sufficient; the profession itself was being told that this behaviour was a licensing matter.

The American Bar Association's first formal ethics opinion on generative AI, published in July 2024, had already laid out the principles. Under the Rules of Professional Conduct, lawyers using AI retain their duties of competence, confidentiality, communication, and candour toward the tribunal. The lawyer is always accountable for the output. The lawyer must not disclose confidential client information to a tool that would retain or train on it without client consent. And the lawyer must, in circumstances where the use of AI is material to the representation, tell the client. That last duty, communication, is where fiduciary trust enters the analysis in its most stripped-down form, because it is the duty the profession's own self-regulation has been least able to enforce.

The uncomfortable fact is that a lawyer using a large language model to draft a brief, or to research, or to generate a first cut of a compliance memo, is in many ways acting no differently from one who uses an associate, a contract lawyer, a paralegal, or a research service. The profession has always been ghost-authored. What is different about the model is that the model does not know what the law is; it produces text that is correlated with what the law looks like. A paralegal's draft can be wrong. A model's draft can be wrong in a way that is statistically fluent and substantively invented. The failure mode is new, and it is the failure mode, not the ghost authorship itself, that has begun to erode the plausibility of the signature at the bottom.

The Financial Adviser Who Will Not Tell You About Their Algorithm

In financial services, the arguments have taken a slightly different shape, because the industry has been living with algorithmic assistance for decades. Robo-advisers, hybrid advice models, and algorithmic portfolio construction tools predate the generative AI wave. What has changed is that the models have become more opaque, more central to the advice the client receives, and harder to describe in the plain-English terms regulators have traditionally demanded.

The U.S. Securities and Exchange Commission's 2026 examination priorities, published in late 2025 and elaborated through the first quarter of 2026, make AI an explicit area of scrutiny. Registered investment advisers who integrate AI into portfolio management, trading, marketing, or compliance will find examinations looking in depth at whether their disclosures to clients match what the AI is actually doing. The SEC's long-standing fiduciary framework, distilled in its 2019 interpretation of the Investment Advisers Act into a duty of care and a duty of loyalty, places the burden of disclosure squarely on the adviser. A 2025 CLS Blue Sky Blog analysis noted that digital advisers in particular have been put on notice: they must provide comprehensive, plain-English explanations of how their algorithms work. The days of treating the algorithm as a trade secret the client has no need to understand are, regulators have made clear, over.

The UK's Financial Conduct Authority has been moving in a similar direction, with its emphasis on consumer understanding under the Consumer Duty rules and a steady drumbeat of discussion papers on AI governance in financial services. The practical effect is that an adviser who hides the machine behind the advice is not merely breaching an ethical norm. They are running afoul of a rule. And the private right of action that comes with mis-selling regimes in both jurisdictions makes the liability concrete.

But disclosure is running into its own peculiar resistance. A growing body of research, including studies published in 2024 and 2025 on patient attitudes toward AI-drafted responses in healthcare, has found a counter-intuitive dynamic. When a response is identical in content, participants consistently rate disclosed AI authorship lower than undisclosed or human authorship. A study of patient preferences for AI-drafted electronic messages found a roughly 0.13-point satisfaction penalty on a standard scale for AI disclosure versus human disclosure, and a smaller but measurable penalty for AI disclosure versus no disclosure at all. A large Canadian survey of 12,153 adults, published in the Journal of the American Medical Informatics Association in early 2026, found that 61.8 per cent of respondents expressed reluctance about future AI scribe use, even as a plurality acknowledged potential benefits. Awareness of current AI scribe use was strikingly low, at 28.3 per cent.

The research converges on a pattern that puts the invisible professional question in a harsher light. Patients and clients, when told the machine is there, rate the service worse even when the service is exactly the same. They are, in the most literal sense, penalising disclosure. This is the structural economic incentive that hangs over the whole landscape. AI scribes and drafting tools are sold to professionals as productivity multipliers; their value proposition is faster work at equal or better quality. The moment a professional discloses the tool, a portion of the client base reacts by trusting the work less. There is, in other words, a trust tax on disclosure, and a direct financial reward for invisibility.

Ghost Authorship and the Pen That Nobody Holds

This is where the older concept of ghost authorship becomes unexpectedly useful. Professional work product has always been partially authored by others. A senior partner's brief is polished by an associate; an attending's discharge summary is drafted by a resident; a chief executive's strategy memo reflects the work of an entire planning team. The signature at the bottom is not a claim of sole authorship. It is a claim of responsibility. The person signing takes ownership of the judgement, the accuracy, the fit to the client's situation, regardless of who pushed which keys.

AI tools, at their best, can be absorbed into this tradition. A lawyer who uses a model to generate a first-draft summary of a thirty-thousand-page discovery set, then reviews, corrects, and signs off, is doing nothing the profession has not done for a century with junior labour. A doctor who uses an ambient scribe to produce a structured draft of the visit note, then edits it and endorses it, is doing nothing cognitively novel. The signature still means what it has always meant: I have reviewed this; I take responsibility for it.

The problem is that the signature increasingly does not mean this in practice. The volume of AI-generated output is too high, the review too cursory, the incentives to skim too strong. The Alabama court in Johnson v. Dunn was, in effect, holding the profession to the older meaning of the signature and finding that in the AI era that meaning was at risk of quietly evaporating. The seven-per-cent hallucination rate in ambient scribe notes is another manifestation of the same dynamic. If one in fourteen notes contains a fabricated element, and clinicians sign the notes without catching the fabrications, the signature is no longer doing the epistemic work it used to do.

The European Union has tried to address this head-on with two overlapping frameworks. GDPR Article 22 gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects, with narrow exceptions requiring meaningful safeguards and explicit consent. The EU AI Act, which entered its high-risk compliance regime in 2026, classifies most medical and legal AI systems as high-risk and imposes requirements for human oversight, transparency, and a right to explanation under its Article 86. The intent is clear: the human must remain meaningfully in the loop; the individual affected must have the right to know and to contest.

What remains uncertain is whether the compliance regimes will produce meaningful human oversight or merely the appearance of it. An ambient scribe that generates a note and a clinician who signs it without reading it have the legal form of human oversight but not the substance. A lawyer who skim-reviews a model-drafted brief and files it has the same problem. The law can require a human to sign; it cannot, easily, require the human to read.

What Trust Was Actually For

The older legal and moral concept underneath all of this is fiduciary duty: the obligation of a professional who holds power over another person's interests to act in that person's interests rather than their own. The duty predates the professions in their modern form. Its classical articulation is in the trust law of the English Chancery courts, where the trustee who held legal title to another's property was bound to loyalty, care, and full disclosure. When the professions organised themselves in the nineteenth and twentieth centuries, they borrowed this structure. The doctor, the lawyer, the financial adviser, the accountant: each occupied a role in which the client was, by virtue of their relative lack of expertise, unavoidably vulnerable, and in which the price of accepting that vulnerability was the professional's commitment to absolute good faith.

Disclosure has always been the operational heart of this commitment. A fiduciary who conceals a conflict of interest is not a fiduciary. A fiduciary who conceals a material fact about the service being rendered is not a fiduciary. Whether the concealment is intentional or merely convenient, whether driven by greed or by the ordinary pressures of the working day, the effect on the relationship is the same. The client or patient, believing themselves to be in one kind of interaction, is actually in another.

The invisible AI professional is a new instance of a very old problem. The tool might be excellent. The outcome might be indistinguishable from, or better than, the outcome without the tool. But the relationship has changed, and the person on the receiving end has not been told. That is, in the classical formulation, a breach of the duty to disclose. It is not a technology question; it is a trust question.

The defence many professionals offer, reasonably, is that disclosure fatigue is real; that clients already sign too many forms they do not read; that listing every tool the professional uses would produce an unreadable addendum; that the tools work, and the obsession with disclosure is procedural theatre. There is truth in this. Nobody wants a consent form for the stethoscope. Nobody wants a disclosure for the word processor. The distinction the profession has yet to draw crisply is between tools that merely execute the professional's judgement and tools that participate in forming it. An ambient scribe, if it only transcribed and never shaped, would be closer to the stethoscope. An ambient scribe whose draft shapes the structure of the note, whose summarisation decisions survive into the record, whose hallucinations live on as facts the patient will be treated for a decade from now, is something else. It is in the room, and the patient is entitled to know.

The Question That Does Not Close

The invisible professional era will not be legislated away in a single session, and the regulatory responses now emerging (New York's S7263, the SEC's 2026 examination priorities, the FCA's evolving guidance, the EU AI Act's high-risk regime, NHS England's ambient scribe framework, the Canadian provincial privacy commissioners' ongoing investigations) will not settle the underlying question cleanly. They will push against the edges. They will shape behaviour at the margin. They will raise the cost of the most egregious invisibility. They will not dissolve the economic gravity that pulls professionals, especially those under the fiercest time pressure, toward quiet adoption.

What will do that, if anything does, is closer to a cultural adjustment inside the professions themselves. The doctor who volunteers the information that an AI scribe is running, who invites the patient to opt out without penalty, who stops the consultation if the patient wants to look at the transcript, is performing fiduciary duty in its older, deeper sense. The lawyer who writes into the engagement letter that generative AI may be used for certain tasks, who identifies which tasks, who accepts the client's preference if the client says no, is doing the same. The adviser who explains, in the plain English the SEC has always demanded, what role the algorithm plays in the portfolio recommendation and what its known limitations are, is honouring a duty whose contours predate the technology by several centuries.

Saucedo, the patient in the California clinic, trusted his doctor. The trust did not disappear because an ambient scribe was running. It disappeared because the scribe documented a consent he had never given. What broke was not the relationship with AI. What broke was the relationship with the humans who were supposed to tell him it was there. Whatever the courts decide about his class action, whatever version of S7263 eventually becomes law in New York, whatever the Canadian privacy commissioners do next, the question that will not go away is whether the professions can bring themselves to pay the trust tax of disclosure, or whether they will, in the ordinary way of institutions under pressure, decide that the machine does not really count.

References and Sources

  1. American Bar Association Health Law Section. “Ambient AI Scribes: Efficiency Gains vs Emerging Privacy and Cybersecurity Risks.” 2026. https://www.americanbar.org/groups/health_law/news/2026/ambient-ai-scribes-privacy-cybersecurity/
  2. Medscape. “Sharp HealthCare Sued Over AI Scribe, Patient Consent.” 2026. https://www.medscape.com/viewarticle/health-system-sued-over-ai-scribe-technology-patient-consent-2026a10001k7
  3. American Hospital Association. “6 Health Systems Enhancing Care Delivery with Ambient AI Scribes.” 14 April 2026. https://www.aha.org/aha-center-health-innovation-market-scan/2026-04-14-6-health-systems-enhancing-care-delivery-ambient-ai-scribes
  4. Digital Health. “NHSE publishes fresh guidance on safe use of ambient scribes.” April 2026. https://www.digitalhealth.net/2026/04/nhse-publishes-fresh-guidance-on-safe-use-of-ambient-scribes/
  5. Wireless-Life Sciences Alliance. “Ambient AI Scribe: Evidence, ROI, Risks, and an Implementation Playbook.” February 2026. https://wirelesslifesciences.org/2026/02/ambient-ai-scribe-evidence-roi-risks-and-an-implementation-playbook/
  6. Florida Doctor Magazine. “AI Scribes Are Recording Your Patients Without Consent, and Florida Doctors Could Face Felony Charges.” 2026. https://floridadoctormagazine.com/ai-scribes-recording-patients-without-consent-florida-felony-2026/
  7. TechTarget. “Sutter Health, MemorialCare face class action lawsuit over AI scribe use.” https://www.techtarget.com/healthtechsecurity/news/366641717/Sutter-Health-MemorialCare-face-class-action-lawsuit-over-AI-scribe-use
  8. Policy Options / IRPP. “AI scribes in health care raise risks for patients and privacy.” April 2026. https://policyoptions.irpp.org/2026/04/ai-scribes-health-care-canada-privacy-safety-risks/
  9. Canadian Medical Association Journal. “The commercialization of patient data in Canada: ethics, privacy and policy.” https://www.cmaj.ca/content/194/3/E95
  10. PMC. “The Primary Care Medical Record Industry in Canada and Its Data Collection and Commercialization Practices.” https://pmc.ncbi.nlm.nih.gov/articles/PMC12053517/
  11. Canadian AI Incident Monitor. “Clinical AI Systems in Canada: Deployed with Documented Evidence Gaps and Privacy Violations.” https://caim.horizonomega.org/hazards/66/
  12. Office of the Information and Privacy Commissioner of Alberta. “Commissioner Releases Babylon by Telus Health Investigation Reports.” https://oipc.ab.ca/p2021-ir-02-h2021-ir-01/
  13. Holland & Knight. “New York Bill Would Create Liability for Chatbot Proprietors Offering Professional Advice.” March 2026. https://www.hklaw.com/en/insights/publications/2026/03/new-york-bill-would-create-liability-for-chatbot-proprietors
  14. New York State Senate. “NY State Senator Kristen Gonzalez on her bill to address AI Chatbots impersonating licensed professionals.” 2026. https://www.nysenate.gov/newsroom/press-releases/2026/kristen-gonzalez/ny-state-senator-kristen-gonzalez-her-bill-address-ai
  15. Burrell Law. “Will Using AI Chatbots Cost You? New York's AI Chatbot Liability Bill (S7263): Four Critical Drafting Problems the Legislature Should Fix.” https://burrell-law.com/artificial-intelligence-a-i/will-using-a-i-chatbots-cost-you-new-yorks-a-i-chatbot-liability-bill-s7263-four-critical-drafting-problems-the-legislature-should-fix/
  16. Berkeley Law. “Mata v. Avianca, Inc., 678 F.Supp.3d 443 (2023).” https://www.law.berkeley.edu/wp-content/uploads/2025/12/Mata-v-Avianca-Inc.pdf
  17. Relativity Blog. “AI Case Law Update: The Lamborghini Doctrine of Hallucinations.” https://www.relativity.com/blog/ai-case-law-update-the-lamborghini-doctrine-of-hallucinations/
  18. Goodwin. “2026 SEC Exam Priorities for Registered Investment Advisers and Registered Investment Companies.” December 2025. https://www.goodwinlaw.com/en/insights/publications/2025/12/alerts-privateequity-pif-2026-sec-exam-priorities-for-registered-investment-advisers
  19. CLS Blue Sky Blog. “Regulating Algorithmic Accountability in Financial Advising.” 4 June 2025. https://clsbluesky.law.columbia.edu/2025/06/04/regulating-algorithmic-accountability-in-financial-advising/
  20. PubMed. “Ethics in Patient Preferences for Artificial Intelligence-Drafted Responses to Electronic Messages.” https://pubmed.ncbi.nlm.nih.gov/40067301/
  21. GDPR Info. “Art. 22 GDPR: Automated individual decision-making, including profiling.” https://gdpr-info.eu/art-22-gdpr/
  22. Tech Policy Press. “Understanding Right to Explanation and Automated Decision-Making in Europe's GDPR and AI Act.” https://www.techpolicy.press/understanding-right-to-explanation-and-automated-decisionmaking-in-europes-gdpr-and-ai-act/
  23. Wikipedia. “Montgomery v Lanarkshire Health Board.” https://en.wikipedia.org/wiki/Montgomery_v_Lanarkshire_Health_Board
  24. New England Journal of Medicine. “Fifty Years Later: The Significance of the Nuremberg Code.” https://www.nejm.org/doi/full/10.1056/NEJM199711133372006
  25. Oxford Academic, Journal of the American Medical Informatics Association. “Patient attitudes toward ambient artificial intelligence scribes in clinical care: insights from a cross-sectional study.” https://academic.oup.com/jamia/advance-article-abstract/doi/10.1093/jamia/ocaf218/8371725

Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
