The Machine Is Listening: Disclosure and Consent in Modern Therapy

The fifty-minute hour has always been a negotiated fiction. A patient talks, a therapist listens, and somewhere between them a third thing forms: a clinical relationship dense with attention, memory, and the particular quality of being heard. For more than a century, that triangle contained only two people. It no longer does.

Open the laptop of a typical American psychologist in the spring of 2026 and the odds are better than even that an AI tool is already running. It might be an ambient scribe quietly listening to the session and transcribing it into a SOAP note by the time the patient reaches their car. It might be a risk-screening dashboard that flags linguistic markers of suicidality in a client's intake forms. It might be a real-time coaching tool that whispers a reframe into the clinician's earpiece when a silence stretches too long. The American Psychological Association's Practitioner Pulse Survey, the results of which landed in March 2026, is unambiguous: AI has moved from fringe novelty to routine infrastructure in clinical practice. Fifty-six per cent of the 1,742 psychologists surveyed said they had used an AI tool to assist their work in the past year. Twenty-nine per cent use one at least monthly. In 2024, seventy-one per cent said they had never touched the stuff. That is a generational shift compressed into twelve months.

What the APA did not find, and went out of its way to say it did not find, was a professional consensus on any of the hard questions. Not on disclosure. Not on patient rights. Not on what a therapist owes a client before flipping on a machine that will listen to their most intimate disclosures and then ship the audio to a vendor in a different state, possibly a different country, for processing. The technology has outrun the ethics, and the ethics have outrun the law, and the patient, sitting on the couch, is the last to know.

This is the part of the story that does not get a press release. When somebody talks about AI in mental health, they usually mean the chatbot, the thing a lonely teenager might befriend at three in the morning. That conversation has had its reckoning in courtrooms and congressional hearings, and it is not the one that matters most for the several million people who still see a human therapist each week. The interesting question is quieter and harder. If the clinician sitting across from you, the one you were referred to by your GP or your insurer, is quietly running an AI in the background of your treatment, what exactly are they obliged to tell you about it? And does the research support the idea that adding the machine makes the therapy better, or only cheaper to provide?

The Ambient Scribe Enters the Room

The single most common entry point for AI into clinical practice is also the most banal. Therapists hate paperwork. They always have. The running joke in any psychology department is that nobody went into the field to write insurance justification letters, and yet everyone spends roughly a third of their working hours doing exactly that. A standard therapy session runs fifty minutes. The documentation attached to it, progress notes, billing codes, risk assessments, treatment plan updates, can eat another thirty. Over a week, the arithmetic is brutal. A therapist seeing twenty-five clients is writing for twelve and a half hours on top of their clinical load. Much of that work happens after dinner.

Into this grim calculus walked a class of product nobody would have predicted five years ago: the AI therapy scribe. Blueprint, founded in 2020 and now one of the category leaders, offers an ambient listening tool that records sessions, transcribes them, and spits out a structured progress note in under sixty seconds. The pitch is seductive. The company claims average documentation times can fall from around 4.7 hours a day to 1.2, a reduction of roughly seventy-five per cent. Upheal, a Prague-based competitor, promises essentially the same thing with a slightly different feature set: SOAP, DAP, BIRP, or GIRP notes generated to a therapist's preferred template, with the model learning the clinician's writing style over time. Both companies advertise HIPAA compliance, SOC 2 certification, Business Associate Agreements, and end-to-end encryption. Blueprint automatically deletes its recordings after transcription. Upheal deletes audio too, though it retains transcripts for clinician review.
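
Neither claim is hard to sanity-check. A quick sketch in code, using only the round numbers quoted above (the weekly load is the article's own arithmetic; the 4.7-to-1.2-hour figure is Blueprint's claim rather than an independent measurement):

```python
# Sanity check on the documentation arithmetic quoted above. All inputs are
# the article's round numbers and Blueprint's own claim, not measured data.
clients_per_week = 25
doc_minutes_per_session = 30  # notes, billing codes, risk assessments, plan updates

weekly_doc_hours = clients_per_week * doc_minutes_per_session / 60
print(f"Weekly documentation load: {weekly_doc_hours:.1f} hours")  # 12.5

before_hours, after_hours = 4.7, 1.2  # Blueprint's claimed daily figures
reduction = (before_hours - after_hours) / before_hours
print(f"Claimed reduction: {reduction:.0%}")  # 74%, roughly three quarters
```

Both figures check out internally; whether they hold up in independent implementation studies is a separate question this piece returns to below.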

These are not marginal products. Blueprint says it is used by tens of thousands of therapists across the United States. Upheal reports a comparable footprint. They have competitors: Freed, Mentalyc, Twofold, Heidi Health. They have pricing tiers aimed at solo practitioners and at group practices, and in some cases at the in-house compliance officers at hospital systems. They have venture capital behind them. They have, above all, momentum.

And they sit at the centre of an ethical problem most of their users have not fully reckoned with. When a client walks into a therapy session, their spoken words are, under ordinary confidentiality rules, legally and ethically protected to a very high standard. Therapy is the paradigm case of a privileged communication. What happens when those words are transmitted in real time to a third-party vendor for algorithmic processing, stored in a cloud they cannot see, and used, in most contracts, to improve the vendor's underlying model? The HIPAA answer is that the vendor becomes a business associate, that a Business Associate Agreement must be in place, and that appropriate safeguards must be documented. The ethical answer is considerably more complicated. It turns on whether the client has any meaningful understanding of what is happening, and any meaningful ability to say no.

What the APA Actually Said

The March 2026 APA findings, laid out across the association's Monitor on Psychology coverage and the underlying Practitioner Pulse Survey data, painted a field in the middle of an uneven, improvised adoption curve. The most common AI uses reported by psychologists were administrative: drafting emails, summarising documents, generating content, and dictation or note-taking. These are not, for the most part, tasks that touch clinical judgement. They are the grinding logistics of running a practice. But a growing minority of respondents reported more ambitious uses: treatment planning support, risk assessment augmentation, and, in the most contested category, real-time prompting during sessions.

The concerns practitioners reported were granular and serious. Sixty-seven per cent worried about data breaches. At least sixty per cent flagged concerns about biased inputs and outputs, inaccurate outputs, lack of rigorous testing, and unanticipated social harms. Thirty-eight per cent worried that AI would eventually render some or all of their job duties obsolete. The APA's accompanying health advisory, issued in tandem with the survey, was blunt: AI chatbots and wellness apps lack the testing and safety measures needed to provide quality mental health support, and such tools cannot replace qualified clinicians. The advisory did not, however, tell therapists what to do about the AI already running quietly in their own offices.

That gap was partially filled by the APA's Ethical Guidance for AI in the Professional Practice of Health Service Psychology, a document released in mid-2025 and refined through the end of that year. The guidance articulates what, in theory, any American clinician using AI is obliged to do. It is worth dwelling on, because the clarity of the guidance stands in such contrast to the murk of actual practice. Clinicians must obtain informed consent. They must clearly communicate the purpose, application, and potential benefits and risks of the AI tools they are using. They must disclose AI use to individuals receiving direct care, to other relevant providers, and to any third party who might be considered a client. The disclosure must cover the type of AI tools involved, their effect on treatment, the flow of data, the role of third-party vendors, and the cost implications of AI-assisted care. Crucially, clients must be able to opt out, and where feasible be offered a human-only alternative. AI must augment, not replace, professional judgement. The psychologist remains responsible for every decision, and must not defer blindly to a machine-generated recommendation.

This is an admirably complete list, and for anyone who has sat through the five-minute consent ritual in a modern clinic, a slightly fantastical one. The average informed consent form in private practice is already a dense three-page document that clients sign while their therapist fetches water. Adding a granular disclosure of AI usage, one that specifies which vendor, which data paths, which retention policies, and which opt-out mechanisms apply, pushes the document towards something no reasonable client will read. And yet the APA is right that the alternative, a disclosure so thin it amounts to a rubber stamp, betrays the therapeutic relationship's foundational trust.

The Psychiatric News Point/Counterpoint in January 2026 surfaced the tension openly. One camp argued that AI disclosure should be patient-empowering by default, with consent forms that actively invite clients to ask questions about the technology in the room. The other argued for a stricter standard rooted in the principle of non-maleficence: if the clinician is not prepared to answer detailed technical questions about the tool they are using, they should not be using it. Somewhere between those two positions sits the average working therapist, using a scribe because it saves them six hours a week, with only a dim understanding of where the audio goes after the session ends.

The Evidence Problem

Suppose, for a moment, that every clinician disclosed every AI use with flawless clarity. Suppose every client understood, in appropriate detail, what was happening. The question underneath the disclosure question would remain: does any of this make therapy better, or does it simply make it cheaper to deliver? Here the literature is much thinner and much less flattering than the marketing materials suggest.

The most comprehensive recent answer came from a scoping review of reviews published in January 2026 in Frontiers in Psychiatry, which examined the state of AI in mental health care across the field. The review's core finding was that most clinical AI models in mental health remain proof-of-concept. They achieve impressive headline numbers, AUCs of up to 0.98 and accuracies reaching ninety-seven per cent, but those numbers come overwhelmingly from internal validation on small cohorts. When the same models are tested against independent data, performance degrades, sometimes substantially. External validation is sparse. Where it exists, AUCs typically settle into a more modest 0.80 to 0.88 range, and often lower. The review concluded that the gap between demonstrated performance in the laboratory and deployed performance in the clinic had not meaningfully narrowed.
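
It is worth being concrete about what that degradation looks like. Below is a minimal sketch on synthetic data of the pattern the review describes: a model fitted to one small cohort posts a respectable AUC on a held-out split of that same cohort, then slips on an independent cohort drawn from a shifted distribution. The cohort sizes, the degree of shift, and the resulting numbers are all illustrative, not taken from any of the reviewed studies.

```python
# Internal vs external validation on synthetic data: the model looks strong
# on a held-out split of its own cohort, weaker on a shifted external cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Binary outcome driven by 10 features; `shift` perturbs the true signal,
    standing in for a clinic whose population differs from the training site."""
    X = rng.normal(size=(n, 10))
    w = np.ones(10) + shift * rng.normal(size=10)
    y = (X @ w + rng.normal(scale=2.0, size=n)) > 0
    return X, y.astype(int)

X, y = make_cohort(300)  # small internal cohort, as in most published models
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

X_ext, y_ext = make_cohort(1000, shift=0.8)  # independent, shifted cohort
print("internal AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))
print("external AUC:", round(roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]), 2))
```

The mechanism is mundane, a model tuned to one population's quirks meets another population, but it is enough on its own to explain much of the gap between demo and deployment.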

That validation gap matters because the business case for clinical AI assumes the headline numbers. A risk-screening tool that correctly identifies suicidal ideation ninety-five per cent of the time on its training data might catch only a tenth as many true cases in deployed practice. A treatment-planning assistant that suggests evidence-based interventions might do so using a corpus that over-represents certain populations and therapies. None of this is to say AI cannot help. It is to say that the current evidence base for routine clinical deployment is not yet what the confidence of the product demos implies.
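
There is a second, purely statistical reason to distrust headline accuracy in screening, one that operates even before any distribution shift: the conditions being screened for are rare. A worked example with assumed numbers, not drawn from any of the studies above:

```python
# Base-rate effect: even an accurate screen produces mostly false alarms when
# the condition is rare. All three inputs are assumptions for illustration.
prevalence = 0.02   # 2% of screened clients actually at risk
sensitivity = 0.95  # share of true cases the screen catches
specificity = 0.95  # share of non-cases it correctly clears

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)
print(f"Share of flags that are real cases: {ppv:.0%}")  # 28%
```

At two per cent prevalence, a screen that is ninety-five per cent sensitive and ninety-five per cent specific still raises roughly seven false alarms for every ten flags, a property of the arithmetic rather than of any particular vendor's model.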

The picture is worse, and better documented, for consumer-facing therapy chatbots, which remain distinct from clinician-supporting tools but share an underlying technology and some of the same pathologies. The Stanford study that has become the most frequently cited reference point in this debate, presented at the ACM Conference on Fairness, Accountability, and Transparency and led by Jared Moore, Declan Grabb, and Nick Haber of Stanford with collaborators from Carnegie Mellon, the University of Minnesota, and the University of Texas at Austin, delivered a particularly uncomfortable finding. The researchers evaluated five popular therapy chatbots, including 7cups' Pi and Noni, and Character.AI's Therapist, against a set of clinical criteria. Licensed human therapists responded appropriately ninety-three per cent of the time. The chatbots responded appropriately less than sixty per cent of the time.

More damning still, the researchers found that the chatbots exhibited more stigma towards certain mental health conditions than others. Alcohol dependence and schizophrenia drew more negative responses than depression. Newer and larger models did not perform better than older or smaller ones. In scenarios involving suicidal ideation or delusions, where a trained clinician would gently push back or help the client reframe, the chatbots sometimes enabled the dangerous thinking rather than interrupting it. The Stanford team was careful about what they were claiming. Their argument was not that chatbots could never be useful. It was that the specific failure modes they documented, stigma reinforcement and crisis mishandling, are particularly dangerous precisely because of the kind of user who seeks out an AI therapist in the first place.

There is a subtler finding layered underneath. If the chatbot reinforces stigma about schizophrenia, and the user of the chatbot is someone with a family history of schizophrenia who has not yet sought formal care, the effect of the interaction is to push that person further from the clinician they actually need. The AI therapist, in certain configurations, does not merely fail to help. It delays the real help.

The TikTok Problem

A second category of failure sits adjacent to the clinical AI question, and is probably more consequential at a population level. It concerns the recommendation algorithms that sit on the social-media platforms where vulnerable people now habitually go looking for mental health information. An arXiv pre-print published on 16 April 2026 by a research team auditing TikTok's mental health recommendations brought the issue into sharp focus. The paper, titled “Seeking Help, Facing Harm”, used thirty simulated accounts that interacted with TikTok's For You feed over a seven-day period in January 2026, collecting 8,727 videos. The audit varied two dimensions: whether the initial search framing was distress-initiated or help-initiated, and whether the account's interaction strategy leaned into the content or away from it.

The finding was that engaged accounts saw their feeds saturate with mental health content, roughly forty-five per cent of daily recommendations. Avoidance and passive viewing reduced the exposure but did not eliminate it, with mental health content still comprising between eleven and twenty per cent of recommended videos. Crucially, TikTok's algorithm did not appear to meaningfully distinguish between a user expressing distress and a user seeking help. Both were treated as a single mental health interest cluster. The consequence is that someone who types “how to stop feeling suicidal” gets treated roughly the same as someone who types “romanticising depression”. Recovery-oriented intent is conflated with distress consumption, and vulnerable users get served content that may exacerbate rather than mitigate harm.
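
The audit's design is compact enough to sketch in a few lines. The reconstruction below uses the figures quoted above; the condition labels are descriptive shorthand rather than the paper's own terminology, and the exact allocation of the thirty accounts across cells is not specified in the text.

```python
# Reconstruction of the audit's factorial design and headline exposure figures.
# Labels are shorthand; the allocation of the 30 accounts across cells is an
# assumption, as the text does not specify it.
from itertools import product

search_framings = ["distress-initiated", "help-initiated"]
interaction_styles = ["engaged", "passive", "avoidant"]

# Share of daily For You recommendations that were mental health content,
# by interaction style (the pre-print's headline figures).
exposure = {
    "engaged": "~45%",
    "passive": "11-20%",
    "avoidant": "11-20%",
}

for framing, style in product(search_framings, interaction_styles):
    print(f"{framing:>19} x {style:<9} -> MH share of feed: {exposure[style]}")
```

The striking cell is not the engaged one; saturation under engagement is what an engagement-optimised system is supposed to do. It is the avoidant one: even an account actively steering away still received a feed in which mental health content made up between one in nine and one in five videos.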

The TikTok finding matters for the therapy-room question because it changes the context in which the therapy happens. The client who walks into a session at five o'clock has spent the preceding weeks, often the preceding years, consuming algorithmic content that actively shapes their sense of what their condition is and how it should be treated. Eating-disorder content, trauma content, ADHD content, grief content: all of it flows through the same engagement-optimised pipe. When the therapist then introduces an additional AI tool into the session, even a benign one like a scribe, they are not doing so in neutral territory. They are doing so on ground already extensively ploughed by algorithmic influence. The disclosure question is no longer just about the scribe. It is about the whole ecosystem of machine-mediated attention the client arrives with.

What the Patient Is Owed

So: if AI is now embedded in what remains one of the most intimate and trust-dependent professional relationships most people will ever have, what disclosure obligations should apply? The answer is not a single sentence, but it is a reasonably short list.

First, a client has the right to know, in plain language, that AI is in the room. That starts with whether a scribe is recording the session, but it extends to whether any automated system is generating, drafting, or shaping the treatment plan; flagging risk signals that will influence the clinician's judgement; or processing session data after the fact. The disclosure should happen at the start of treatment, not buried in a stack of intake forms. It should be revisited when tools change.

Second, the client has the right to understand the data path. Not every technical detail, but the substantive answers to the questions any thoughtful person would ask. Where does the audio go? Who stores it? For how long? Is it used to train a model? Can it be accessed by anyone other than the therapist? What happens if the vendor is acquired, goes bankrupt, or suffers a breach? These questions are answerable, in most cases, from a vendor's published documentation. If the therapist cannot answer them, the therapist does not yet understand the tool they are using.

Third, the client has the right to opt out. This is the hinge of the whole disclosure regime. A disclosure that comes with no meaningful alternative is not consent; it is notification. The APA's guidance is explicit that, where feasible, a human-only alternative should be offered. The qualifier “where feasible” is doing a great deal of work, and it is precisely the word that clinicians and practice managers will lean on when it becomes inconvenient to honour the principle. A patient who says “please do not record the session” should not have to become a problem case.

Fourth, the client has the right to know what the AI is not doing. Most people, when told an AI is involved in their care, will assume more than is actually the case. They may assume the AI is making diagnostic judgements it is not making. They may assume the therapist has abdicated more authority than they have. They may also assume the AI is doing less than it is. The disclosure should be honest in both directions.
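
A practice that took these four rights seriously could write them down, per client and per tool. Below is a minimal sketch of what such a disclosure record might contain; the field names and the example entry are hypothetical, not any vendor's or regulator's schema.

```python
# A sketch of a per-client, per-tool AI disclosure record covering the four
# rights above. All field names and example values are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolDisclosure:
    tool_name: str           # right 1: AI is in the room, and it is named
    purpose: str             # what it does within this treatment
    data_path: str           # right 2: where audio and transcripts go
    retention: str           # how long the vendor keeps them
    used_for_training: bool  # is client data used to improve the model?
    opt_out_offered: bool    # right 3: a genuine human-only alternative
    not_doing: str           # right 4: what the tool is NOT doing
    disclosed_on: date = field(default_factory=date.today)

scribe = AIToolDisclosure(
    tool_name="Ambient scribe (example vendor)",
    purpose="Transcribes sessions and drafts progress notes for my review",
    data_path="Audio sent to the vendor's US cloud; deleted after transcription",
    retention="Transcript retained until I approve the note",
    used_for_training=False,
    opt_out_offered=True,
    not_doing="No diagnosis, no treatment decisions, no risk scoring",
)
```

Nothing in that record is more technical than a thoughtful client's questions, which is rather the point: if a field cannot be filled in, the clinician does not yet understand the tool well enough to be using it.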

Illinois, in August 2025, became the first US state to write this logic into statute. Public Act 104-0054, known as the Wellness and Oversight for Psychological Resources Act, prohibits the use of AI to provide therapy directly, to make independent therapeutic decisions, to interact with clients in any form of therapeutic communication, or to generate treatment plans without clinician review. It permits AI for administrative and supplementary support, provided the patient has given specific, written, revocable consent. Violations carry civil penalties of up to $10,000. Illinois will not be the last state to legislate this territory. Governing AI in Mental Health, a fifty-state legislative review published in late 2025, identified similar proposals in a growing number of jurisdictions. The Federal Trade Commission, having opened an inquiry in September 2025 into AI chatbots acting as companions, is circling the consumer end of the problem. The Department of Health and Human Services has taken a more deregulatory line, promoting adoption through pilot programmes, but the overall direction of travel on disclosure is fairly clear.

In the UK, the regulatory movement has been quieter but real. The British Psychological Society has endorsed a Global Psychology Alliance guidance document on AI in psychology. The Health and Care Professions Council, which regulates practitioner psychologists in the UK, collaborated with four other regulators in February 2026 on a joint statement on AI in education and training. Neither document carries the specificity of the Illinois statute. But the principle is being worked out in comparable terms: disclosure, consent, clinician oversight, and meaningful alternatives.

The Economic Gravity

The harder question, and the one the industry would prefer not to answer, is whether the motivation for AI adoption in therapy has more to do with clinical improvement or with cost reduction. The available evidence suggests the latter is doing most of the work.

Start with what the data says about outcomes. A randomised clinical trial of a conversational AI agent for psychiatric symptoms, published in late 2025, found meaningful reductions in anxiety and depression symptoms compared to group therapy and control conditions. Meta-analytic work on AI-driven interventions has found moderate effect sizes, broadly comparable to low-intensity clinician-delivered treatments, particularly for cognitive behavioural approaches to mild or moderate depression. The therapeutic alliance literature suggests users can form surprisingly strong bonds with AI agents, sometimes within three to five days, and that these bonds correlate with engagement and symptom improvement. None of this should be dismissed. For someone who would otherwise get no care at all, a decent AI-mediated intervention is clearly better than nothing.

Now set that next to the economic reality of how therapy is paid for. Insurers routinely deny around fifteen per cent of mental health claims even when prior authorisation has been obtained. In-network therapists spend roughly a third of their working week on documentation and billing. The average productivity gain from an AI scribe, across published implementation studies, is a reduction in documentation time of roughly seventy per cent. That saving does not, in general, accrue to the patient as a lower fee. It accrues to the practice as either higher clinician throughput, a reduction in administrative overhead, or both. In the most straightforwardly commercial model, the clinician sees more clients per week because each one requires less paperwork, and the practice's revenue per clinician rises.
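
The throughput mechanics deserve to be made explicit, because they are where the economic gravity comes from. A stylised model, using the article's round numbers for session and documentation time; the fixed weekly time budget, and the assumption that every saved minute is refilled with sessions, are mine:

```python
# Stylised throughput model: how a documentation cut becomes extra caseload.
# Session and documentation minutes are the article's round numbers; the
# fixed weekly budget and the full refill of saved time are assumptions.
session_min, doc_min = 50, 30
doc_cut = 0.70  # scribe cuts documentation time by roughly 70%

per_client_before = session_min + doc_min                 # 80 minutes
per_client_after = session_min + doc_min * (1 - doc_cut)  # 59 minutes

weekly_budget_min = 25 * per_client_before  # 25 clients' worth of working time
clients_after = weekly_budget_min / per_client_after
gain = per_client_before / per_client_after - 1
print(f"25 -> {clients_after:.0f} clients/week (+{gain:.0%})")  # 25 -> 34, +36%
```

On these assumptions, the scribe turns twenty-five clients a week into roughly thirty-four, which is the order of magnitude behind the “thirty per cent more patients” figure that appears below.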

This is not inherently sinister. More clinician availability in a field with chronic workforce shortages is a genuine social good. But it reshapes what AI is for in a way the consent forms rarely acknowledge. The AI is not principally a clinical tool; it is a throughput tool. It makes it economically viable for the therapist to do more of what they already do. Whether it makes what they do better is a separate question, and the answer so far is: perhaps slightly, and only in certain narrow domains, and only where clinicians actively engage with the output rather than letting it substitute for their attention.

The managed-care angle is where this becomes politically charged. If AI scribes and AI treatment-planning aids allow a clinician to see, say, thirty per cent more patients per week, an insurer looking at its network will be tempted, over time, to recalibrate reimbursement rates on the assumption that AI productivity gains will absorb the pressure. The savings that flowed to the practice in year one flow to the payer in year five. The patient's out-of-pocket cost does not decline, but the amount of clinician attention per dollar declines quietly. This is the pattern that radiology, primary care, and coding specialities have already been living through for a decade. Psychotherapy is a latecomer to the trend, but there is no obvious reason it will be immune.

There is a further and more disquieting possibility, which the APA's guidance gestures at without quite confronting. If insurer incentives push towards AI-augmented therapy, and AI-augmented therapy can be delivered at higher throughput, the marginal therapist may find it increasingly difficult to operate without the AI. Not because the AI makes them a better clinician, but because opting out becomes economically untenable. The clinician who refuses the scribe sees fewer patients, generates less revenue, and becomes less competitive in the market. The right to opt out, from the therapist's side, begins to erode under the same logic that is already eroding it on the patient's side.

The Alliance Question

The single deepest concern, and the one hardest to measure, is what all of this does to the therapeutic alliance itself. The alliance, in the clinical literature, refers to the particular quality of collaboration and bond between client and clinician that consistently predicts outcomes across therapeutic modalities. It is not a decoration on therapy; it is substantively what therapy is. The studies on AI-mediated interventions suggest the alliance can form with a machine as well as a human, which is a finding both reassuring and unsettling. A recent Journal of Medical Internet Research integrative review on the digital therapeutic alliance identified five components: goal alignment, task agreement, therapeutic bond, user engagement, and the facilitators and barriers that mediate them. It found each was at least partially achievable in digital contexts.

That literature, however, is mostly about stand-alone digital interventions. The live question for the clinician-with-AI case is whether introducing a machine into the room between two humans changes the alliance those two humans are building. Common sense suggests it must. If the client knows that a transcript of the session is being processed by a vendor, that knowledge will shape what they say. Certain topics, abortion, substance use, past legal trouble, contemplated self-harm, may come up more tentatively or not at all. The therapist's assurance that “the system is HIPAA compliant” will not, in most clients, fully dissolve the awareness that a machine is listening. Awareness of surveillance, even benign surveillance, changes speech. That change is precisely what good therapy exists to undo.

There is a further dynamic on the therapist's side. When a clinician is using an ambient scribe, a small part of their attention is necessarily devoted to what the machine is picking up, whether it is capturing the important moments, whether the resulting note will require heavy editing. That split attention is subtle, but it is not zero. A significant body of attentional research suggests that the perceived presence of a recording device changes the behaviour of the person doing the recording, not only of the person being recorded. Whether that change, summed across a caseload of thirty clients per week, is large enough to measurably degrade therapeutic outcomes is an open empirical question. Nobody has yet designed the study that would answer it cleanly.

The Shape of an Honest Practice

If the current moment is a kind of frontier, a period in which adoption has raced ahead of norms, what does an honest AI-augmented practice look like? A few things are becoming clear.

It looks like disclosure written in plain English and revisited as tools change, not legal boilerplate tucked into the intake packet. It looks like vendor contracts the clinician has actually read and can summarise for a curious patient in three sentences. It looks like genuine opt-out alternatives, including for the administrative AI, with no penalty for the patient who takes them. It looks like clinicians who can articulate what the AI is doing and, more importantly, what it is not doing. It looks like practices that do not quietly shift the productivity dividend of AI into more intense caseloads without acknowledging the trade-off. It looks like professional bodies that enforce their guidance, rather than publishing it and moving on. It looks like regulators, at the state and federal level, who treat psychotherapy as a high-risk domain and set the bar for deployment accordingly.

It also looks like a willingness, on the part of the profession, to resist the parts of this technology that do not serve the patient. Real-time therapeutic prompting, in which an AI listens to the session and suggests what the clinician should say next, is the category where the ethics strain hardest. There is a plausible account of it in which a supervisor-in-the-ear helps less experienced clinicians avoid mistakes and improves overall care. There is a much less flattering account in which a clinician becomes a kind of animatronic front-end for a model whose reasoning is opaque, whose training data is proprietary, and whose accuracy on this particular patient has never been validated. The Illinois statute's prohibition on AI-driven therapeutic communication without clinician review reflects a judgement that the risk outweighs the benefit at the present state of the art. Other jurisdictions will have to make the same call.

The research agenda, too, has to catch up. The Frontiers scoping review's conclusion that most clinical AI in mental health remains proof-of-concept is not an argument against the technology. It is an argument for external validation studies, for pre-registered trials comparing AI-augmented care to unaugmented care on the outcomes patients actually care about, for longitudinal work on therapeutic alliance in hybrid settings, and for harm-reporting infrastructure of the kind that exists for drugs and devices but does not yet exist for software that mediates clinical relationships.

The Patient Still in the Room

Return, finally, to the room. A client arrives for a fifty-minute hour. They sit on a couch that has accommodated many lives before theirs. They are about to tell a therapist something they have never told anyone, or something they have told many people but never in quite this way. The question the APA report in March, the Frontiers review in January, the Stanford chatbot study, and the TikTok audit in April all circle without quite naming is whether that room is still the same room it was before the machines moved in.

The answer, probably, is that it is not, but that it can still be a good room, provided the people responsible for it are honest about what has changed. The therapist who discloses the scribe and offers the alternative, who reads the vendor contract, who does not pretend to their client that the AI is neutral or invisible, is not being paranoid. They are doing the oldest job in their profession, which is the protection of the therapeutic relationship against whatever threatens it, including the tools the therapist themselves brought in.

The patient, for their part, has a right the culture has not quite caught up with: the right to ask, and to get a real answer. Not a legally defensible reassurance, but the answer a friend would give if they happened to know the technology. Whether the profession can make that kind of answer routine is probably the question that will decide, more than any regulation, whether AI makes therapy better, or simply makes it cheaper to deliver, and quieter in its cheapness.

References and Sources

  1. American Psychological Association. “AI in the therapist's office: Uptake increases, caution persists.” Monitor on Psychology, March 2026. https://www.apa.org/monitor/2026/03/ai-reshaping-therapy
  2. American Psychological Association. “2025 Practitioner Pulse Survey: AI in the therapist's office.” APA Publications, released December 2025. https://www.apa.org/pubs/reports/practitioner/2025
  3. American Psychological Association. “Ethical guidance for AI in the professional practice of health service psychology.” APA Topics, 2025. https://www.apa.org/topics/artificial-intelligence-machine-learning/ethical-guidance-ai-professional-practice
  4. Frontiers in Psychiatry. “Artificial intelligence in mental health care: a scoping review of reviews.” January 2026. https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2026.1688043/full
  5. Moore, J., Grabb, D., Agnew, W., Ong, D. C., and Haber, N. (Stanford HAI, Carnegie Mellon, University of Minnesota, University of Texas at Austin). “Exploring the Dangers of AI in Mental Health Care.” Presented at ACM FAccT. Stanford HAI. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
  6. “Seeking Help, Facing Harm: Auditing TikTok's Mental Health Recommendations.” arXiv preprint, 16 April 2026. Accepted at ICWSM 2026. https://arxiv.org/abs/2604.14832
  7. Illinois General Assembly. “Public Act 104-0054: Wellness and Oversight for Psychological Resources Act.” August 2025. https://ilga.gov/Documents/Legislation/PublicActs/104/PDF/104-0054.pdf
  8. JMIR Mental Health. “Artificial Intelligence in Mental Health Services Under Illinois Public Act 104-0054: Legal Boundaries and a Framework for Establishing Safe, Effective AI Tools.” 2025. https://mental.jmir.org/2025/1/e84854
  9. Blueprint. “Blueprint: AI-Assisted EHR for Therapists.” Product documentation. https://www.blueprint.ai
  10. Upheal. “AI Clinical Notes: HIPAA-compliant therapy documentation.” Product documentation. https://www.upheal.io/ai-clinical-notes
  11. Blueprint. “AI for Therapists: ACA and APA Ethical Guidance.” https://www.blueprint.ai/blog/aca-and-apa-guidance-for-the-use-of-ai-for-therapists
  12. Videra Health. “Navigating the APA's AI Ethical Guidance: A Clinician's Perspective.” July 2025. https://viderahealth.com/2025/07/03/apa-ai-ethical-guidance-clinician-perspective/
  13. Documentation Wizard. “AI in Psychotherapy: Disclosure or Consent?” https://documentationwizard.com/ai-in-psychotherapy-disclosure-or-consent/
  14. HIPAA Journal. “When AI Technology and HIPAA Collide.” https://www.hipaajournal.com/when-ai-technology-and-hipaa-collide/
  15. Psychiatric News. “Point/Counterpoint: Empowering Patients vs. Do No Harm in the Age of AI.” January 2026. https://www.psychiatryonline.org/doi/10.1176/appi.pn.2026.01.1.11
  16. British Psychological Society. “New guidance sets out principles for understanding artificial intelligence in psychology.” https://www.bps.org.uk/news/new-guidance-sets-out-principles-understanding-artificial-intelligence-psychology
  17. Health and Care Professions Council. “HCPC collaborates on joint statement on use of AI in health and care professional education.” February 2026. https://www.hcpc-uk.org/education-providers/updates/2026/hcpc-collaborates-on-joint-statement-on-use-of-ai-in-health-and-care-professional-education/
  18. Journal of Medical Internet Research Mental Health. “Does the Digital Therapeutic Alliance Exist? Integrative Review.” 2025. https://mental.jmir.org/2025/1/e69294
  19. JMIR. “Efficacy of a Conversational AI Agent for Psychiatric Symptoms and Digital Therapeutic Alliance: A Randomized Clinical Trial.” 2025. https://pubmed.ncbi.nlm.nih.gov/41979879/
  20. Bipartisan Policy Center. “Paying for AI in U.S. Health Care.” https://bipartisanpolicy.org/issue-brief/paying-for-ai-in-u-s-health-care/
  21. Kelley Drye & Warren LLP. “AI Chatbots Face Rising Legal and Legislative Scrutiny.” https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ai-chatbots-face-rising-legal-and-legislative-scrutiny
  22. Nixon Law Group. “How States Are Enforcing New AI Laws in Healthcare and Why It Matters.” https://www.nixonlawgroup.com/resources/how-states-are-enforcing-new-ai-laws-in-healthcareand-why-it-matters
  23. NPR. “AI in the mental health care workforce is met with fear, pushback and enthusiasm.” 7 April 2026. https://www.npr.org/2026/04/07/nx-s1-5771707/mental-health-care-workforce-artificial-intelligence-ai
  24. Manatt Health. “Health AI Policy Tracker.” https://www.manatt.com/insights/newsletters/health-highlights/manatt-health-health-ai-policy-tracker
  25. Governing AI in Mental Health: 50-State Legislative Review. 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12578431/

Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
