AI Diagnoses Better Than Doctors: Why Patients Still Refuse to Trust It

The numbers are startling, and they demand attention. An estimated 795,000 Americans die or become permanently disabled each year because of diagnostic errors, according to a 2023 Johns Hopkins University study. In the United Kingdom, diagnostic errors affect at least 10 to 15 per cent of patients, with heart attack misdiagnosis rates reaching nearly 30 per cent in initial assessments. These are not abstract statistics. They represent people who trusted their doctors, sought help, and received the wrong answer at a critical moment.
Into this landscape of fallibility comes a promise wrapped in silicon and algorithms: artificial intelligence that can diagnose diseases faster, more accurately, and more consistently than human physicians. The question is no longer whether AI can perform this feat. Mounting evidence suggests it already can. The real question is whether you will trust a machine with your life, and what happens to the intimate relationship between doctor and patient when algorithms enter the examination room.
The Diagnostic Revolution Arrives
The pace of development has been breathtaking. In 2018, IDx-DR became the first fully autonomous AI diagnostic system in any medical field to receive approval from the United States Food and Drug Administration. The system, designed to detect diabetic retinopathy from retinal images, achieved a sensitivity of 87.4 per cent and specificity of 89.5 per cent in its pivotal clinical trial. A more recent systematic review and meta-analysis published in the American Journal of Ophthalmology found pooled sensitivity of 95 per cent and pooled specificity of 91 per cent. These numbers matter enormously. Diabetic retinopathy is a leading cause of blindness worldwide, and early detection can prevent irreversible vision loss. The algorithm does not tire, does not have off days, does not rush through appointments because another patient is waiting.
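For readers less familiar with these metrics: sensitivity is the share of people with the disease that the test correctly flags, while specificity is the share of healthy people it correctly clears. Below is a minimal sketch of the calculation, using hypothetical counts rather than the trial's actual data.

```python
# Hypothetical screening counts for illustration only; these are not the IDx-DR trial data.
true_positives = 175    # patients with retinopathy correctly flagged
false_negatives = 25    # patients with retinopathy missed
true_negatives = 715    # healthy patients correctly cleared
false_positives = 85    # healthy patients incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.1%}")  # 87.5% of disease cases caught
print(f"Specificity: {specificity:.1%}")  # 89.4% of healthy cases correctly ruled out
```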
By December 2025, the FDA's database listed over 1,300 AI-enabled medical devices authorised for marketing. Radiology dominates, with more than 1,000 approved tools representing nearly 80 per cent of the total. The agency authorised 235 AI devices in 2024 alone, the most in its history. In the United Kingdom, the NHS has invested over 113 million pounds into more than 80 AI-driven innovations through its AI Lab, and AI now analyses acute stroke brain scans in 100 per cent of stroke units across England.
The performance data emerging from controlled studies is remarkable, though it requires careful interpretation. A March 2025 meta-analysis published in Nature's npj Digital Medicine, examining 83 studies, found that generative AI achieved an overall diagnostic accuracy of 52.1 per cent, with no significant difference between AI models and physicians overall. However, the picture becomes more interesting when we examine specific applications. Microsoft's AI diagnostic orchestrator correctly diagnosed 85 per cent of challenging cases from the New England Journal of Medicine, compared to approximately 20 per cent accuracy for the 21 general practice doctors who attempted the same cases. These were deliberately difficult diagnostic puzzles, the kind that stump even experienced clinicians.
In a 2024 randomised controlled trial at the University of Virginia Health System, ChatGPT Plus achieved a median diagnostic accuracy exceeding 92 per cent when used alone, while physicians using conventional approaches achieved 73.7 per cent. The researchers reported an unexpected finding: physicians working alongside the AI were less accurate than the AI on its own, though the pairing improved efficiency. The physicians often disagreed with or disregarded the AI's suggestions, sometimes to the detriment of diagnostic precision.
A Stanford Medicine study on AI in dermatology revealed that medical students, nurse practitioners, and primary care doctors improved their diagnostic accuracy by approximately 13 points in sensitivity and 11 points in specificity when using AI guidance. Even dermatologists and dermatology residents, who performed better overall, saw improvements with AI assistance. A systematic review comparing AI to clinicians in skin cancer detection found AI algorithms achieved sensitivity of 87 per cent and specificity of 77.1 per cent, compared with 79.8 per cent sensitivity and 73.6 per cent specificity for clinicians as a whole. The differences were statistically significant.
In breast cancer screening, the evidence is mounting with remarkable consistency. The MASAI trial in Sweden, the world's first randomised controlled trial of AI-supported mammography screening, demonstrated that AI can increase cancer detection while reducing screen-reading workload. The German PRAIM trial, the largest study on integrating AI into mammography screening to date, found that AI-supported mammography detected breast cancer at a rate of 6.7 per 1,000 women screened, a 17.6 per cent increase over the standard double-reader approach at 5.7 per 1,000. A Lancet Digital Health commentary declared that standard double-reading of mammograms will likely be phased out from organised breast screening programmes if additional trials confirm these findings.
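The headline percentage in the PRAIM result follows directly from the two detection rates. A quick back-of-envelope check using only the figures quoted above:

```python
# Detection rates reported in the PRAIM trial, per 1,000 women screened.
ai_supported_rate = 6.7
double_reader_rate = 5.7

relative_increase = (ai_supported_rate - double_reader_rate) / double_reader_rate
print(f"Relative increase: {relative_increase:.1%}")
# Prints ~17.5%; the published 17.6% figure reflects the unrounded underlying rates.
```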
The Trust Paradox
Yet despite this evidence, something curious emerges from research into patient preferences. People do not straightforwardly embrace the diagnostic algorithm, even when presented with evidence of its superior performance.
A 2024 study published in Frontiers in Psychology analysed data from 1,183 participants presented with scenarios across cardiology, orthopaedics, dermatology, and psychiatry. The results were consistent across all four medical disciplines: people preferred a human doctor, followed by a human doctor working with an AI system, with AI alone coming in last place. A preregistered randomised survey experiment among 1,762 US participants found results consistent across age, gender, education, and political affiliation, indicating what researchers termed a “broad aversion to AI-assisted diagnosis.”
Research published in the Journal of the American Medical Informatics Association in 2025 found that patient expectations of AI improving their relationships with doctors were notably low at 19.55 per cent. Expectations that AI would improve healthcare access were comparatively higher but still modest at 30.28 per cent. Perhaps most revealing: trust in providers and the healthcare system was positively associated with expectations of AI benefit. Those who already trusted their doctors were more likely to embrace AI recommendations filtered through those doctors.
The trust dynamics are complex and sometimes contradictory. A cross-sectional vignette study published in the Journal of Medical Internet Research found that AI applications may have a potentially negative effect on the patient-physician relationship, especially among women and in high-risk situations. Trust in a doctor's personal integrity and professional competence emerged as key mediators of what researchers termed “AI-assistance aversion.” Lower trust in doctors who use AI directly reduced patients' intention to seek medical help at all.
Yet a contrasting survey from summer 2024 found 64 per cent of patients would trust a diagnosis made by AI over that of a human doctor, though trustworthiness decreased as healthcare issues became more complicated. Just 3 per cent said they were uncomfortable with any AI involvement in medicine. The contradiction reveals the importance of context, framing, and the specific clinical situation.
What explains these seemingly contradictory findings? Context matters enormously. The University of Arizona study that found patients almost evenly split (52.9 per cent chose human doctor, 47.1 per cent chose AI clinic) also discovered that a primary care physician's explanation about AI's superior accuracy, a gentle push towards AI, and a positive patient experience could significantly increase acceptance. How AI is introduced, who introduces it, and what the patient already believes about their healthcare provider all shape the response.
A Relationship Centuries in the Making
To understand what is at stake requires understanding what came before. The doctor-patient relationship is among the oldest professional bonds in human civilisation. Cave paintings representing healers date back fourteen thousand years. Before the secularisation of medicine brought by the Hippocratic school in the fifth century BCE, no clear boundaries existed between medicine, magic, and religion. The healer was often an extension of the priest, and seeking medical help meant placing yourself in the hands of someone who understood mysteries you could not fathom.
For most of medical history, this relationship was profoundly asymmetrical. The physician possessed knowledge that patients could not access or evaluate. Compliance was expected. The doctor decided, the patient accepted. This paternalistic model persisted well into the twentieth century. As one historical analysis noted, physicians were viewed as dominant or superior to patients due to the inherent power dynamic of controlling health, treatment, and access to knowledge. The physician conveyed only the information necessary to convince the patient of the proposed treatment course.
The shift came gradually but represented a fundamental reconception of the relationship. By the late twentieth century, the patient transformed from passive receiver of decisions into an agent with well-defined rights and broad capacity for autonomous decision-making. The doctor transformed from priestly father figure into technical adviser whose knowledge was offered but whose decisions were no longer taken for granted. Informed consent emerged as a legal and ethical requirement. Shared decision-making became the professional ideal.
Trust remained central throughout these transformations. Research consistently shows that trust, along with empathy, communication, and listening, characterises a productive doctor-patient relationship. For patients, a consistent relationship with their doctors has been shown to facilitate treatment adherence and improved health outcomes. The relationship itself is therapeutic.
But this trust has been eroding for decades. Public confidence in medicine peaked in the mid-1960s. A 2023 Gallup Poll found that only about one in three Americans expressed “a great deal” or “quite a lot” of confidence in the medical system. Trust in doctors, though higher at roughly two in three Americans, remains below pre-pandemic levels. As one analysis observed, physicians' employers, pharmaceutical companies, and insurance companies have entered what was once a private relationship. The generic substitution of “healthcare provider” for “physician” and “client” for “patient” reflects a growing impersonality. Medicine has become commercialised, the encounter increasingly transactional.
Into this already complicated landscape arrives artificial intelligence, promising to further reshape what it means to receive medical care.
The Equity Reckoning
The introduction of AI into healthcare carries profound implications for equity, and not all of them are positive. The technology has the potential either to reduce or to amplify existing disparities, depending entirely on how it is developed and deployed.
A 2019 study sent shockwaves through the medical community when it revealed that a clinical algorithm used by many hospitals to decide which patients needed care showed significant racial bias. Black patients had to be deemed much sicker than white patients to be recommended for the same care. The algorithm had been trained on past healthcare spending data, which reflected a history in which Black patients had less to spend on their health compared to white patients. The algorithm learned to perpetuate that inequity.
The problem persists and may even be worsening as AI becomes more prevalent. A systematic review of AI-driven racial disparities in healthcare found a significant association between AI utilisation and the exacerbation of racial disparities, especially among minority populations including Black and Hispanic patients. The sources of bias identified included biased training data, algorithm design choices, unfair deployment practices, and historic systemic inequities embedded in the healthcare system.
A Cedars-Sinai study found patterns of racial bias in treatment recommendations generated by leading AI platforms for psychiatric patients. Large language models, when presented with hypothetical clinical cases, often proposed different treatments for patients when African American identity was stated or implied than for patients whose race was not indicated. Specific disparities included the omission of medication recommendations for ADHD cases when race was explicitly stated, and the suggestion of guardianship for depression cases when race was made explicit.
The sources of bias are multiple and often embedded in the foundational data that AI systems learn from. Public health AI typically suffers from historic bias, where prior injustices in access to care or discriminatory health policy become embedded within training datasets. Representation bias emerges when samples from urban, wealthy, or well-connected groups lead to the systematic exclusion of samples from rural, indigenous, or disenfranchised groups. Measurement bias occurs when health endpoints are approximated with proxy variables that differ between socioeconomic or cultural environments.
Research warns that minoritised communities, whose trust in health systems has been eroded by historical inequities, ongoing biases, and in some cases outright malevolence, are likely to approach AI with heightened scepticism. These communities have seen how systemic disparities can be perpetuated by the very tools meant to serve them.
Addressing these issues requires comprehensive bias detection tools and mitigation strategies, coupled with active supervision by physicians who understand the limitations of the systems they use. Mitigating algorithmic bias must occur across all stages of an algorithm's lifecycle, including authentic engagement with patients and communities during all phases, explicitly identifying healthcare algorithmic fairness issues and trade-offs, and ensuring accountability for equity and fairness in outcomes.
The Validation Gap
For all the impressive performance statistics emerging from research studies, a troubling pattern emerges upon closer examination of how AI diagnostic tools actually reach the market and enter clinical practice.
A cross-sectional study of 903 FDA-approved AI devices found that at the time of regulatory approval, clinical performance studies were reported for approximately half of the analysed devices. One quarter explicitly stated that no such studies had been conducted. Fewer than one third of clinical evaluations provided sex-specific data, and only one quarter addressed age-related subgroups. Perhaps most concerning: 97 per cent of all devices were cleared via the 510(k) pathway, which does not require independent clinical data demonstrating performance or safety. Devices are cleared based on their similarity to previously approved devices, creating a chain of approvals that may never have been anchored in rigorous clinical validation.
A JAMA Network Open study examining the generalisability of FDA-approved AI-enabled medical devices for clinical use warned that evidence about clinical generalisability is lacking. The number of AI-enabled tools cleared continues to rise, but the robust real-world validation that would inspire confidence often does not exist.
This matters because AI systems that perform brilliantly in controlled research settings may falter in the messy reality of clinical practice. The UVA Health researchers who found ChatGPT Plus achieving 92 per cent accuracy cautioned that the system “likely would fare less well in real life, where many other aspects of clinical reasoning come into play.” Determining downstream effects of diagnoses and treatment decisions involves complexities that current AI systems do not reliably navigate. A correct diagnosis is only the beginning; knowing what to do with it requires judgment that algorithms do not yet possess.
Studies have also found that most physicians treated AI tools like a search function, much as they would Google or UpToDate, rather than leveraging optimised prompting strategies that might improve performance. This suggests that even when AI tools are available, the human element of how they are used introduces significant variability that research settings often fail to capture.
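What “optimised prompting” looks like in practice varies, but the gap between a search-style query and a structured clinical prompt is easy to illustrate. The example below is entirely hypothetical; neither prompt is drawn from the studies cited above.

```python
# Two ways a clinician might query the same general-purpose model; illustrative only.

search_style_prompt = "causes of chest pain and shortness of breath"

structured_prompt = """You are assisting with a differential diagnosis.
Patient: 58-year-old with two hours of central chest pain radiating to the left arm,
shortness of breath, a history of type 2 diabetes, and a 30 pack-year smoking history.
Task:
1. List the five most likely diagnoses, ranked by probability.
2. For each, note the findings that support or argue against it.
3. Flag any diagnosis that must be excluded urgently and the next test to order."""
```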
What Machines Cannot Do
The argument for AI in diagnosis often centres on consistency and processing power. Algorithms do not forget, do not tire, do not bring personal problems to work. They can compare a patient's presentation against millions of cases instantly. They do not have fifteen-minute appointment slots that force rushed assessments.
But medicine is not merely pattern recognition. Eric Topol, Executive Vice-President of Scripps Research and author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, has argued that AI development in healthcare could lead to a dramatic shift in the culture and practice of medicine. Yet he cautions that AI on its own will not fix the current challenges of what he terms “shallow medicine.” In his assessment, the field is “long on AI promise but very short on real-world, clinical proof of effectiveness.”
Topol envisions AI restoring the essential human element of medical practice by enabling machine support of tasks better suited for automation, thereby freeing doctors, nurses, and other healthcare professionals to focus on providing real care for patients. This is a fundamentally different vision from replacing physicians with algorithms. It imagines a symbiosis where each contributor does what it does best: the machine handles pattern recognition and data processing while the human provides judgment, empathy, and presence.
The obstacles to achieving this vision are substantial. Topol identifies medical community resistance to change, reimbursement issues, regulatory challenges, the need for greater transparency, the need for compelling evidence, engendering trust among clinicians and the public, and implementation challenges as chief barriers to progress. These are not merely technical problems but cultural and institutional ones.
Doctors must also contend with the downsides of AI adoption. Models can generate incorrect or misleading results, the phenomenon known as AI hallucinations or confabulations. AI models can produce results that reflect human bias encoded in training data. A diagnosis is not merely a label; it is a communication that affects how a person understands their body, their future, their mortality. Getting that communication wrong carries consequences that extend far beyond clinical metrics.
The Regulatory Response
Governments and regulatory bodies around the world are scrambling to keep pace with the technology, developing frameworks that balance innovation with safety.
In the United States, the FDA published guidance on “Transparency for Machine Learning-Enabled Medical Devices” in June 2024, followed by final guidance on predetermined change control plans for AI-enabled device software in December 2024. Draft guidance on lifecycle management for AI-enabled device software followed in January 2025. The FDA's Digital Health Advisory Committee held its inaugural meeting in November 2024 to discuss how the agency should adapt its regulatory approach for generative AI-enabled devices, which present novel challenges because they can produce outputs that even their creators cannot fully predict.
In the United Kingdom, the MHRA AI Airlock launched in May 2024 and expanded with a second cohort in 2025. This regulatory sandbox allows developers to test their AI as a Medical Device in supervised, real-world NHS environments. A new National Commission was announced to accelerate safe access to AI in healthcare by advising on a new regulatory framework to be published in 2026. The Commission brings together experts from technology companies including Google and Microsoft alongside clinicians, researchers, and patient advocates.
The NHS Fit For The Future: 10 Year Health Plan for England, published in July 2025, identified data, artificial intelligence, genomics, wearables, and robotics as five transformative technologies that are strategic priorities. A new framework procurement process will be introduced in 2026-2027 to allow NHS organisations to adopt innovative technologies including ambient AI.
The National Institute for Health and Care Excellence has conditionally recommended AI tools such as TechCare Alert and BoneView for NHS use in identifying fractures on X-rays, provided they are used alongside clinician review. This last phrase is crucial: alongside clinician review. The regulatory consensus, for now, maintains human oversight as a non-negotiable requirement.
The Nobel Prize and Its Implications
In October 2024, Demis Hassabis and John Jumper of Google DeepMind were co-awarded the Nobel Prize in Chemistry for their work on AlphaFold, alongside David Baker for his work on computational protein design. This recognition elevated AI in life sciences to the highest level of scientific honour, signalling that the technology has passed from speculative promise to demonstrated achievement.
AlphaFold has predicted over 200 million protein structures, nearly all catalogued proteins known to science. As of November 2025, it is being used by over 3 million researchers from over 190 countries, tackling problems including antimicrobial resistance, crop resilience, and heart disease. AlphaFold 3, announced in May 2024 and made publicly available in February 2025, can predict the structures of protein complexes with DNA, RNA, post-translational modifications, and selected ligands and ions. Google DeepMind reports at least a 50 per cent improvement in prediction accuracy for proteins' interactions with other molecule types compared with existing methods, and a doubling of accuracy for some important categories of interaction.
The implications for drug discovery are substantial. Isomorphic Labs, the Google DeepMind spinout, raised 600 million dollars in March 2025 and is preparing to initiate clinical trials for AI-developed oncology drugs. Scientists at the company are collaborating with Eli Lilly and Novartis to discover antibodies and new treatments that inhibit disease-related targets. According to GlobalData's Drugs database, there are currently more than 3,000 drugs developed or repurposed using AI, with most in early stages of development.
Meanwhile, Med-Gemini, Google DeepMind's medical AI platform, achieved 91.1 per cent accuracy on the MedQA benchmark of medical exam questions, outperforming prior models by 4.6 percentage points. The system leverages deep learning to analyse medical images including X-rays and MRIs, supporting earlier detection of diseases including cancer, heart conditions, and neurological disorders.
In India, Google's bioacoustic AI model is enabling development of tools that can screen tuberculosis through cough sounds, with potential to screen 35 million people. AI is also working to close maternal health gaps by making ultrasounds accessible to midwives. These applications suggest that AI could expand access to diagnostic capabilities in resource-limited settings, potentially democratising healthcare in ways that human expertise alone could never achieve.
Hospitals Using AI Today
The integration is already happening, hospital by hospital, department by department. This is not a future scenario but present reality.
Pilot programmes at several Level I trauma centres report that AI-flagged X-rays get read 20 to 30 minutes faster on average than normal work-list order. In acute care, those minutes can be critical; in stroke treatment, every minute of delay costs brain cells. A multi-centre study in the UK identified that AI-assisted mammography had the potential to cut radiologists' workload by almost half without sacrificing diagnostic quality. Another trial in Canada demonstrated faster triage of suspected strokes when CT scans were pre-screened by AI, resulting in up to 30 minutes of saved treatment time.
A 2024 survey of physician sentiments revealed that at least two-thirds view AI as beneficial to their practice, with overall use cases increasing by nearly 70 per cent, particularly in medical documentation. The administrative burden of medicine is substantial: physicians spend more time on paperwork than on patients. AI that handles documentation potentially frees physicians for direct patient interaction, the very thing that drew many of them to medicine.
Thanks to the AI Diagnostic Fund in England, 50 per cent of hospital trusts are now deploying AI to help diagnose conditions including lung cancer. Research indicates that hospitals using AI-supported diagnostics have seen a 42 per cent reduction in diagnostic errors. If these figures hold at scale, the impact on patient outcomes could be transformative. Recall those 795,000 Americans harmed by diagnostic errors each year. Even modest improvements in diagnostic accuracy would translate to thousands of lives saved or changed.
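The arithmetic of even a modest improvement is worth making explicit. The rough calculation below uses only the 795,000 figure cited earlier; the reduction rates are hypothetical, and it assumes, optimistically, that fewer diagnostic errors translate directly into fewer harms, which real-world evidence has not yet established.

```python
# Back-of-envelope illustration only; the assumed reduction rates are hypothetical.
annual_us_harms = 795_000  # estimated Americans killed or permanently disabled by diagnostic error each year

for reduction in (0.05, 0.10, 0.20):
    avoided = int(annual_us_harms * reduction)
    print(f"A {reduction:.0%} reduction would mean roughly {avoided:,} fewer people harmed per year")
```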
The Question of the Self
Beyond the clinical metrics lies a deeper question about human experience. When you are ill, vulnerable, frightened, what do you need? What does healing require?
The paternalistic model of medicine assumed patients needed authority: someone who knew what to do and would do it. The patient-centred model assumed patients needed partnership: someone who would share information, discuss options, respect autonomy. Both models assumed a human on the other side of the relationship, someone capable of understanding what it means to suffer.
A 2025 randomised factorial experiment found that, at the functional level, people trusted the diagnosis of human physicians more than that of medical AI or AI working with human involvement. But at the relational and emotional levels, there was no significant difference between human-AI and human-human interactions. This finding hints at a gap between what patients believe they prefer and what they actually experience in the encounter. We may say we want a human, but we may respond to something else.
The psychiatric setting reveals particular tensions. The Frontiers in Psychology study found that the situation in psychiatry differed strongly from cardiology, orthopaedics, and dermatology, especially in the “human doctor with an AI system” condition. Mental health involves not just pattern recognition but the experience of being heard, validated, understood. Whether AI can participate meaningfully in that process remains deeply uncertain. A diagnosis of depression is not like a diagnosis of a fracture; it touches the core of selfhood.
Research on trust in AI-assisted health systems emphasises that trust is built differently in each relationship: between patients and providers, providers and technology, and institutions and their stakeholders. Trust is bidirectional; people must trust AI to perform reliably, while AI relies on the quality of human input. This circularity complicates simple narratives of replacement or enhancement.
Reimagining the Consultation
What might a transformed healthcare encounter look like in practice?
One possibility is the augmented physician: a doctor who arrives at your appointment having already reviewed an AI analysis of your symptoms, test results, and medical history. The AI has flagged potential diagnoses ranked by probability. The AI has identified questions the doctor should ask to differentiate between possibilities. The AI has checked for drug interactions, noted relevant recent research, compared your presentation to anonymised similar cases.
The doctor then spends your appointment actually talking to you. Understanding your concerns. Explaining options. Answering questions. Making eye contact. The administrative and analytical burden has shifted to the machine; the human connection remains with the human.
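One way to picture what the doctor might receive before walking into the room is a structured pre-visit summary. The sketch below is a minimal illustration with hypothetical field names and values; it does not represent any particular vendor's format.

```python
from dataclasses import dataclass, field

@dataclass
class RankedDiagnosis:
    condition: str
    probability: float            # model-estimated likelihood, between 0 and 1
    distinguishing_question: str  # what the doctor should ask to confirm or rule this out

@dataclass
class PreVisitSummary:
    chief_complaint: str
    ranked_differentials: list[RankedDiagnosis] = field(default_factory=list)
    drug_interaction_flags: list[str] = field(default_factory=list)
    relevant_research_notes: list[str] = field(default_factory=list)

# Hypothetical example of what a physician might review before the consultation.
summary = PreVisitSummary(
    chief_complaint="Intermittent palpitations and fatigue",
    ranked_differentials=[
        RankedDiagnosis("Atrial fibrillation", 0.42, "Ask about alcohol intake and episodes of irregular pulse"),
        RankedDiagnosis("Thyrotoxicosis", 0.21, "Ask about weight loss, heat intolerance, and tremor"),
    ],
    drug_interaction_flags=["Current SSRI may interact with the proposed beta-blocker dose"],
)
```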
This vision aligns with Topol's argument in Deep Medicine. The title itself is instructive: the promise is not that AI will make healthcare mechanical but that it might make healthcare human again. Fifteen-minute appointments driven by documentation requirements represent a form of dehumanisation that preceded AI. If algorithms absorb the documentation burden, perhaps doctors can rediscover the relationship that drew many of them to medicine in the first place.
But this optimistic scenario requires deliberate design choices. If AI primarily serves cost-cutting, if healthcare administrators use diagnostic algorithms to reduce physician staffing, if the efficiency gains flow to shareholders rather than patient care, the technology will deepen rather than heal medicine's wounds.
The Coming Transformation
The trajectory is set, though the destination remains uncertain.
The NHS Healthcare AI Solutions agreement, expected to be worth 180 million pounds, is forecast to open for bids in summer 2025 and go live in 2026. The UCLA-led PRISM Trial, the first major randomised trial of AI in breast cancer screening in the United States, is underway with 16 million dollars in funding. Clinical trials for AI-designed drugs from Isomorphic Labs are imminent.
Meanwhile, the fundamental questions persist. Will patients trust algorithms with their lives? The evidence suggests: sometimes, depending on context, depending on how the technology is presented, depending on who is doing the presenting. Trust in providers and the healthcare system is positively associated with expectations of AI benefit. Those who already trust their doctors are more likely to trust AI recommendations filtered through those doctors.
Will the doctor-patient relationship survive this transformation? The relationship has survived extraordinary changes before: the rise of specialisation, the introduction of evidence-based medicine, the intrusion of insurance companies and electronic health records. Each change reshaped but did not extinguish the fundamental bond between someone who is suffering and someone who can help.
The machines are faster. They may well be more accurate, at least for certain diagnostic tasks. They do not tire, do not forget, do not have personal problems. But they also do not care, not in any meaningful sense. They do not sit with you in your fear. They do not hold your hand while delivering difficult news. They do not remember that your mother died of the same disease and understand why this diagnosis terrifies you.
Perhaps the answer is not trust in machines or trust in humans but trust in a system where each contributes what it does best. The algorithm analyses the scan. The doctor explains what the analysis means for your life. The algorithm flags the drug interaction. The doctor discusses whether the benefit outweighs the risk. The algorithm never forgets a detail. The doctor never forgets you are a person.
This synthesis requires more than technological development. It requires deliberate choices about healthcare systems, medical education, regulatory frameworks, and reimbursement structures. It requires confronting the biases encoded in training data and the inequities they can perpetuate. It requires maintaining human oversight even when algorithms outperform humans on specific metrics. It requires remembering that a diagnosis is not just an output but a communication that changes someone's understanding of their own existence.
The algorithm can see you now. Whether you will trust it, and whether that trust is warranted, depends on decisions being made in research laboratories, regulatory agencies, hospital boardrooms, and government ministries around the world. The doctor-patient relationship that has defined healthcare for centuries is being renegotiated. The outcome will shape medicine for the centuries to come.
References and Sources
Newman-Toker, D.E. et al. (2023). “Burden of serious harms from diagnostic error in the USA.” BMJ Quality & Safety. Johns Hopkins Armstrong Institute Center for Diagnostic Excellence. https://pubmed.ncbi.nlm.nih.gov/37460118/
Takita, H. et al. (2025). “A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians.” npj Digital Medicine, 8(175). https://www.nature.com/articles/s41746-025-01543-z
Parsons, A.S. et al. (2024). “Does AI Improve Doctors' Diagnoses?” Randomised controlled trial, UVA Health. JAMA Network Open. https://newsroom.uvahealth.com/2024/11/13/does-ai-improve-doctors-diagnoses-study-finds-out/
FDA. (2024-2025). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices database. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
IDx-DR De Novo Classification (DEN180001). (2018). FDA regulatory submission for autonomous AI diabetic retinopathy detection. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/denovo.cfm?id=DEN180001
Kim, J. et al. (2024). “Human-AI interaction in skin cancer diagnosis: a systematic review and meta-analysis.” npj Digital Medicine. Stanford Medicine. https://www.nature.com/articles/s41746-024-01031-w
Lång, K. et al. (2025). “Screening performance and characteristics of breast cancer detected in the Mammography Screening with Artificial Intelligence trial (MASAI).” The Lancet Digital Health, 7(3), e175-e183. https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00267-X/fulltext
Riedl, R., Hogeterp, S.A. & Reuter, M. (2024). “Do patients prefer a human doctor, artificial intelligence, or a blend, and is this preference dependent on medical discipline?” Frontiers in Psychology, 15. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1422177/full
Zondag, A.G.M. et al. (2024). “The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study.” Journal of Medical Internet Research, 26, e50853. https://www.jmir.org/2024/1/e50853
Nong, P. & Ji, M. (2025). “Expectations of healthcare AI and the role of trust: understanding patient views on how AI will impact cost, access, and patient-provider relationships.” Journal of the American Medical Informatics Association, 32(5), 795-799. https://academic.oup.com/jamia/article/32/5/795/8046745
Obermeyer, Z. et al. (2019). “Dissecting racial bias in an algorithm used to manage the health of populations.” Science, 366(6464), 447-453. https://www.science.org/doi/10.1126/science.aax2342
Aboujaoude, E. et al. (2025). “Racial bias in AI-mediated psychiatric diagnosis and treatment: a qualitative comparison of four large language models.” npj Digital Medicine. Cedars-Sinai. https://www.cedars-sinai.org/newsroom/cedars-sinai-study-shows-racial-bias-in-ai-generated-treatment-regimens-for-psychiatric-patients/
Windecker, D. et al. (2025). “Generalizability of FDA-Approved AI-Enabled Medical Devices for Clinical Use.” JAMA Network Open, 8(4), e258052. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2833324
Topol, E.J. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books. https://drerictopol.com/portfolio/deep-medicine/
NHS England. (2024-2025). NHS AI Lab investments and implementation reports. https://www.gov.uk/government/news/health-secretary-announces-250-million-investment-in-artificial-intelligence
GOV.UK. (2025). “New Commission to help accelerate NHS use of AI.” https://www.gov.uk/government/news/new-commission-to-help-accelerate-nhs-use-of-ai
Department of Health and Social Care. (2025). “Fit For The Future: 10 Year Health Plan for England.” https://www.gov.uk/government/publications/10-year-health-plan-for-england-fit-for-the-future
Nobel Prize Committee. (2024). “The Nobel Prize in Chemistry 2024” — Hassabis, Jumper (AlphaFold) and Baker. https://www.nobelprize.org/prizes/chemistry/2024/press-release/
Truog, R.D. (2012). “Patients and Doctors — The Evolution of a Relationship.” New England Journal of Medicine, 366(7), 581-585. https://www.nejm.org/doi/full/10.1056/nejmp1110848
Gallup. (2023). “Confidence in U.S. Institutions Down; Average at New Low.” https://news.gallup.com/poll/394283/confidence-institutions-down-average-new-low.aspx

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk