The Machine Says No: Health Insurance and the Vanishing Right to Appeal

The fax machine in a Florida rheumatologist's office, the least futuristic object in any American clinic, still receives a steady stream of prior authorisation decisions from health insurers. In early April 2026, one of those faxes, concerning a patient the Palm Beach Post would identify only by her first name, Iris, came back in less time than it takes to pour a cup of coffee. The request had been submitted a few minutes earlier. The reply, denying coverage for an injection she had been receiving for years, was generated, signed, and transmitted without any documented human pause in the middle. Iris is 80. Her hands, on the worst mornings, do not open. Her doctor, looking at the timestamp, understood instantly what had happened. The claim had not been reviewed. It had been processed.
The word “processed” has started to carry a weight it was never designed to hold. In the American health insurance system in 2026, it is the polite term for an event that, in almost any other domain of life, we would call a decision: a binding determination about whether a human being will have access to the medical care their doctor has recommended. Except the entity making the decision is not a person. It is a model. And the model, as anyone who has tried to ask one why it did what it did already knows, does not owe anyone an explanation.
This is the quiet crisis at the centre of the Palm Beach Post investigation published this month, which spent weeks charting how artificial intelligence has begun to deny health insurance claims at a scale and a speed no human reviewer could match. It is also the crisis at the centre of a Stanford study in Health Affairs, which landed in January, warning that the human oversight supposedly wrapped around these systems is too thin, too rushed, and too incentivised by the wrong things to function as a real check. And it is the crisis sitting on top of a three-billion-dollar bet from the largest health insurer in the United States, UnitedHealth Group, that the answer to all of this, after the litigation and the newspaper investigations and a murdered chief executive, is to put more artificial intelligence into the pipeline, not less.
The question the brief for this piece asked is deceptively simple: if the systems making some of the most consequential decisions in people's lives cannot explain their reasoning, and the regulatory framework to challenge them barely exists, what does the right to appeal actually mean in practice? It sounds like a legal question. It turns out to be something stranger. It is a question about whether a civic procedure that assumed a human decision-maker on the other end of the form still works when the other end of the form is a probability distribution.
The Machine That Says No Before You Have Finished Typing
Start with the basic mechanics, because they have moved faster than the public understanding of them. Cigna's now notorious PxDx system, exposed by ProPublica and The Capitol Forum in March 2023, was an early glimpse of the genre. Internal spreadsheets showed Cigna's medical directors spending an average of 1.2 seconds on each of more than 300,000 claim denials over two months. One doctor, Dr Cheryl Dopke, was reported to have signed off on approximately 60,000 denials in a single month. A former Cigna physician told ProPublica's reporters, Patrick Rucker, Maya Miller, and David Armstrong, that the review process was essentially cosmetic: “We literally click and submit. It takes all of 10 seconds to do 50 at a time.”
The revealing word in that sentence is “literally”. It is the language of someone who has realised that the verb “review”, as it appears in the regulatory paperwork, is doing work it cannot possibly do.
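It is worth doing the arithmetic the paperwork never did. Here is a minimal back-of-envelope sketch, using only the figures from ProPublica's reporting; the workday schedule is my assumption, and a generous one:

```python
# Back-of-envelope check on the reported PxDx figures. The denial
# counts and the 1.2-second average are from ProPublica's reporting;
# the workday schedule is an assumption.

denials = 300_000          # claim denials signed over the two-month window
seconds_per_denial = 1.2   # reported average time per sign-off

total_hours = denials * seconds_per_denial / 3600
print(f"total physician 'review' time: {total_hours:.0f} hours")  # 100 hours

# One medical director reportedly signed ~60,000 denials in one month.
monthly = 60_000
workdays, hours_per_day = 22, 8   # assumed schedule
print(f"{monthly / workdays:,.0f} denials per working day")       # ~2,727
print(f"{workdays * hours_per_day * 3600 / monthly:.1f} seconds each, "
      f"with no other duties at all")                             # ~10.6
```

Ten and a half seconds per case, reviewing without pause and with no other duties. The physics of the schedule forecloses review before any lawyer needs to argue the point.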
Eight months later, a class action lawsuit against UnitedHealth's nH Predict algorithm, operated through its NaviHealth subsidiary, alleged that Medicare Advantage patients in post-acute care were being cut off from rehabilitation services in bad faith, with employees pressured to keep stays within 1 per cent of the length predicted by the model. When federal administrative law judges eventually heard appeals on these denials, roughly 90 per cent were reversed, according to the complaint. Only a tiny fraction of denied patients ever appeal. In February 2025, the federal court in Minnesota denied UnitedHealth's motion to dismiss the breach of contract and bad-faith claims, allowing the case to proceed.
Then, in late 2024, ProPublica and The Capitol Forum turned to EviCore, the utilisation-management arm of Evernorth owned by Cigna, which sells its services to other insurers. EviCore operates what some internal sources called “the dial”, an algorithm that scores each prior authorisation request with a probability of approval. The company can tune the threshold: if it wants more denials, it can lower the bar at which a request gets referred to human reviewers, who are statistically much more likely to deny than to approve. ProPublica reported that EviCore markets itself to insurers on the basis of a three-to-one return, promising three dollars in saved medical costs for every dollar the insurer pays it. Its denial rate in Arkansas, one of the few states that requires publication of the figure, ran at close to 20 per cent, compared with about 7 per cent for Medicare Advantage nationally.
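For readers who want the mechanism rather than the metaphor, the dial is simple to sketch. What follows is a minimal illustration, with invented scores and an invented human denial propensity; it is not EviCore's code, only the shape of the machinery ProPublica describes:

```python
import random

# Sketch of a "dial"-style triage pipeline. Every number here is
# invented for illustration; the structure, not the values, is the
# point.

P_HUMAN_DENY = 0.4  # assumed: referred cases are denied well above the base rate

def denial_count(scores, dial):
    """Auto-approve requests the model scores above `dial`; refer the
    rest to human reviewers, where denial becomes far more likely."""
    denied = 0
    for p_approve in scores:      # the model's probability of approval
        if p_approve >= dial:
            continue              # auto-approved, never seen by a human
        if random.random() < P_HUMAN_DENY:
            denied += 1
    return denied

random.seed(0)
scores = [random.random() for _ in range(10_000)]  # the same cases every run
for dial in (0.2, 0.5, 0.8):                       # turning the dial
    print(f"dial={dial}: {denial_count(scores, dial):,} denials")
```

The cases in `scores` never change between runs. Only the threshold does, and the denial count moves obediently with it.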
The Palm Beach Post's April 2026 investigation, reported by Anne Geggis, extends this lineage into the near-present. The Post documented how AI tools are now embedded deep inside pre-authorisation workflows in Florida, one of 22 states the paper identified as having adopted no specific rules governing how AI can be used to reject a claim. The figure of 22 is the one that ought to give pause. These are not marginal jurisdictions. They include Florida, Georgia, Minnesota, and Oregon. Roughly half the American population lives in a state where an insurer can, in principle, use an algorithm to deny care without a single statute on the books requiring that algorithm to be explainable, auditable, or subject to human sign-off.
In contrast, California's Physicians Make Decisions Act, signed by Governor Gavin Newsom in September 2024 and in force since January 2025, explicitly requires that a denial, delay, or modification based on medical necessity be made by a licensed physician or competent provider. Arizona, Maryland, Nebraska, and Texas have adopted versions of the same principle. The federal Centers for Medicare and Medicaid Services issued guidance in 2024 restricting the use of algorithmic tools as the sole basis for Medicare Advantage denials. None of this changes the underlying asymmetry. State laws end at state lines. The models are national, their deployments enterprise-wide, and the training data pooled from populations that do not consent to being training data in the first place.
The Stanford Warning and the Myth of the Human in the Loop
Into this landscape, on 6 January 2026, Michelle Mello, Professor of Health Policy and Law at Stanford, and three colleagues (Artem A. Trotsyuk, Abdoul Jalil Djiberou Mahamadou, and Danton Char) published a paper in Health Affairs with the unusually blunt title, “The AI Arms Race in Health Insurance Utilization Review: Promises of Efficiency and Risks of Supercharged Flaws”. The paper is a careful, cold document. It does not call for a ban on AI in insurance. It does something more corrosive. It describes, in sober detail, why the reassurances everyone keeps giving, about human reviewers, about oversight, about governance, do not correspond to anything that is actually happening inside the insurers.
The central finding is that meaningful human oversight of AI-driven prior authorisation is, in Mello's own phrasing, largely a myth. Human reviewers at insurance companies, the paper observes, often lack the time, the relevant clinical expertise, and the incentives to meaningfully interrogate the recommendations produced by a model. The opacity of modern systems compounds this. An adjuster presented with a denial recommendation does not see a chain of reasoning that can be evaluated. They see an output. To push back on the output, they would have to reproduce, from scratch, the analysis that led to it, without access to the training data, the feature weights, or a record of how similar cases were decided in the past. Given production targets, they do not do this. They click.
Mello's paper notes that past flawed coverage decisions become embedded in the training data for the next generation of models, which then reproduce and scale the pattern. The phrase “supercharged flaws” is not rhetorical. It is a description of what happens when a statistical system is trained on a history of denials and then used to generate future denials, with the previous denials as ground truth. Mistakes do not get caught. They get normalised, archived, and re-expressed at volume.
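The dynamic can be made concrete with a toy model. Both rates below are assumptions chosen for illustration; it is the structure, not the numbers, that the Stanford paper warns about:

```python
# Toy model of the "supercharged flaws" loop: denials that are never
# overturned are archived as correct outcomes and become training
# signal for the next model generation. Both rates are assumptions.

NEW_ERROR = 0.02   # fresh wrongful-denial rate added each generation
CORRECTED = 0.10   # share of embedded errors ever caught and relabelled

wrongful = NEW_ERROR
for generation in range(1, 11):
    # Uncaught errors carry forward as ground truth; new ones pile on.
    wrongful = wrongful * (1 - CORRECTED) + NEW_ERROR
    print(f"generation {generation:2d}: wrongful-denial rate = {wrongful:.3f}")
```

The series does not explode; in this toy version it settles at ten times the fresh-error rate. That is the quiet form of the warning: the errors are not dramatically amplified in any one generation. They are simply never paid back.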
The data on downstream appeals has circulated for a while, but the Stanford paper pulls it into focus. In Medicare Advantage, according to KFF's January 2025 analysis of 2023 figures, insurers made nearly 50 million prior authorisation determinations, denied 3.2 million of them, and saw only 11.7 per cent of those denials appealed. Of those appealed, 81.7 per cent were partially or fully overturned. In an earlier era, overturn rates above 80 per cent on appeal would have prompted a federal reckoning. In the current system, they are published in briefing notes and largely forgotten by the following week.
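Multiply those percentages through the funnel and the scale becomes visible. In the sketch below, the first lines are simply KFF's figures worked out; the last line is an extrapolation that rests on a contestable assumption, namely that unappealed denials were no sounder than appealed ones:

```python
# The 2023 Medicare Advantage appeal funnel, stated as arithmetic.
# Inputs are KFF's published figures. The final line assumes, and it
# is an assumption rather than a finding, that unappealed denials
# would have fared the same on review as appealed ones.

denials = 3_200_000
appeal_rate = 0.117
overturn_rate = 0.817

appealed = denials * appeal_rate          # ~374,000 appeals filed
overturned = appealed * overturn_rate     # ~306,000 denials reversed
unappealed = denials - appealed           # ~2.8 million never contested

print(f"{appealed:,.0f} appealed, {overturned:,.0f} overturned")
print(f"{unappealed * overturn_rate:,.0f} more would have been overturned "
      f"under the stated assumption")     # ~2.3 million
```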
If the appeal process reverses more than four in five decisions on review, the appeal process is not a safety net bolted onto a functioning decision system. It is the decision system, belatedly engaged, in the small minority of cases where a patient has the time, the literacy, the advocacy, and the stamina to demand it. Everyone else simply absorbs the denial. That is not an operational detail. It is the design.
UnitedHealth's Three Billion and the Logic of Scaling the Problem
On 6 April 2026, STAT News reported that UnitedHealth Group, through its Optum Insight division, plans to spend at least three billion dollars over the next few years embedding AI more deeply into its claims processing, care management, fraud detection, and clinical documentation systems. Sandeep Dadlani, chief executive of Optum Insight, told reporters that the company employs 22,000 software engineers globally, that over 80 per cent of them now use AI to write code or build new agents, and that executives expect to generate a billion dollars in savings this year alone by pushing AI further into operations. Dadlani's framing was the one insurers have settled on: AI, he argued, will speed up decision-making and streamline health insurance's notoriously time-consuming bureaucracy.
He is not wrong about the bureaucracy. The American health insurance system wastes staggering amounts of time, labour, and money on a claims process that no participant (patient, provider, or payer) thinks works. The question is what “speed up decision-making” means when the original slowness was partly functional: the friction of human review was, at its best, the thing that caught errors, gave context, and let claimants be heard. Engineer that friction out and you engineer out the accountability it carried.
And the three-billion-dollar figure needs to be read alongside the context UnitedHealth is operating in. Brian Thompson, the chief executive of its insurance arm, UnitedHealthcare, was shot dead in Manhattan in December 2024, in an attack whose alleged perpetrator referenced the company's denial practices in his writings. The class action over nH Predict was allowed to proceed the following February. The Palm Beach Post investigation landed this April. There is, if one wants to read it this way, a choice the insurer has made. It could have used the last eighteen months to make its claims-processing systems more transparent, more accountable, more humane. It has instead committed to scaling them up, and measuring its own success in savings generated rather than denials avoided.
This is the logic that animates everything else in the sector. Under the business model that has built the American managed-care industry, every dollar paid out in claims counts toward the medical-loss ratio, and every dollar denied is, within the limits set by the Affordable Care Act's 80 to 85 per cent floor, a dollar of retained earnings. Any technology that lowers the marginal cost of generating a plausible denial, and raises the barrier to generating a successful appeal, is, from the perspective of the quarterly report, working exactly as intended. This is not a conspiracy theory. It is a reading of the incentives stated on the face of the filings.
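The mechanics of that floor reward a moment's attention. In the toy model below, the premium and claim figures are invented, and the rebate rule is the ACA's, simplified to a single year and a single risk pool:

```python
# Toy illustration of the medical-loss-ratio floor. Premium and claim
# figures are invented; the 85 per cent floor is the ACA's large-group
# requirement, simplified (real MLR calculations have adjustments this
# ignores).

PREMIUMS = 1_000_000_000   # hypothetical annual premium revenue
MLR_FLOOR = 0.85

def retained(claims_paid):
    """Anything that pushes the loss ratio below the floor is rebated."""
    rebate = max(0.0, MLR_FLOOR * PREMIUMS - claims_paid)
    return PREMIUMS - claims_paid - rebate

for claims in (900e6, 870e6, 850e6, 800e6):
    print(f"claims ${claims / 1e6:.0f}M -> retained ${retained(claims) / 1e6:.0f}M")
```

Every dollar denied while claims remain above the floor drops straight to retained earnings; once claims fall below it, the rebate claws the gain back. The incentive to deny is not unbounded, but within the band it is total.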
Where the Patients and the Nurses Are Keeping the Records
Because the regulators have not, in most states, built the infrastructure to track algorithmic denials systematically, that job has fallen to the patients and clinicians themselves, largely on Reddit. Communities such as r/nursing, r/medicine, and the various state-level and condition-specific subreddits have become, almost by accident, one of the most useful public archives of how AI-driven prior authorisation actually functions at the point of care.
The threads follow a recognisable rhythm. A nurse describes submitting a request for a patient whose case is, clinically, straightforward. A denial returns in seconds or, at most, a couple of minutes. The denial letter cites the insurer's internal clinical guidelines, which are not, in most cases, the same as published medical society guidelines. An appeal is mounted. The appeal takes weeks to resolve. In the interim, the patient either forgoes the treatment, pays out of pocket, or lands in a more expensive emergency setting that the insurer will then, often, cover. The commenters in these threads document the pattern because nobody else does. They are, in effect, doing the work that in a different jurisdiction would be done by an independent audit office.
The sub-two-second denial is not a single documented statistic; it is a folk fact, borne out by the Cigna PxDx data, by screenshots circulated in these communities, by the fax-timestamp evidence that rheumatologists and oncologists have been quietly compiling. A system that returns a denial before the clinical reasoning could plausibly have been read is a system that has, as a matter of physics, not been read. The courts, slowly, are beginning to say so. In the Cigna class action in California and the nH Predict case in Minnesota, the factual allegations that reviews were not meaningfully performed have survived motions to dismiss. Discovery is going to be, in a phrase one plaintiff's lawyer used on background, interesting.
The Reddit record is, of course, anecdotal in a formal evidentiary sense. It is also, collectively, thousands of practitioners with professional licences describing a consistent pattern. When the formal data and the informal data align this closely, and both are saying the same thing that independent investigators and academic researchers are saying, the reasonable assumption is not that the nurses are wrong.
The Florida Bill and the Architecture of Political Failure
If the picture so far suggests that legislators would rush to impose a human signature on AI-generated denials, the story of Florida House Bill 527 is a useful corrective. The bill, introduced by state Representative Hillary Cassel, would have required that every insurance claim denial or reduction be reviewed and signed off by a qualified human professional, with AI output permitted as an input but not as the sole basis for the decision. It was, by the standards of recent American legislative politics, a popular proposal. In early December 2025, a House panel unanimously backed it. It then passed the full Florida House on a 108 to 0 vote, a consensus across parties that is almost unheard of on any contested business-regulation matter.
Cassel was candid about what had moved her. Speaking to reporters, she said: “The genesis of this bill came to me with the murder of the United Healthcare CEO. One of the alleged motives was the denial basis by that company, and there's currently a class action that shows allegedly that 90 per cent of their claims were denied with errors when they utilized AI.” It is an extraordinary quote, because it concedes that the political window for reform opened at the moment a health insurance chief executive was killed in the street, and that the opening was narrow.
The Senate version, SB 202, sponsored by Senator Don Gaetz, did not survive. Its last action, according to the Florida Senate's public record, was on 13 March 2026, when it died in the Banking and Insurance Committee without a floor vote. Industry representatives from the Florida Insurance Council, the American Property Casualty Insurance Association, and the Personal Insurance Federation of Florida lobbied against it, arguing that mandatory human review would slow the resolution of claims. The Florida Hospital Association and the Florida Medical Association, which represent the entities actually filing claims for patients, lobbied for it. The committee did not bring it up.
Zoom out and the pattern is familiar. A bipartisan legislative majority in a populous, insurance-heavy state backed a minimum procedural protection that almost everyone not in the insurance industry supported. It died in committee, quietly, without a recorded vote. There was no scandal. There was no single villain. There was, instead, the ordinary friction of legislative attention: a bill that had the votes to pass did not have the procedural protection to reach a vote, and a session ended. Multiply this failure across two dozen states and you get, approximately, the current regulatory environment.
What the Right to Appeal Actually Means in an Algorithmic System
Here is the analytic move the whole debate has been circling. The right to appeal, in American administrative and insurance law, is a right that assumes certain things about the original decision. It assumes there was a decision-maker. It assumes the decision-maker had reasons, which can be stated, contested, and either defended or abandoned on review. It assumes the appellant, given adequate time, can understand the basis of the decision well enough to argue against it. It assumes a symmetry of cognition between the original decision-maker and the appellate one.
An algorithmic denial breaks all of these assumptions at once.
It breaks the first because the decision-maker is not an individual but a pipeline. It breaks the second because modern models do not have reasons in any sense a lawyer would recognise; they have weights, activations, and outputs. Even the engineers who built the system cannot generally, for a specific denial, reconstruct why this patient's case tipped into the negative region of the decision surface. They can say what features mattered on average. They cannot say what mattered for Iris.
It breaks the third because the denial letter, drafted as the output of a template populated with a justification selected from a limited menu, tells the appellant something that may not be a true description of the decision. It is a plausible description, designed to be legally defensible and clinically intelligible, but the actual cause, somewhere in the latent space of the model, is not accessible to anyone. To appeal a denial on its stated grounds is to joust with a shadow.
And it breaks the fourth because the appellant is human and the opponent is a statistical system trained on millions of prior cases. The insurer's machinery can generate, cheaply, a thousand variations on why the original denial was sound. The patient has one case, one letter from their doctor, one window of time before the treatment decision becomes moot. The asymmetry is not the small asymmetry of a lay person versus a trained adjuster. It is an asymmetry of cognitive capacity, of parallelism, of cost per round, of a kind the administrative law of the 1970s did not contemplate.
This is why the Stanford group's paper is more than a straightforward policy critique. Mello and her coauthors are not simply pointing out that AI sometimes gets it wrong. They are pointing out that the institutional scaffolding that was supposed to catch the errors was built for a different kind of decision-maker, and does not scale to the one now making the calls. A patient appealing an algorithmic denial is not, functionally, appealing at all in the sense the word was originally meant. They are triggering a subsequent stage of the same algorithmic process, in which the second layer inherits the priors of the first.
You can see, in the published reform proposals, two broad theories of how to repair this. The first, reflected in California's SB 1120 and the dead Florida HB 527, is to legislate a human signature back into the decision. Require that a named, licensed professional review and sign off on any denial, with documentary evidence that they did so. This is the bluntest and, on current evidence, the only version that insurers can be counted on to resist. It is also the most fragile, because the record of Cigna's medical directors clicking through denials at 1.2 seconds per case shows that “human signature” can be gamed into meaninglessness unless the rules specify what review means in minutes, in content, and in accountability.
The second theory is algorithmic transparency: require insurers to disclose the logic, the training data, the error rates, and the audit trails of the systems they use. This is the preferred framing of academics, regulators, and some of the AI industry itself. Its limits are by now familiar to anyone who has worked on explainable AI. For classical rules-based systems, transparency is straightforward. For modern neural systems, it is a research problem that has not been solved, and may not be solvable in the strong sense. An audit report that says “the model weights were examined” is not a substitute for the ability to say, of a particular denial, why it was made.
Neither theory, on its own, is sufficient. A mandated human signature without transparency produces fake review at industrial scale. Transparency without a mandated human signature produces elegant documentation of decisions that nobody can be held accountable for. The only versions that might actually work combine both: a human who must sign, a record of what they looked at when they signed, and a genuine, externally audited account of what the model contributed and why. Nothing currently in force in the United States, at the federal level, does this.
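What a combined regime would actually have to record is not mysterious. Here is a sketch of the audit record it implies; every field name is invented for illustration, since no statute currently specifies such a schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative audit record for the combined regime described above:
# a named human signature plus an externally auditable account of what
# the model contributed. Field names are invented, not drawn from any
# statute or insurer's system.

@dataclass
class DenialAuditRecord:
    claim_id: str
    model_version: str             # which system produced the recommendation
    model_recommendation: str      # e.g. "deny"
    model_factors: list[str]       # best-available account of what drove it
    reviewer_id: str               # a named, licensed professional
    documents_opened: list[str]    # what the reviewer actually looked at
    review_seconds: float          # elapsed time; exposes 1.2-second "reviews"
    reviewer_decision: str         # may differ from the model's
    stated_rationale: str          # the reason given to the patient
    signed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

Nothing in that record is technically demanding to capture. The `review_seconds` field alone would have turned Cigna's PxDx spreadsheets from an exposé into a compliance document.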
The Stakes Underneath the Stakes
It is tempting to frame the whole situation as a fight about artificial intelligence, because AI is the novel element. But the deeper fight is about something older: whether a person subject to a consequential institutional decision has the right to a reasoned account of why the decision went the way it did, and a real chance to change it.
American health insurance, for reasons that long predate generative AI, has been steadily undermining that right for decades, through the proliferation of prior authorisation requirements, through narrow networks, through opaque formulary tiers, through appeals processes designed to exhaust rather than enlighten. The arrival of AI has not created the pathology. It has industrialised it. What used to take an adjuster an hour now takes a model a second, and what used to happen to thousands of patients a year now happens to millions. The scale changes the moral physics.
And the scale will grow. UnitedHealth's three-billion-dollar investment will not sit alone. Every other major insurer will match it, because they must, because the efficiency gains compound and the laggards lose. The Palm Beach Post investigation will be joined by others. The Reddit threads will lengthen. The Florida-style bills will pass in a few more states, and die in committee in many more. Somewhere in the middle of this, the language will drift: the word “review” will come to mean something smaller than it used to, the word “decision” something less personal, the word “appeal” something closer to a ritual than a remedy. This is already happening.
What stops the drift, if anything does, is a reassertion of the civic premise the whole insurance system was supposed to honour: that a claim is not a data point but a moment in a person's life, that a denial is not an output but an act, and that the entity issuing that act owes the person on the other end an intelligible reason and a real chance to be wrong about them. None of that is technologically impossible. Some of it is, in fact, quite cheap. What makes it hard is that the incentives, as currently aligned, reward the opposite: the cheapest plausible denial, issued at scale, defended just well enough to exhaust the appellant's capacity to keep fighting.
Iris, in the Palm Beach Post story, eventually got her medicine. Her doctor appealed on her behalf. It took weeks. She is one of the lucky ones, in that she had a doctor with the time and inclination to wage the fight. Most people do not. They have a denial letter, a phone tree, a model on the other end of the form, and a finite number of mornings on which they can open their hands enough to sign the next appeal. What the right to appeal means in practice, at this moment, is that if you are patient, and articulate, and unusually well-represented, you can sometimes persuade the system to notice you. That is not a right. It is a lottery with a ticket price measured in stamina. Whether it can still be repaired into something that deserves its own name is the question the next decade will answer, and the answer will not be written by the models.
References and Sources
- Rucker, Patrick; Miller, Maya; and Armstrong, David. “How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them.” ProPublica, 25 March 2023. https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
- Armstrong, David; Rucker, Patrick; and Miller, Maya. “EviCore, the Company Helping U.S. Health Insurers Deny Coverage for Treatments.” ProPublica, 24 October 2024. https://www.propublica.org/article/evicore-health-insurance-denials-cigna-unitedhealthcare-aetna-prior-authorizations
- Mello, Michelle M.; Trotsyuk, Artem A.; Djiberou Mahamadou, Abdoul Jalil; and Char, Danton S. “The AI Arms Race In Health Insurance Utilization Review: Promises Of Efficiency And Risks Of Supercharged Flaws.” Health Affairs, 6 January 2026. https://www.healthaffairs.org/doi/10.1377/hlthaff.2025.00897
- Stanford University News. “AI-driven insurance decisions raise concerns about human oversight.” Stanford Report, January 2026. https://news.stanford.edu/stories/2026/01/ai-algorithms-health-insurance-care-risks-research
- Stanford Law School. “When AI Algorithms Decide Whether Your Insurance Will Cover Your Care.” 6 January 2026. https://law.stanford.edu/press/when-ai-algorithms-decide-whether-your-insurance-will-cover-your-care/
- Ross, Casey, and Herman, Bob. “UnitedHealth Group is making a $3 billion bet on AI. What does it mean for patients?” STAT News, 6 April 2026. https://www.statnews.com/2026/04/06/unitedhealth-group-massive-artificial-intelligence-push-patient-implications/
- Napach, Bernice. “UnitedHealth Group is making a $3 billion bet on AI. What does it mean for patients?” HealthLeaders Media, April 2026. https://www.healthleadersmedia.com/payer/unitedhealth-group-making-3-billion-bet-ai-what-does-it-mean-patients
- Geggis, Anne. “AI already at work in insurance. Do bots comply with state laws? Are they fair to consumers?” The Palm Beach Post / USA TODAY Florida Network, October 2025, updated April 2026. https://bluewaterhealthyliving.com/news/national-news/florida/ai-already-at-work-in-insurance-do-bots-comply-with-state-laws-are-they-fair-to-consumers/
- Ogozalek, Drew. “Insurance Companies Already Deploying AI Systems to Deny Claims Faster Than Ever Before.” Futurism, April 2026. https://futurism.com/future-society/ai-insurance-claims-denial
- Florida House of Representatives. “HB 527: Mandatory Human Review of Insurance Claims Denials.” Bill analysis, updated 2 March 2026. https://www.flsenate.gov/Session/Bill/2026/527/Analyses/h0527c.COM.PDF
- Florida Senate. “Senate Bill 202 (2026).” Bill history, last action 13 March 2026. https://www.flsenate.gov/Session/Bill/2026/202
- Ogles, Jacob. “House advances bill making humans review insurance claims.” Florida Politics, December 2025. https://floridapolitics.com/archives/768954-cassel-ai-insurance-houseib/
- Cassel, Hillary. Public statement on HB 527. X (formerly Twitter), December 2025. https://x.com/RepCassel/status/1998601482661245225
- Becker, Josh. “Governor signs Physicians Make Decisions Act, keeping medical decisions between patients and doctors, not AI.” California State Senate, 30 September 2024. https://sd13.senate.ca.gov/news/press-release/september-30-2024/governor-signs-physicians-make-decisions-act-keeping-medical
- California Legislative Information. “Senate Bill No. 1120: Health Care Coverage: Utilization Review.” 2023-2024 Regular Session. https://legiscan.com/CA/text/SB1120/id/3023335
- Napoli, Anthony. “UnitedHealth AI algorithm allegedly led to Medicare Advantage denials, lawsuit claims.” Healthcare Finance News, November 2023. https://www.healthcarefinancenews.com/news/unitedhealth-ai-algorithm-allegedly-led-medicare-advantage-denials-lawsuit-claims
- Napoli, Anthony. “Class action lawsuit against UnitedHealth's AI claim denials advances.” Healthcare Finance News, February 2025. https://www.healthcarefinancenews.com/news/class-action-lawsuit-against-unitedhealths-ai-claim-denials-advances
- DLA Piper. “Lawsuit over AI usage by Medicare Advantage plans allowed to proceed.” AI Outlook, 2025. https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/lawsuit-over-ai-usage-by-medicare-advantage-plans-allowed-to-proceed
- Biniek, Jeannie Fuglesten; Sroczynski, Nolan; Cubanski, Juliette; and Neuman, Tricia. “Medicare Advantage Insurers Made Nearly 50 Million Prior Authorization Determinations in 2023.” KFF, 28 January 2025. https://www.kff.org/medicare/nearly-50-million-prior-authorization-requests-were-sent-to-medicare-advantage-insurers-in-2023/
- American Medical Association. “How AI is leading to more prior authorization denials.” AMA, 2025. https://www.ama-assn.org/practice-management/prior-authorization/how-ai-leading-more-prior-authorization-denials
- Centers for Medicare & Medicaid Services. “Final Rule CMS-4201-F: Medicare Advantage and Part D Prior Authorization Guidance.” 2024.
- Manatt Health. “Health AI Policy Tracker.” Manatt, Phelps and Phillips, 2026. https://www.manatt.com/insights/newsletters/health-highlights/manatt-health-health-ai-policy-tracker
- Reyes, Shelby. “Fighting AI Driven Insurance Denials: How to Appeal When Algorithms Reject Your Healthcare Claim.” Counterforce Health, 2025. https://www.counterforcehealth.org/post/fighting-ai-driven-insurance-denials-how-to-appeal-when-algorithms-reject-your-healthcare-claim-2025-guide/
- Gordon, Noam. “AI Prior Authorization Tools Have an 82% Overturn Rate, And That's the Problem.” AI2Work, 2026. https://ai2.work/blog/ai-prior-authorization-tools-have-an-82-overturn-rate-and-that-s-the-problem
- Hunton Andrews Kurth LLP. “Court Allows Discovery Into Insurer's Use of AI to Deny Claims.” Hunton Insurance Recovery Blog, 2025-2026. https://www.hunton.com/hunton-insurance-recovery-blog/court-allows-discovery-into-insurers-use-of-ai-to-deny-claims

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk