Aged Care Does Not Need More AI: It Needs More Staff

The first thing the sensor sees is the ceiling. It is an unremarkable ceiling, white acoustic tile, fluorescent strip, a slight nicotine tinge from a generation of residents who were once allowed to smoke indoors. The sensor is not a camera in the conventional sense. It does not record video; the procurement document made a point of that. It is a low-resolution thermal array, mounted in a discreet white housing about the size of a smoke alarm, and it watches the room beneath it as a heat map. When a heat-map blob detaches from the bed and crosses the floor, it logs movement. When the blob lies horizontal in a place a human body should not be horizontal, it pings a tablet at the nurses' station. The vendor calls this fall detection. The procurement notice called it dignity-preserving monitoring. The night shift on a typical residential aged care floor in Australia or England in early 2026, which is often one registered nurse and two personal care workers covering upwards of forty residents, calls it the thing that goes off.
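Strip away the brochure language and what the vendor is selling is a pair of threshold rules over a coarse heat map. A minimal sketch of that kind of classifier, in which the array size, the thresholds and the bed's position are all assumptions invented here for illustration rather than any vendor's actual values, might look like this:

```python
import numpy as np

# Every constant below is an illustrative assumption, not a vendor value.
BODY_TEMP_C = 30.0                        # pixels warmer than this read as "person"
BED_REGION = (slice(0, 8), slice(0, 10))  # hypothetical bed position in a 24x32 array
HORIZONTAL_RATIO = 1.8                    # blob wider than tall by this much = lying down
CONFIRM_FRAMES = 5                        # consecutive frames before the alert fires

def is_horizontal_off_bed(frame: np.ndarray) -> bool:
    """One frame of the heat map: is a lying-down blob visible off the bed?"""
    mask = frame > BODY_TEMP_C
    mask[BED_REGION] = False              # ignore warmth in the bed itself
    if mask.sum() < 10:                   # too few warm pixels to be a body
        return False
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return width / height >= HORIZONTAL_RATIO

def detect_fall(frames) -> bool:
    """Ping if a horizontal off-bed blob persists across consecutive frames."""
    streak = 0
    for frame in frames:
        streak = streak + 1 if is_horizontal_off_bed(frame) else 0
        if streak >= CONFIRM_FRAMES:
            return True                   # the ping; who responds is outside this code
    return False
```

Written out, the striking thing is how little is there. The function returns a ping; everything the scene above turns on, who hears the ping and how far away they are standing, lives outside the code.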
What the thing goes off about, on the kinds of nights the Australian Royal Commission into Aged Care Quality and Safety documented across ninety-nine sitting days of evidence and that the Care Quality Commission in England continues to describe in its state-of-care reports, is the sort of incident that happens when an older resident with dementia transfers from bed, returns toward it, and falls. The sensor logs the transfer; it logs the horizontal heat signature on the floor; it pings the tablet. The personal care worker on duty may be two corridors away changing another resident. By the time anybody arrives, the resident has been on the carpet long enough for a hip to break. The sensor has done exactly what the brochure said it would do. Nobody has been close enough for the information to matter. That pattern, not any one incident, is what regulators have documented in sworn testimony.
It is the gap between those two facts, the thing that the technology measured and the thing that the system did with the measurement, that an article published in The Conversation on 24 February 2026 by Barbara Barbosa Neves of the University of Sydney, Alexandra Sanders, and Geoffrey Mead set out to dramatise. Their argument, distilled from an analysis of the marketing materials of thirty-three companies selling AI tools into aged care across Australia, East Asia, Europe and North America, is that the industry has succeeded in convincing governments and investors that algorithmic monitoring, automated care planning and companion robotics are the answer to a workforce crisis when they are, in fact, a way of avoiding the question. The crisis is structural. The tools, however clever, cannot be structural answers. “If we let AI companies define what is broken,” the authors write, “we also let them define what repair looks like. That may leave our systems more profitable, but far less caring and humane.”
The numbers behind the pitch are now large enough to set the terms of the policy debate. Fortune Business Insights estimated the global elderly care market at 53.29 billion US dollars in 2025 and projected it to reach 57.78 billion in 2026, on its way to roughly 114 billion by 2034. The agetech subsegment, the layer of digital and AI products sold into that market, is projected by industry analysts cited in the Neves article to reach A$170 billion by 2030. By any reading, the next decade of aged care will be one of the most heavily capitalised periods in the sector's history, and a substantial fraction of that capital is going into systems that are designed to do things humans currently do.
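The growth those figures imply is easy to check against the paragraph's own numbers; the implied compound annual rate works out at a little under nine per cent:

```python
# Implied compound annual growth rate (CAGR) from the figures quoted above.
start, end, years = 53.29, 114.0, 9       # USD billions, 2025 to 2034
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR, 2025-2034: {cagr:.1%}")         # about 8.8% a year

one_year = 57.78 / 53.29 - 1
print(f"Projected 2025-26 growth: {one_year:.1%}")    # about 8.4%, consistent
```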
The question this article is concerned with is not whether the technology works. Some of it does, in narrow ways, under controlled conditions. The question is what accountability structures would have to exist before deploying it at scale, into a population that cannot easily refuse it and cannot reliably tell anyone when it has failed, could be considered ethical. The honest answer, in April 2026, is that very few of those structures exist anywhere, and most of what passes for them is designed to manage the reputational risk of providers and vendors rather than the safety of residents.
What The Inquiries Already Told Us
The Royal Commission into Aged Care Quality and Safety, which delivered its 2,500-page final report, Care, Dignity and Respect, on 1 March 2021, did not lack for diagnoses. Across twenty-three public hearings, ninety-nine sitting days, 641 witnesses and more than ten thousand public submissions, commissioners Lynelle Briggs and Tony Pagone arrived at 148 recommendations. The findings were as plain as they were grim. Commissioner Briggs put the proportion of residents who had experienced physical or sexual assault at between thirteen and eighteen per cent. The report described two decades of underfunding amounting to approximately 9.8 billion Australian dollars cut from the sector's annual budget. It documented residents left in soiled continence aids, malnourished, restrained chemically and physically, and dying in conditions the Commission did not euphemise.
What the Commission did not say, in any of those pages, is that the answer to those failings lay in machine learning. The recommendations focused on staff ratios, on the qualifications and pay of personal care workers, on a new statutory framework for the rights of older people, on enforceable care standards, and on an independent regulator with real teeth. The Aged Care Act 2024, which came into effect on 1 November 2025 after a delay from its originally legislated 1 July date, codified some of that framework. From October 2024, providers had been required to deliver a national average of 215 minutes of personal and nursing care per resident per day, of which 44 minutes was to come from a registered nurse. From 1 October 2025, the Star Ratings used to grade residential providers were re-engineered to require those minutes for a three-star or better staffing rating. None of those reforms involved an algorithm.
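The staffing standard itself is, mechanically, a threshold test over rostered time, and unlike most of the products discussed below it is transparently checkable. A simplified sketch using the targets quoted above; the real Star Ratings computation is case-mix adjusted per facility, so this flat check is an illustration only:

```python
TOTAL_TARGET_MINUTES = 215   # care minutes per resident per day, from 1 October 2024
RN_TARGET_MINUTES = 44       # of which delivered by a registered nurse

def meets_staffing_targets(total_minutes: float, rn_minutes: float) -> bool:
    """Unadjusted threshold test; the official calculation is case-mix
    adjusted per facility, so treat this as an illustration only."""
    return total_minutes >= TOTAL_TARGET_MINUTES and rn_minutes >= RN_TARGET_MINUTES

# A facility averaging 207 total minutes with 46 RN minutes fails on the total:
print(meets_staffing_targets(207, 46))   # False
```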
The same pattern recurs in every comparable jurisdiction. The Care Quality Commission in England, which by the summer of 2024 was being publicly described by the Secretary of State for Health and Social Care, Wes Streeting, as a failing organisation, commissioned the Dash Review of its operational effectiveness; the full report, published in October 2024, found that the time taken by the regulator to re-inspect a service rated “requires improvement” had risen from 142 days in 2015 to 360 days in 2024. The CQC's chief executive, Ian Trenholm, resigned that July. Skills for Care reported that as of March 2025 there were 111,000 vacant posts in adult social care in England, a vacancy rate of 6.4 per cent against a labour-market average of 2.2 per cent, with care worker vacancies running at 8.3 per cent and homecare vacancies above 10 per cent. Annual turnover sat at thirty per cent. In May 2025 the UK government closed the international recruitment route for new care workers, cutting off a pipeline that had been delivering an average of twelve thousand recruits a quarter into the independent sector. None of those problems have algorithmic solutions.
In the United States, the federal minimum staffing standard for long-term care facilities published by the Centers for Medicare and Medicaid Services in May 2024, requiring 3.48 hours of nursing care per resident per day and twenty-four-hour onsite registered nurse coverage, was repealed in December 2025. Section 71111 of Public Law 119-21 then prohibited CMS from implementing or enforcing the rule until at least 30 September 2034. Public Citizen and the Center for Medicare Advocacy estimated that the original rule, had it survived, would have prevented approximately thirteen thousand deaths a year. In Canada, the May 2020 Canadian Armed Forces report on five Ontario long-term care homes, which described cockroaches, rotting food, ulcerated bed-bound residents and staff cycling between units in contaminated personal protective equipment, prompted no national workforce reform of any depth; provincial inquiries in Ontario and Quebec produced more recommendations than implementations. The same picture, with local variations, holds in the Nordic countries, in France and in much of east Asia.
What the inquiries documented, in other words, was not a sector that had failed to adopt the latest technology. It was a sector that had failed to be funded, staffed, regulated and respected. The premise of the agetech pitch, that AI can plug the gap, is in this light a category error. There is no reasonable reading of Care, Dignity and Respect in which the missing ingredient is more sensors.
The Pitch And The Products
Walk the floor of any of the recent agetech expos, the SilverEco Forum in Cannes, the Aged Care 2026 conference in Melbourne, the Health 2.0 trade fair in Tokyo, and the categories repeat. There are passive monitoring systems, of which the thermal sensor in the opening scene is one example. There are wearable fall detectors that combine accelerometers and machine-learned gait classifiers, sold by firms like Vayyar, Kepler Vision and a long tail of European start-ups. There are continuous bed and chair sensors, marketed under names like SafelyYou and Tellus You Care. There are automated care-planning platforms that ingest electronic health records and generate suggested daily routines, hydration prompts and bowel charts. There are medication management dispensers. There are predictive analytics layers that promise to flag clinical deterioration days before it shows up in vital signs. There are companion robots: PARO, the harp-seal-shaped therapeutic robot developed by Takanori Shibata at Japan's National Institute of Advanced Industrial Science and Technology, in clinical use since the mid-2000s; ElliQ, the desktop social companion built by Intuition Robotics in Israel; SoftBank's humanoid Pepper, repurposed from retail receptionist into care-floor entertainer; and the various lower-cost robotic-cat and robotic-dog products that proliferate at the budget end.
The evidence base for these products is uneven and almost always thinner than the marketing implies. A 2021 systematic review and meta-analysis published in Innovation in Aging by Hung and colleagues, “Effectiveness of Companion Robot Care for Dementia”, found that PARO produced statistically significant but small improvements in agitation, depression and medication use across pooled trials, with the authors noting that most studies were small, short and unblinded. A 2023 systematic review in the International Journal of Nursing Studies reached a similar conclusion: a possible benefit, evidence quality low to moderate, no demonstrated long-term effect. A pilot randomised controlled trial of a different companion robot for community-dwelling people with dementia, published in 2017 by Moyle and colleagues, recorded engagement with the device but did not show robust effects on the primary outcomes.
ElliQ has produced more uplifting headline numbers, largely from one programme. The New York State Office for the Aging began deploying ElliQ in 2022; by May 2025 the agency reported 834 active clients, with 94 per cent saying they felt less lonely, 97 per cent feeling better overall, average usage of forty-one interactions per day, and a customer satisfaction score of 4.6 out of 5. Those are the figures Intuition Robotics quotes in its marketing decks. The peer-reviewed literature is more cautious. A 2024 review by Broxvall and colleagues, “ElliQ, an AI-Driven Social Robot to Alleviate Loneliness: Progress and Lessons Learned”, described the deployment as “promising” and explicitly called for randomised controlled trials before efficacy claims could be considered established. The NYSOFA outcomes, reassuring as they are, were collected from a self-selected user base that consented, that engaged voluntarily, and that retained the cognitive bandwidth to fill in a satisfaction survey. They tell us very little about what would happen if the same device were deployed by default to a less able population.
Fall detection is the category in which the gap between vendor claim and operational reality is widest. A 2025 scoping review in Applied Sciences, “AI-Driven Inpatient Fall Prevention Using Continuous Monitoring”, examined the evidence on continuous monitoring systems in hospital and long-term care settings and reached a conclusion that vendors do not put on their websites: while sensitivity for detecting falls can exceed ninety per cent, false-positive rates of thirty to forty per cent are common, and across the evidence base detection systems “did not consistently reduce fall incidence or the occurrence of injurious falls”. The same paper, like a closely related 2025 review in Medicina, found that reporting of “implementation-critical metrics” such as alert burden, response times and downstream actions was patchy. Studies of clinical alarm fatigue across acute care have repeatedly found that as many as eighty to ninety per cent of audible alarms in monitored wards are non-actionable. There is no plausible mechanism by which adding more alarms to an understaffed care floor improves outcomes, and several plausible mechanisms by which it makes them worse.
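The arithmetic behind that last claim is worth making explicit, because it is a base-rate problem rather than an engineering one. Every input in the sketch below is an assumption chosen for illustration, and the per-event false-alarm probability is set deliberately lower than the figures the reviews report:

```python
# Base-rate sketch of alarm fatigue on a 40-bed floor; all inputs are assumptions.
non_fall_events = 320          # nightly bed exits, turns and wandering episodes
falls_per_night = 0.3          # roughly one real fall every three nights
sensitivity = 0.90             # in line with the detection rates the reviews report
per_event_false_alarm = 0.05   # deliberately generous; the reviews suggest worse

true_alerts = falls_per_night * sensitivity
false_alerts = non_fall_events * per_event_false_alarm
precision = true_alerts / (true_alerts + false_alerts)

print(f"Alerts per night: {true_alerts + false_alerts:.1f}")     # about 16
print(f"Share of alerts that are real falls: {precision:.1%}")   # about 1.7%
```

Sixteen pings a night, roughly one in sixty of them a real fall, is the arithmetic that the phrase alarm fatigue compresses.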
Predictive analytics for clinical deterioration carry a related set of problems. Algorithms trained on the electronic health records of one population have been shown repeatedly, including in a much-cited 2021 JAMA Internal Medicine analysis of the Epic sepsis prediction model, to perform worse than advertised when deployed in different populations. Aged care residents are an unusually heterogeneous group, often with multimorbidity, polypharmacy and cognitive complications that distort the signals the model was trained to detect. The risk is not that the model produces nothing useful; it is that it produces enough useful output to displace clinical judgement while the genuinely unusual cases, the ones a human carer would recognise on sight, slip past unflagged.
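The mechanism is easy to reproduce on synthetic data. The sketch below illustrates distribution shift in general, not any real product: a classifier learns to lean on fever as a deterioration signal in one population, then meets a population, as many aged care residents are, whose febrile response is blunted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def cohort(n: int, temp_coef: float, base_risk: float):
    """Synthetic vitals; temp_coef sets how strongly deterioration shows as fever."""
    temp = rng.normal(37.0, 0.6, n)
    hr = rng.normal(85, 12, n)
    logit = temp_coef * (temp - 37.0) + 0.06 * (hr - 85) + base_risk
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return np.column_stack([temp, hr]), y

# Train where fever is a strong signal of deterioration...
X_train, y_train = cohort(5000, temp_coef=2.5, base_risk=-2.0)
model = LogisticRegression().fit(X_train, y_train)

# ...then deploy where the febrile response is blunted.
X_aged, y_aged = cohort(5000, temp_coef=0.3, base_risk=-1.2)

print("AUC, training-like cohort:",
      round(roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]), 2))
print("AUC, shifted cohort:",
      round(roc_auc_score(y_aged, model.predict_proba(X_aged)[:, 1]), 2))
```

The model has not broken; it has kept doing what it learned, on patients it was never shown.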
The Advocacy Gap
Across all of these tools, the same population variable does most of the moral work. The people on whom the sensors and dispensers and screens are aimed are, by definition of the sector they are in, frail. A substantial proportion are cognitively impaired; the Australian Institute of Health and Welfare estimated in its 2024 dementia report that more than half of permanent residents in Australian aged care had a diagnosis of dementia. Many are socially isolated; the loneliness data that companion-robot vendors cite as justification is real. Many have limited or no digital fluency; older Australians in residential care are dramatically under-represented in surveys of internet use, smartphone ownership and the everyday literacy that allows a person to interrogate, refuse or modify a digital tool. And almost all of them sit in a profound power asymmetry with the people on whom they depend for daily care.
The implications for consent are not theoretical. The standard model of informed consent in healthcare assumes a person capable of understanding the nature of the proposed intervention, weighing it against alternatives, and communicating a decision. A 2025 review in Frontiers in Digital Health, “Designing for Dignity: Ethics of AI Surveillance in Older Adult Care”, catalogued how badly that model breaks down in practice when the intervention is a continuous, ambient monitoring system and the person being monitored has fluctuating capacity. Many older adults in care settings, the authors noted, have “no knowledge about what data is being harvested” and lack the cognitive or technical capability to adjust settings. Consent is typically obtained at admission, signed by a family member acting as substitute decision-maker, and never revisited. The system that the resident did not knowingly agree to becomes the system they live inside.
The asymmetry is sharper still where AI is making, or shaping, allocative decisions. Australia's new Support at Home programme, introduced in November 2025 to replace earlier home-care packages, uses a rules-based algorithm called the Integrated Assessment Tool to convert assessor responses into funding entitlements. As reported by The Conversation in March 2026 in a follow-up piece by Sebastian Cordoba and colleagues titled “First Robodebt, now NDIS and aged care: how computers still decide who gets care”, neither assessors nor participants can clearly see how the algorithm converts answers into funding levels. Departmental officials told a Senate inquiry that there is “no discretionary element” in the process; an override function present during testing was removed before the system went live. Evidence presented to the inquiry suggested the tool was systematically underestimating need, with reports of older Australians, including those with serious or degenerative conditions, having their support reduced. The Robodebt scandal, in which an automated debt-recovery system run by Services Australia issued more than 470,000 unlawful debt notices between 2016 and 2019 and was the subject of a 2023 Royal Commission, is the cautionary tale every Australian policy commentator now invokes. The aged care sector's algorithmic infrastructure is being built by a state apparatus that demonstrably has not learned its lesson.
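What “no discretionary element” means is easier to see in code than in testimony. The sketch below is entirely hypothetical, since the Integrated Assessment Tool's internal rules are not public, and every item, weight and cut-point is invented; the point is structural, namely what the interface does not contain:

```python
# Entirely hypothetical rules-based allocator; every item, weight and
# cut-point below is invented for illustration. The real tool's rules
# are not public.
BANDS = {1: "lowest support band", 2: "middle band", 3: "highest band"}

def classify(answers: dict[str, int]) -> str:
    """Map assessor responses (0-3 severity per item) to a funding band.

    Note what the signature lacks: no override argument, no uncertainty
    estimate, no record of why a cut-point sits where it does. If the
    cut-points sit too high, every assessment underestimates need, silently.
    """
    score = (2 * answers.get("mobility", 0)
             + 2 * answers.get("cognition", 0)
             + answers.get("self_care", 0))
    if score >= 10:
        return BANDS[3]
    if score >= 5:
        return BANDS[2]
    return BANDS[1]

print(classify({"mobility": 2, "cognition": 1, "self_care": 3}))  # middle band
```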
The classic argument for surveillance and substitution technologies in care is that the people receiving them benefit, and that any inconvenience to autonomy is outweighed by safety. The problem with this argument is that it cannot be tested by the people on whom it is being made. A resident with moderate dementia cannot reliably explain to an inspector why the sensor in the corner of her room makes her feel watched, or whether she would prefer a human attendant to a tablet that pings someone who arrives nine minutes later. A non-verbal resident with advanced cognitive impairment cannot tell a researcher whether the companion robot is comforting her or merely keeping her quiet. The marketing literature sometimes claims that residents prefer the robots; the more careful research, including work by Neves and her collaborators in the Journal of Applied Gerontology in 2023, “Artificial Intelligence in Long-Term Care: Technological Promise, Aging Anxieties, and Sociotechnical Ageism”, finds that older adults' attitudes towards AI in their own care are considerably more ambivalent than the agetech sector implies, that they are acutely aware of being positioned as objects of management rather than subjects of care, and that they often experience monitoring as a loss of dignity rather than a gain in safety.
Cost Reduction Or Outcome Improvement, And Who Carries The Risk
The business case for AI in aged care, in board meetings rather than press releases, is largely about labour. A monitoring system that allows a single night-shift carer to cover sixty residents instead of forty is, on paper, a workforce multiplier. A medication dispenser that prompts a resident through a regimen reduces the registered nursing time required for medication rounds. An automated care plan reduces the documentation burden on personal care workers. A companion robot, if it can hold attention, reduces the demand on staff for the relational work that has historically been the floor of dignified care. Each of these is a legitimate engineering goal in a sector where workforce shortage is real, severe and not going away. None of them is the same thing as improved outcomes for residents.
The distinction matters, because the risk of miscalibration falls asymmetrically. If a fall sensor's false-positive rate produces alarm fatigue and a real fall is missed, the cost is borne by the resident on the floor, not the procurement team that signed the contract. If a predictive deterioration model misses an unusual sepsis presentation in a resident with atypical baseline observations, the resident dies. If an automated care plan recommends a hydration schedule calibrated to a baseline weight two years out of date, the resident whose actual weight has dropped sharply goes thirsty. If a companion robot becomes the dominant social contact for a resident whose family visits have tapered, the human relationships that aged-care research consistently identifies as protective against decline are the ones that quietly disappear.
This asymmetry is what makes the cost-reduction framing dangerous. In a properly functioning market, the people who bear the risk of a product underperforming push back. In aged care, the people who bear the risk are very often unable to. The carers who notice that the system is not working, who see a resident on the floor long after the sensor said so, are positioned several layers below the procurement decisions that put the system there. They have, as the Neves article notes, taken on additional cognitive labour interpreting the data the system generates, but they have lost discretion over whether the system should be used at all. The families who pay the bills are typically not on the floor when the system fails. The regulators who would, in theory, audit whether the technology was performing as advertised lack the technical capability and, increasingly, the inspection cadence.
A 2024 paper in Humanities and Social Sciences Communications titled “Paternalistic AI: the case of aged care” framed the underlying ethical structure crisply. AI systems in care, the authors argued, function as a particularly powerful form of soft paternalism. They purport to act in the interests of the person being cared for, but they remove from that person the practical opportunity to refuse, modify or contest the intervention. In the context of cognitive impairment, where soft paternalism shades into hard paternalism almost imperceptibly, the absence of accountability structures around the technology means that the ethical work that would normally be done by consent simply does not happen.
What Accountability Would Actually Look Like
If AI is going to be deployed at scale in aged care, the question is what would have to be in place before such deployment could be considered ethical. The honest answer is a layered structure, none of whose layers currently exist in anything like a complete form in any major jurisdiction.
The first layer is consent that is genuine, ongoing and revocable. Admission paperwork signed by a substitute decision-maker is not consent to a continuous monitoring regime. A robust framework would require that residents, where they have any capacity, are walked through what each technology in their environment does, in plain language, with the right to refuse specific elements without losing access to other care. Where capacity is absent, substitute decision-makers should be required to revisit consent on a defined cadence, and to weigh the technology's use against alternatives that include increased staffing. The recommendation, drawn from the 2025 Frontiers in Digital Health paper, of “easy-to-visualize dashboards and plain-language explanations” should be a procurement requirement, not a research aspiration.
The second layer is independent auditing, with statutory backing, of the actual performance of deployed systems in their actual settings. Vendor-supplied performance figures are, as the scoping reviews on fall detection make clear, systematically optimistic. An accountability regime worth the name would require providers to log false positives, false negatives, response times and downstream actions in a standardised format, and would require regulators, not vendors, to publish the resulting performance data. Australia's Aged Care Quality and Safety Commission, the CQC in England, CMS in the United States and their equivalents would need substantial additional resourcing and technical capability to conduct such audits credibly. None has it now.
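The fields such a log would need are not mysterious; they follow directly from the metrics the reviews found missing. A sketch of one auditable record as a plain data structure, where the field names are a proposal rather than any regulator's existing schema:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class AlertOutcome(Enum):
    TRUE_POSITIVE = "true_positive"      # real event, confirmed on response
    FALSE_POSITIVE = "false_positive"    # alert fired, no event found
    MISSED_EVENT = "missed_event"        # event found, no alert; logged after the fact

@dataclass
class MonitoringAlertRecord:
    """One record per alert or missed event; field names are a proposal only."""
    device_model: str                    # vendor and firmware, for cross-site comparison
    facility_id: str
    alert_time: datetime
    response_time: datetime | None       # when a staff member reached the resident
    outcome: AlertOutcome
    downstream_action: str               # e.g. "repositioned", "GP called", "none"
    staff_on_shift: int                  # the context the reviews found was never reported

    @property
    def minutes_to_response(self) -> float | None:
        if self.response_time is None:
            return None
        return (self.response_time - self.alert_time).total_seconds() / 60
```

Aggregated across facilities, fields like these are what would let a regulator, rather than a vendor, say what a product's false-positive rate actually is.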
The third layer is algorithmic transparency. Where an AI tool affects the allocation of care, including hours of staffing, level of monitoring, eligibility for funding or assignment to a particular care pathway, residents and their advocates should have a legal right to an explanation of how the system reached its conclusion, expressed in terms an ordinary person can understand. Article 22 of the General Data Protection Regulation in the European Union already prohibits decisions based solely on automated processing that produce significant legal or comparable effects. That principle needs to be operationalised specifically for aged care, with explicit recognition that algorithmic recommendations that substantially shape human decisions count, and that the convenient fiction of “human in the loop” cannot be used to launder automation.
The fourth layer is incident reporting. When an AI tool contributes to harm, whether by missing a fall, misallocating medication, displacing human contact or generating an unsafe care recommendation, the incident should be reportable, on the same statutory footing as a medication error, to the relevant regulator, with public aggregate reporting. The current regime, in which AI-related incidents are typically classified as either workflow events or clinical errors and never as software failures, makes systemic learning impossible.
The fifth layer is a hard ban on substitution where it matters most. The question of whether a companion robot should ever be the primary social contact for a person with dementia is not a question for procurement officers. The position taken by Sherry Turkle of MIT in her 2011 book Alone Together, and elaborated in subsequent work, is that the deployment of robots as substitutes for, rather than supplements to, human relational care is an abdication. That position should be encoded in regulation. Companion robots may have a role; they may not have a role that displaces the requirement for staffed human contact. Procurement should require evidence that a tool augments rather than replaces the relational work, and operational data should be auditable to confirm that what was contracted as augmentation has not, over time, drifted into substitution.
The sixth layer is procurement conditionality, and it is the lever that actually moves the others. Public funders of aged care, which in most jurisdictions means the state, have far more bargaining power than they currently use. Every procurement contract for an AI system in publicly funded aged care should carry conditions on consent processes, audit access, transparency, incident reporting, anti-substitution and a ceiling on the proportion of care time that may be displaced by the system. Vendors that decline to meet those conditions should not be funded. The market will adjust quickly when it has to.
The seventh layer is the one that the agetech sector finds least convenient to discuss. None of the above is a substitute for adequate staffing. Every accountability regime for AI in aged care has to be built on top of, not in place of, the staffing standards, pay levels and workforce protections that the Royal Commission, the Dash Review, the CMS rule and the Canadian Armed Forces report were calling for. AI deployed into an under-staffed environment cannot be made ethical by audits alone. The ethical baseline is a staffed floor.
A Reported Ending
It is tempting, when writing about technology and vulnerability, to land on a hopeful note. The honest reading of the evidence in April 2026 does not really support one. The Aged Care Act 2024 in Australia is in early implementation; the staffing minutes are being met on national average but missed in many individual facilities. The CQC in England is mid-restructure following the Dash operational review. The federal staffing rule in the United States has been repealed and is statutorily prohibited from re-implementation until at least 2034. The Canadian provinces have made limited structural progress since 2020. The agetech market continues to grow. The companies whose pitches Neves, Sanders and Mead analysed are not slowing down their fundraising rounds because the academic literature is cautious about their effect sizes.
What the Conversation article points at, and what the evidence on every category of agetech tool quietly confirms, is that the question of whether AI in aged care is ethical cannot be answered at the level of the individual product. PARO has uses. ElliQ helps some lonely people in Buffalo and Albany. A well-calibrated fall sensor, in a building with enough carers to respond inside three minutes, may well be a net good. None of those local truths bears on the systemic question, which is whether the deployment, in aggregate, is being driven by considerations that the people on whom it is deployed would endorse if they could.
The resident whose hip breaks while the sensor pings an empty corridor does not appear in any vendor case study. Her mobility does not fully return, in the way ninety-year-old mobility rarely does. The room in which she fell still has a sensor on the ceiling, and the sensor still pings when it sees a heat-map blob in the wrong place. The night shift on her floor is still, in April 2026, one registered nurse and two personal care workers covering upwards of forty residents, the kind of configuration that the inspectorate reports from three continents have documented as standard. The vendor's quarterly filings continue to note strong growth in the Asia-Pacific market and new partnerships with major residential care operators. None of those facts, on their own, is scandalous. Together they describe the architecture of a sector that has decided, without ever quite deciding, that the cheaper option is also the wiser one.
The accountability structures that would make AI in aged care ethical are not technically difficult. They are politically expensive. They require regulators to be staffed and funded to a level that no government has yet been willing to fund them to. They require public procurement to drive standards in a market where vendors have grown accustomed to selling unvalidated tools into desperate buyers. They require a public conversation about the proper role of human contact in care that the sector and the technology industry have, between them, been content to defer.
Until those structures exist, the most defensible position is the one Neves and her colleagues argue for: that AI in aged care, deployed primarily to manage the consequences of under-investment in human care, is not a solution to the crisis the Royal Commission documented. It is a way of making the crisis less visible. The sensor sees the ceiling. The ceiling is white. The blob on the floor is logged at a particular minute, and again two minutes later. Somewhere down the corridor, somebody is doing the work that the technology was sold as a substitute for, and somebody else is doing without the work because there was nobody to do it for them. The accounting we owe the residents is the one we have, so far, declined to do.
References & Further Information
- Neves, B. B., Sanders, A., & Mead, G. “AI companies promise to 'fix' aged care, but they're selling a false narrative.” The Conversation, 24 February 2026. https://theconversation.com/ai-companies-promise-to-fix-aged-care-but-theyre-selling-a-false-narrative-275822
- Royal Commission into Aged Care Quality and Safety. Final Report: Care, Dignity and Respect. Commonwealth of Australia, 1 March 2021. https://www.royalcommission.gov.au/aged-care/final-report
- Aged Care Act 2024 (Cth), Federal Register of Legislation, Australia. Commenced 1 November 2025. https://www.legislation.gov.au/C2024A00104/latest
- Australian Government. “Changes to Staffing rating for Star Ratings.” My Aged Care, October 2025. https://www.myagedcare.gov.au/news-and-updates/changes-staffing-rating-star-ratings
- Fortune Business Insights. “Elderly Care Market Size, Share & Trends Analysis Report, 2026-2034.” https://www.fortunebusinessinsights.com/elderly-care-market-111477
- Dash, P. “Review into the Operational Effectiveness of the Care Quality Commission: Full Report.” UK Department of Health and Social Care, October 2024. https://www.gov.uk/government/publications/review-into-the-operational-effectiveness-of-the-care-quality-commission-full-report
- Skills for Care. “The State of the Adult Social Care Sector and Workforce in England 2025.” https://www.skillsforcare.org.uk/Adult-Social-Care-Workforce-Data/workforceintelligence/resources/Reports/National/The-state-of-the-adult-social-care-sector-and-workforce-in-England-2025-Executive-Summary.pdf
- Centers for Medicare and Medicaid Services. “Medicare and Medicaid Programs; Minimum Staffing Standards for Long-Term Care Facilities.” Federal Register, 10 May 2024. https://www.federalregister.gov/documents/2024/05/10/2024-08273/medicare-and-medicaid-programs-minimum-staffing-standards-for-long-term-care-facilities
- Centers for Medicare and Medicaid Services. “Repeal of Minimum Staffing Standards for Long-Term Care Facilities.” Federal Register, 3 December 2025. https://www.federalregister.gov/documents/2025/12/03/2025-21792/medicare-and-medicaid-programs-repeal-of-minimum-staffing-standards-for-long-term-care-facilities
- Canadian Armed Forces. “Op LASER: JTFC Observations in Long Term Care Facilities in Ontario.” Public Safety Canada, May 2020. https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20200831/069/index-en.aspx
- Hung, L., et al. “Effectiveness of Companion Robot Care for Dementia: A Systematic Review and Meta-Analysis.” Innovation in Aging, 5(2), 2021. https://academic.oup.com/innovateage/article/5/2/igab013/6249558
- Yu, C., et al. “The effectiveness of a therapeutic robot, 'Paro', on behavioural and psychological symptoms, medication use, total sleep time and sociability in older adults with dementia: A systematic review and meta-analysis.” International Journal of Nursing Studies, 2023. https://www.sciencedirect.com/science/article/abs/pii/S0020748923000950
- Broxvall, M., et al. “ElliQ, an AI-Driven Social Robot to Alleviate Loneliness: Progress and Lessons Learned.” 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC10917141/
- New York State Office for the Aging. “ElliQ Proactive Care Companion Initiative: Project Update 2026.” February 2026. https://aging.ny.gov/system/files/documents/2026/02/nysofa-elliq-project-update-2026.pdf
- “AI-Driven Inpatient Fall Prevention Using Continuous Monitoring: From Early Detection to Workflow-Integrated Decision Support: A Scoping Review.” Applied Sciences, MDPI, 2025. https://www.mdpi.com/2076-3417/16/7/3383
- “Digital Healthcare Approaches for Fall Detection and Prediction in Older Adults: A Systematic Review of Evidence from Hospital and Long-Term Care Settings.” Medicina, MDPI, 2025. https://www.mdpi.com/1648-9144/61/11/1926
- Wong, A., et al. “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients.” JAMA Internal Medicine, 181(8), 2021.
- Neves, B. B., Petersen, A., Vered, M., Carter, A., & Omori, M. “Artificial Intelligence in Long-Term Care: Technological Promise, Aging Anxieties, and Sociotechnical Ageism.” Journal of Applied Gerontology, 2023. https://journals.sagepub.com/doi/10.1177/07334648231157370
- “Designing for dignity: ethics of AI surveillance in older adult care.” Frontiers in Digital Health, 2025. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1643238/full
- “Paternalistic AI: the case of aged care.” Humanities and Social Sciences Communications, 2024. https://www.nature.com/articles/s41599-024-03282-0
- Cordoba, S., et al. “First Robodebt, now NDIS and aged care: how computers still decide who gets care.” The Conversation, March 2026. https://theconversation.com/first-robodebt-now-ndis-and-aged-care-how-computers-still-decide-who-gets-care-280711
- Royal Commission into the Robodebt Scheme. Report of the Royal Commission into the Robodebt Scheme. Commonwealth of Australia, July 2023.
- General Data Protection Regulation, Article 22, “Automated individual decision-making, including profiling.” Regulation (EU) 2016/679. https://gdpr-info.eu/art-22-gdpr/
- Turkle, S. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books, 2011.
- Australian Institute of Health and Welfare. Dementia in Australia 2024. https://www.aihw.gov.au/reports/dementia/dementia-in-australia

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk