The People AI Cannot Hear: Why Disability Exposes a Training Data Problem

The voice command is simple. “Call my sister.” The user, sitting at a kitchen table in south London, says it three times, each time more slowly, each time more carefully. The smart speaker responds each time with the same brisk cheerfulness. It has heard “call Maria.” It has heard “call the sister.” It has, on the third attempt, offered to play a song called “Sister” by an artist she has never heard of. What it has not done is the thing she asked. Her speech, shaped by a neuromuscular condition that makes consonants softer and vowels longer, sits just outside the envelope of audio the model was trained to recognise. To the speaker's statistical ear, she is not quite a person giving an instruction. She is noise, or close enough to noise that the cheapest path is to guess.
On any given morning, a version of that scene plays out in hundreds of thousands of homes. It is one of the quieter harms of the current artificial intelligence moment, a harm so ordinary that the people experiencing it have mostly stopped complaining about it. You learn, over time, to pitch your voice higher. You learn to flatten your accent. You learn which words the machine prefers and which ones it cannot parse. You build, in other words, a second self, optimised for the model. And then you watch the technology press describe the same model as an assistant that understands natural language.
The argument that this is structural rather than a peripheral bug appeared in sharp form in a January 2026 Forbes analysis by Gus Alexiou, a long-time disability inclusion contributor who has written about assistive technology since his own multiple-sclerosis diagnosis. Alexiou framed disability as the ultimate stress test for AI. When a system optimises for uniform productivity and standardised interaction, the argument ran, it does not merely under-serve the approximately 1.3 billion people the World Health Organization counts as experiencing significant disability. It structurally excludes them, through the quiet accumulation of design defaults that never imagined them in the room.
Three weeks later, on 3 February 2026, the British government put that argument into institutional form. The Department for Work and Pensions, under Secretary of State Pat McFadden, convened a roundtable that brought Google, Meta, Microsoft and Amazon into the same room as Scope, Guide Dogs UK, AbilityNet, Disability Rights UK, the Business Disability Forum, RNID, the Lightyear Foundation, the Regional Stakeholder Network, the Global Disability Innovation Hub and the Atech Policy Lab. The framing was employment-focused, a conversation about closing the disability employment gap. The subtext was the one Alexiou had articulated in Forbes: the distance between what AI could do for disabled workers and what it currently does, and the question of who was on the hook for closing it.
Around the same time, a framework appeared in Frontiers in Digital Health. Written by Gabriella Waters of the Cognitive and Neurodiversity AI Lab and the Center for Equitable AI and Machine Learning Systems at Morgan State University, the paper argued something that should have been obvious: there is no standardised methodology for evaluating whether AI systems work for disabled users, and the dominant evaluation practices actively disguise their failure to do so. Accuracy and generalisability, the two metrics that have governed machine-learning benchmarks for a decade, treat disabled users as statistical noise rather than as a population whose performance matters. Waters' proposed framework covers red teaming, model testing, field testing and usability testing with disabled participants, dressed up in the methodological language the field demands before it will listen.
Picking up the same thread from a different angle, a Rest of World investigation by deputy editor Rina Chandran, published 12 March 2026, traced the failure of Western AI models in overseas agricultural settings. Catherine Nakalembe, the University of Maryland geographer who won the 2020 Africa Food Prize, had to collect more than five million helmet-mounted camera images across Kenya and Uganda to train a system to recognise maize, beans and cassava because existing models could not. Arti Dhar, co-founder of Farmers for Forests, found that a widely used open-source segmentation model missed more than half the trees in a Maharashtra forest because it had been trained on North American species. Digital Green's FarmerChat, by contrast, works across 16 vernacular languages in South Asia and Africa because it was built with the farmers rather than for them.
The four sources do not look, at first, like they are telling the same story. Disability, workplace inclusion, testing methodology, smallholder agriculture in the global South. They are. When a technology scales by averaging, the people at the edges of the average pay the cost, and those people are not a minor accounting category. They are the majority of humanity, unevenly distributed. Add non-English speakers, rural users, people with accents the model was never tuned on, and older adults, and the exclusion is not a rounding error. It is the shape of the market.
The question the Forbes piece really asked, and that the UK convening declined to answer directly, is who currently has the power and the incentive to change any of this. That question has an uncomfortable answer.
The default is the person the model imagined
To see why disability is a stress test rather than a special case, it helps to understand how the defaults were set. A large language model is a very expensive statistical summary of text its builders chose, or could afford, to scrape. A voice recognition system is a very expensive statistical summary of speech its builders recorded, or bought, or licensed. A vision model is a very expensive statistical summary of images its builders labelled, or hired people to label, or used labels some earlier researcher had published. At every step, the system's eventual performance on any given user is roughly a function of how well that user resembles the median entity in the training distribution.
The median entity in most training distributions is, to a close approximation, a neurotypical, non-disabled, native-English-speaking adult in North America or Western Europe, interacting via keyboard, mouse or clear spoken English in a quiet room. Meredith Ringel Morris, the former Microsoft researcher who founded the Ability Research Group and now leads HCI research at Google DeepMind, has spent more than a decade cataloguing how that default haunts the systems built around it. Her 2020 Communications of the ACM paper, alongside a companion set of considerations from Shari Trewin of IBM and colleagues, laid out a research roadmap other researchers have been filling in ever since: self-driving car pedestrian detection that does not register wheelchair users; hiring screeners that discount autistic candidates because their facial expressions score as inauthentic; voice assistants that cannot parse speech disabilities; captioning systems that fail users of British Sign Language because the models were trained only on spoken audio.
Waters' Frontiers paper takes this catalogue as its starting point and asks what evaluation would have to look like to catch these failures before deployment rather than after. Her answer runs to seven specialised metrics, among them an Inclusive Accuracy Rate that requires systems to report performance broken down by user characteristics, an Accessibility Disparity Index that quantifies the gap between the best and worst performing groups, an Assistive Technology Compatibility Score, a Cognitive Load Index for neurodivergent users, and an Error Recovery Rate. None of them is exotic. What is exotic, at least in the dominant benchmarking culture, is the assumption that they should be reported at all.
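To make the idea concrete, here is a minimal Python sketch of what such reporting might involve: accuracy computed per user group, plus a single gap figure between the best- and worst-served groups. The function names and formulas are illustrative assumptions in the spirit of the Inclusive Accuracy Rate and the Accessibility Disparity Index, not the paper's own definitions.

```python
# Illustrative sketch only: per-group accuracy and a best-to-worst gap,
# loosely in the spirit of an Inclusive Accuracy Rate and an Accessibility
# Disparity Index. The formulas are assumptions, not the paper's definitions.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group_label, prediction, ground_truth) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, prediction, truth in records:
        total[group] += 1
        correct[group] += int(prediction == truth)
    return {group: correct[group] / total[group] for group in total}

def disparity_index(accuracy_by_group):
    """Gap between the best- and worst-served groups; 0.0 means parity."""
    scores = accuracy_by_group.values()
    return max(scores) - min(scores)

if __name__ == "__main__":
    records = [
        ("typical speech", "call sister", "call sister"),
        ("typical speech", "set timer", "set timer"),
        ("non-standard speech", "play 'Sister'", "call sister"),
        ("non-standard speech", "call sister", "call sister"),
    ]
    by_group = per_group_accuracy(records)
    print(by_group)                   # {'typical speech': 1.0, 'non-standard speech': 0.5}
    print(disparity_index(by_group))  # 0.5
```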
At present, when a model card for a frontier system is published, it will typically include accuracy on standardised benchmarks, sometimes broken down by language or domain, very rarely broken down by user characteristics, and almost never broken down by disability. The ImageNet paper that became the benchmark for vision research did not report performance by subject characteristics. The GLUE paper that became the benchmark for natural-language understanding did not either. Partly this is because the benchmarks did not have that data. Partly it is because the benchmarks assumed the user did not matter. Either way, by the time a commercial system is built on these foundations, the places where performance falls apart have already been defined out of the evaluation.
The Forbes stress-test framing is, in that sense, a good one. A stress test does not tell you anything the system could not, in principle, have told you about itself. It just forces the information into the open. Run a text-to-image system on prompts including disability terms, as Ashley Shew of Virginia Tech and others have done, and you get outputs in which wheelchairs are rusting wrecks, guide dogs are ominous shadows, and prosthetic limbs are rendered as torture devices. Run a sentiment classifier on sentences containing “disability” and the valence collapses, sometimes by several standard deviations. Run a caption model on an image of a person signing and it will tell you about the person's clothes. None of this is a failure mode anyone intended. All of it is a failure mode anyone with a bias audit could have found. The absence of the audit is the choice.
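The audit itself is not technically demanding. The sketch below shows one common shape it can take, a template perturbation test; the templates, identity terms and toy scorer are hypothetical stand-ins rather than any published protocol, and a real audit would call the deployed classifier instead.

```python
# Template perturbation audit: score the same sentence frames with and without
# disability-related terms and compare the mean sentiment. The scorer below is
# a deliberately crude stand-in; a real audit would call the model under test.
from statistics import mean

def toy_sentiment(text: str) -> float:
    """Placeholder scorer in [-1, 1]; swap in the deployed classifier here."""
    cues = {"wheelchair", "deaf", "autistic", "disability"}
    return -0.6 if any(cue in text.lower() for cue in cues) else 0.1

TEMPLATES = [
    "My colleague, {}, led the meeting today.",
    "The new hire, {}, starts on Monday.",
]
DISABILITY_TERMS = ["a wheelchair user", "a deaf engineer", "an autistic analyst"]
BASELINE_TERMS = ["a keen cyclist", "a colleague from Leeds"]

def mean_valence(scorer, terms):
    return mean(scorer(frame.format(term)) for frame in TEMPLATES for term in terms)

gap = mean_valence(toy_sentiment, BASELINE_TERMS) - mean_valence(toy_sentiment, DISABILITY_TERMS)
print(f"valence gap: {gap:.2f}")  # a large positive gap is the failure the audit surfaces
```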
Why employment is the leverage point
The UK government's convening on 3 February was, in one sense, about a much older problem than AI. The disability employment gap in Britain has hovered around 28 percentage points for a decade, and Scope's chief executive Mark Hodgkinson used his remarks at the roundtable to point out that roughly a million disabled people who want to work are currently out of the labour market, many of them not through any lack of capability but because the workplaces and the tools they are asked to work with do not accommodate them.
The reason the meeting mattered is that workplace AI is, right now, the sharp end of a different stress test. The workplace is where biased hiring algorithms live. It is where productivity monitoring systems penalise users whose working patterns deviate from the average. It is where speech recognition is deployed to take meeting minutes that then become performance evidence. It is where agentic tools are being rolled out as personal assistants that assume their user's schedule, preferences and interaction style map neatly onto the same neurotypical default. For a disabled worker, a badly designed AI system is not merely an inconvenience. It is, increasingly, a gatekeeper standing between them and the capacity to work at all.
Maxine Williams, vice president of accessibility at Meta, used the roundtable to announce that the company's AI-powered wearables had added real-time environmental description for blind and low-vision users. This is genuinely useful. It is also the kind of announcement that quietly obscures a harder question. The same company's advertising-targeting algorithms, the same platform's automated content moderation, the same recommender systems that decide whose feeds a disabled creator's work appears in, are built on training pipelines that have no comparable accessibility evaluation. A pair of smart glasses that describes a room to a blind user is a product feature. A newsfeed ranking system that deprioritises disability-related content because it scores as low-sentiment is an infrastructural choice. The first is visible, marketable and reputationally valuable. The second is invisible, cheap to leave alone, and reputationally expensive only if someone does the audit.
The same asymmetry runs through the Microsoft, Google and Amazon presentations at the DWP meeting. Amazon's Jaqui Sampson spoke about the company's neurodiversity hiring programme, which has placed several hundred people on the autism spectrum into warehouse and technical roles. Google's team highlighted Project Relate, the speech-recognition model fine-tuned on non-standard speech that grew out of its Project Euphonia research. Microsoft talked about the Seeing AI app and the Immersive Reader. Every one of these is, in isolation, a real contribution. Every one of them also sits alongside the same companies' core AI infrastructure, which is not accessibility-evaluated and is, in most cases, the foundation on which third parties are building the workplace tools disabled workers have to use. Assistive features at the edge, inaccessible scaffolding at the core. That is the pattern.
Guide Dogs UK's Alex Pepper said the quiet part out loud. Assistive technology, she told the room, can remove barriers at work, but it is not a solution on its own. Translation: the industry is very good at giving disability advocates product demos and very bad at changing the way the substrate systems are trained, evaluated and governed. A pair of AI glasses does not fix a hiring pipeline whose screening tool discards autistic applicants at the CV stage. A captioning feature does not fix a workplace analytics system that penalises a deaf employee for below-average meeting participation scores, scores the same captioning layer made it possible to compute in the first place.
The colonial analogy is not a metaphor
One of the more striking aspects of reading the Forbes argument alongside the Rest of World investigation is how structurally similar the failure modes are. Chandran's reporting from March 2026 does not use the word “disability” once. It does not need to. The mechanism it describes is the same one Morris and Waters and every disability AI researcher has been describing for years: a Western model trained on Western data fails to recognise the objects, practices and languages of the people outside its training distribution.
Nakalembe's 5 million-image dataset of smallholder crops in East Africa exists because the computer vision models in the agricultural literature had been trained almost entirely on imagery of North American and European industrial farms. Maize, in the Midwest sense, looks one way; maize, in the Rwandan sense, looks another. A segmentation model that was never shown the second cannot see it. Dhar's forest-monitoring tool that missed more than half the trees in Maharashtra was not malfunctioning by some local standard. It was doing exactly what it had been trained to do. It had simply never been trained on the relevant world.
Digital Green's FarmerChat, which now reaches a million farmers across South Asia and Africa, is instructive because of the corrective it represents. It works by doing what Waters' framework would require: sourcing the training data from the users who will actually use the tool, evaluating the model against their real queries rather than against a benchmark designed somewhere else, and building with the users rather than for them. In other words, it drops the default. It treats the smallholder farmer in vernacular Telugu or Swahili as the person the model is for, rather than as a tolerated edge case.
What the agricultural story shares with the disability story is the structural dynamic. In both, the dominant models were trained by institutions whose own users were implicitly treated as the species for which the tool was being built. In both, the populations outside that implicit species had to assemble their own data, build their own pipelines and argue, repeatedly, for the legitimacy of their inclusion. In both, the industry's preferred response has been to offer add-ons, localisation layers, assistive features bolted onto a substrate that remains unchanged. In both, the add-on strategy is cheaper than the redesign. In neither does the add-on strategy actually solve the underlying problem, which is that the training substrate itself encoded a worldview.
The reason this matters for the power question is that it connects two movements that have mostly operated in isolation. The disability AI community, led by researchers like Morris, Shew, Jutta Treviranus at OCAD University in Toronto, and Catherine Holloway at the Global Disability Innovation Hub at UCL, has been asking for inclusive training data and participatory evaluation for the better part of a decade. The global-south AI community, represented by researchers like Nakalembe, Timnit Gebru at the Distributed AI Research Institute, and the authors of the 2021 Stochastic Parrots paper, has been asking for the same thing from a different direction. When those two arguments are recognised as the same argument, the constituency pushing for structural change becomes considerably larger than either community looks on its own.
Who currently has the power
The honest answer to the question of who has the power to deliver genuine inclusion, and who has the incentive, does not flatter the industry.
The people who have the most power to change how models are trained, evaluated and deployed are the half-dozen companies whose frontier systems underpin most of the AI economy: OpenAI, Anthropic, Google DeepMind, Meta, Microsoft and, at the infrastructure layer, Nvidia. These are the entities whose training data choices, whose evaluation benchmarks, whose model cards and whose API behaviours determine what everyone downstream inherits. If any one of them chose, tomorrow, to publish accessibility-disaggregated performance data on its next frontier release, or to make Waters' Inclusive Accuracy Rate a standard reporting field, the rest of the industry would have to follow, because the procurement contracts and the regulatory filings and the research community would start asking for it. None of them currently does.
They have the power. They do not have the incentive, or at least not a large enough one to outweigh the cost. The cost, which is substantial, is what the disability community calls data work: assembling inclusive datasets, running participatory design processes, building evaluation suites that require human effort rather than automatable benchmarks, and, most awkwardly, publishing disparity metrics that will make the model look worse on average. The revenue upside of any of this for the model provider is real but small compared to the upside of the next efficiency frontier or the next reasoning benchmark. The downside, legally and reputationally, has so far been manageable.
The people who have the incentive but not the power are disabled users, disability-led organisations, and the researchers working alongside them. Scope, Disability Rights UK, RNID and the others who sat across the DWP table from the tech companies in February are unambiguously motivated to deliver genuine inclusion. They do not control the training pipelines. They do not write the model cards. They can, at best, act as a feedback channel, and only if the companies choose to listen. Catherine Holloway's Global Disability Innovation Hub and its Centre for Digital Language Inclusion, launched in partnership with the Royal Academy of Engineering and the University of Ghana in 2025, are building non-standard speech datasets precisely because no frontier lab has produced them at the scale required. That work is essential, and it is happening largely on philanthropic and academic funding while the companies that could resource it two orders of magnitude better continue not to.
The people who have some of both are the regulators, and regulators are the reason any of this is shifting at all. The European Accessibility Act, which came into force on 28 June 2025, requires a wide range of consumer-facing products and services to meet accessibility standards, including, increasingly, AI-enabled ones. The UK's Equality Act has always prohibited discrimination in the provision of goods and services, and the Equality and Human Rights Commission has been explicit that algorithmic discrimination is within its remit. The EU AI Act's high-risk categorisation for employment-related AI systems carries obligations that include bias mitigation and fundamental-rights impact assessments, and disability discrimination is among the rights it protects. The US Section 508 refresh, updated through 2024 and 2025, now covers procurement rules for federal AI systems. The 24 April 2026 deadline for US state and local government agencies to comply with Title II of the Americans with Disabilities Act at WCAG 2.1 Level AA is a week away from the moment this piece is being written, and will push a large volume of procurement activity toward accessibility-audited vendors.
The regulatory pressure is building, slowly, against the incentive gradient of the industry. Whether it builds fast enough to shift the default before the current generation of AI systems has ossified into infrastructure is the open question.
What genuine inclusion would require
Set aside the rhetoric of empowerment and the photograph-friendly accessibility features, and the substance of what inclusion requires is reasonably well understood. The academic and advocacy literature converges on roughly five things, and the striking feature of the convergence is that none of the five is technically hard. They are organisationally and commercially unwelcome.
The first is inclusive training data, collected through participatory processes with the populations the system is expected to serve. Digital Green's 120,000-query corpus of farmer questions in 16 languages is the template for what that looks like in agriculture. The Speech Accessibility Project at the University of Illinois, which has been collecting non-standard speech recordings from people with Parkinson's, cerebral palsy, ALS and Down syndrome since 2022, is the template for what it looks like in voice. The 5 million-image East African crop dataset Nakalembe assembled is the template for what it looks like in vision. None of these was built by the frontier labs. All of them should be.
The second is disaggregated evaluation, reported publicly, with performance broken down by user characteristics and context. Waters' seven metrics are a starting point. Morris' research roadmap offers another. At minimum, model cards for any deployed AI system should report performance on named disability-relevant evaluations, should state the demographic composition of the evaluation set, and should be updated when the underlying model is updated. At present, they typically do none of this.
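None of this requires new machinery. The sketch below is one hypothetical shape a disability-disaggregated model-card entry could take; the field names are assumptions for illustration, not an existing schema or any lab's current practice.

```python
# Hypothetical shape for a disability-disaggregated model-card entry.
# Field names are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class DisaggregatedResult:
    evaluation: str      # named, versioned evaluation set
    user_group: str      # self-described characteristic of the cohort
    cohort_size: int     # how many participants the figure rests on
    metric: str          # e.g. word error rate, task completion rate
    score: float
    model_version: str   # re-reported every time the underlying model changes

results = [
    DisaggregatedResult("meeting-transcription-eval-v3", "typical speech", 240, "word error rate", 0.08, "2026-03"),
    DisaggregatedResult("meeting-transcription-eval-v3", "dysarthric speech", 180, "word error rate", 0.31, "2026-03"),
]
print(json.dumps([asdict(r) for r in results], indent=2))
```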
The third is participatory design, built into the product cycle rather than retrofitted afterwards. Jutta Treviranus' Inclusive Design Research Centre has a motto, “design with not for”, that has circulated in the accessibility community for twenty years. In AI, it is still largely aspirational. The UK's Atech Policy Lab, which sat at the DWP roundtable, is one of the few bodies attempting to hard-code participatory practice into the AI assistive-technology pipeline. Most of the major labs' partnership structures with disability organisations remain advisory rather than decisional.
The fourth is interoperability with assistive technologies. The systems that disabled users already rely on (screen readers, switch controls, eye-gaze interfaces, alternative input devices, bespoke communication aids) need to be first-class citizens in the AI interaction model, not afterthoughts. This is a straightforwardly technical requirement, and it is the one area where mandatory standards are starting to bite. The W3C's WAI-AI task force, formed in 2024 and producing guidance through 2025 and 2026, is doing the unglamorous work of defining what accessible AI means in a way developers can actually implement.
The fifth is accountability when systems fail, meaning both the right of redress for the user harmed and the obligation of the provider to fix the underlying issue rather than adding yet another accessibility plug-in. This is where regulation becomes indispensable. Voluntary commitments have been on offer from the industry for a decade. The pattern the DWP roundtable demonstrated, tech companies promising product features while their core training pipelines remain unaudited, is what voluntary commitments reliably produce. Mandatory disclosure, backed by enforcement, is what changes the substrate.
Each of these five can be characterised as a cost centre, and each, properly executed, is a correction to a market failure currently paid for by disabled users, global-south users, and, eventually, the employers and public services on the receiving end of the systems' underperformance. A stress test does not create the fragility. It exposes it. The fragility was always there.
The architecture of the next decade
There is a version of the next few years in which the story gets worse before it gets better. More AI systems will be deployed into more workplaces, more public services, more clinical and educational and legal settings, before the regulatory substrate and the evaluation methodology have caught up. Disabled users, like global-south users, will continue to bear the cost of the gap between the technology's performance on the median user and its performance on them. The assistive add-on will continue to be the preferred industry response. The companies that could do the deeper work will continue to choose not to, because the incentive structure has not yet reversed.
There is also a version in which a combination of regulatory enforcement, procurement leverage from large public buyers, and sustained pressure from the research community and disability-led organisations starts to shift the defaults. The EU AI Act's first full enforcement cycle begins to bite through 2026 and 2027. The US ADA Title II deadline produces a wave of vendor audits. The DWP-style convenings, if they become regular rather than ceremonial, put the same five questions to the same companies repeatedly until the answers change. The Frontiers framework gets picked up by NIST, by the UK's AI Safety Institute, by the Alan Turing Institute, by standards bodies that have the authority to make accessibility evaluation a default reporting field. The 5 million-image datasets and the 120,000-query corpora stop being heroic one-off efforts and start being line items in platform R&D budgets.
Which of these futures arrives depends on something more prosaic than a technical breakthrough. It depends on whether the power and the incentive can be brought into alignment, and whether the regulatory architecture is built fast enough to do the aligning. The power currently sits with the labs. The incentive currently does not. The gap between the two is where the exclusion lives, and it will continue to live there until someone, regulator or buyer or coalition of both, moves it.
Return, finally, to the kitchen table in south London. The woman asking her smart speaker to call her sister is not, in the technology industry's current accounting, a failure case. She is not a test the product was designed to pass. She is, from the system's point of view, slightly outside the curve, which is another way of saying slightly outside the imagination of the people who built it. Her experience is what the Forbes analysis meant by a stress test. If the system works for her, it works. If it does not, it is not ready. The point of treating her as the test case is that it produces a higher-quality system for everyone, because the default person the models were trained to serve is not, in fact, most of the people the models are being sold to.
Alexiou's insight, and the convergence with Waters' framework and Chandran's reporting and the UK government's cautious but serious convening, is that inclusion is not a niche product requirement. It is the infrastructure question the next decade of this technology turns on. Who has the power to deliver it is mostly the labs. Who has the incentive is mostly the users, the regulators and the advocates. Closing that gap is the work. Nobody is going to close it by accident, and nobody is going to close it because it would be the nice thing to do. It will close when the cost of not closing it is made legible, and that is a job for law, for procurement, for journalism and for the disability community itself, not for a press release about a new AI-powered wearable. The kitchen-table speaker will start hearing her properly on the day the company that makes it has to report, publicly and disaggregated, how often it does not. Until then, she will keep pitching her voice higher and the rest of the market will keep pretending the model understands natural language.
References & Sources
- Alexiou, G. “Why Disability Is The Ultimate Stress Test For Artificial Intelligence.” Forbes, January 2026. https://www.forbes.com/sites/gusalexiou/
- Department for Work and Pensions. “Tech giants meet disability sector to break down barriers at work.” GOV.UK, 3 February 2026. https://www.gov.uk/government/news/tech-giants-meet-disability-sector-to-break-down-barriers-at-work
- Waters, G. “AI testing, evaluation, verification and validation for accessibility: a comprehensive framework.” Frontiers in Digital Health, Volume 7, 2025-2026. DOI: 10.3389/fdgth.2025.1679603. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1679603/full
- Chandran, R. “AI models from Western tech giants fail in overseas agricultural settings.” Rest of World, 12 March 2026. https://restofworld.org/2026/ai-agriculture-local-data/
- World Health Organization. “Disability.” WHO Fact Sheet, 2023 (updated 2024-2025). https://www.who.int/news-room/fact-sheets/detail/disability-and-health
- Morris, M. R. “AI and Accessibility: A Discussion of Ethical Considerations.” Communications of the ACM, Vol. 63 No. 6, June 2020. https://cacm.acm.org/magazines/2020/6/245157-ai-and-accessibility/fulltext
- Trewin, S., Basson, S., Muller, M., Branham, S., Treviranus, J., Gruen, D., Hebert, D., Lyckowski, N. and Manser, E. “Considerations for AI Fairness for People with Disabilities.” AI Matters, 5(3), 2019. https://sigai.acm.org/static/aimatters/5-3/AIMatters-5-3-05-Trewin.pdf
- Microsoft Research. “AI Fairness and Disability.” Project page and publications. https://www.microsoft.com/en-us/research/project/ai-fairness-and-disability/publications/
- Microsoft. “Shrinking the 'data desert': Inside efforts to make AI systems more inclusive of people with disabilities.” Microsoft Source. https://news.microsoft.com/source/features/diversity-inclusion/shrinking-the-data-desert/
- Global Disability Innovation Hub. “Changing lives through AI: a new Centre for Digital Language Inclusion.” 2025. https://www.disabilityinnovation.com/news/cdli
- Global Disability Innovation Hub. “Prof Catherine Holloway profile.” https://www.disabilityinnovation.com/who-we-are/our-team/dr-catherine-holloway
- Speech Accessibility Project, University of Illinois Urbana-Champaign. https://speechaccessibilityproject.beckman.illinois.edu/
- Nakalembe, C. et al. NASA Harvest and University of Maryland work on East African smallholder agriculture, as reported in Rest of World, 12 March 2026.
- Scope UK. Statements by Mark Hodgkinson, Chief Executive, at DWP roundtable, 3 February 2026. https://www.scope.org.uk/
- European Accessibility Act. Directive (EU) 2019/882, in force 28 June 2025. https://ec.europa.eu/social/main.jsp?catId=1202
- European Union. “Artificial Intelligence Act.” Regulation (EU) 2024/1689. https://artificialintelligenceact.eu/
- US Access Board. “Section 508 Standards.” https://www.access-board.gov/ict/
- US Department of Justice. “ADA Title II Web Content and Mobile Apps Rule.” Final rule published April 2024, compliance dates 2026-2027. https://www.ada.gov/resources/2024-03-08-web-rule/
- W3C. “Accessibility of AI Systems (AI-Accessibility).” WAI task force, 2024-2026. https://www.w3.org/WAI/
- Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. “On the Dangers of Stochastic Parrots.” FAccT 2021. https://dl.acm.org/doi/10.1145/3442188.3445922
- Shew, A. “Against Technoableism: Rethinking Who Needs Improvement.” W. W. Norton, 2023.
- Treviranus, J. “The Three Dimensions of Inclusive Design.” Inclusive Design Research Centre, OCAD University. https://legacy.inclusivedesign.ca/
- Muckrack. “Gus Alexiou journalist profile.” https://muckrack.com/gus-alexiou/articles
- Equality and Human Rights Commission. “Artificial intelligence, machine learning and automated decision-making: our guidance.” 2024-2025. https://www.equalityhumanrights.com/

Tim Green
UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
Listen to the free weekly SmarterArticles Podcast