SmarterArticles

Walk into any modern supermarket and you're being watched, analysed, and optimised. Not by human eyes, but by autonomous systems that track your movements, predict your preferences, and adjust their strategies in real time. The cameras don't just watch for shoplifters anymore; they feed data into machine learning models that determine which products appear on which shelves, how much they cost, and increasingly, which version of reality you see when you shop.

This isn't speculative fiction. By the end of 2025, more than half of consumers anticipate using AI assistants for shopping, according to Adobe, whilst 73% of top-performing retailers now rely on autonomous AI systems to handle core business functions. We're not approaching an AI-powered retail future; we're already living in it. The question isn't whether artificial intelligence will reshape how we shop, but whether this transformation serves genuine human needs or simply makes us easier to manipulate.

As retail embraces what industry analysts call “agentic AI” – systems that can reason, plan, and act independently towards defined goals – we face a profound shift in the balance of power between retailers and consumers. These systems don't just recommend products; they autonomously manage inventory, set prices, design store layouts, and curate individualised shopping experiences with minimal human oversight. They're active participants making consequential decisions about what we see, what we pay, and ultimately, what we buy.

The uncomfortable truth is that 72% of global shoppers report concern over privacy issues whilst interacting with AI during their shopping journeys, according to research from NVIDIA and UserTesting. Another survey found that 81% of consumers believe information collected by AI companies will be used in ways people find uncomfortable. Yet despite this widespread unease, the march towards algorithmic retail continues unabated. Gartner forecasts that by 2028, AI agents will autonomously handle about 15% of everyday business decisions, whilst 80% of retail executives expect their companies to adopt AI-powered intelligent automation by 2027.

Here's the central tension: retailers present AI as a partnership technology that enhances customer experience, offering personalised recommendations and seamless transactions. But strip away the marketing language and you'll find systems fundamentally designed to maximise profit, often through psychological manipulation that blurs the line between helpful suggestion and coercive nudging. When Tesco chief executive Ken Murphy announced plans to use Clubcard data and AI to “nudge” customers toward healthier choices at a September 2024 conference, the backlash was immediate. Critics noted this opened the door for brands to pay for algorithmic influence, creating a world where health recommendations might reflect the highest bidder rather than actual wellbeing.

This controversy illuminates a broader question: As AI systems gain autonomy over retail environments, who ensures they serve consumers rather than merely extract maximum value from them? Transparency alone, the industry's favourite answer, proves woefully inadequate. Knowing that an algorithm set your price doesn't tell you whether that price is fair, whether you're being charged more than the person next to you, or whether the system is exploiting your psychological vulnerabilities.

The Autonomy Paradox

The promise of AI-powered retail sounds seductive: shops that anticipate your needs before you articulate them, inventory systems that ensure your preferred products are always in stock, pricing that reflects real-time supply and demand rather than arbitrary markup. Efficiency, personalisation, and convenience, delivered through invisible computational infrastructure.

Reality proves more complicated. Behind the scenes, agentic AI systems are making thousands of autonomous decisions that shape consumer behaviour whilst remaining largely opaque to scrutiny. These systems analyse your purchase history, browsing patterns, location data, demographic information, and countless other signals to build detailed psychological profiles. They don't just respond to your preferences; they actively work to influence them.

Consider Amazon's Just Walk Out technology, promoted as revolutionary friction-free shopping powered by computer vision and machine learning. Walk in, grab what you want, walk out – the AI handles everything. Except reports revealed the system relied on more than 1,000 people in India watching and labelling videos to ensure accurate checkouts. Amazon countered that these workers weren't watching live video to generate receipts, that computer vision algorithms handled checkout automatically. But the revelation highlighted how “autonomous” systems often depend on hidden human labour whilst obscuring the mechanics of decision-making from consumers.

The technology raised another concern: biometric data collection without meaningful consent. Customers in New York City filed a lawsuit against Amazon in 2023 alleging unauthorised use of biometric data. Target faced similar legal action from customers claiming the retailer used biometric data without consent. These cases underscore a troubling pattern: AI systems collect and analyse personal information at unprecedented scale, often without customers understanding what data is gathered, how it's processed, or what decisions it influences.

The personalisation enabled by these systems creates what researchers call the “autonomy paradox.” AI-based recommendation algorithms may facilitate consumer choice and boost perceived autonomy, giving shoppers the feeling they're making empowered decisions. But simultaneously, these systems may undermine actual autonomy, guiding users toward options that serve the retailer's objectives whilst creating the illusion of independent choice. Academic research has documented this tension extensively; one study found that overly aggressive personalisation tactics backfire, leaving consumers feeling their autonomy has been undermined and eroding their trust.

Consumer autonomy, defined by researchers as “the ability of consumers to make independent informed decisions without undue influence or excessive power exerted by the marketer,” faces systematic erosion from AI systems designed explicitly to exert influence. The distinction between helpful recommendation and manipulative nudging becomes increasingly blurred when algorithms possess granular knowledge of your psychological triggers, financial constraints, and decision-making patterns.

Walmart provides an instructive case study in how this automation transforms both worker and consumer experiences. The world's largest private employer, with 2.1 million retail workers globally, has invested billions into automation. The company's AI systems can automate up to 90% of routine tasks. By the company's own estimates, about 65% of Walmart stores will be serviced by automation within five years. CEO Doug McMillon acknowledged in 2024 that “maybe there's a job in the world that AI won't change, but I haven't thought of it.”

Walmart's October 2024 announcement of its “Adaptive Retail” strategy revealed the scope of algorithmic transformation: proprietary AI systems creating “hyper-personalised, convenient and engaging shopping experiences” through generative AI, augmented reality, and immersive commerce platforms. The language emphasises consumer benefit, but the underlying objective is clear: using AI to increase sales and reduce costs. The company has been relatively transparent about employment impacts, offering free AI training through a partnership with OpenAI to prepare workers for “jobs of tomorrow.” Chief People Officer Donna Morris told employees the company's goal is helping everyone “make it to the other side.”

Yet the “other side” remains undefined. New positions focus on technology management, data analysis, and AI system oversight – roles requiring different skills than traditional retail positions. Whether this represents genuine opportunity or a managed decline of human employment depends largely on how honestly we assess AI's capabilities and limitations. What's certain is that as algorithmic systems make more decisions, fewer humans understand the full context of those decisions or possess authority to challenge them.

As these systems gain autonomy, human workers have less influence over retail operations whilst AI-driven decisions become harder to question or override. A store associate may see that an AI pricing algorithm is charging vulnerable customers more, but lack authority to intervene. A manager may recognise that automated inventory decisions are creating shortages in lower-income neighbourhoods, but have no mechanism to adjust algorithmic priorities. The systems operate at a scale and speed that makes meaningful human oversight practically impossible, even when it's theoretically required.

This erosion of human agency extends to consumers. When you walk through a “smart” retail environment, systems are making autonomous decisions about what you see and how you experience the space. Digital displays might show different prices to different customers based on their profiles. Promotional algorithms might withhold discounts from customers deemed willing to pay full price. Product placement might be dynamically adjusted based on real-time analysis of your shopping patterns. The store becomes a responsive environment, but one responding to the retailer's optimisation objectives, not your wellbeing.

You're not just buying products; you're navigating an environment choreographed by algorithms optimising for outcomes you may not share. The AI sees you as a probability distribution, a collection of features predicting your behaviour. It doesn't care about your wellbeing beyond how that affects your lifetime customer value. This isn't consciousness or malice; it's optimisation, which in some ways makes it more concerning. A human salesperson might feel guilty about aggressive tactics. An algorithm feels nothing whilst executing strategies designed to extract maximum value.

The scale of this transformation matters. We're not talking about isolated experiments or niche applications. A McKinsey report found that retailers using autonomous AI grew 50% faster than their competitors, creating enormous pressure on others to adopt similar systems or face competitive extinction. Early adopters capture 5–10% revenue increases through AI-powered personalisation and 30–40% productivity gains in marketing. These aren't marginal improvements; they're transformational advantages that reshape market dynamics and consumer expectations.

The Fairness Illusion

If personalisation represents AI retail's seductive promise, algorithmic discrimination represents its toxic reality. The same systems that enable customised shopping experiences also enable customised exploitation, charging different prices to different customers based on characteristics that may include protected categories like race, location, or economic status.

Dynamic pricing, where algorithms adjust prices based on demand, user behaviour, and contextual factors, has become ubiquitous. Retailers present this as market efficiency, prices reflecting real-time supply and demand. But research reveals more troubling patterns. AI pricing systems can adjust prices based on customer location, assuming consumers in wealthier neighbourhoods can afford more, leading to discriminatory pricing where lower-income individuals or marginalised groups are charged higher prices for the same goods.
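To make the mechanism concrete, here is a minimal sketch, using entirely hypothetical postcode profiles and a made-up markup rule, of how a pricing algorithm that never sees a protected attribute can still charge more where shoppers have fewer alternatives:

```python
# Toy illustration (hypothetical data): a pricing rule that keys off
# neighbourhood signals can produce discriminatory outcomes without ever
# being told anyone's demographic characteristics.

BASE_PRICE = 10.00

# Hypothetical per-postcode signals a retailer might infer from its data.
postcode_profiles = {
    "AB1": {"affluence_score": 0.9, "nearby_competitors": 5},
    "CD2": {"affluence_score": 0.3, "nearby_competitors": 1},
}

def dynamic_price(postcode: str) -> float:
    """Adjust price upward where customers have fewer alternatives."""
    profile = postcode_profiles[postcode]
    # Fewer competitors means less pressure to discount: the markup is
    # "demand-based" on paper, but in practice it can track exactly the
    # neighbourhoods whose residents are least able to shop elsewhere.
    scarcity_markup = 0.05 * (5 - profile["nearby_competitors"])
    return round(BASE_PRICE * (1 + scarcity_markup), 2)

print(dynamic_price("AB1"))  # wealthy area, many competitors: 10.0
print(dynamic_price("CD2"))  # few alternatives: 12.0
```

The point of the sketch is that no line of this code is overtly discriminatory; the disparate outcome emerges purely from the correlation between market alternatives and neighbourhood demographics.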

According to a 2021 Deloitte survey, 75% of consumers said they would stop using a company's products if they learned its AI systems treated certain customer groups unfairly. Yet a 2024 Deloitte report found that only 20% of organisations have formal bias testing processes for AI models, even though more than 75% use AI in customer-facing decisions. This gap between consumer expectations and corporate practice reveals the depth of the accountability crisis.

The mechanisms of algorithmic discrimination often remain hidden. Unlike historical forms of discrimination where prejudiced humans made obviously biased decisions, algorithmic bias emerges from data patterns, model architecture, and optimisation objectives that seem neutral on the surface. An AI system never explicitly decides to charge people in poor neighbourhoods more. Instead, it learns from historical data that people in certain postcodes have fewer shopping alternatives and adjusts prices accordingly, maximising profit through mathematical patterns that happen to correlate with protected characteristics.

This creates what legal scholars call “proxy discrimination” – discrimination that operates through statistically correlated variables rather than direct consideration of protected characteristics. The algorithm doesn't know you're from a marginalised community, but it knows your postcode, your shopping patterns, your browsing history, and thousands of other data points that collectively reveal your likely demographic profile with disturbing accuracy. It then adjusts prices, recommendations, and available options based on predictions about your price sensitivity, switching costs, and alternatives.
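Because the system itself never records a protected attribute, proxy effects typically surface only when an outside audit joins pricing outcomes to demographic data after the fact. A minimal sketch of such a disparate-impact check, with purely illustrative transaction figures, might look like:

```python
# Hypothetical audit sketch: compare average prices paid across groups.
# The pricing system never used the group label; the auditor attaches it
# afterwards to test whether proxy variables produced disparate outcomes.

from statistics import mean

# (group, price_paid) pairs — illustrative numbers only.
transactions = [
    ("group_a", 9.80), ("group_a", 10.10), ("group_a", 10.00),
    ("group_b", 11.90), ("group_b", 12.20), ("group_b", 12.00),
]

def mean_price(group: str) -> float:
    return mean(p for g, p in transactions if g == group)

def price_ratio(disadvantaged: str, reference: str) -> float:
    """Ratio above 1 means the disadvantaged group pays more on average."""
    return mean_price(disadvantaged) / mean_price(reference)

ratio = price_ratio("group_b", "group_a")
print(f"group_b pays {ratio:.2f}x the reference group's average price")
```

This is the kind of outcome-level test that regulators and auditors can run without access to the model's internals, which is precisely why audit access to deployment data matters more than access to source code.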

Legal and regulatory frameworks struggle to address this dynamic. Traditional anti-discrimination law focuses on intentional bias and explicit consideration of protected characteristics. But algorithmic systems can discriminate without explicit intent, through proxy variables and emergent patterns in training data. Proving discrimination requires demonstrating disparate impact, but when pricing varies continuously across millions of transactions based on hundreds of variables, establishing patterns becomes extraordinarily difficult.

The European Union has taken the strongest regulatory stance. The EU AI Act, which entered into force on 1 August 2024, classifies certain retail applications as “high-risk,” requiring mandatory transparency, human oversight, and impact assessment. Violations can trigger fines of up to 7% of global annual turnover for banned applications. Yet the Act won't be fully applicable until 2 August 2026, giving retailers years to establish practices that may prove difficult to unwind. Meanwhile, enforcement capacity remains uncertain: Member States have until 2 August 2025 to designate national competent authorities for oversight and market surveillance.

More fundamentally, the Act's transparency requirements may not translate to genuine accountability. Retailers can publish detailed technical documentation about AI systems whilst keeping the actual decision-making logic proprietary. They can demonstrate that systems meet fairness metrics on training data whilst those systems discriminate in deployment. They can establish human oversight that's purely ceremonial, with human reviewers lacking time, expertise, or authority to meaningfully evaluate algorithmic decisions.

According to a McKinsey report, only 18% of organisations have enterprise-wide councils for responsible AI governance. This suggests that even as regulations demand accountability, most retailers lack the infrastructure and commitment to deliver it. The AI market in retail is projected to grow from $14.24 billion in 2025 to $96.13 billion by 2030, registering a compound annual growth rate of 46.54%. That explosive growth far outpaces development of effective governance frameworks, creating a widening gap between technological capability and ethical oversight.
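The quoted growth rate can be sanity-checked with a few lines of arithmetic; the implied compound annual growth rate from those two figures comes out at roughly 46.5%, consistent with the cited 46.54%:

```python
# Sanity check of the market projection quoted above:
# $14.24bn (2025) growing to $96.13bn (2030), i.e. over five years.
start, end, years = 14.24, 96.13, 5

# CAGR solves start * (1 + r)**years == end for r.
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # ≈ 46.5%
```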

The technical challenges compound the regulatory ones. AI bias isn't simply a matter of bad data that can be cleaned up. Bias emerges from countless sources: historical data reflecting past discrimination, model architectures that amplify certain patterns, optimisation metrics that prioritise profit over fairness, deployment contexts where systems encounter situations unlike training data. Even systems that appear fair in controlled testing can discriminate in messy reality when confronted with edge cases and distributional shifts.

Research on algorithmic pricing highlights these complexities. Dynamic pricing exploits individual preferences and behavioural patterns, increasing information asymmetry between retailers and consumers. Techniques that create high search costs undermine consumers' ability to compare prices, lowering overall welfare. From an economic standpoint, these aren't bugs in the system; they're features, tools for extracting consumer surplus and maximising profit. The algorithm isn't malfunctioning when it charges different customers different prices; it's working exactly as designed.

When Tesco launched its “Your Clubcard Prices” trial, offering reduced prices on selected products based on purchase history, it presented the initiative as customer benefit. But privacy advocates questioned whether using AI to push customers toward specific choices went too far. In early 2024, consumer group Which? reported Tesco to the Competition and Markets Authority, claiming the company could be breaking the law with how it displayed Clubcard pricing. Tesco agreed to change its practices, but the episode illustrates how AI-powered personalisation can cross the line from helpful to manipulative, particularly when economic incentives reward pushing boundaries.

The Tesco controversy also revealed how difficult it is for consumers to understand whether they're benefiting from personalisation or being exploited by it. If the algorithm offers you a discount, is that because you're a valued customer or because you've been identified as price-sensitive and would defect to a competitor without the discount? If someone else doesn't receive the same discount, is that unfair discrimination or efficient price discrimination that enables the retailer to serve more customers? These questions lack clear answers, but the asymmetry of information means retailers know far more about what's happening than consumers ever can.

Building Genuine Accountability

If some 80% of consumers express unease about data privacy and algorithmic fairness, yet retail AI adoption accelerates regardless, we face a clear accountability gap. The industry's default response – “we'll be more transparent” – misses the fundamental problem: transparency without power is performance, not accountability.

Knowing how an algorithm works doesn't help if you can't challenge its decisions, opt out without losing essential services, or choose alternatives that operate differently. Transparency reports are worthless if they're written in technical jargon comprehensible only to specialists, or if they omit crucial details as proprietary secrets. Human oversight means nothing if humans lack authority to override algorithmic decisions or face pressure to defer to the system's judgment.

Genuine accountability requires mechanisms that redistribute power, not just information. Several frameworks offer potential paths forward, though implementing them demands political will that currently seems absent:

Algorithmic Impact Assessments with Teeth: The EU AI Act requires impact assessments for high-risk systems, but these need enforcement mechanisms beyond fines. Retailers deploying AI systems that significantly affect consumers should conduct thorough impact assessments before deployment, publish results in accessible language, and submit to independent audits. Crucially, assessments should include input from affected communities, not just technical teams and legal departments.

The Institute of Internal Auditors has developed an AI framework covering governance, data quality, performance monitoring, and ethics. ISACA's Digital Trust Ecosystem Framework provides guidance for auditing AI systems against responsible AI principles. But as a 2024 study noted, auditing for compliance currently lacks agreed-upon practices, procedures, taxonomies, and standards. Industry must invest in developing mature auditing practices that go beyond checkbox compliance to genuinely evaluate whether systems serve consumer interests. This means auditors need access to training data, model architectures, deployment metrics, and outcome data – information retailers currently guard jealously as trade secrets.

Mandatory Opt-Out Rights with Meaningful Alternatives: Current approaches to consent are fictions. When retailers say “you consent to algorithmic processing by using our services,” and the alternative is not shopping for necessities, that's coercion, not consent. Genuine accountability requires that consumers can opt out of algorithmic systems whilst retaining access to equivalent services at equivalent prices.

This might mean retailers must maintain non-algorithmic alternatives: simple pricing not based on individual profiling, human customer service representatives who can override automated decisions, store layouts not dynamically adjusted based on surveillance. Yes, this reduces efficiency. That's precisely the point. The question isn't whether AI can optimise operations, but whether optimisation should override human agency. The right to shop without being surveilled, profiled, and psychologically manipulated should be as fundamental as the right to read without government monitoring or speak without prior restraint.

Collective Bargaining and Consumer Representation: Individual consumers lack power to challenge retail giants' AI systems. The imbalance resembles labour relations before unionisation. Perhaps we need equivalent mechanisms for consumer power: organisations with resources to audit algorithms, technical expertise to identify bias and manipulation, legal authority to demand changes, and bargaining power to make demands meaningful.

Some European consumer protection groups have moved in this direction, filing complaints about AI systems and bringing legal actions challenging algorithmic practices. But these efforts remain underfunded and fragmented. Building genuine consumer power requires sustained investment and political support, including legal frameworks that give consumer organisations standing to challenge algorithmic practices, access to system documentation, and the ability to compel changes when bias or manipulation is demonstrated.

Algorithmic Sandboxes for Public Benefit: Retailers experiment with AI systems on live customers, learning from our behaviour what manipulation techniques work best. Perhaps we need public-interest algorithmic sandboxes where systems are tested for bias, manipulation, and privacy violations before deployment. Independent researchers would have access to examine systems, run adversarial tests, and publish findings.

Industry will resist, claiming proprietary concerns. But we don't allow pharmaceutical companies to skip clinical trials because drug formulas are trade secrets. If AI systems significantly affect consumer welfare, we can demand evidence they do more good than harm before permitting their use on the public. This would require regulatory frameworks that treat algorithmic systems affecting millions of people with the same seriousness we treat pharmaceutical interventions or financial products.

Fiduciary Duties for Algorithmic Retailers: Perhaps the most radical proposal is extending fiduciary duties to retailers whose AI systems gain significant influence over consumer decisions. When a system knows your preferences better than you consciously do, when it shapes what options you consider, when it's designed to exploit your psychological vulnerabilities, it holds power analogous to a financial adviser or healthcare provider.

Fiduciary relationships create legal obligations to act in the other party's interest, not just avoid overt harm. An AI system with fiduciary duties couldn't prioritise profit maximisation over consumer welfare. It couldn't exploit vulnerabilities even if exploitation increased sales. It would owe affirmative obligations to educate consumers about manipulative practices and bias. This would revolutionise retail economics. Profit margins would shrink. Growth would slow. Many current AI applications would become illegal. Precisely. The question is whether retail AI should serve consumers or extract maximum value from them. Fiduciary duties would answer clearly: serve consumers, even when that conflicts with profit.

The Technology-as-Partner Myth

Industry rhetoric consistently frames AI as a “partner” that augments human capabilities rather than replacing human judgment. Walmart's Donna Morris speaks of helping workers reach “the other side” through AI training. Technology companies describe algorithms as tools that empower retailers to serve customers better. The European Union's regulatory framework aims to harness AI benefits whilst mitigating risks.

This partnership language obscures fundamental power dynamics. AI systems in retail don't partner with consumers; they're deployed by retailers to advance retailer interests. The technology isn't neutral infrastructure that equally serves all stakeholders. It embodies the priorities and values of those who design, deploy, and profit from it.

Consider the economics. BCG data shows that 76% of retailers are increasing investment in AI, with 43% already piloting autonomous AI systems and another 53% evaluating potential uses. That investment follows the incentives: retailers fund AI systems that increase revenue and reduce costs. Systems that protect consumer privacy, prevent manipulation, or ensure fairness receive investment only when required by regulation or consumer pressure. The natural evolution of retail AI trends toward sophisticated behaviour modification and psychological exploitation, not because retailers are malicious, but because profit maximisation rewards these applications.

Academic research consistently finds that AI-enabled personalisation practices simultaneously enable increased possibilities for exerting hidden interference and manipulation on consumers, reducing consumer autonomy. Retailers face economic pressure to push boundaries, testing how much manipulation consumers tolerate before backlash threatens profits. The partnership framing obscures this dynamic, presenting what's fundamentally an adversarial optimisation problem as collaborative value creation.

The partnership framing also obscures questions about whether certain AI applications should exist at all. Not every technical capability merits deployment. Not every efficiency gain justifies its cost in human agency, privacy, or fairness. Not every profitable application serves the public interest.

When Tesco's chief executive floated using AI to nudge dietary choices, the appropriate response wasn't “how can we make this more transparent” but “should retailers have this power?” When Amazon develops systems to track customers through stores, analysing their movements and expressions, we shouldn't just ask “is this disclosed” but “is this acceptable?” When algorithmic pricing enables unprecedented price discrimination, the question isn't merely “is this fair” but “should this be legal?”

The technology-as-partner myth prevents us from asking these fundamental questions. It assumes AI deployment is inevitable progress, that our role is managing risks rather than making fundamental choices about what kind of retail environment we want. It treats consumer concerns about manipulation and surveillance as communication failures to be solved through better messaging rather than legitimate objections to be respected through different practices.

Reclaiming Democratic Control

The deeper issue is that retail AI development operates almost entirely outside public interest considerations. Retailers deploy systems based on profit calculations. Technology companies build capabilities based on market demand. Regulators respond to problems after they've emerged. At no point does anyone ask: What retail environment would best serve human flourishing? How should we balance efficiency against autonomy, personalisation against privacy, convenience against fairness? Who should make these decisions and through what process?

These aren't technical questions with technical answers. They're political and ethical questions requiring democratic deliberation. Yet we've largely delegated retail's algorithmic transformation to private companies pursuing profit, constrained only by minimal regulation and consumer tolerance.

Some argue that markets solve this through consumer choice. If people dislike algorithmic retail, they'll shop elsewhere, creating competitive pressure for better practices. But this faith in market solutions ignores the problem of market power. When most large retailers adopt similar AI systems, when small retailers lack capital to compete without similar technology, when consumers need food and clothing regardless of algorithmic practices, market choice becomes illusory.

The survey data confirms this. Despite 72% of shoppers expressing privacy concerns about retail AI, despite 81% believing AI companies will use information in uncomfortable ways, despite 75% saying they won't purchase from organisations they don't trust with data, retail AI adoption accelerates. This isn't market equilibrium reflecting consumer preferences; it's consumers accepting unpleasant conditions because alternatives don't exist or are too costly.

We need public interest involvement in retail AI development. This might include governments and philanthropic organisations funding development of AI systems designed around different values – privacy-preserving recommendation systems, algorithms that optimise for consumer welfare rather than profit, transparent pricing models that reject behavioural discrimination. These wouldn't replace commercial systems but would provide proof-of-concept for alternatives and competitive pressure toward better practices.

Public data cooperatives could give consumers collective ownership of their data, the ability to demand its deletion, and the power to negotiate terms for its use. This would rebalance power between retailers and consumers whilst enabling beneficial AI applications. Not-for-profit organisations could develop retail AI with explicit missions to benefit consumers, workers, and communities rather than maximise shareholder returns. B-corp structures might provide a middle ground: profit-making enterprises with binding commitments to broader stakeholder interests.

None of these alternatives are simple or cheap. All face serious implementation challenges. But the current trajectory, where retail AI develops according to profit incentives alone, is producing systems that concentrate power, erode autonomy, and deepen inequality whilst offering convenience and efficiency as compensation.

The Choice Before Us

Retail AI's trajectory isn't predetermined. We face genuine choices about how these systems develop and whose interests they serve. But making good choices requires clear thinking about what's actually happening beneath the marketing language.

Agentic AI systems are autonomous decision-makers, not neutral tools. They're designed to influence behaviour, not just respond to preferences. They optimise for objectives set by retailers, not consumers. As these systems gain sophistication and autonomy, they acquire power to shape individual behaviour and market dynamics in ways that can't be addressed through transparency alone.

The survey data showing widespread consumer concern about AI privacy and fairness isn't irrational fear of technology. It's reasonable response to systems designed to extract value through psychological manipulation and information asymmetry. The fact that consumers continue using these systems despite concerns reflects lack of alternatives, not satisfaction with the status quo.

Meaningful accountability requires more than transparency. It requires power redistribution through mechanisms like mandatory impact assessments with independent audits, genuine opt-out rights with equivalent alternatives, collective consumer representation with bargaining power, public-interest algorithmic testing, and potentially fiduciary duties for systems that significantly influence consumer decisions.

The EU AI Act represents progress but faces challenges in implementation and enforcement. Its transparency requirements may not translate to genuine accountability if human oversight is ceremonial and bias testing remains voluntary for most retailers. The gap between regulatory ambition and enforcement capacity creates space for practices that technically comply whilst undermining regulatory goals.

Perhaps most importantly, we need to reclaim agency over retail AI's development. Rather than treating algorithmic transformation as inevitable technological progress, we should recognise it as a set of choices about what kind of retail environment we want, who should make decisions affecting millions of consumers, and whose interests should take priority when efficiency conflicts with autonomy, personalisation conflicts with privacy, and profit conflicts with fairness.

None of this suggests that retail AI is inherently harmful or that algorithmic systems can't benefit consumers. Genuinely helpful applications exist: systems that reduce food waste through better demand forecasting, that help workers avoid injury through ergonomic analysis, that make products more accessible through improved logistics. The question isn't whether to permit retail AI but how to ensure it serves public interests rather than merely extracting value from the public.

That requires moving beyond debates about transparency and risk mitigation to fundamental questions about power, purpose, and the role of technology in human life. It requires recognising that some technically feasible applications shouldn't exist, that some profitable practices should be prohibited, that some efficiencies cost too much in human dignity and autonomy.

The invisible hand of algorithmic retail is rewriting the rules of consumer choice. Whether we accept its judgments or insist on different rules depends on whether we continue treating these systems as partners in progress or recognise them as what they are: powerful tools requiring democratic oversight and public-interest constraints.

By 2027, when hyperlocal commerce powered by autonomous AI becomes ubiquitous, when most everyday shopping decisions flow through algorithmic systems, when the distinction between genuine choice and choreographed behaviour has nearly dissolved, we'll have normalised one vision of retail's future. The question is whether it's a future we actually want, or simply one we've allowed by default.


Sources and References

Industry Reports and Market Research

  1. Adobe Digital Trends 2025: Consumer AI shopping adoption trends. Adobe Digital Trends Report, 2025. Available at: https://business.adobe.com/resources/digital-trends-2025.html

  2. NVIDIA and UserTesting: “State of AI in Shopping 2024”. Research report on consumer AI privacy concerns (72% expressing unease). Available at: https://www.nvidia.com/en-us/ai-data-science/generative-ai/

  3. Gartner: “Forecast: AI Agents in Business Decision Making Through 2028”. Gartner Research, October 2024. Predicts 15% autonomous decision-making by AI agents in everyday business by 2028.

  4. McKinsey & Company: “The State of AI in Retail 2024”. McKinsey Digital, 2024. Reports 50% faster growth for retailers using autonomous AI and 5-10% revenue increases through AI-powered personalisation. Available at: https://www.mckinsey.com/industries/retail/our-insights

  5. Boston Consulting Group (BCG): “AI in Retail: Investment Trends 2024”. BCG reports 76% of retailers increasing AI investment, with 43% piloting autonomous systems. Available at: https://www.bcg.com/industries/retail

  6. Deloitte: “AI Fairness and Bias Survey 2021”. Deloitte Digital, 2021. Found 75% of consumers would stop using products from companies with unfair AI systems.

  7. Deloitte: “State of AI in the Enterprise, 7th Edition”. Deloitte, 2024. Reports only 20% of organisations have formal bias testing processes for AI models.

  8. Mordor Intelligence: “AI in Retail Market Size & Share Analysis”. Industry report projecting growth from $14.24 billion (2025) to $96.13 billion (2030), 46.54% CAGR. Available at: https://www.mordorintelligence.com/industry-reports/artificial-intelligence-in-retail-market

Regulatory Documentation

  1. European Union: “Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act)”. Official Journal of the European Union, 1 August 2024. Full text available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj

  2. Competition and Markets Authority (UK): Tesco Clubcard Pricing Investigation Records, 2024. CMA investigation into Clubcard pricing practices following Which? complaint.

Legal Cases

  1. Amazon Biometric Data Lawsuit: New York City consumers vs. Amazon, filed 2023. Case concerning unauthorised biometric data collection through Just Walk Out technology. United States District Court, Southern District of New York.

  2. Target Biometric Data Class Action: Class action lawsuit alleging unauthorised biometric data use, 2024. Multiple state courts.

Corporate Statements and Documentation

  1. Walmart: “Adaptive Retail Strategy Announcement”. Walmart corporate press release, October 2024. Details on hyper-personalised AI shopping experiences and automation roadmap.

  2. Walmart: CEO Doug McMillon public statements on AI and employment transformation, 2024. Walmart investor relations communications.

  3. Walmart: Chief People Officer Donna Morris statements on AI training partnerships with OpenAI, 2024. Available through Walmart corporate communications.

  4. Tesco: CEO Ken Murphy speech at conference, September 2024. Discussed AI-powered health nudging using Clubcard data.

Technical and Academic Research Frameworks

  1. Institute of Internal Auditors (IIA): “Global Artificial Intelligence Auditing Framework”. IIA, 2024. Covers governance, data quality, performance monitoring, and ethics. Available at: https://www.theiia.org/

  2. ISACA: “Digital Trust Ecosystem Framework”. ISACA, 2024. Guidance for auditing AI systems against responsible AI principles. Available at: https://www.isaca.org/

  3. Academic Research on Consumer Autonomy: Multiple peer-reviewed studies on algorithmic systems' impact on consumer autonomy, including research on the “autonomy paradox” where AI recommendations simultaneously boost perceived autonomy whilst undermining actual autonomy. Key sources include:

    • Journal of Consumer Research: Studies on personalisation and consumer autonomy
    • Journal of Marketing: Research on algorithmic manipulation and consumer welfare
    • Information Systems Research: Technical analyses of recommendation system impacts
  4. Economic Research on Dynamic Pricing: Academic literature on algorithmic pricing, price discrimination, and consumer welfare impacts. Sources include:

    • Journal of Political Economy: Economic analyses of algorithmic pricing
    • American Economic Review: Research on information asymmetry in algorithmic markets
    • Management Science: Studies on dynamic pricing strategies and consumer outcomes

Additional Data Sources

  1. Survey on Consumer AI Trust: Multiple surveys cited reporting 81% of consumers believe AI companies will use information in uncomfortable ways. Meta-analysis of consumer sentiment research 2023-2024.

  2. Retail AI Adoption Statistics: Industry surveys showing 73% of top-performing retailers relying on autonomous AI systems, and 80% of retail executives expecting intelligent automation adoption by 2027.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk


#HumanInTheLoop #RetailAI #AlgorithmicManipulation #FairnessAndBias

Picture this: You open your favourite AI image generator, type “show me a CEO,” and hit enter. What appears? If you've used DALL-E 2, you already know the answer. Ninety-seven per cent of the time, it generates images of white men. Not because you asked for white men. Not because you specified male. But because somewhere in the algorithmic depths, someone's unexamined assumptions became your default reality.

Now imagine a different scenario. Before you can type anything, a dialogue box appears: “Please specify: What is this person's identity? Their culture? Their ability status? Their expression?” No bypass button. No “skip for now” option. No escape hatch.

Would you rage-quit? Call it unnecessary friction? Wonder why you're being forced to think about things that should “just work”?

That discomfort you're feeling? That's the point.

Every time AI generates a “default” human, it's making a choice. It's just not your choice. It's not neutral. And it certainly doesn't represent the actual diversity of human existence. It's a choice baked into training data, embedded in algorithmic assumptions, and reinforced every time we accept it without question.

The real question isn't whether AI should force us to specify identity, culture, ability, and expression. The real question is: why are we so comfortable letting AI make those choices for us?

The Invisible Default

Let's talk numbers, because the data is damning.

When researchers tested Stable Diffusion with the prompt “software developer,” the results were stark: one hundred per cent male, ninety-nine per cent light-skinned. The reality in the United States? One in five software developers identify as female, only about half identify as white. The AI didn't just miss the mark. It erased entire populations from professional existence.

The Bloomberg investigation into generative AI bias found similar patterns across platforms. “An attractive person” consistently generated light-skinned, light-eyed, thin people with European features. “A happy family”? Mostly smiling, white, heterosexual couples with kids. The tools even amplified stereotypes beyond real-world proportions, portraying almost all housekeepers as people of colour and all flight attendants as women.

A 2024 study examining medical professions found that Midjourney and Stable Diffusion depicted ninety-eight per cent of surgeons as white men. DALL-E 3 generated eighty-six per cent of cardiologists as male and ninety-three per cent with light skin tone. These aren't edge cases. These are systematic patterns.

The under-representation is equally stark. Female representations in occupational imagery fell significantly below real-world benchmarks: twenty-three per cent for Midjourney, thirty-five per cent for Stable Diffusion, forty-two per cent for DALL-E 2, compared to women making up 46.8 per cent of the actual U.S. labour force. Black individuals showed only two per cent representation in DALL-E 2, five per cent in Stable Diffusion, nine per cent in Midjourney, against a real-world baseline of 12.6 per cent.

But the bias extends to socioeconomic representations in disturbing ways. Ask Stable Diffusion for photos of an attractive person? Results were uniformly light-skinned. Ask for a poor person? Usually dark-skinned. Although in 2020 sixty-three per cent of food stamp recipients were white and twenty-seven per cent were Black, AI tools asked to generate someone receiving social services produced only non-white, primarily darker-skinned people.

This is the “default human” in AI: white, male, able-bodied, thin, young, hetero-normative, and depending on context, either wealthy and professional or poor and marginalised based on skin colour alone.

The algorithms aren't neutral. They're just hiding their choices better than we are.

The Developer's Dilemma

Here's the thought experiment: would you ship an AI product that refused to generate anything until users specified identity, culture, ability, and expression?

Be honest. Your first instinct is probably no. And that instinct reveals everything.

You're already thinking about user friction. Abandonment rates. Competitor advantage. Endless complaints. One-star reviews, angry posts, journalists asking why you're making AI harder to use.

But flip that question: why is convenience more important than representation? Why is speed more valuable than accuracy? Why is frictionless more critical than ethical?

We've optimised for the wrong things. Built systems that prioritise efficiency over equity, called it progress. Designed for the path of least resistance, then acted surprised when that path runs straight through the same biases we've always had.

UNESCO's 2024 study found that major language models associate women with “home” and “family” four times more often than men, whilst linking male-sounding names to “business,” “career,” and “executive” roles. Women were depicted as younger with more smiles, men as older with neutral expressions and anger. These aren't bugs. They're features of systems trained on a world that already has these biases.

A University of Washington study in 2024 investigated bias in resume-screening AI. They tested identical resumes, varying only names to reflect different genders and races. The AI favoured names associated with white males. Resumes with Black male names were never ranked first. Never.
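The name-swap methodology the Washington study used can be sketched in a few lines. This is a minimal counterfactual audit harness, not the study's actual code: the name pools, the resume template, and the toy scorer are all illustrative stand-ins for whatever screening model is under test.

```python
# Hypothetical demographic name pools for a counterfactual name-swap audit.
# The model under test is passed in as `score_resume`; everything else
# about each resume is held constant, so a fair model should score
# every group identically.
NAME_POOLS = {
    "white_male": ["Connor Walsh", "Jake Mueller"],
    "black_male": ["Darnell Washington", "Jamal Robinson"],
    "white_female": ["Emily Schmidt", "Claire Olsen"],
    "black_female": ["Keisha Jackson", "Aaliyah Brooks"],
}

RESUME_TEMPLATE = "{name}\nB.Sc. Computer Science\n5 years Python, SQL, AWS"

def audit(score_resume):
    """Score identical resumes that differ only in the candidate name,
    then report the mean score per demographic group."""
    results = {}
    for group, names in NAME_POOLS.items():
        scores = [score_resume(RESUME_TEMPLATE.format(name=n)) for n in names]
        results[group] = sum(scores) / len(scores)
    return results

# Toy scorer that ignores the name line entirely: it produces equal
# scores across groups, which is what an unbiased screen should do.
def length_only_scorer(resume_text):
    return len(resume_text.split("\n")[-1])

print(audit(length_only_scorer))
```

Any gap between group means under this harness is attributable to the name alone, which is exactly the disparity the study reported.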

This is what happens when we don't force ourselves to think about who we're building for. We build for ghosts of patterns past and call it machine learning.

The developer who refuses to ship mandatory identity specification is making a choice. They're choosing to let algorithmic biases do the work, so they don't have to. Outsourcing discomfort to the AI, then blaming training data when someone points out the harm.

Every line of code is a decision. Every default value is a choice. Every time you let the model decide instead of the user, you're making an ethical judgement about whose representation matters.

Would you ship it? Maybe the better question is: can you justify not shipping it?

The Designer's Challenge

For designers, the question cuts deeper. Would you build the interface that forces identity specification? Would it feel like good design, or moral design? Is there a difference?

Design school taught you to reduce friction. Remove barriers. Make things intuitive, seamless, effortless. The fewer clicks, the better. The less thinking required, the more successful the design. User experience measured in conversion rates and abandonment statistics.

But what if good design and moral design aren't the same thing? What if the thing that feels frictionless is actually perpetuating harm?

Research on intentional design friction suggests there's value in making users pause. Security researchers found that friction can reduce errors and support health behaviour change by disrupting automatic, “mindless” interactions. Agonistic design, an emerging framework, seeks to support agency over convenience. The core principle? Friction isn't always the enemy. Sometimes it's the intervention that creates space for better choices.

The Partnership on AI developed Participatory and Inclusive Demographic Data Guidelines for exactly this terrain. Their key recommendation: organisations should work with communities to understand their expectations of “fairness” when collecting demographic data. Consent processes must be clear, approachable, accessible, particularly for those most at risk of harm.

This is where moral design diverges from conventional good design. Good design makes things easy. Moral design makes things right. Sometimes those overlap. Often they don't.

Consider what mandatory identity specification would actually look like as an interface. Thoughtful categories reflecting real human diversity, not limited demographic checkboxes. Language respecting how people actually identify, not administrative convenience. Options for multiplicity, intersectionality, the reality that identity isn't a simple dropdown menu.

This requires input from communities historically marginalised by technology. Understanding that “ability” isn't binary, “culture” isn't nationality, “expression” encompasses more than presentation. It requires, fundamentally, that designers acknowledge they don't have all the answers.

The European Union's ethics guidelines specify that personal and group data should account for diversity in gender, race, age, sexual orientation, national origin, religion, health and disability, without prejudiced, stereotyping, or discriminatory assumptions.

But here's the uncomfortable truth: neutrality is a myth. Every design choice carries assumptions. The question is whether those assumptions are examined or invisible.

When Stable Diffusion defaulted to depicting a stereotypical suburban U.S. home for general prompts, it wasn't being neutral. It revealed that North America was the system's default setting despite more than ninety per cent of people living outside North America. That's not a technical limitation. That's a design failure.

The designer who builds an interface for mandatory identity specification isn't adding unnecessary friction. They're making visible a choice that was always being made. Refusing to hide behind the convenience of defaults. Saying: this matters enough to slow down for.

Would it feel like good design? Maybe not at first. Would it be moral design? Absolutely. Maybe it's time we redefined “good” to include “moral” as prerequisite.

The User's Resistance

Let's address the elephant: most users would absolutely hate this.

“Why do I have to specify all this just to generate an image?” “I just want a picture of a doctor, why are you making this complicated?” “This is ridiculous, I'm using the other tool.”

That resistance? It's real, predictable, and revealing.

We hate being asked to think about things we've been allowed to ignore. We resist friction because we've been conditioned to expect technology should adapt to us, not the other way round. We want tools that read our minds, not tools that make us examine assumptions.

But pause. Consider what that resistance actually means. When you're annoyed at being asked to specify identity, culture, ability, and expression, what you're really saying is: “I was fine with whatever default the AI was going to give me.”

That's the problem.

For people who match that default, the system works fine. White, male, able-bodied, hetero-normative users can type “show me a professional” and see themselves reflected back. The tool feels intuitive because it aligns with their reality. The friction is invisible because the bias works in their favour.

But for everyone else? Every default is a reminder the system wasn't built with them in mind. Every white CEO when they asked for a CEO, full stop, is a signal about whose leadership is considered normal. Every able-bodied athlete, every thin model, every heterosexual family is a message about whose existence is default and whose requires specification.

The resistance to mandatory identity specification is often loudest from people who benefit most from current defaults. That's not coincidence. It's how privilege works. When you're used to seeing yourself represented, representation feels like neutrality. When systems default to your identity, you don't notice they're making a choice at all.

Research on algorithmic fairness emphasises that involving not only data scientists and developers but also ethicists, sociologists, and representatives of affected groups is essential. But users are part of that equation. The choices we make, the resistance we offer, the friction we reject all shape what gets built and abandoned.

There's another layer worth examining: learnt helplessness. We've been told for so long that algorithms are neutral, that AI just reflects data, that these tools are objective. So when faced with a tool that makes those decisions visible, that forces us to participate in representation rather than accept it passively, we don't know what to do with that responsibility.

“I don't know how to answer these questions,” a user might say. “What if I get it wrong?” That discomfort, that uncertainty, that fear of getting representation wrong is actually closer to ethical engagement than the false confidence of defaults.

The U.S. Equal Employment Opportunity Commission's AI initiative acknowledges that fairness isn't something you can automate. It requires ongoing engagement, user input, and willingness to sit with discomfort.

Yes, users would resist. Yes, some would rage-quit. Yes, adoption rates might initially suffer. But the question isn't whether users would like it. The question is whether we're willing to build technology that asks more of us than passive acceptance of someone else's biases.

The Training Data Trap

The standard response to AI bias: we need better training data. More diverse data. More representative data. Fix the input, fix the output. Problem solved.

Except it's not that simple.

Yes, bias happens when training data isn't diverse enough. But the problem isn't just volume or variety. It's about what counts as data in the first place.

More data is gathered in Europe than in Africa, even though Africa has a larger population. Result? Algorithms that perform better for European faces than African faces. Free image databases for training AI to diagnose skin cancer contain very few images of darker skin. Researchers call this “Health Data Poverty,” where groups underrepresented in health datasets are less able to benefit from data-driven innovations.

You can't fix systematic exclusion with incremental inclusion. You can't balance a dataset built on imbalanced power structures and expect equity to emerge. The training data isn't just biased. It's a reflection of a biased world, captured through biased collection methods, labelled by biased people, and deployed in systems that amplify those biases.

Researchers at the University of Southern California have used quality-diversity algorithms to create diverse synthetic datasets that strategically “plug the gaps” in real-world training data. But synthetic data can only address representation gaps, not the deeper question of whose representation matters and how it gets defined.

Data augmentation techniques like rotation, scaling, flipping, and colour adjustments can create additional diverse examples. But if your original dataset assumes a “normal” body is able-bodied, augmentation just gives you more variations on that assumption.
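The geometric operations named above are simple enough to sketch without a library. This toy version works on a tiny grayscale "image" represented as a list of rows; a real pipeline would use something like torchvision or albumentations, and, as the paragraph argues, none of these transforms changes whose bodies the underlying sample depicts.

```python
# Minimal sketch of rotation and flipping augmentation on a
# row-major 2D image (list of rows). Illustrative only.

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def hflip(img):
    """Mirror the image left to right."""
    return [row[::-1] for row in img]

def augment(img):
    """Yield the original plus simple geometric variants."""
    yield img
    yield hflip(img)
    rotated = img
    for _ in range(3):
        rotated = rotate90(rotated)
        yield rotated

tiny = [[0, 1],
        [2, 3]]
variants = list(augment(tiny))
# Five variants from one sample: the dataset grows, but every new
# example is still a variation on the same underlying assumption.
```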

The World Health Organisation's guidance on large multi-modal models recommends mandatory post-release auditing by independent third parties, with outcomes disaggregated by user type including age, race, or disability. This acknowledges that evaluating fairness isn't one-time data collection. It's ongoing measurement, accountability, and adjustment.
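Disaggregation of this kind is mechanically simple: instead of one aggregate metric, the auditor reports the metric per user group. The sketch below uses illustrative field names, not any specific WHO schema, and a toy accuracy metric stands in for whatever outcome measure an audit would track.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute accuracy per group.

    records: iterable of dicts with 'group', 'prediction', 'label'
    (field names are illustrative).
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["prediction"] == r["label"])
    return {g: hits[g] / totals[g] for g in totals}

sample = [
    {"group": "18-30", "prediction": 1, "label": 1},
    {"group": "18-30", "prediction": 0, "label": 1},
    {"group": "65+", "prediction": 1, "label": 1},
    {"group": "65+", "prediction": 1, "label": 1},
]
print(disaggregated_accuracy(sample))
```

On this toy sample the aggregate accuracy is 75 per cent, which looks respectable; disaggregation reveals 50 per cent for one group and 100 per cent for the other, which is the disparity an aggregate figure hides.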

But here's what training data alone can't fix: the absence of intentionality. You can have the most diverse dataset in the world, but if your model defaults to the most statistically common representation for ambiguous prompts, you're back to the same problem. Frequency isn't fairness. Statistical likelihood isn't ethical representation.

This is why mandatory identity specification isn't about fixing training data. It's about refusing to let statistical patterns become normative defaults. Recognising that “most common” and “most important” aren't the same thing.

The Partnership on AI's guidelines emphasise that organisations should focus on the needs and risks of groups most at risk of harm throughout the demographic data lifecycle. This isn't something you can automate. It requires human judgement, community input, and willingness to prioritise equity over efficiency.

Training data is important. Diversity matters. But data alone won't save us from the fundamental design choice we keep avoiding: who gets to be the default?

The Cost of Convenience

Let's be specific about who pays the price when we prioritise convenience over representation.

People with disabilities are routinely erased from AI-generated imagery unless explicitly specified. Even then, representation often falls into stereotypes: wheelchair users depicted in ways that centre the wheelchair rather than the person, prosthetics shown as inspirational rather than functional, neurodiversity rendered invisible because it lacks visual markers that satisfy algorithmic pattern recognition.

Cultural representation defaults to Western norms. When Stable Diffusion generates “a home,” it shows suburban North American architecture. “A meal” becomes Western food. For billions whose homes, meals, and traditions don't match these patterns, every default is a reminder the system considers their existence supplementary.

Gender representation extends beyond the binary in reality, but AI systems struggle with this. Non-binary, genderfluid, and trans identities are invisible in defaults or require specific prompting others don't need. The same UNESCO study that found women associated with home and family four times more often than men didn't even measure non-binary representation, because the training data and output categories didn't account for it.

Age discrimination appears through consistent skewing towards younger representations in positive contexts. “Successful entrepreneur” generates someone in their thirties. “Wise elder” generates someone in their seventies. The idea that older adults are entrepreneurs or younger people are wise doesn't compute in default outputs.

Body diversity is perhaps the most visually obvious absence. AI-generated humans are overwhelmingly thin, able-bodied, and conventionally attractive by narrow, Western-influenced standards. When asked to depict “an attractive person,” tools generate images that reinforce harmful beauty standards rather than reflect actual human diversity.

Socioeconomic representation maps onto racial lines in disturbing ways. Wealth and professionalism depicted as white. Poverty and social services depicted as dark-skinned. These patterns don't just reflect existing inequality. They reinforce it, creating a visual language that associates race with class in ways that become harder to challenge when automated.

The cost isn't just representational. It's material. When AI resume-screening tools favour white male names, that affects who gets job interviews. When medical AI is trained on datasets without diverse skin tones, that affects diagnostic accuracy. When facial recognition performs poorly on darker skin, that affects who gets falsely identified, arrested, or denied access.

Research shows algorithmic bias has real-world consequences across employment, healthcare, criminal justice, and financial services. These aren't abstract fairness questions. They're about who gets opportunities, care, surveillance, and exclusion.

Every time we choose convenience over mandatory specification, we're choosing to let those exclusions continue. We're saying the friction of thinking about identity is worse than the harm of invisible defaults. We're prioritising the comfort of users who match existing patterns over the dignity of those who don't.

Inclusive technology development requires respecting human diversity at every stage: data collection, fairness decisions, and outcome explanation. But respect requires visibility. You can't include people you've made structurally invisible.

This is the cost of convenience: entire populations treated as edge cases, their existence acknowledged only when explicitly requested, their representation always contingent on someone remembering to ask for it.

The Ethics of Forcing Choice

We've established the problem, explored the resistance, counted the cost. But there's a harder question: is mandatory identity specification actually ethical?

Because forcing users to categorise people has its own history of harm. Census categories used for surveillance and discrimination. Demographic checkboxes reducing complex identities to administrative convenience. Identity specification weaponised against the very populations it claims to count.

There's real risk that mandatory specification could become another form of control rather than liberation. Imagine a system requiring you to choose from predetermined categories that don't reflect how you actually understand identity. Being forced to pick labels that don't fit, to quantify aspects of identity that resist quantification.

The Partnership on AI's guidelines acknowledge this tension. They emphasise that consent processes must be clear, approachable, accessible, particularly for those most at risk of harm. This suggests mandatory specification only works if the specification itself is co-designed with the communities being represented.

There's also the question of privacy. Requiring identity specification means collecting information that could be used for targeting, discrimination, or surveillance. In contexts where being identified as part of a marginalised group carries risk, mandatory disclosure could cause harm rather than prevent it.

But these concerns point to implementation challenges, not inherent failures. The fundamental question remains: should AI generate human representations at all without explicit user input about who those humans are?

One alternative: refusing to generate without specification. Instead of defaults and instead of forcing choice, the tool simply doesn't produce output for ambiguous prompts. “Show me a CEO” returns: “Please specify which CEO you want to see, or provide characteristics that matter to your use case.”

This puts cognitive labour back on the user without forcing them through predetermined categories. It makes the absence of defaults explicit rather than invisible. It says: we won't assume, and we won't let you unknowingly accept our assumptions either.
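A "refuse without specification" gate could be as simple as the sketch below. Everything here is assumed rather than drawn from any real product: the attribute list, the crude keyword test for human subjects, and the error wording are all placeholders for what a co-designed system would need to do far more carefully.

```python
# Hypothetical "no defaults" gate: refuse ambiguous human prompts
# until the user supplies the attributes the article discusses.
# Attribute names, keyword list, and message text are illustrative.
REQUIRED_ATTRIBUTES = ("identity", "culture", "ability", "expression")
HUMAN_TERMS = {"person", "ceo", "doctor", "developer", "family"}

def gate(prompt, attributes):
    """Return (prompt, attributes) if the request is fully specified,
    or raise if a human subject is requested without specification."""
    mentions_human = any(t in prompt.lower().split() for t in HUMAN_TERMS)
    missing = [a for a in REQUIRED_ATTRIBUTES if a not in attributes]
    if mentions_human and missing:
        raise ValueError(
            f"Please specify {', '.join(missing)} for the person you want to see."
        )
    return prompt, attributes
```

Under this sketch, "show me a CEO" with no attributes is rejected with a request for the missing ones, while the same prompt with all four supplied, or a prompt with no human subject at all, passes straight through.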

Another approach is transparent randomisation. Instead of defaulting to the most statistically common representation, the AI randomly generates across documented dimensions of diversity. Every request for “a doctor” produces genuinely unpredictable representation. Over time, users would see the full range of who doctors actually are, rather than a single algorithmic assumption repeated infinitely.
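The transparent-randomisation idea can also be sketched briefly. The dimension names and option lists below are illustrative placeholders, not a proposed taxonomy; the point of the sketch is the mechanism: unspecified attributes are drawn uniformly rather than defaulted to the statistically most common value, and every randomised choice is logged so the user can see it.

```python
import random

# Illustrative diversity dimensions; a real system would co-design
# these categories with the communities being represented.
DIMENSIONS = {
    "gender": ["woman", "man", "non-binary person"],
    "age": ["young adult", "middle-aged", "older adult"],
    "skin_tone": ["light", "medium", "dark"],
    "ability": ["no visible disability", "wheelchair user", "prosthetic limb"],
}

def randomise(user_spec, rng=random):
    """Fill any dimension the user left unspecified with a uniform draw,
    and record which values were randomised so the choice stays visible."""
    filled, randomised = {}, []
    for dim, options in DIMENSIONS.items():
        if dim in user_spec:
            filled[dim] = user_spec[dim]
        else:
            filled[dim] = rng.choice(options)
            randomised.append(dim)
    return filled, randomised

spec, noted = randomise({"gender": "woman"})
# 'gender' is kept as specified; the other three dimensions are
# drawn uniformly and listed in `noted` for the user to see.
```

Over many requests the uniform draw surfaces the full range of options rather than one repeated default, which is exactly the behaviour the paragraph describes.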

The ethical frameworks emerging from UNESCO, the European Union, and the WHO emphasise transparency, accountability, inclusivity, and long-term societal impact. They stress that inclusivity must guide model development, actively engaging underrepresented communities to ensure equitable access to decision-making power.

The ethics of mandatory specification depend on who's doing the specifying and who's designing the specification process. A mandatory identity form designed by a homogeneous tech team would likely replicate existing harms. A co-designed specification process built with meaningful input from diverse communities might actually achieve equitable representation.

The question isn't whether mandatory specification is inherently ethical. The question is whether it can be designed ethically, and whether the alternative, continuing to accept invisible, biased defaults, is more harmful than the imperfect friction of being asked to choose.

What Comes After Default

What would it actually look like to build AI systems that refuse to generate humans without specified identity, culture, ability, and expression?

First, fundamental changes to how we think about user input. Instead of treating specification as friction to minimise, we'd design it as engagement to support. The interface wouldn't be a form. It would be a conversation about representation, guided by principles of dignity and accuracy rather than administrative efficiency.

This means investing in interface design that respects complexity. Drop-down menus don't capture how identity works. Checkboxes can't represent intersectionality. We'd need systems allowing for multiplicity, context-dependence, “it depends” and “all of the above” and “none of these categories fit.”

Research on value-sensitive design offers frameworks for this development. These approaches emphasise involving diverse stakeholders throughout the design process, not as an afterthought but as core collaborators. They recognise that people are experts in their own experiences and that technology works better when built with rather than for.

Second, transparency about what specification actually does. Users need to understand how identity choices affect output, what data is collected, how it's used, what safeguards exist against misuse. The EU's AI Act and emerging ethics legislation mandate this transparency, but it needs to go beyond legal compliance to genuine user comprehension.

Third, ongoing iteration and accountability. Getting representation right isn't a one-time achievement. It's continuous listening, adjusting, acknowledging when systems cause harm despite good intentions. This means building feedback mechanisms accessible to people historically excluded from tech development, and actually acting on that feedback.

The World Health Organisation's recommendation for mandatory post-release auditing by independent third parties provides a model. Regular evaluation disaggregated by user type, with results made public and used to drive improvement, creates accountability most current AI systems lack.
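One hedged sketch of what such disaggregated evaluation could look like in practice. The function name, the uniform reference distribution, and the tolerance threshold are all illustrative assumptions, not the WHO's methodology; a real audit would use task-appropriate reference rates and reviewer-annotated outputs:

```python
from collections import Counter

def audit_representation(samples, dimension_values, tolerance=0.10):
    """Compare observed frequencies for one identity dimension against a
    uniform reference, flagging values whose share deviates by more than
    `tolerance`. `samples` is a list of labels (e.g. the annotated gender
    expression of each generated image)."""
    counts = Counter(samples)
    expected = 1.0 / len(dimension_values)
    report = {}
    for value in dimension_values:
        observed = counts.get(value, 0) / max(len(samples), 1)
        report[value] = {
            "observed": round(observed, 3),
            "expected": round(expected, 3),
            "flagged": abs(observed - expected) > tolerance,
        }
    return report

# Example: 100 generated "CEO" images, annotated by perceived expression
labels = ["masculine"] * 82 + ["feminine"] * 15 + ["androgynous"] * 3
report = audit_representation(
    labels, ["masculine", "feminine", "androgynous"]
)
for value, row in report.items():
    print(value, row)
```

Publishing reports like this per user type and per release is the kind of disaggregated, public accountability the recommendation points towards.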

Fourth, accepting that some use cases shouldn't exist. If your business model depends on generating thousands of images quickly without thinking about representation, maybe that's not a business model we should enable. If your workflow requires producing human representations at scale without considering who those humans are, maybe that workflow is the problem.

This is where the developer question comes back with force: would you ship it? Because shipping a system that refuses to generate without specification means potentially losing market share to competitors who don't care. It means explaining to investors why you're adding friction when the market rewards removing it. Standing firm on ethics when pragmatism says compromise.

Some companies won't do it. Some markets will reward the race to the bottom. But that doesn't mean developers, designers, and users who care about equitable technology are powerless. It means building different systems, supporting different tools, creating demand for technology that reflects different values.

Fifth, acknowledging that AI-generated human representation might need constraints we haven't seriously considered. Should AI generate human faces at all, given deepfakes and identity theft risks? Should certain kinds of representation require human oversight rather than algorithmic automation?

These questions make technologists uncomfortable because they suggest limits on capability. But capability without accountability is just power. We've seen enough of what happens when power gets automated without asking who it serves.

The Choice We're Actually Making

Every time AI generates a default human, we're making a choice about whose existence is normal and whose requires explanation.

Every white CEO. Every thin model. Every able-bodied athlete. Every heterosexual family. Every young professional. Every Western context. These aren't neutral outputs. They're choices embedded in training data, encoded in algorithms, reinforced by our acceptance.

The developers who won't ship mandatory identity specification are choosing defaults over dignity. The designers who prioritise frictionless over fairness are choosing convenience over complexity. The users who rage-quit rather than specify identity are choosing comfort over consciousness.

And the rest of us, using these tools without questioning what they generate, we're choosing too. Choosing to accept that “a person” means a white person unless otherwise specified. That “a professional” means a man. That “attractive” means thin and young and able-bodied. That “normal” means matching a statistical pattern rather than reflecting human reality.

These choices have consequences. They shape what we consider possible, who we imagine in positions of power, which bodies we see as belonging in which spaces. They influence hiring decisions and casting choices and whose stories get told and whose get erased. They affect children growing up wondering why AI never generates people who look like them unless someone specifically asks for it.

Mandatory identity specification isn't a perfect solution. It carries risks. But it does something crucial: it makes the choice visible. It refuses to hide behind algorithmic neutrality. It says representation matters enough to slow down for, to think about, to get right.

The question posed at the start was whether developers would ship it, designers would build it, users would accept it. But underneath that question is a more fundamental one: are we willing to acknowledge that AI is already forcing us to make choices about identity, culture, ability, and expression? We just let the algorithm make those choices for us, then pretend they're not choices at all.

What if we stopped pretending?

What if we acknowledged there's no such thing as a default human, only humans in all our specific, particular, irreducible diversity? What if we built technology that reflected that truth instead of erasing it?

This isn't about making AI harder to use. It's about making AI honest about what it's doing. About refusing to optimise away the complexity of human existence in the name of user experience. About recognising that the real friction isn't being asked to specify identity. The real friction is living in a world where AI assumes you don't exist unless someone remembers to ask for you.

The technology we build reflects the world we think is possible. Right now, we're building technology that says defaults are inevitable, bias is baked in, equity is a nice-to-have rather than foundational.

We could build differently. We could refuse to ship tools that generate humans without asking which humans. We could design interfaces that treat specification as respect rather than friction. We could use AI in ways that acknowledge rather than erase our responsibility for representation.

The question isn't whether AI should force us to specify identity, culture, ability, and expression. The question is why we're so resistant to admitting that AI is already making those specifications for us, badly, and we've been accepting it because it's convenient.

Convenience isn't ethics. Speed isn't justice. Frictionless isn't fair.

Maybe it's time we built technology that asks more of us. Maybe it's time we asked more of ourselves.


Sources and References

Bloomberg. (2023). “Generative AI Takes Stereotypes and Bias From Bad to Worse.” Bloomberg Graphics. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Brookings Institution. (2024). “Rendering misrepresentation: Diversity failures in AI image generation.” https://www.brookings.edu/articles/rendering-misrepresentation-diversity-failures-in-ai-image-generation/

Currie, G., Currie, J., Anderson, S., & Hewis, J. (2024). “Gender bias in generative artificial intelligence text-to-image depiction of medical students.” https://journals.sagepub.com/doi/10.1177/00178969241274621

European Commission. (2024). “Ethics guidelines for trustworthy AI.” https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Gillespie, T. (2024). “Generative AI and the politics of visibility.” Sage Journals. https://journals.sagepub.com/doi/10.1177/20539517241252131

MDPI. (2024). “Perpetuation of Gender Bias in Visual Representation of Professions in the Generative AI Tools DALL·E and Bing Image Creator.” Social Sciences, 13(5), 250. https://www.mdpi.com/2076-0760/13/5/250

MDPI. (2024). “Gender Bias in Text-to-Image Generative Artificial Intelligence When Representing Cardiologists.” Information, 15(10), 594. https://www.mdpi.com/2078-2489/15/10/594

Nature. (2024). “AI image generators often give racist and sexist results: can they be fixed?” https://www.nature.com/articles/d41586-024-00674-9

Partnership on AI. (2024). “Prioritizing Equity in Algorithmic Systems through Inclusive Data Guidelines.” https://partnershiponai.org/prioritizing-equity-in-algorithmic-systems-through-inclusive-data-guidelines/

Taylor & Francis Online. (2024). “White Default: Examining Racialized Biases Behind AI-Generated Images.” https://www.tandfonline.com/doi/full/10.1080/00043125.2024.2330340

UNESCO. (2024). “Ethics of Artificial Intelligence.” https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

University of Southern California Viterbi School of Engineering. (2024). “Diversifying Data to Beat Bias.” https://viterbischool.usc.edu/news/2024/02/diversifying-data-to-beat-bias/

Washington Post. (2023). “AI generated images are biased, showing the world through stereotypes.” https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/

World Health Organisation. (2024). “WHO releases AI ethics and governance guidance for large multi-modal models.” https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models

World Health Organisation. (2024). “Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models.” https://www.who.int/publications/i/item/9789240084759


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk


#HumanInTheLoop #RepresentationInAI #FairnessAndBias #InclusiveDesign