The Boss in the Code: Labour Rights When Machines Manage You

On a morning in Yogyakarta in early 2026, a food delivery rider named Lia was up before sunrise. She is 33, a mother of two, and the day started the way every day starts: breakfast for the children, uniforms located, school bags checked, the smaller one coaxed into shoes. Only once the front door had closed behind them did she open the app and begin looking for work. The algorithm that had ignored her for the previous ninety minutes registered her presence and began handing her orders. By the time she returned home that evening to cook, to clean, to help with homework, to do the second shift that nobody paid her for, the app had logged two cancellations against her name. One was a safety decision. The other was a child's fever. Neither was an excuse the system recognised. Her acceptance rate had slipped, her priority score with it, and the next morning the best-paying jobs would go to someone with fewer domestic obligations. Lia does not know exactly how the algorithm ranks her. No one does. The rules are not published. There is no one she can write to. There is, strictly speaking, no one.
Lia's story opens an analysis published in The Conversation on 12 April 2026 by Suci Lestari Yuana, a lecturer at the Faculty of Social and Political Sciences at Universitas Gadjah Mada, Yogyakarta, who holds a PhD in Innovation Studies from Utrecht University. The analysis, drawing on ethnographic fieldwork with Indonesian gig workers, makes an argument the academic literature has been edging toward for the better part of a decade and which has now, in 2026, become impossible to ignore. Algorithmic management of platform labour, presented by its designers as neutral, is in operation a machine for systematically disadvantaging anyone whose working pattern deviates from the profile of a worker with no care responsibilities. That profile, in Indonesia and almost everywhere else, is male. The discrimination is not encoded; it is structural. The algorithm does not hate women. It simply does not see them.
The Indonesian case is one node in a much larger story. On 25 December 2025, around 40,000 delivery workers across Mumbai, Delhi, Hyderabad, Bengaluru, and a scatter of smaller cities walked off the platforms of Swiggy, Zomato, Zepto, Blinkit, Amazon, and Flipkart in a flash strike that delayed roughly half of Indian food and quick-commerce orders for a day. The strike was organised by the Indian Federation of App-Based Transport Workers and the Telangana Gig and Platform Workers Union, fronted by Shaik Salauddin, the Telangana Four-Wheeler Drivers' Association veteran who has spent a decade turning ride-hail grievance into pan-Indian labour infrastructure. The demand list read like a catalogue of what algorithmic management produces when there is nothing to restrain it: transparent wage structures, an end to the ten-minute quick-commerce delivery target that had been killing couriers, guaranteed work allocation, mandatory rest breaks, a real grievance mechanism, and an end to deactivations that arrived without warning and without appeal. A follow-up strike on New Year's Eve extended the point. Within weeks the Union Government of India had directed the quick-commerce platforms to stop advertising the ten-minute promise. That was the easy win. Everything else is still being fought for.
And then, published in February 2026 in Humanities and Social Sciences Communications, a Nature portfolio journal, a configurational analysis of 316 longitudinally surveyed platform gig workers confirmed what workers have been saying all along. Perceptions of decent work on AI-managed platforms emerge through a handful of distinct pathways, almost all of which depend on worker characteristics the platform algorithms do not register, cannot see, and do not accommodate. The study did not say the platforms are uniformly terrible. It said, more awkwardly, that whether a worker experiences anything resembling the International Labour Organization's concept of decent work is determined by the collision between that worker's life circumstances and an algorithm that does not know those circumstances exist.
Put the three documents side by side and a single question rises out of them. If algorithmic management is now the dominant form of labour oversight for hundreds of millions of people globally, if the entity issuing pay, allocating tasks, assessing performance, and terminating contracts is a system rather than a person, what would it mean to say that those people have labour rights at all?
The Shape of the Void
The most useful way to understand the legal position of a gig worker managed by an algorithm is to begin with what they do not have. They do not have an employer in the sense that labour law in most jurisdictions recognises. They do not have a contract of employment. They do not have the protections attached to that contract: minimum wage floors, statutory leave, sickness pay, notice periods, redundancy procedures, anti-discrimination duties, pension contributions, collective bargaining recognition. What they have, in the standard platform model, is a commercial relationship with a company that characterises them as an independent contractor or, in the Indonesian idiom used by ride-hailing platforms, a “partner.”
The partnership is one-sided. The worker accepts terms of service they cannot negotiate. The platform can change those terms at any time and typically does. Pay per task is set by a dynamic pricing algorithm whose inputs the worker cannot see and whose outputs they cannot predict. Work is allocated by a matching algorithm that considers factors the platform describes vaguely as availability, reliability, and proximity, and which in practice include acceptance history, ratings averages, and responsiveness windows that only loosely track what a worker might recognise as merit. Penalties for unavailability, late delivery, low ratings, or customer complaints are applied without a hearing. Deactivation, the platform term for sacking, can be triggered by a single passenger complaint, by a fraud-detection model's pattern match, or by an opaque review whose outcome arrives in a template email. There is, in most cases, no right of appeal that an impartial reader would recognise as meaningful.
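The mechanics described above can be made concrete with a toy model. The sketch below is hypothetical: the field names, weights, and scoring formula are illustrative assumptions, not any platform's real system. Its point is structural rather than numerical: the record holds no field for why a job was cancelled, so a child's fever and a safety refusal are arithmetically identical.

```python
# Hypothetical sketch of a priority score of the kind described above.
# Every name, weight, and formula here is an illustrative assumption,
# not any platform's real model.
from dataclasses import dataclass

@dataclass
class WorkerRecord:
    offers_received: int = 0
    offers_accepted: int = 0
    jobs_cancelled: int = 0

    @property
    def acceptance_rate(self) -> float:
        if self.offers_received == 0:
            return 1.0
        return self.offers_accepted / self.offers_received

def priority_score(w: WorkerRecord) -> float:
    # The structural point: there is no field for *why* a job was
    # cancelled. A child's fever and a safety refusal are identical here,
    # and the weights below are invisible to the worker being scored.
    cancel_penalty = min(w.jobs_cancelled / 10.0, 1.0)
    return 0.8 * w.acceptance_rate - 0.2 * cancel_penalty

rider = WorkerRecord(offers_received=100, offers_accepted=95)
before = priority_score(rider)
rider.jobs_cancelled += 2    # one fever, one safety decision: scored identically
rider.offers_accepted -= 2   # cancellations also drag the acceptance rate down
after = priority_score(rider)  # after < before, and the reasons appear nowhere
```

Transparency obligations of the kind discussed later in this piece would, at minimum, require the worker to be able to see the weights in a function like this one; what they cannot supply, on their own, is a field the model never collected.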
This is the void the legal system has not yet filled. The 2025 Human Rights Watch report The Gig Trap, which examined seven major American platforms including Amazon Flex, DoorDash, Instacart, Lyft, Shipt, and Uber, found that six of the seven used algorithms with opaque rules to determine pay and assign jobs, and that workers routinely did not know what they would earn until after completing a task. Texas gig workers surveyed earned nearly 30 per cent below the federal minimum wage and roughly 70 per cent below the MIT-estimated Texas living wage, and deactivation without warning was a structural feature of the industry rather than an aberration. Of the 65 workers Human Rights Watch surveyed who feared deactivation, 40 had already experienced it. The Fairwork 2025 United States ratings, titled When AI Eats the Manager and produced by the Oxford Internet Institute with the WZB Berlin Social Science Centre, found that the majority of the eleven platforms assessed, including Uber, Lyft, DoorDash, and Instacart, could not evidence they met the minimum thresholds of any of Fairwork's five principles: fair pay, fair conditions, fair contracts, fair management, and fair representation.
Three things follow from this architecture. First, the decision-maker is a system. Human intervention is present somewhere in the loop, but as Amsterdam's Court of Appeal found in 2023 in its judgment on the Drivers v. Uber and Ola cases, a human signature on a termination decision produced by a model does not count as meaningful review when the reviewer in practice does little more than endorse the algorithmic output. The Court called it a “purely symbolic act.” Second, the decision is opaque. The worker cannot know why a rate fell, why an order went to someone else, why an account was suspended. The rules are trade secrets; the training data is private; the weightings are proprietary. Third, the decision is unappealable in the sense that would matter to a lawyer. There is no tribunal. There is a support chat, often another bot, and a form. If the form does not help, the worker's recourse is to find another platform.
Contract law applies. Consumer protection law applies at the margins. Data protection law, in jurisdictions that have it, applies in a way that is slowly becoming useful. But the dense, historically accumulated body of labour law, the workplace-specific settlement Western democracies spent a century building and much of the rest of the world has been extending imperfectly ever since, does not. The gig worker managed by an algorithm stands in a relation to their livelihood that looks, from one angle, like self-employment, from another, like serfdom, and from a third, like nothing the law has seen before.
What Algorithmic Wage Discrimination Actually Looks Like
The scholar who has done most to name the pay side of this problem is Veena Dubal, a professor of law at the University of California, Irvine, whose 2023 Columbia Law Review paper On Algorithmic Wage Discrimination coined the term and grounded it in nearly a decade of ethnographic fieldwork with ride-hail drivers in the San Francisco Bay Area. Dubal's core observation is almost embarrassing in its plainness. Platforms that once paid a flat per-mile or per-minute rate now use machine learning to personalise pay. Two drivers working the same hours in the same city with the same skills can earn strikingly different amounts. Uber's own research, which Dubal catalogued, found that drivers who work longer make less per hour. The variable-rate structure is not an accident; it is an extraction mechanism. The model learns which drivers will accept which jobs at which prices and squeezes each one as far as the model's predictions say they will accept.
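Dubal's extraction mechanism can be sketched as a toy pricing loop. Everything in the sketch is an assumption made for illustration, the acceptance models, the dollar figures, the 90 per cent target; no platform's actual system is reproduced. The logic is simply: predict the lowest offer each worker will still accept, and offer that.

```python
# Toy illustration of personalised pay-setting of the kind Dubal describes.
# The acceptance models, numbers, and 90 per cent target are all assumptions
# made for illustration; no platform's actual system is reproduced here.
import math
from typing import Callable

def logistic(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def cheapest_acceptable_offer(
    accept_prob: Callable[[float], float],  # learned P(worker accepts | offer)
    max_offer: float,
    min_offer: float,
    target: float = 0.9,  # platform wants a 90% chance of acceptance
    step: float = 0.25,
) -> float:
    """Walk the offer down and return the lowest one whose predicted
    acceptance probability still clears the target."""
    offer, best = max_offer, max_offer
    while offer >= min_offer:
        if accept_prob(offer) >= target:
            best = offer  # this worker is predicted to accept even this
        offer -= step
    return best

# Two drivers, the same job. The model has learned from their histories
# that one will hold out for more and the other will settle for less.
driver_a = lambda offer: logistic(2.0 * (offer - 8.0))  # resists below ~$8
driver_b = lambda offer: logistic(2.0 * (offer - 6.0))  # accepts from ~$6

pay_a = cheapest_acceptable_offer(driver_a, max_offer=12.0, min_offer=5.0)
pay_b = cheapest_acceptable_offer(driver_b, max_offer=12.0, min_offer=5.0)
# pay_b < pay_a: same job, same city, different pay, because the model
# predicts driver B will settle for less. That gap is the discrimination.
```

Nothing in the loop refers to gender, caste, or care responsibilities; the divergence is produced entirely by behavioural history, which is why Dubal's term names the effect rather than any intent.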
Other research corroborates the pattern. Research published by the University of Oxford in partnership with Worker Info Exchange, the UK non-profit founded by James Farrar, the former Uber driver who was a claimant in the UK Supreme Court case Uber BV v. Aslam, found that 82 per cent of UK Uber drivers earn less per hour after the introduction of dynamic pay and that the platform's commission on fares now often exceeds 50 per cent, against a previous flat rate of 25 per cent. Worker Info Exchange has since issued Uber a Letter Before Action on behalf of drivers in the UK and Europe, challenging the dynamic pay system as unlawful. It is the first collective legal action in Europe to take direct aim at personalised algorithmic pay.
The Indonesian story is structurally the same but plays out against a different backdrop. Yuana's fieldwork describes women delivery riders like Lia and single-mother riders like Cinthia, whose ability to work is governed by the hours when children are at school or asleep, and ride-hailing drivers like Yanti, the 43-year-old in Yogyakarta who messages male passengers before pick-up to announce, defensively and truthfully, that their driver is a woman. Many cancel. The app records those cancellations. It does not record why. Yanti's acceptance rate falls. Her priority in the matching queue falls. Her earnings fall. She avoids late-night work, because working until three in the morning in Yogyakarta is not a safety-neutral choice for a woman, and the late-night multiplier bonuses that inflate male drivers' weekly totals stay out of reach. The algorithm is not hostile to Yanti. It is structurally indifferent to the fact of being Yanti. In Dubal's vocabulary, Yanti is being wage-discriminated against by a system that has never heard her name.
The Nature study from February 2026 puts empirical scaffolding under this picture. Using fuzzy-set qualitative comparative analysis on 316 longitudinally surveyed gig workers, the authors identified a configuration they labelled, in the clinical language of the genre, the “deep acting-female gig worker” pathway to perceived decent work. In English: women who manage to experience their platform labour as dignified tend to do so only when they can perform sustained emotional regulation, mostly in their interactions with customers, to compensate for structural conditions the algorithm imposes on them. The decent-work perception is bought at the cost of additional unpaid emotional labour layered on top of unpaid domestic labour on top of the paid work that brings food to someone's door. That is three shifts. The algorithm sees only the third.
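For readers unfamiliar with the method, fuzzy-set QCA begins by calibrating raw survey scores into set-membership values before searching for configurations. A minimal sketch of the standard direct calibration step, with illustrative anchors rather than the study's own, looks like this:

```python
# Minimal sketch of the "direct method" calibration step standard in
# fuzzy-set QCA. The anchor values below are illustrative assumptions,
# not the anchors the study itself used.
import math

def calibrate(x: float, full_non: float, crossover: float, full_mem: float) -> float:
    """Map a raw score x onto fuzzy set membership in [0, 1]:
    full_non -> ~0.05 (log-odds -3), crossover -> 0.5, full_mem -> ~0.95."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_mem - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# e.g. a 1-7 survey item ("my work feels dignified"), anchored at
# 2 = clearly out of the set, 4 = neither in nor out, 6 = clearly in:
scores = [calibrate(s, 2.0, 4.0, 6.0) for s in (1, 3, 4, 5, 7)]
# The configurational analysis then looks for combinations of such
# calibrated conditions that are jointly sufficient for the outcome.
```

The point of the detour is only this: the method works on combinations of conditions rather than single variables, which is why it can surface pathways, such as the deep-acting one, that a regression on hours and pay would average away.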
The Twelve Hundred Drivers and the Robot Judge
The legal frontier on which all of this is being fought in 2026 is data protection. It is a surprising place for the fight to have landed. Data protection law was not written as labour law. But the drafters of the European Union's General Data Protection Regulation, in one of the more consequential last-minute additions to the 2016 text, included Article 22, which gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects, subject to narrow exceptions with meaningful safeguards. The drafters did not have platform workers in mind. Their concern was credit scoring and automated profiling. But the gig economy has proved to be the terrain on which Article 22 is doing its most strenuous work.
The case that matters most is Drivers v. Uber and Ola, the consolidated proceedings brought by the App Drivers and Couriers Union and Worker Info Exchange on behalf of drivers in the UK, Portugal, and elsewhere. In 2021, a lower Amsterdam court issued a largely unfavourable ruling. In April 2023, the Amsterdam Court of Appeal reversed it, holding that Uber's deactivation of three drivers' accounts had been based exclusively on automated processing and therefore breached Article 22. Crucially, the appellate court rejected Uber's claim that its human reviewers constituted the “meaningful human intervention” the law requires. The judgment described the reviewers as performing something close to ritual. They had rubber-stamped outputs they were not equipped to interrogate. The algorithmic decision was the decision; the human had merely transcribed it.
The judgment did several things at once. It established that a GDPR right to explanation exists in the gig economy context. It established that data trusts run by third parties such as Worker Info Exchange are a legitimate vehicle for collective enforcement. And it put European platforms on notice that automated deactivation is a legal hazard, not merely a reputational one. By the time the App Drivers and Couriers Union filed a further challenge in 2024 on behalf of more than a thousand British drivers allegedly fired by algorithm without appeal, the legal theory had matured. Automated firing, without a genuine human reviewer, is unlawful in the EU and, under the UK Data Protection Act, in the UK as well. What remains to be tested is the breadth of the remedy.
The EU has since attempted to translate the case-law settlement into structured legislation. Directive (EU) 2024/2831 on improving working conditions in platform work, adopted by the European Parliament in April 2024 and in force from 1 December 2024, requires Member States to transpose it by 2 December 2026. The directive imposes transparency obligations on platforms' use of automated monitoring and decision-making systems, guarantees human oversight of such systems, and prohibits decisions that limit, suspend, or terminate a worker's account (or any other decision having equivalent effect) unless taken by a human being. It prohibits processing of emotional or psychological state data. It gives workers the right to have significant automated decisions explained and reviewed. The directive does not abolish algorithmic management. It insists meaningful human judgment sits at the points where the algorithm touches the worker's livelihood. Whether the transposition into twenty-seven national systems will produce that judgment in substance or merely reproduce its legal form is the open question of the 2026 labour year.
Karnataka, and the Global South Experiment
The response to algorithmic management outside the European context has been uneven and, until recently, mostly theoretical. India's Code on Social Security 2020, finally brought into force on 21 November 2025, represents the largest single legal recognition of gig and platform workers in the world and tries to build a social-security floor under them. Aggregators are required to contribute 1 to 2 per cent of their annual turnover, capped at 5 per cent of payments to workers, to a Social Security Fund that is supposed to finance accident insurance, health and maternity benefits, disability cover, and old-age protection. The architecture is correct. The operational detail is not. As of early 2026, the contribution rates remain unnotified, the fund exists on paper, and the benefits have not been delivered. The December 2025 strike was, in no small part, a strike about the gap between what the Code promises and what the aggregators have yet to hand over.
The more interesting Indian experiment is in Karnataka, whose Platform Based Gig Workers (Social Security and Welfare) Act 2025, notified on 12 September 2025 and deemed effective from 30 May 2025, is the first state-level statute in India to impose direct obligations on algorithmic management. Section 13 requires platforms to explain how their automated systems affect fares, ratings, and task assignments. The Act requires aggregators to prevent algorithmic discrimination on grounds of religion, race, caste, gender, place of birth, and disability. It gives workers the right to seek transparency regarding the parameters used by automated management and decision-making systems. It establishes a Karnataka Platform Based Gig Workers Welfare Board, headquartered in Bengaluru. It is, on paper, the most comprehensive algorithmic-management statute outside the European Union.
Whether it will bite in practice is a question the Business and Human Rights Resource Centre has been tracking. Early analyses observe that the Act's enforcement mechanisms remain weak, that the appeal rights for workers subject to arbitrary deactivation are thin, and that the welfare board faces the usual Indian challenge of adequate staffing and funding. The Act is a legislative intent. It may or may not become a settlement. But its existence matters because it establishes, for the first time in the Global South, a statutory framework that treats algorithmic management as a distinct labour-relations practice requiring its own regulatory architecture.
Indonesia has no equivalent. The country's gig labour market, dominated by Gojek and Grab, operates largely in the partner-classification void Yuana's analysis describes. The International Labour Organization's 2025 work on AI for equality at work in Indonesia has begun to chart a path, identifying algorithmic bias in task allocation, safety risks in night-work patterns that disproportionately affect women, and the absence of meaningful appeal as priority concerns. The ITUC has documented the quiet but persistent organising of Indonesian gig workers, particularly through the SPAI Indonesia Platform Workers Union, as a response to the failure of the legal system to catch up.
The Problem of Dignity
The hardest category in any attempt to reconstruct labour rights for algorithmically managed workers is dignity. A worker who has been sacked for a reason they do not understand, by a system that cannot hear them, and on a platform that sends a template email wishing them well in their future endeavours, has lost income and livelihood. They have also lost something less concrete: the right to be considered by another person, to have their case weighed, to have their circumstances acknowledged. Dignity, in the labour context, has always been bound up with the presence of a human decision-maker who is, in principle, accountable for the decision. The algorithmic regime replaces that presence with a system that is not, strictly speaking, anyone.
The International Labour Organization's concept of decent work, formalised in 1999 and elaborated across two decades of policy instruments, tries to name the relevant combination: productive employment, fair income, security in the workplace, social protection for families, prospects for personal development, freedom for workers to voice concerns, organise, and participate in the decisions that affect them, and equality of treatment and respect. The word that holds it together is not “wage” or “hours” but “respect.” And respect, in the platform context, is the category algorithmic management tends to strip out first, because respect requires recognition, and recognition requires seeing the worker as a person with a biography rather than as a row in a scoring table.
The February 2026 Nature study found that respect was, empirically, the dimension of decent work most consistently rated short by gig workers. The December 2025 Indian strike was, in its organisers' framing, a strike for dignity first and money second. The Indonesian fieldwork, in Yuana's account, is saturated with the experience of women workers who describe the indignity of the algorithm more than its unfairness. What looks, from the platform's dashboard, like an optimisation problem looks from the rider's saddle like a system that is actively refusing to see her.
Various scholars have tried to name what a dignified algorithmic-management regime would require. The Frontiers in Sociology 2026 systematic review gathered the candidates under headings such as algorithmic dignity and fairwork. The ingredients come down to five: transparency, so the worker can understand how decisions about them are made; contestability, so they can challenge those decisions with a real prospect of reversal; human involvement at decisive moments, so the machine does not have the last word on livelihood; collective voice, so workers can organise, bargain, and influence the design of the systems; and a social-security floor that survives the discontinuity of platform employment. None is unfamiliar. Every one has been part of the architecture of twentieth-century labour law. What is novel is the requirement to port them into a contractual and technological environment that does not have the traditional handholds of a workplace, a shift, a manager, a union recognition agreement, or a union at all.
The Case for the Living Wage, Port by Port
The economic case for algorithmic management as the dominant form of twenty-first-century labour oversight has always rested on a single claim. The platform produces work more efficiently, more cheaply, and in greater quantity than the alternative. The consumer gets a fifteen-minute delivery; the retailer gets a flexible workforce; the investor gets an asset-light business model with scalable margins. The worker, in the standard telling, gets flexibility. The Fairwork and Human Rights Watch findings call the flexibility claim into question. The December 2025 Indian strike, against a ten-minute delivery target that was killing couriers on overcrowded roads, called it into question more forcefully. The Nature study provides the quantitative version: working hours, for gig workers on AI-managed platforms, correlate inversely with perceived decent work. The more you work, the less dignified the work becomes. The flexibility is, in many cases, the flexibility of accepting whatever terms the algorithm sets, at whatever hours the algorithm rewards, for as long as the algorithm keeps offering.
The counter-model is visible across the EU Platform Work Directive, the Karnataka Act, the Indian Code on Social Security, and the pending Worker Info Exchange litigation. It consists of a minimum wage floor denominated in local currency per hour worked; a published and auditable algorithmic specification; a statutory right to human review of any decision affecting livelihood; a prohibition on processing of emotional or psychological data; a collective bargaining architecture that recognises platform-worker unions; and a social-security framework financed by the aggregator out of turnover rather than out of the worker's effective pay. None of this is radical. The combination, in 2026, is radical only because the platforms have spent a decade arguing it is inapplicable to them. That argument is losing.
What it is losing to is the slow reassertion by the state, the court, and the union of a proposition once taken to be settled. The proposition is that the relationship between a worker and the entity that directs their labour is not a contract of pure commercial parity, and that the law has a legitimate interest in regulating the power asymmetry between them. That proposition is older than the gig economy by more than a century.
What the Second Shift Has to Say
It is worth returning to the specific argument Yuana's Conversation piece made, because it names something the general analyses tend to miss. The female gig workers in Yuana's fieldwork are not merely victims of algorithmic opacity. They are victims of an algorithmic system optimised, intentionally or not, around a worker who does not exist for them. The profile the algorithm rewards, the always-available, instantly-responsive, evening-and-weekend-flexible worker, presupposes an absence of domestic responsibility. In most societies, including Indonesia and including the United Kingdom, that profile describes men more accurately than women. The algorithm does not discriminate against women. It optimises for a worker profile most women cannot meet, then penalises the deviation.
The consequence, in Yuana's data, is that Indonesian women gig workers consistently earn less than men for what is nominally the same work. Their acceptance rates are lower, their priority scores are lower, their access to peak-hour bonuses is lower, and their exposure to sudden deactivation when they need to cancel for a sick child is higher. The effect is, in the literal sense of Dubal's term, wage discrimination, but it is wage discrimination of a kind that no disparate-impact analysis, at least none of a sort the platforms' lawyers would accept, is being run to detect.
This is the dimension that the standard framework of algorithmic-management reform, focused on transparency and appeal, does not fully address. Transparency and appeal help the worker who already falls within the worker profile the algorithm recognises. They help less the worker whose life does not fit the profile at all. Decent work, in the Nature study's configurations, turns out to be a function of whether the worker can absorb the mismatch between their life and the algorithm's assumptions, or whether the mismatch absorbs them. The policy implication is uncomfortable. It is not enough for the algorithm to be explained. It must be constrained so that it does not encode an ideal-worker profile that much of the actual workforce, given the lives they lead, cannot meet. Whether Article 22 of the GDPR, Section 13 of the Karnataka Act, the EU Platform Work Directive, or the Indian Code on Social Security is capable of reaching this deeper requirement is, as of April 2026, genuinely unclear.
The Algorithm Does Not Know Your Name
The legal philosophers who have written on the gig economy have tended, in the last decade, to oscillate between two positions. The first, associated with deregulatory defenders of the platform model, holds that gig work is a new form of self-employment and the older apparatus of labour law is a category error when applied to it. The second, associated with Dubal, with Jeremias Adams-Prassl at Oxford, and with scholars grouped around the Fairwork project and Worker Info Exchange, holds that gig work is work; that the platforms are employers by any functional test; and that the older apparatus of labour law is exactly what is required, merely rephrased to cope with the novelty of the technology.
The 2026 evidence suggests neither position is quite adequate. The gig worker managed by an algorithm is neither a self-employed entrepreneur nor an employee in the 1970s sense. They are something the law has not cleanly conceptualised: a person whose livelihood is governed by a system they cannot see, against which they have no functional appeal, whose parameters they cannot negotiate, and whose outputs are determined by inputs including their own behaviour in ways they can learn to game but never fully understand. The legal category for this kind of relationship does not yet exist. The EU Platform Work Directive, the Karnataka Act, and the pending Worker Info Exchange litigation are the first serious attempts to build it.
What those attempts share is a commitment to five propositions. Automated decisions that affect livelihood must be humanly reviewable by a reviewer who is not performing a ritual. The rules of the algorithmic system must be disclosed in a form intelligible to the worker and their representatives. The worker must have the standing to demand that review and that disclosure, either individually or through a collective body such as a union or a data trust. The worker must have the right to organise, to bargain, and to withdraw labour without reprisal. And the state must construct a social-security floor that no platform is permitted to pass its employment risks beneath.
None of this restores the ordinary twentieth-century worker-employer relationship. But none of it needs to. The question is not whether platform work can be converted into factory work. The question is whether the deeper principles that made factory work tolerable, accountability, transparency, voice, and dignity, can be ported into a technological architecture not designed with them in mind. The answer that Yuana's Indonesian fieldwork, the Indian strike of December 2025, the Nature study of February 2026, the Amsterdam judgments of 2023, and the slow accretion of EU and Indian statute together suggest is that this is possible in principle, partly accomplished in law, and almost entirely unfinished in practice.
Lia, cooking her children's breakfast in Yogyakarta on a morning in April 2026, will not see the benefit of any of it for some years yet. The algorithm that ranks her does not know her name. It will not read the Nature study. It will not attend the Karnataka Welfare Board. It will adjust its weightings, quietly, in response to the overall pattern of worker behaviour, and it will continue to optimise for a worker it has not met. What will eventually change Lia's working life is not a better algorithm but a legal and collective architecture that forces the algorithm to meet her. The workers are ahead of the theorists. Salauddin's 40,000 on the streets of Mumbai on the day after Christmas 2025 did not need a law professor to tell them what was wrong. They needed a mechanism that would translate what was wrong into something an algorithm, and the corporation behind it, could be forced to listen to. Labour rights, in the era of algorithmic management, mean what they have always meant: the enforceable guarantee that the system which governs your working life must answer to you. The principle is old. The apparatus is new. The gap between them is where the next decade of labour law will be written.
References and Sources
- Yuana, Suci Lestari. “Algorithms don't care: how AI worsens the double burden for Indonesia's female gig workers.” The Conversation, 12 April 2026. https://theconversation.com/algorithms-dont-care-how-ai-worsens-the-double-burden-for-indonesias-female-gig-workers-279978
- Progressive International. “India's Gig Workers Strike for Dignity and Protection.” 29 December 2025. https://progressive.international/wire/2025-12-29-indias-gig-workers-strike-for-dignity-and-protection/en/
- Human Rights Research. “40,000 gig workers launch flash strike in India demanding fair pay and security.” 2025. https://www.humanrightsresearch.org/post/40-000-gig-workers-launch-flash-strike-in-india-demanding-fair-pay-and-security
- India TV News. “Gig workers launch nationwide strike on New Year's Eve December 31.” 31 December 2025. https://www.indiatvnews.com/news/india/gig-workers-launch-nationwide-strike-on-new-year-s-eve-december-31-what-are-their-demands-and-what-it-means-for-you-zomato-swiggy-blinkit-delivery-2025-12-31-1023899
- The Tribune. “'End 10-min delivery': Gig workers launch nationwide strike against low pay, safety concerns on New Year's Eve.” 2025. https://www.tribuneindia.com/news/deliveryworkers/end-10-min-delivery-gig-workers-launch-nationwide-strike-against-low-pay-safety-concerns-on-new-years-eve
- Humanities and Social Sciences Communications (Nature). “Platform gig work conditions and workers' perceptions of decent work: a configurational and necessity perspective.” Volume 13, Article 359, February 2026. https://www.nature.com/articles/s41599-026-06702-5
- Frontiers in Sociology. “Algorithmic management in the global gig economy: an interdisciplinary systematic literature review and critical discourse analysis.” 2026. https://www.frontiersin.org/journals/sociology/articles/10.3389/fsoc.2026.1743445/full
- Human Rights Watch. “The Gig Trap: Algorithmic, Wage and Labor Exploitation in Platform Work in the US.” 12 May 2025. https://www.hrw.org/report/2025/05/12/the-gig-trap/algorithmic-wage-and-labor-exploitation-in-platform-work-in-the-us
- Fairwork. “Fairwork US Ratings 2025: When AI Eats the Manager.” Oxford Internet Institute and WZB Berlin Social Science Centre, 2025. https://fair.work/en/fw/publications/fairwork-us-ratings-2025/
- Oxford Internet Institute. “New report reveals best and worst practices in the platform economy in the US.” 2025. https://www.oii.ox.ac.uk/news-events/new-report-reveals-best-and-worst-practices-in-the-platform-economy-in-the-us/
- Dubal, Veena. “On Algorithmic Wage Discrimination.” Columbia Law Review, Volume 123, 2023. https://columbialawreview.org/content/on-algorithmic-wage-discrimination/
- Equitable Growth. “On Algorithmic Wage Discrimination (working paper).” Veena Dubal, 2023. https://equitablegrowth.org/wp-content/uploads/2023/07/071223-WP-On-Algorithmic-Wage-Discrimination-Dubal.pdf
- Worker Info Exchange. “Drivers in UK and Europe set to sue Uber for unfair pay set by algorithm.” 2025. https://www.workerinfoexchange.org/post/drivers-in-uk-and-europe-set-to-sue-uber-for-unfair-pay-set-by-algorithm
- App Drivers & Couriers Union. “ADCU and Worker Info Exchange file ground-breaking legal challenge against Uber's dismissal of drivers by algorithm in the UK and Portugal.” https://www.adcu.org.uk/news-posts/app-drivers-couriers-union-files-ground-breaking-legal-challenge-against-ubers-dismissal-of-drivers-by-algorithm-in-the-uk-and-portugal
- Fountain Court Chambers. “Amsterdam Court Upholds Appeal in Algorithmic Decision-Making Test Case: Drivers v Uber and Ola.” April 2023. https://fountaincourt.uk/2023/04/amsterdam-court-upholds-appeal-in-algorithmic-decision-making-test-case-drivers-v-uber-and-ola/
- Council of the European Union. “Platform workers: Council confirms agreement on new rules to improve their working conditions.” 11 March 2024. https://www.consilium.europa.eu/en/press/press-releases/2024/03/11/platform-workers-council-confirms-agreement-on-new-rules-to-improve-their-working-conditions/
- European Parliament. “Parliament adopts Platform Work Directive.” 24 April 2024. https://www.europarl.europa.eu/news/en/press-room/20240419IPR20584/parliament-adopts-platform-work-directive
- Freshfields Technology Quotient. “The EU platform workers directive: effective as of 1 December 2024.” https://technologyquotient.freshfields.com/post/102jqg1/the-eu-platform-workers-directive-effective-as-of-1-december-2024-what-does-thi
- GDPR Info. “Art. 22 GDPR: Automated individual decision-making, including profiling.” https://gdpr-info.eu/art-22-gdpr/
- Fisher Phillips. “India's New Labor Codes Extend Social Security Coverage to Gig Workers: Key Employer Takeaways.” 2025. https://www.fisherphillips.com/en/news-insights/indias-new-labor-codes-extend-social-security-coverage-to-gig-workers.html
- PRS India. “The Karnataka Platform Based Gig Workers (Social Security and Welfare) Bill, 2025.” https://prsindia.org/bills/states/the-karnataka-platform-based-gig-workers-social-security-and-welfare-bill-2025
- Business & Human Rights Resource Centre. “India: Karnataka's gig worker law introduces algorithmic transparency, but enforcement and appeal rights remain weak.” https://www.business-humanrights.org/en/latest-news/india-karnatakas-gig-worker-law-introduces-algorithmic-transparency-but-enforcement-and-appeal-rights-remain-weak/
- International Labour Organization. “AI for equality at work in Indonesia: Harnessing technology to create fair, inclusive and decent workplaces.” https://www.ilo.org/resource/news/ai-equality-work-indonesia-harnessing-technology-create-fair-inclusive-and
- International Trade Union Confederation. “Long silenced, gig workers in Indonesia are organising and fighting for their rights.” https://www.ituc-csi.org/long-silenced-gig-workers-in
- International Labour Organization. “The Algorithmic Management of Work.” https://www.ilo.org/media/372856/download

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk