Rejected by Machines: Inside the Crisis of Automated Recruitment

Bhuvana Chilukuri has applied to more than a hundred jobs. She is a 20-year-old third-year business student at Queen Mary University of London, articulate and qualified, and she has not received a single offer. In several instances her applications were rejected within minutes, far too quickly for any human being to have read her CV, let alone assessed her suitability. The initial stages of hiring, she told the BBC in March 2026, are increasingly handled by AI tools that screen CVs and, in some cases, conduct entirely automated video interviews. The experience, she said, feels impersonal and mechanical, a process that strips away any chance to convey personality or demonstrate the kinds of qualities that do not fit neatly into a keyword match.

Chilukuri is not an outlier. She is a data point in a pattern so large it has become invisible through sheer repetition. Denis Machuel, chief executive of the Adecco Group, one of the world's largest recruitment firms, confirmed the broader dynamic to the BBC: job vacancies have declined from post-pandemic highs, and candidates now routinely submit hundreds of applications to secure a single offer. AI enables companies to process larger candidate pools at speed, but the consequence is an ever-growing population of unsuccessful applicants and a mounting sense of futility among those looking for work. A Collins McNicholas survey published in 2025 found that 75 per cent of job seekers believe AI unfairly filters their applications, while 74 per cent described automated rejection emails as impersonal and dismissive. A Resume Genius survey of 1,000 hiring managers, published in early 2026, found that 79 per cent of companies now use AI somewhere in their hiring or recruiting process, and one in five hiring managers admitted to using AI to screen out applications before they receive any human review at all.

The scale of the filtering is staggering. Research published in early 2026 indicates that more than 90 per cent of employers now use some form of automated system to filter or rank job applications, and that 88 per cent employ AI for initial candidate screening. For every 180 people who apply for a given role, roughly five get an interview. Of those, one or two are hired. The rest vanish into a void that most of them suspect, correctly, is algorithmic. Forty per cent of job applications are now screened out before a human recruiter ever reviews them. An analysis of 1,000 rejected resumes found that 23 per cent of rejections were caused by parsing errors alone: the applicant tracking system could not read the resume correctly because of tables, columns, graphics, or unusual file formats. These were not unqualified candidates. They were candidates whose documents confused a machine.
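
To make the funnel concrete, the arithmetic below simply restates the figures above as rates. The numbers are aggregate industry estimates, not a model of any single employer's pipeline, and the variable names are illustrative only.

```python
# Rough funnel arithmetic implied by the figures quoted above.
# These are approximate industry-wide estimates, not data from any one employer.
applicants = 180
interviews = 5
hires = 1.5                                   # "one or two"

interview_rate = interviews / applicants      # ~2.8% of applicants reach an interview
offer_rate = hires / applicants               # ~0.8% of applicants receive an offer
auto_screened_out = 0.40 * applicants         # ~72 of 180 never reach a human reviewer
parsing_error_share = 0.23                    # share of rejections attributed to parsing failures

print(f"Interview rate: {interview_rate:.1%}")
print(f"Offer rate: {offer_rate:.1%}")
print(f"Screened out before human review: ~{auto_screened_out:.0f} of {applicants}")
print(f"Rejections attributable to parsing errors alone: {parsing_error_share:.0%}")
```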

The question is no longer whether algorithms are making consequential decisions about people's working lives. They are. The question is whether anyone involved, candidate, employer, or regulator, can explain how those decisions are being made, and what it would take to make the system fair.

The Invisible Dossier

On 21 January 2026, two job applicants named Erin Kistler and Sruti Bhaumik filed a class-action lawsuit against Eightfold AI Inc. in California. Both have backgrounds in STEM. Both had applied for positions at major companies through online portals whose URLs contained “eightfold.ai,” a detail neither noticed at the time. Neither had any idea that a company called Eightfold existed, let alone that it was compiling what the lawsuit describes as secret consumer reports on their candidacy.

Eightfold's technology operates behind the application portals of some of the world's largest employers, including Microsoft, Morgan Stanley, Starbucks, BNY, PayPal, Chevron, and Bayer. According to the complaint, filed by the law firms Outten and Golden and Towards Justice, the platform scrapes personal data from third-party sources and runs it through a proprietary large language model to generate a “likelihood of success” score on a scale of zero to five. The system draws on what Eightfold describes as more than 1.5 billion global data points, including profiles of over one billion workers, and makes inferences about applicants' preferences, characteristics, predispositions, behaviour, attitudes, intelligence, abilities, and aptitudes. Applicants receive no disclosure that the report exists. They have no access to it. They have no opportunity to dispute errors. And they receive no notice before the information is used to make what the complaint calls “life-altering employment decisions.”

“I've applied to hundreds of jobs, but it feels like an unseen force is stopping me,” Kistler said in a statement released through her legal team. David Seligman, an attorney with Towards Justice, was more direct: “AI systems like Eightfold's are making life-altering decisions.”

The lawsuit alleges that Eightfold's scoring system constitutes a consumer report under the Fair Credit Reporting Act and California's Investigative Consumer Reporting Agencies Act. The argument is straightforward: if a third-party company compiles a dossier about you, scores your fitness for employment, and sells that assessment to employers who use it to accept or reject your application, the resulting product is functionally identical to a credit report. And credit reports come with legal protections that have governed the industry for decades: the right to know a report exists, the right to see it, the right to challenge inaccuracies, and the right to be notified before adverse action is taken on the basis of the report's contents. Eightfold, according to the complaint, provides none of these protections.

Eightfold's spokesperson, Kurt Foeller, told Fortune that the company “does not scrape social media” and operates only on data that applicants have intentionally shared. The plaintiffs dispute this characterisation. Pauline Kim, the Daniel Noyes Kirby Professor of Law at Washington University School of Law, told Fortune that the case represents the first major instance of the Fair Credit Reporting Act being applied specifically to AI decision-making in hiring, a development that could reshape how companies deploy screening technologies.

The lawsuit arrives at a moment of acute regulatory uncertainty. In October 2024, the Consumer Financial Protection Bureau published a circular stating explicitly that algorithmic employment scores are covered by the Fair Credit Reporting Act. The guidance was designed to close the gap between decades-old consumer protection law and the realities of automated hiring. It was rescinded in May 2025, part of a broader withdrawal of 67 guidance documents under the direction of acting CFPB director Russell T. Vought. The legal framework that might have governed companies like Eightfold was erected and demolished within seven months.

Kim has noted in her academic work that the Fair Credit Reporting Act, even when applied to AI hiring tools, provides only limited transparency. It establishes procedural requirements that can help individual workers challenge inaccurate information, but does little to curb intrusive data collection or to address the risks of unfair or discriminatory algorithms. The statute was written for an era of filing cabinets and background checks. The technology it is now being asked to regulate operates at a scale and speed that its authors never imagined.

When the Machine Measures the Wrong Thing

On 8 April 2026, researchers Rudra Jadhav and Janhavi Danve posted a paper on arXiv titled “The AI Skills Shift: Mapping Skill Obsolescence, Emergence, and Transition Pathways in the LLM Era.” The paper introduces a metric called the Skill Automation Feasibility Index, or SAFI, which benchmarks four frontier large language models across 263 text-based tasks spanning all 35 skills in the US Department of Labor's O*NET taxonomy. The researchers conducted 1,052 model calls with a zero per cent failure rate and cross-referenced their findings against real-world adoption data covering 756 occupations and 17,998 tasks.

The findings reveal a paradox that sits at the heart of AI-driven hiring. Mathematics received the highest automation feasibility score at 73.2, followed by programming at 71.8. Active listening scored 42.2. Reading comprehension scored 45.5. The spread across all four models tested was just 3.6 points, suggesting that automation feasibility is more a property of the skill itself than of the model being used to perform it. The skills that are easiest for large language models to automate are precisely the ones that automated screening tools most readily evaluate: quantifiable, keyword-friendly competencies that map neatly onto a resume. The skills that are hardest for machines to replicate, and that the research identifies as most critical for human value in the LLM era, are the ones that screening algorithms are least equipped to detect.
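
The paper's aggregation method is only summarised here, so the sketch below is an illustrative reconstruction rather than the published SAFI formula: for each skill it averages per-task success across models and reports the cross-model spread. The skill names echo the findings above; the per-task scores are hypothetical.

```python
# Illustrative reconstruction of a skill-level automation feasibility index.
# This is an assumption about the aggregation, not the paper's published method,
# and the per-task scores below are invented for the example.
from statistics import mean

# task_results[skill][model] -> per-task success scores in [0, 1]
task_results = {
    "mathematics":      {"model_a": [0.75, 0.71, 0.74], "model_b": [0.73, 0.72, 0.74]},
    "active_listening": {"model_a": [0.44, 0.40, 0.43], "model_b": [0.41, 0.42, 0.43]},
}

def feasibility_index(per_model):
    """Return (mean score across models, spread between best and worst model) on a 0-100 scale."""
    model_means = [100 * mean(scores) for scores in per_model.values()]
    return mean(model_means), max(model_means) - min(model_means)

for skill, per_model in task_results.items():
    score, spread = feasibility_index(per_model)
    print(f"{skill:18s} feasibility ~ {score:.1f}, cross-model spread ~ {spread:.1f}")
```

A small cross-model spread, as in the paper's 3.6-point figure, is what supports the claim that feasibility is a property of the skill rather than of any particular model.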

The researchers call this the “capability-demand inversion.” Skills most demanded in AI-exposed jobs are those on which large language models perform least well in the benchmarks. In other words, the qualities that will matter most in a labour market reshaped by AI are the very qualities that AI hiring tools are structurally unable to assess. The paper found that 78.7 per cent of observed AI interactions in the workplace are augmentation rather than automation, which means the primary role of AI in most jobs is to assist human workers, not to replace them. The skills required to work effectively alongside AI (adaptability, judgement, interpersonal sensitivity, creative problem-solving) are real but largely invisible to a resume-parsing algorithm.

The researchers propose an AI Impact Matrix that places each skill in one of four quadrants: high displacement risk, upskilling required, AI-augmented, and lower displacement risk. The framework makes visible what most hiring algorithms treat as noise. A candidate whose strongest assets are collaborative reasoning and contextual judgement will generate a weak signal in a system calibrated to detect certifications and years of experience. The matrix suggests that the skills most likely to determine career success in the coming decade are precisely the skills that current screening tools are designed to ignore.

This creates an absurd circularity. The tools being used to decide who gets hired are optimised to evaluate the competencies most likely to be automated, while systematically failing to measure the competencies most likely to determine whether a candidate will succeed. A screening system that rewards keyword density in programming languages or certifications in statistical software is not measuring the thing it thinks it is measuring. It is measuring a candidate's ability to format a CV in a way that satisfies an algorithm. The correlation between that skill and actual job performance is, at best, weak.
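
To see what keyword density actually rewards, consider a deliberately naive scorer of the kind the paragraph describes. Commercial applicant tracking systems are proprietary and far more elaborate, and the keyword list below is invented, but the failure mode is the same: a keyword-stuffed CV outscores a substantive one that uses different words.

```python
# A deliberately naive keyword scorer, illustrating what keyword density measures.
# The required-keyword list is a hypothetical job specification.
import re

REQUIRED_KEYWORDS = {"python", "sql", "stakeholder", "agile", "tableau"}

def keyword_score(cv_text: str) -> float:
    """Keyword coverage, lightly boosted by how often the keywords repeat."""
    tokens = re.findall(r"[a-z]+", cv_text.lower())
    coverage = len(REQUIRED_KEYWORDS & set(tokens)) / len(REQUIRED_KEYWORDS)
    repeats = sum(tokens.count(kw) for kw in REQUIRED_KEYWORDS)
    return coverage * (1 + 0.1 * repeats)

stuffed = "Agile agile Python python SQL Tableau stakeholder stakeholder stakeholder"
substantive = "Led a cross-functional team that rebuilt reporting pipelines and cut costs by a third"
print(keyword_score(stuffed))      # high score, little content
print(keyword_score(substantive))  # zero, despite describing real work
```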

Industrial-organisational psychology has long understood this problem. Research on structured interviews, one of the most replicated findings in the field, shows that fully structured behavioural interviews with standardised questions achieve a predictive validity coefficient of approximately 0.55 or higher, while unstructured interviews, the kind most commonly used in hiring, achieve roughly 0.38. The implication is clear: even among traditional hiring methods, the format of the assessment matters as much as the content. An AI screening tool that evaluates candidates on the basis of keyword frequency and experience duration is applying a methodology with no established predictive validity for job performance. It is a tool built to sort, not to select.
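
A predictive validity coefficient is nothing more exotic than the correlation between an assessment score at hiring and a later measure of job performance. The sketch below computes one from hypothetical ratings; the 0.55 and 0.38 figures above are coefficients of exactly this kind, estimated over large samples rather than six invented data points.

```python
# Predictive validity as a plain Pearson correlation between assessment scores
# at hiring and subsequent performance ratings. All data here is hypothetical.
from statistics import mean, stdev

def validity(scores, performance):
    """Pearson correlation between assessment scores and later performance ratings."""
    mx, my = mean(scores), mean(performance)
    cov = sum((x - mx) * (y - my) for x, y in zip(scores, performance)) / (len(scores) - 1)
    return cov / (stdev(scores) * stdev(performance))

interview_ratings = [3.1, 4.2, 2.8, 4.8, 3.9, 2.5]   # hypothetical structured-interview scores
job_performance   = [3.4, 3.2, 3.0, 4.5, 2.8, 3.1]   # hypothetical on-the-job ratings a year later

print(f"validity coefficient ~ {validity(interview_ratings, job_performance):.2f}")
```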

The Scale of the Sorting

The numbers are difficult to absorb. Workday, the cloud-based human resources platform, disclosed in court filings related to a separate class-action lawsuit that 1.1 billion applications were rejected using its software tools during the relevant period. The plaintiff in that case, Derek Mobley, is a Black man over the age of 40 who identifies as having anxiety and depression. Over several years he applied to more than a hundred jobs at companies that use Workday's AI-based screening tools and was rejected every time. Four additional plaintiffs later joined the case, each alleging a similar pattern: hundreds of applications submitted through Workday, virtually no interviews, and no explanation.

In May 2025, a federal judge in California granted conditional certification of age discrimination claims under the Age Discrimination in Employment Act, allowing the case to proceed as a nationwide class action. The potential class includes every applicant aged 40 and over who, from September 2020 to the present, applied through Workday's platform and was not advanced by the AI tool. That class could number in the hundreds of millions. In July 2025, the same judge expanded the scope to include applicants processed using HiredScore, an AI feature Workday had acquired, broadening the potential membership still further. Workday has denied that its technology is discriminatory, calling the certification ruling “a preliminary, procedural ruling that relies on allegations, not evidence.”

The Eightfold and Workday cases together paint a picture of an infrastructure that is vast, consequential, and almost entirely opaque. These are not niche products used by a handful of companies. They are the plumbing of the modern labour market. When a significant portion of the world's job applications passes through systems that score, rank, and reject candidates without disclosure, human review, or any mechanism for appeal, the word “screening” barely captures what is happening. What is happening is automated adjudication. And the adjudicators are accountable to no one.

The hiring managers who rely on these tools are often unaware of how they work. The UK's Information Commissioner's Office published a report on 31 March 2026, drawing on evidence from more than 30 employers alongside public perception research and input from graduates, civil society organisations, government bodies, trade unions, and industry representatives. The report identified a striking pattern: many employers fail to recognise that they are using automated decision-making at all. They purchase recruitment software, configure basic settings, and assume a human is reviewing the output. In many cases, the system is making the decision, and the human involvement that follows is little more than a rubber stamp. The ICO's report stressed that human involvement in hiring must be “active and genuine,” and that the personnel reviewing AI-generated recommendations must possess the authority, discretion, and competence to alter outcomes before decisions take effect. The gap between that standard and current practice is wide.

A November 2025 study from the University of Washington added a further complication. The researchers found that people tend to mirror the biases of AI systems they work alongside. When participants were exposed to AI-generated hiring recommendations that contained bias, they did not correct for the bias. They absorbed it. Unless the bias was obvious and egregious, participants were, in the researchers' words, “perfectly willing to accept the AI's biases.” This finding undermines one of the central defences offered by companies that deploy AI screening: the claim that a human is always in the loop. If the human in the loop is unconsciously adopting the biases of the algorithm they are supposed to be overseeing, the oversight is illusory.

What Explainability Actually Requires

The word “explainability” has become a kind of talisman in conversations about AI governance, invoked as though its mere presence in a policy document could resolve the tensions it names. In the context of AI hiring, explainability means something very specific, and very difficult.

At its most basic, explainability requires that a candidate who has been rejected by an algorithmic system can receive an answer to the question: why? Not a generic notification. Not a form email. An answer that identifies the specific factors that led to the rejection, the data that was used, the criteria that were applied, and the weight that each criterion received in the final decision. It requires, in other words, that the system be legible to the person it has affected.
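
What such an explanation could look like is easiest to see with a model that is transparent by construction. The sketch below uses a hypothetical linear scoring rule with invented features and weights to show the factor-level breakdown a rejected candidate would need; as the next paragraph explains, the systems actually in use cannot produce anything like it.

```python
# A factor-level explanation from a transparent (hypothetical) linear scoring rule.
# Every feature, weight, and threshold here is invented for illustration.
FEATURE_WEIGHTS = {
    "years_experience": 0.30,
    "degree_match":     0.25,
    "keyword_overlap":  0.35,
    "employment_gap":  -0.20,
}

def explain(candidate, threshold=0.5):
    """Print the decision and each factor's signed contribution to it."""
    contributions = {f: FEATURE_WEIGHTS[f] * v for f, v in candidate.items()}
    score = sum(contributions.values())
    verdict = "advanced" if score >= threshold else "rejected"
    print(f"score = {score:.2f} -> {verdict}")
    for feature, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature:18s} {contribution:+.2f}")

explain({"years_experience": 0.4, "degree_match": 1.0, "keyword_overlap": 0.2, "employment_gap": 1.0})
```

A rejected candidate reading that output knows which factor cost them the most and can contest the underlying data. A score emerging from a large proprietary model admits no such decomposition.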

This is not a trivial technical problem. Many modern AI screening systems use large language models or deep neural networks whose internal decision processes are not fully interpretable even to their developers. The term “black box” is sometimes used carelessly, but in this context it is technically accurate. Eightfold's platform runs on a proprietary large language model that analyses 1.5 billion data points. The relationship between any individual input and the resulting score is not reducible to a simple explanation. The system does not apply a checklist. It makes inferences across a latent space of features that no human designed and no human can fully map.

Hilke Schellmann, an Emmy Award-winning investigative journalist and professor at New York University, spent years investigating AI hiring tools for her 2024 book “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” named a Financial Times Best Book of the Year. Her reporting revealed that many of the algorithms making high-stakes calculations about candidates do more harm than good, and that AI-based hiring tools have not been shown to be more effective than traditional methods at predicting job performance. Through whistleblower accounts and leaked internal documents, Schellmann documented systemic discrimination against women and people of colour, patterns that the tools' developers often could not explain because the systems were not built for explanation. They were built for throughput.

The European Union's AI Act, which classifies AI systems used in employment decisions as “high-risk,” will begin enforcing its core requirements for such systems in August 2026. Under the Act, employers using AI in hiring will be required to conduct rigorous risk assessments and bias testing, maintain detailed technical documentation explaining how the AI works, implement human oversight mechanisms to prevent automated decisions from going unchecked, and register the system in an EU database before deployment. Violations can attract fines of up to 35 million euros or seven per cent of global annual turnover. The regulation represents the most comprehensive attempt anywhere in the world to bring algorithmic hiring under meaningful legal constraint.

But even the EU AI Act does not fully resolve the explainability problem. It mandates transparency and documentation, but it does not require that employers provide individual candidates with a specific explanation of why they were rejected. The regulation focuses on systemic accountability: are you testing for bias? Are you documenting your processes? Are your human overseers genuinely overseeing? These are necessary conditions for a fair system, but they are not sufficient for an explainable one. A candidate in Berlin who is rejected by an AI tool used by a company complying fully with the AI Act may still have no way to understand why.

The Patchwork Across the Atlantic

In the United States, the regulatory landscape is not merely incomplete. It is contradictory. New York City's Local Law 144, which took effect in July 2023, requires employers using automated employment decision tools to conduct annual bias audits and to notify candidates that AI is being used. The law covers all AI-based tools relating to employment, including resume screening software, personality tests, and skill assessments, and it requires that audits examine whether the tools are treating different groups of people fairly with regard to race, ethnicity, and gender. Illinois amended its Human Rights Act through House Bill 3773, effective January 2026, making it unlawful for employers to use artificial intelligence that has the effect of discriminating on the basis of protected characteristics. The earlier Illinois AI Video Interview Act, effective since January 2020, had already required employer notification and consent when AI is used to analyse video interviews. Colorado's AI Act, signed in 2024, imposes obligations on deployers of high-risk AI systems, including those used in hiring.

These laws represent genuine progress, but they share a common limitation: they are state and local measures in a labour market that operates nationally and globally. A company headquartered in Texas that uses Eightfold or Workday to screen candidates across all 50 states is subject to a patchwork of obligations that varies by jurisdiction. A candidate in Colorado has different rights from a candidate in Florida. A candidate applying through a portal in London is subject to UK data protection law and the Data (Use and Access) Act's reformed provisions on automated decision-making, but the AI tool processing her application may be operated by a company in California, trained on data from LinkedIn profiles worldwide, and governed by the terms of service of a cloud computing provider in Virginia.

The CFPB's withdrawn guidance on algorithmic employment scores illustrates the fragility of the American regulatory approach. For seven months in 2024 and 2025, there was a federal-level interpretation that would have required companies like Eightfold to comply with FCRA disclosure requirements. When that interpretation was rescinded, the obligation evaporated. The Eightfold lawsuit now asks a court to make the same determination that the CFPB made and then unmade: that algorithmic hiring scores are consumer reports. If the court agrees, the result will be a judicial precedent rather than a regulatory framework, binding on the parties but leaving the broader industry to wait for further litigation to clarify the rules.

The Architecture of a Fair System

What would a fair AI hiring system actually require? The question is easier to pose than to answer, but the outlines of an answer are visible in the research, the litigation, and the regulatory experiments now underway.

First, disclosure. Every candidate should know, before they submit an application, that an automated system will be involved in evaluating it. They should know the name of the system, the categories of data it will use, and the general logic by which it makes its assessments. This is not a radical proposition. It is the minimum standard that the Fair Credit Reporting Act has required of credit bureaus since 1970. The fact that it does not yet apply consistently to AI hiring tools is a regulatory failure, not a technical impossibility.

Second, access and correction. Every candidate who is rejected by an AI system should have the right to see the data the system held about them and to challenge inaccuracies. The Eightfold lawsuit alleges that the company generates detailed dossiers about applicants without their knowledge and provides no mechanism for correction. If the allegations are proved, the gap between what the law requires and what the industry practises is not a matter of degree. It is a matter of kind.

Third, validated assessments. The arXiv research by Jadhav and Danve demonstrates that current AI screening tools evaluate competencies that do not align with the skills most predictive of job performance in the LLM era. A fair system would require that any automated assessment used in hiring decisions be validated against actual job performance outcomes, not merely against the proxy metrics that the system was designed to optimise. Industrial-organisational psychology has established rigorous standards for assessment validation. There is no principled reason why AI screening tools should be exempt from those standards.

Fourth, meaningful human oversight. The ICO's March 2026 report found that many employers do not recognise they are using automated decision-making and that the human involvement in their processes is often nominal. The University of Washington study found that even when humans are present, they tend to absorb rather than correct algorithmic bias. Meaningful oversight requires that the person reviewing an AI recommendation has the authority, training, and information necessary to overrule it. It requires that overruling the algorithm carries no professional penalty. And it requires that the proportion of AI recommendations that are actually reviewed and challenged is itself monitored and reported.
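
Monitoring that last condition is straightforward in principle: from decision logs, an employer can report how often the algorithm's recommendations are actually reviewed and how often reviewers overrule them. The sketch below uses hypothetical log entries and invented field names.

```python
# Two oversight metrics from (hypothetical) hiring decision logs:
# the review rate and the override rate. Field names are invented.
decisions = [
    {"ai_recommendation": "reject",  "human_reviewed": True,  "final_decision": "reject"},
    {"ai_recommendation": "reject",  "human_reviewed": True,  "final_decision": "advance"},
    {"ai_recommendation": "reject",  "human_reviewed": False, "final_decision": "reject"},
    {"ai_recommendation": "advance", "human_reviewed": True,  "final_decision": "advance"},
]

reviewed   = [d for d in decisions if d["human_reviewed"]]
overridden = [d for d in reviewed if d["final_decision"] != d["ai_recommendation"]]

print(f"review rate:   {len(reviewed) / len(decisions):.0%}")
print(f"override rate: {len(overridden) / len(reviewed):.0%}")
```

An override rate that sits at or near zero, year after year, is itself evidence that the human in the loop is a formality.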

Fifth, independent auditing. New York City's Local Law 144 requires annual bias audits of automated employment decision tools. This is a starting point, but the audits must be genuinely independent, conducted by parties with no financial relationship to the tool's developer or the employer, and the results must be public. An audit that is commissioned by the company being audited, conducted according to the company's own methodology, and published only in summary form is not an audit. It is a press release.
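
The core calculation in such an audit is not complicated. The sketch below, using hypothetical outcome data, computes per-group selection rates and the impact ratios that Local Law 144 audits report: each group's rate divided by the rate of the most-selected group. The 0.8 threshold flagged in the output is the long-standing four-fifths rule of thumb from US employment guidance, not a statutory limit in the law itself.

```python
# Per-group selection rates and impact ratios for a (hypothetical) screening tool.
outcomes = {                     # group -> (applicants, advanced by the tool)
    "group_a": (1000, 220),
    "group_b": (800, 120),
    "group_c": (400, 84),
}

selection_rates = {g: advanced / applied for g, (applied, advanced) in outcomes.items()}
best_rate = max(selection_rates.values())

for group, rate in sorted(selection_rates.items()):
    impact_ratio = rate / best_rate
    flag = "  <- below 0.8, commonly treated as evidence of adverse impact" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}{flag}")
```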

Sixth, regulatory coherence. The current patchwork of state, local, and national regulations creates an environment in which compliance is burdensome for employers who take it seriously and easily evaded by those who do not. The EU AI Act represents one model for a comprehensive approach. The United States does not need to replicate the EU's framework precisely, but it does need a federal standard that establishes minimum requirements for disclosure, validation, human oversight, and auditing. The alternative is an indefinite extension of the current system, in which the rights of a job applicant depend on the jurisdiction in which they happen to live.

The Human Cost of Optimisation

There is a tendency in conversations about AI hiring to frame the problem as a matter of efficiency versus fairness, as though the two are naturally in tension and the task is to find an acceptable compromise. The framing is misleading. A system that rejects qualified candidates because it cannot evaluate the competencies that matter is not efficient. It is wasteful. A system that scores applicants using data they have never seen and cannot correct is not streamlined. It is arbitrary. A system that makes consequential decisions about people's lives without any mechanism for explanation or appeal is not optimised. It is unjust.

The experience of job seekers like Bhuvana Chilukuri and Erin Kistler and Derek Mobley is not a side effect of technological progress. It is a design choice. The companies that build and deploy these systems chose speed over accuracy, throughput over fairness, and opacity over accountability. Those choices were not inevitable. They were made because they were profitable and because, until very recently, they were legal. A 2025 survey found that 69 per cent of candidates said a lack of human interaction would deter them from joining an organisation, and 54 per cent wanted employers to maintain a human touch in hiring. The tools that were supposed to make hiring more efficient are driving away the talent they were meant to attract.

The BBC's reporting, the Eightfold and Workday lawsuits, the arXiv research on skill obsolescence, and the ICO's findings all converge on the same conclusion: the first and most decisive moment in a person's working life is now frequently decided by a system that neither they nor most employers can interrogate. That is not a technical problem waiting for a better algorithm. It is a governance failure waiting for a political response. The technology exists to build hiring systems that are transparent, validated, and subject to meaningful oversight. What is missing is the will to require it.

The machinery is already in motion. The EU AI Act's high-risk provisions take effect in August 2026. The Eightfold and Workday cases will set precedents in American courts. The ICO is consulting on new guidance until 29 May 2026. Legislators in Illinois, Colorado, and New York have demonstrated that it is possible to regulate AI in hiring without banning it. The question is whether these efforts will coalesce into a coherent framework before a generation of workers is sorted, scored, and discarded by systems that no one can explain.

The algorithms are not going away. The only remaining question is whether the people they judge will ever be allowed to judge them back.


References and Sources

  1. BBC report on AI-led hiring in the UK, featuring Bhuvana Chilukuri's experience and Denis Machuel's comments on the job market, March 2026. https://www.storyboard18.com/trending/student-warns-ai-led-hiring-in-uk-causes-impersonal-rejections-ws-l-92877.htm

  2. Collins McNicholas survey on candidate experiences with AI in recruitment, 2025. https://www.peoplemanagement.co.uk/article/1940958/jobseekers-fear-ai-unfairly-screening-applications-research-finds

  3. Resume Genius, “2026 Hiring Insights Report: ATS, AI, and Employer Expectations,” survey of 1,000 US hiring managers, 2026. https://resumegenius.com/blog/job-hunting/hiring-insights-report

  4. CoverSentry, “ATS Statistics 2026: Why Your Resume Disappears Into the Void,” analysis of AI screening rejection rates and parsing errors. https://www.coversentry.com/ats-statistics

  5. Kistler and Bhaumik v. Eightfold AI Inc., class-action complaint filed 21 January 2026, Outten and Golden LLP and Towards Justice. https://www.outtengolden.com/newsroom/landmark-class-action-accuses-eightfold-ai-of-illegally-producing-hidden-credit-reports-on-job-applicants

  6. Fortune, “Job seekers are suing an AI hiring tool used by Microsoft and PayPal for allegedly compiling secretive reports that help employers screen candidates,” 26 January 2026. https://fortune.com/2026/01/26/job-seekers-suing-ai-hiring-tool-eightfold-allegedly-compiling-secretive-reports/

  7. Consumer Financial Protection Bureau, “Consumer Financial Protection Circular 2024-06: Background Dossiers and Algorithmic Scores for Hiring, Promotion, and Other Employment Decisions,” October 2024. https://www.consumerfinance.gov/compliance/circulars/consumer-financial-protection-circular-2024-06-background-dossiers-and-algorithmic-scores-for-hiring-promotion-and-other-employment-decisions/

  8. Consumer Financial Services Law Monitor, “CFPB Rescinds Dozens of Regulatory Guidance Documents in Major Regulatory Shift,” May 2025. https://www.consumerfinancialserviceslawmonitor.com/2025/05/cfpb-rescinds-dozens-of-regulatory-guidance-documents-in-major-regulatory-shift/

  9. Pauline Kim, “People Analytics and the Regulation of Information Under the Fair Credit Reporting Act,” Washington University School of Law. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2809910

  10. Jadhav, Rudra, and Janhavi Danve, “The AI Skills Shift: Mapping Skill Obsolescence, Emergence, and Transition Pathways in the LLM Era,” arXiv:2604.06906, 8 April 2026. https://arxiv.org/abs/2604.06906

  11. Mobley v. Workday, Inc., US District Court for the Northern District of California, class-action complaint alleging age and race discrimination through AI-based screening. https://fairnow.ai/workday-lawsuit-resume-screening/

  12. Law and the Workplace, “AI Bias Lawsuit Against Workday Reaches Next Stage as Court Grants Conditional Certification of ADEA Claim,” June 2025. https://www.lawandtheworkplace.com/2025/06/ai-bias-lawsuit-against-workday-reaches-next-stage-as-court-grants-conditional-certification-of-adea-claim/

  13. Information Commissioner's Office, “Recruitment Rewired: An Update on the ICO's Work on the Fair and Responsible Use of Automation in Recruitment,” 31 March 2026. https://ico.org.uk/about-the-ico/what-we-do/recruitment-rewired/

  14. University of Washington, “People mirror AI systems' hiring biases, study finds,” November 2025. https://www.washington.edu/news/2025/11/10/people-mirror-ai-systems-hiring-biases-study-finds/

  15. Schellmann, Hilke, “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” Hachette Books, 2024. https://www.hachettebookgroup.com/titles/hilke-schellmann/the-algorithm/9780306827365/

  16. European Commission, “AI Act: Shaping Europe's Digital Future,” regulatory framework for artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  17. New York City Local Law 144 on Automated Employment Decision Tools, effective July 2023. https://www.warden-ai.com/resources/hr-tech-compliance-nyc-local-law-144

  18. Illinois House Bill 3773, amendment to the Illinois Human Rights Act regarding AI in employment decisions, effective January 2026. https://www.theemployerreport.com/2024/08/illinois-joins-colorado-and-nyc-in-restricting-generative-ai-in-hr-a-comprehensive-look-at-us-and-global-laws-on-algorithmic-bias-in-the-workplace/

  19. Pauline Kim, testimony before the US Equal Employment Opportunity Commission, “Navigating Employment Discrimination, AI, and Automated Systems,” January 2023. https://www.eeoc.gov/meetings/meeting-january-31-2023-navigating-employment-discrimination-ai-and-automated-systems-new/kim


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
