AI Surveillance Fails: Innocent People Pay the Price

On a July afternoon in 2024, Jason Vernau walked into a Truist bank branch in Miami to cash a legitimate $1,500 cheque. The 49-year-old medical entrepreneur had no idea that on the same day, in the same building, someone else was cashing a fraudulent $36,000 cheque. Within days, Vernau found himself behind bars, facing fraud charges based not on witness testimony or fingerprint evidence, but on an algorithmic match that confused his face with that of the actual perpetrator. He spent three days in detention before the error became apparent.

Vernau's ordeal represents one of at least eight documented wrongful arrests in the United States stemming from facial recognition false positives. His case illuminates a disturbing reality: as law enforcement agencies increasingly deploy artificial intelligence systems designed to enhance public safety, the technology's failures are creating new victims whilst eroding the very foundations of community trust and democratic participation that effective policing requires.

The promise of AI in public safety has always been seductive. Algorithmic systems, their proponents argue, can process vast quantities of data faster than human investigators, identify patterns invisible to the naked eye, and remove subjective bias from critical decisions. Yet the mounting evidence suggests that these systems are not merely imperfect tools requiring minor adjustments. Rather, they represent a fundamental transformation in how communities experience surveillance, how errors cascade through people's lives, and how systemic inequalities become encoded into the infrastructure of law enforcement itself.

The Architecture of Algorithmic Failure

Understanding the societal impact of AI false positives requires first examining how these errors manifest across different surveillance technologies. Unlike human mistakes, which tend to be isolated and idiosyncratic, algorithmic failures exhibit systematic patterns that disproportionately harm specific demographic groups.

Facial recognition technology, perhaps the most visible form of AI surveillance, demonstrates these disparities with stark clarity. In their seminal 2018 Gender Shades study, Joy Buolamwini at MIT and Timnit Gebru, then at Microsoft Research, audited three leading commercial facial analysis systems and found dramatically higher error rates when the systems analysed the faces of women and people of colour. The benchmark datasets used to develop and evaluate such algorithms were composed overwhelmingly of lighter-skinned faces, with representation ranging between 79% and 86%. The consequence was predictable: darker-skinned women were misclassified at error rates of up to 34.7%, compared with rates below 1% for lighter-skinned men.

The National Institute of Standards and Technology (NIST) corroborated these findings in a comprehensive 2019 study examining 18.27 million images of 8.49 million people from operational databases provided by the State Department, Department of Homeland Security, and FBI. NIST's evaluation revealed empirical evidence of demographic differentials in the majority of face recognition algorithms tested: depending on the algorithm, faces of Asian and African American people were 10 to 100 times more likely to produce a false positive than faces of white people, and African American women experienced the highest false positive rates in one-to-many searches. Whilst NIST's 2024 evaluation data shows that leading algorithms have improved, with top-tier systems now achieving over 99.5% accuracy across demographic groups, significant disparities persist in many widely deployed systems.
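The headline accuracy figure also understates how often an individual match can be wrong in practice. A back-of-the-envelope calculation, using entirely illustrative numbers rather than NIST's or any vendor's, shows why: when genuine suspects are rare among the probes being searched, even a matcher that is right 99.5% of the time can return mostly false positives.

```python
def match_ppv(tpr: float, fpr: float, prevalence: float) -> float:
    """Probability that a reported match is genuine (positive predictive value)."""
    true_positives = tpr * prevalence
    false_positives = fpr * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative assumptions only: a matcher that flags true suspects 99.5%
# of the time, wrongly flags innocent people 0.5% of the time, and is run
# against probes where only 1 in 1,000 actually matches someone sought.
ppv = match_ppv(tpr=0.995, fpr=0.005, prevalence=0.001)
print(f"Chance a flagged match is correct: {ppv:.1%}")   # ≈ 16.6%
```

The exact figures depend on match thresholds and gallery sizes, but the asymmetry is the point: the rarer true matches are, the more weight every error carries.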

The implications extend beyond facial recognition. AI-powered weapon detection systems in schools have generated their own catalogue of failures. Evolv Technology, which serves approximately 800 schools across 40 states, faced Federal Trade Commission accusations in 2024 of making false claims about its ability to detect weapons accurately. Dorchester County Public Schools in Maryland experienced 250 false alarms for every real hit between September 2021 and June 2022. Some schools reported false alarm rates reaching 60%. A BBC evaluation showed Evolv machines failed to detect knives 42% of the time during 24 trial walkthroughs.

Camera-based AI detection systems have proven equally unreliable. ZeroEyes triggered a lockdown after misidentifying prop guns during a theatre production rehearsal. In one widely reported incident, a student eating crisps triggered what both AI and human verifiers classified as a confirmed threat, resulting in an armed police response. Systems have misidentified broomsticks as rifles and rulers as knives.

ShotSpotter, an acoustic gunshot detection system, presents yet another dimension of the false positive problem. A MacArthur Justice Center study examining approximately 21 months of ShotSpotter deployments in Chicago (from 1 July 2019 through 14 April 2021) found that 89% of alerts led police to find no gun-related crime, and 86% turned up no crime whatsoever. This amounted to roughly 40,000 dead-end police deployments. The Chicago Office of Inspector General concluded that “police responses to ShotSpotter alerts rarely produce evidence of a gun-related crime.”
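To put those figures in operational terms, a rough back-calculation from the study's own numbers (illustrative arithmetic, not additional findings from the study) indicates the scale of the resource drain:

```python
# Back-calculation from the MacArthur Justice Center figures cited above;
# the derived numbers are illustrative arithmetic, not findings from the study.
dead_end_deployments = 40_000    # alerts where no gun-related crime was found
no_gun_crime_share = 0.89        # share of alerts with no gun-related crime
months = 21                      # 1 July 2019 to 14 April 2021, roughly

implied_total_alerts = dead_end_deployments / no_gun_crime_share
print(f"Implied total alerts: ~{implied_total_alerts:,.0f}")                          # ≈ 45,000
print(f"Dead-end deployments per month: ~{dead_end_deployments / months:,.0f}")       # ≈ 1,900
print(f"Dead-end deployments per day: ~{dead_end_deployments / (months * 30):,.0f}")  # ≈ 63
```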

These statistics are not merely technical specifications. Each false positive represents a human encounter with armed law enforcement, an investigation that consumes resources, and potentially a traumatic experience that reverberates through families and communities.

The Human Toll

The documented wrongful arrests reveal the devastating personal consequences of algorithmic false positives. Robert Williams became the first publicly reported victim of a false facial recognition match leading to wrongful arrest when Detroit police detained him in January 2020. Officers arrived at his home, arresting him in front of his wife and two young daughters, in plain view of his neighbours. He spent 30 hours in an overcrowded, unsanitary cell, accused of stealing Shinola watches based on a match between grainy surveillance footage and his expired driver's licence photo.

Porcha Woodruff, eight months pregnant, was arrested in her home and detained for 11 hours on robbery and carjacking charges based on a facial recognition false match. Nijeer Parks spent ten days in jail and faced charges for over a year due to a misidentification. Randall Reid was arrested whilst driving from Georgia to Texas to visit his mother for Thanksgiving. Alonzo Sawyer, Michael Oliver, and others have joined this growing list of individuals whose lives were upended by algorithmic errors.

Of the seven confirmed cases of misidentification via facial recognition technology, six involved Black individuals. This disparity reflects not coincidence but the systematic biases embedded in the training data and algorithmic design. Chris Fabricant, Director of Strategic Litigation at the Innocence Project, observed that “corporations are making claims about the abilities of these techniques that are only supported by self-funded literature.” More troublingly, he noted that “the technology that was just supposed to be for investigation is now being proffered at trial as direct evidence of guilt.”

In all known cases of wrongful arrest due to facial recognition, police arrested individuals without independently connecting them to the crime through traditional investigative methods. Basic police work such as checking alibis, comparing tattoos, or following DNA and fingerprint evidence could have eliminated most suspects before arrest. The technology's perceived infallibility created a dangerous shortcut that bypassed fundamental investigative procedures.

The psychological toll extends beyond those directly arrested. Family members witness armed officers taking loved ones into custody. Children see parents handcuffed and removed from their homes. Neighbours observe these spectacles, forming impressions and spreading rumours that persist long after exoneration. The stigma of arrest, even when charges are dropped, creates lasting damage to employment prospects, housing opportunities, and social relationships.

For students subjected to false weapon detection alerts, the consequences manifest differently but no less profoundly. Lockdowns triggered by AI misidentifications create traumatic experiences. Armed police responding to phantom threats establish associations between educational environments and danger.

Developmental psychology research demonstrates that adolescents require private spaces, including online, to explore thoughts and develop autonomous identities. Constant surveillance by adults, particularly when it results in false accusations, can impede the development of a private life and the space necessary to make mistakes and learn from them. Studies examining AI surveillance in schools reveal that students are less likely to feel safe enough for free expression, and these security measures “interfere with the trust and cooperation” essential to effective education whilst casting schools in a negative light in students' eyes.

The Amplification of Systemic Bias

AI systems do not introduce bias into law enforcement; they amplify and accelerate existing inequalities whilst lending them the veneer of technological objectivity. This amplification occurs through multiple mechanisms, each reinforcing the others in a pernicious feedback loop.

Historical policing data forms the foundation of most predictive policing algorithms. This data inherently reflects decades of documented bias in law enforcement practices. Communities of colour have experienced over-policing, resulting in disproportionate arrest rates not because crime occurs more frequently in these neighbourhoods but because police presence concentrates there. When algorithms learn from this biased data, they identify patterns that mirror and perpetuate historical discrimination.

A paper published in the journal Synthese examining racial discrimination and algorithmic bias notes that scholars consider the bias exhibited by predictive policing algorithms to be “an inevitable artefact of higher police presence in historically marginalised communities.” The algorithmic logic becomes circular: if more police are dispatched to a certain neighbourhood, more crime will be recorded there, which then justifies additional police deployment.
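The circularity is easy to reproduce in a toy simulation. The sketch below, a minimal model with assumed figures rather than real policing data, gives two districts identical underlying incident rates and differs only in how much recorded history each starts with.

```python
import random

def recorded_crime_feedback(days=2000, seed=1):
    """Toy model of the circular logic described above.

    Two districts share the SAME underlying incident rate. District A
    merely starts with slightly more recorded incidents in the historical
    data. Each day a patrol is dispatched in proportion to the recorded
    history, and incidents are only recorded where a patrol is present.
    All numbers here are illustrative assumptions, not real data.
    """
    rng = random.Random(seed)
    recorded = {"A": 12, "B": 10}   # assumed historical records
    true_rate = 0.3                 # identical for both districts

    for _ in range(days):
        total = recorded["A"] + recorded["B"]
        district = "A" if rng.random() < recorded["A"] / total else "B"
        if rng.random() < true_rate:      # an incident occurs and is observed
            recorded[district] += 1       # ...only in the patrolled district

    share_a = recorded["A"] / sum(recorded.values())
    print(f"Share of recorded incidents attributed to district A: {share_a:.0%}")

# Despite identical underlying rates, the recorded split tracks where
# patrols were sent rather than where incidents actually occurred, and
# the initial imbalance never self-corrects towards 50/50.
recorded_crime_feedback()
```

Because incidents are only recorded where patrols are present, the data the algorithm learns from ends up describing past deployment decisions rather than crime itself, which is precisely the circular logic described above.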

Though these algorithms do not use race as an explicit predictor, other variables such as socioeconomic background, education, and postcode act as proxies. Research published in MIT Technology Review bluntly concluded that “even without explicitly considering race, these tools are racist.” The proxy variables correlate so strongly with race that the algorithmic outcome remains discriminatory whilst maintaining the appearance of neutrality.

The Royal United Services Institute, examining data analytics and algorithmic bias in policing within England and Wales, emphasised that “algorithmic fairness cannot be understood solely as a matter of data bias, but requires careful consideration of the wider operational, organisational and legal context.”

Chicago provides a case study in how these dynamics play out geographically. The city deployed ShotSpotter only in police districts with the highest proportion of Black and Latinx residents. This selective deployment means that false positives, and the aggressive police responses they trigger, concentrate in communities already experiencing over-policing. The Chicago Inspector General found more than 2,400 stop-and-frisks tied to ShotSpotter alerts, with only a tiny fraction leading police to identify any crime.

The National Association for the Advancement of Colored People (NAACP) issued a policy brief noting that “over-policing has done tremendous damage and marginalised entire Black communities, and law enforcement decisions based on flawed AI predictions can further erode trust in law enforcement agencies.” The NAACP warned that “there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement.”

The Innocence Project's analysis of DNA exonerations between 1989 and 2020 found that 60% of the 375 cases involved Black individuals, and 50% of all exonerations resulted from false or misleading forensic evidence. The introduction of AI-driven forensic tools threatens to accelerate this pattern, with algorithms providing a veneer of scientific objectivity to evidence that may be fundamentally flawed.

The Erosion of Community Trust

Trust between communities and law enforcement represents an essential component of effective public safety. When residents believe police act fairly, transparently, and in the community's interest, they are more likely to report crimes, serve as witnesses, and cooperate with investigations. AI false positives systematically undermine this foundation.

Academic research examining public attitudes towards AI in law enforcement highlights the critical role of procedural justice. A study examining public support for AI in policing found that “concerns related to procedural justice fully mediate the relationship between knowledge of AI and support for its use.” In other words, when people understand how AI systems operate in policing, their willingness to accept these technologies depends entirely on whether the implementation aligns with expectations of fairness, transparency, and accountability.

Research drawing on a 2021 nationally representative U.S. survey demonstrated that two institutional trustworthiness dimensions, integrity and ability, significantly affect public acceptability of facial recognition technology. Communities need to trust both that law enforcement intends to use the technology ethically and that the technology actually works as advertised. False positives shatter both forms of trust simultaneously.

The United Nations Interregional Crime and Justice Research Institute published a November 2024 report titled “Not Just Another Tool” examining public perceptions of AI in law enforcement. The report documented widespread concern about surveillance overreach, erosion of privacy rights, increased monitoring of individuals, and over-policing.

The deployment of real-time crime centres equipped with AI surveillance capabilities has sparked debates about “the privatisation of police tasks, the potential erosion of community policing, and the risks of overreliance on technology.” Community policing models emphasise relationship-building, local knowledge, and trust. AI surveillance systems, particularly when they generate false positives, work directly against these principles by positioning technology as a substitute for human judgement and community engagement.

The lack of transparency surrounding AI deployment in law enforcement exacerbates trust erosion. Critics warn about agencies' refusal to disclose how they use predictive policing programmes. The proprietary nature of algorithms prevents public input or understanding regarding how decisions about policing and resource allocation are made. A Washington Post investigation revealed that police seldom disclose their use of facial recognition technology, even in cases resulting in wrongful arrests. This opacity means individuals may never know that an algorithm played a role in their encounter with law enforcement.

The cumulative effect of these dynamics is a fundamental transformation in how communities perceive law enforcement. Rather than protectors operating with community consent and support, police become associated with opaque technological systems that make unchallengeable errors. The resulting distance between law enforcement and communities makes effective public safety harder to achieve.

The Chilling Effect on Democratic Participation

Beyond the immediate harms to individuals and community trust, AI surveillance systems generating false positives create a broader chilling effect on democratic participation and civil liberties. This phenomenon, well-documented in research examining surveillance's impact on free expression, fundamentally threatens the open society necessary for democracy to function.

Jonathon Penney's research examining Wikipedia use after Edward Snowden's revelations about NSA surveillance found that views of articles on topics the government might deem sensitive dropped by roughly 30% after June 2013, supporting “the existence of an immediate and substantial chilling effect.” Monthly views continued falling, suggesting long-term impacts. People's awareness that their online activities were monitored led them to self-censor, even when engaging with perfectly legal information.

Research examining chilling effects of digital surveillance notes that “people's sense of being subject to digital surveillance can cause them to restrict their digital communication behaviour. Such a chilling effect is essentially a form of self-censorship, which has serious implications for democratic societies.”

Academic work examining surveillance in Uganda and Zimbabwe found that “surveillance-related chilling effects may fundamentally impair individuals' ability to organise and mount an effective political opposition, undermining both the right to freedom of assembly and the functioning of democratic society.” Whilst these studies examined overtly authoritarian contexts, the mechanisms they identify operate in any surveillance environment, including ostensibly democratic societies deploying AI policing systems.

The Electronic Frontier Foundation, examining surveillance's impact on freedom of association, noted that “when citizens feel deterred from expressing their opinions or engaging in political activism due to fear of surveillance or retaliation, it leads to a diminished public sphere where critical discussions are stifled.” False positives amplify this effect by demonstrating that surveillance systems make consequential errors, creating legitimate fear that lawful behaviour might be misinterpreted.

Legal scholars examining predictive policing's constitutional implications argue that these systems threaten Fourth Amendment rights by making it easier for police to claim individuals meet the reasonable suspicion standard. If an algorithm flags someone or a location as high-risk, officers can use that designation to justify stops that would otherwise lack legal foundation. False positives thus enable Fourth Amendment violations whilst providing a technological justification that obscures the lack of actual evidence.

The cumulative effect creates what researchers describe as a panopticon, referencing Jeremy Bentham's prison design where inmates, never knowing when they are observed, regulate their own behaviour. In contemporary terms, awareness that AI systems continuously monitor public spaces, schools, and digital communications leads individuals to conform to perceived expectations, avoiding activities or expressions that might trigger algorithmic flags, even when those activities are entirely lawful and protected.

This self-regulation extends to students experiencing AI surveillance in schools. Research examining AI in educational surveillance contexts identifies “serious concerns regarding privacy, consent, algorithmic bias, and the disproportionate impact on marginalised learners.” Students aware that their online searches, social media activity, and even physical movements are monitored may avoid exploring controversial topics, seeking information about sexual health or LGBTQ+ identities, or expressing political views, thereby constraining their intellectual and personal development.

The Regulatory Response

Growing awareness of AI false positives and their consequences has prompted regulatory responses, though these efforts remain incomplete and face significant implementation challenges.

The settlement reached on 28 June 2024 in Williams v. City of Detroit represents the most significant policy achievement to date. The agreement, described by the American Civil Liberties Union as “the nation's strongest police department policies constraining law enforcement's use of face recognition technology,” established critical safeguards. Detroit police cannot arrest people based solely on facial recognition results and cannot make arrests using photo line-ups generated from facial recognition searches. The settlement requires training for officers on how the technology misidentifies people of colour at higher rates, and mandates investigation of all cases since 2017 where facial recognition technology contributed to arrest warrants. Detroit agreed to pay Williams $300,000.

However, the agreement binds only one police department, leaving thousands of other agencies free to continue problematic practices.

At the federal level, the White House Office of Management and Budget issued landmark policy on 28 March 2024 establishing requirements on how federal agencies can use artificial intelligence. By December 2024, any federal agency seeking to use “rights-impacting” or “safety-impacting” technologies, including facial recognition and predictive policing, must complete impact assessments including comprehensive cost-benefit analyses. If benefits do not meaningfully outweigh costs, agencies cannot deploy the technology.

The policy establishes a framework for responsible AI procurement and use across federal government, but its effectiveness depends on rigorous implementation and oversight. Moreover, it does not govern the thousands of state and local law enforcement agencies where most policing occurs.

The Algorithmic Accountability Act, reintroduced for the third time on 21 September 2023, would require businesses using automated decision systems for critical decisions to report on impacts. The legislation has been referred to the Senate Committee on Commerce, Science, and Transportation but has not advanced further.

California has emerged as a regulatory leader, with the legislature passing numerous AI-related bills in 2024. The Generative Artificial Intelligence Accountability Act would establish oversight and accountability measures for AI use within state agencies, mandating risk analyses, transparency in AI communications, and measures ensuring ethical and equitable use in government operations.

The European Union's Artificial Intelligence Act, which began implementation in early 2025, represents the most comprehensive regulatory framework globally. The Act prohibits certain AI uses, including real-time biometric identification in publicly accessible spaces for law enforcement purposes and AI systems for predicting criminal behaviour propensity. However, significant exceptions undermine these protections. Real-time biometric identification can be authorised for targeted searches of victims, prevention of specific terrorist threats, or localisation of persons suspected of specific crimes.

These regulatory developments represent progress but remain fundamentally reactive, addressing harms after they occur rather than preventing deployment of unreliable systems. The burden falls on affected individuals and communities to document failures, pursue litigation, and advocate for policy changes.

Accountability, Transparency, and Community Governance

Addressing the societal impacts of AI false positives in public safety requires fundamental shifts in how these systems are developed, deployed, and governed. Technical improvements alone cannot solve problems rooted in power imbalances, inadequate accountability, and the prioritisation of technological efficiency over human rights.

First, algorithmic systems used in law enforcement must meet rigorous independent validation standards before deployment. The current model, where vendors make accuracy claims based on self-funded research and agencies accept these claims without independent verification, has proven inadequate. NIST's testing regime provides a model, but participation should be mandatory for any system used in consequential decision-making.

Second, algorithmic impact assessments must precede deployment, involving affected communities in meaningful ways. The process must extend beyond government bureaucracies to include community representatives, civil liberties advocates, and independent technical experts. Assessments should address not only algorithmic accuracy in laboratory conditions but real-world performance across demographic groups and consequences of false positives.

Third, complete transparency regarding AI system deployment and performance must become the norm. The proprietary nature of commercial algorithms cannot justify opacity when these systems determine who gets stopped, searched, or arrested. Agencies should publish regular reports detailing how often systems are used, accuracy rates disaggregated by demographic categories, false positive rates, and outcomes of encounters triggered by algorithmic alerts.
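As a minimal sketch of what such disaggregated reporting could look like, the following assumes a hypothetical encounter log with fields an agency would need to define publicly and carefully; none of these field names comes from any existing system.

```python
from collections import defaultdict

def false_alert_share_by_group(encounters):
    """Sketch of the disaggregated reporting proposed above.

    `encounters` is a hypothetical log: each record notes the demographic
    group recorded for the encounter, whether an algorithmic alert fired,
    and whether any crime was ultimately confirmed. This computes the share
    of alerts that turned out to be false, which is what an agency can
    actually observe, rather than a true false positive rate over all
    non-events.
    """
    alerts = defaultdict(int)
    false_alerts = defaultdict(int)
    for e in encounters:
        if e["alert_fired"]:
            alerts[e["demographic_group"]] += 1
            if not e["crime_confirmed"]:
                false_alerts[e["demographic_group"]] += 1
    return {group: false_alerts[group] / alerts[group] for group in alerts}

# Made-up example records:
log = [
    {"demographic_group": "A", "alert_fired": True,  "crime_confirmed": False},
    {"demographic_group": "A", "alert_fired": True,  "crime_confirmed": True},
    {"demographic_group": "B", "alert_fired": True,  "crime_confirmed": False},
]
print(false_alert_share_by_group(log))   # {'A': 0.5, 'B': 1.0}
```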

Fourth, clear accountability mechanisms must address harms caused by algorithmic false positives. Currently, qualified immunity and the complexity of algorithmic systems allow law enforcement to disclaim responsibility for wrongful arrests and constitutional violations. Liability frameworks should hold both deploying agencies and technology vendors accountable for foreseeable harms.

Fifth, community governance structures should determine whether and how AI surveillance systems are deployed. The current model, where police departments acquire technology through procurement processes insulated from public input, fails democratic principles. Community boards with decision-making authority, not merely advisory roles, should evaluate proposed surveillance technologies, establish use policies, and monitor ongoing performance.

Sixth, robust independent oversight must continuously evaluate AI system performance and investigate complaints. Inspector general offices, civilian oversight boards, and dedicated algorithmic accountability officials should have authority to access system data, audit performance, and order suspension of unreliable systems.

Seventh, significantly greater investment in human-centred policing approaches is needed. AI surveillance systems are often marketed as solutions to resource constraints, but their false positives generate enormous costs: wrongful arrests, eroded trust, constitutional violations, and diverted police attention to phantom threats. Resources spent on surveillance technology could instead fund community policing, mental health services, violence interruption programmes, and other approaches with demonstrated effectiveness.

Finally, serious consideration should be given to prohibiting certain applications entirely. The European Union's prohibition on real-time biometric identification in public spaces, despite its loopholes, recognises that some technologies pose inherent threats to fundamental rights that cannot be adequately mitigated. Predictive policing systems trained on biased historical data, AI systems making bail or sentencing recommendations, and facial recognition deployed for continuous tracking may fall into this category.

The Cost of Algorithmic Errors

The societal impact of AI false positives in public safety scenarios extends far beyond the technical problem of improving algorithmic accuracy. These systems are reshaping the relationship between communities and law enforcement, accelerating existing inequalities, and constraining the democratic freedoms that open societies require.

Jason Vernau's three days in jail, Robert Williams' arrest before his daughters, Porcha Woodruff's detention whilst eight months pregnant, the student terrorised by armed police responding to AI misidentifying crisps as a weapon: these individual stories of algorithmic failure represent a much larger transformation. They reveal a future where errors are systematic rather than random, where biases are encoded and amplified, where opacity prevents accountability, and where the promise of technological objectivity obscures profoundly political choices about who is surveilled, who is trusted, and who bears the costs of innovation.

Research examining marginalised communities' experiences with AI consistently finds heightened anxiety, diminished trust, and justified fear of disproportionate harm. Studies documenting chilling effects demonstrate measurable impacts on free expression, civic participation, and democratic vitality. Evidence of feedback loops in predictive policing shows how algorithmic errors become self-reinforcing, creating permanent stigmatisation of entire neighbourhoods.

The fundamental question is not whether AI can achieve better accuracy rates, though improvement is certainly needed. The question is whether societies can establish governance structures ensuring these powerful systems serve genuine public safety whilst respecting civil liberties, or whether the momentum of technological deployment will continue overwhelming democratic deliberation, community consent, and basic fairness.

The answer remains unwritten, dependent on choices made in procurement offices, city councils, courtrooms, and legislative chambers. It depends on whether the voices of those harmed by algorithmic errors achieve the same weight as vendors promising efficiency and police chiefs claiming necessity. It depends on recognising that the most sophisticated algorithm cannot replace human judgement, community knowledge, and the procedural safeguards developed over centuries to protect against state overreach.

Every false positive carries lessons. The challenge is whether those lessons are learned through continued accumulation of individual tragedies or through proactive governance prioritising human dignity and democratic values. The technologies exist and will continue evolving. The societal infrastructure for managing them responsibly does not yet exist and will not emerge without deliberate effort.

The surveillance infrastructure being constructed around us, justified by public safety imperatives and enabled by AI capabilities, will define the relationship between individuals and state power for generations. Its failures, its biases, and its costs deserve scrutiny equal to its promised benefits. The communities already bearing the burden of false positives understand this reality. The broader society has an obligation to listen.


Sources and References

American Civil Liberties Union. “Civil Rights Advocates Achieve the Nation's Strongest Police Department Policy on Facial Recognition Technology.” 28 June 2024. https://www.aclu.org/press-releases/civil-rights-advocates-achieve-the-nations-strongest-police-department-policy-on-facial-recognition-technology

American Civil Liberties Union. “Four Problems with the ShotSpotter Gunshot Detection System.” https://www.aclu.org/news/privacy-technology/four-problems-with-the-shotspotter-gunshot-detection-system

American Civil Liberties Union. “Predictive Policing Software Is More Accurate at Predicting Policing Than Predicting Crime.” https://www.aclu.org/news/criminal-law-reform/predictive-policing-software-more-accurate

Brennan Center for Justice. “Predictive Policing Explained.” https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained

Buolamwini, Joy and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81:1-15, 2018.

Federal Trade Commission. Settlement with Evolv Technology regarding false claims about weapons detection capabilities. 2024.

Innocence Project. “AI and The Risk of Wrongful Convictions in the U.S.” https://innocenceproject.org/news/artificial-intelligence-is-putting-innocent-people-at-risk-of-being-incarcerated/

MacArthur Justice Center. “ShotSpotter Generated Over 40,000 Dead-End Police Deployments in Chicago in 21 Months.” https://www.macarthurjustice.org/shotspotter-generated-over-40000-dead-end-police-deployments-in-chicago-in-21-months-according-to-new-study/

MIT News. “Study finds gender and skin-type bias in commercial artificial-intelligence systems.” 12 February 2018. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

National Association for the Advancement of Colored People. “Artificial Intelligence in Predictive Policing Issue Brief.” https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief

National Institute of Standards and Technology. “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects.” NISTIR 8280, December 2019. https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software

Penney, Jonathon W. “Chilling Effects: Online Surveillance and Wikipedia Use.” Berkeley Technology Law Journal 31(1), 2016.

Royal United Services Institute. “Data Analytics and Algorithmic Bias in Policing.” 2019. https://www.rusi.org/explore-our-research/publications/briefing-papers/data-analytics-and-algorithmic-bias-policing

United Nations Interregional Crime and Justice Research Institute. “Not Just Another Tool: Report on Public Perceptions of AI in Law Enforcement.” November 2024. https://unicri.org/Publications/Public-Perceptions-AI-Law-Enforcement

University of Michigan Law School. “Flawed Facial Recognition Technology Leads to Wrongful Arrest and Historic Settlement.” Law Quadrangle, Winter 2024-2025. https://quadrangle.michigan.law.umich.edu/issues/winter-2024-2025/flawed-facial-recognition-technology-leads-wrongful-arrest-and-historic

Washington Post. “Arrested by AI: Police ignore standards after facial recognition matches.” 2025. https://www.washingtonpost.com/business/interactive/2025/police-artificial-intelligence-facial-recognition/

White House Office of Management and Budget. Memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.” 28 March 2024.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk