The Evidence Employers Ignore: Bossware Does Not Improve Productivity

In May 2024, Wells Fargo fired more than a dozen employees in its wealth and investment management division. Their offence was not fraud, theft, or incompetence. It was the use of mouse jigglers, small devices costing roughly twenty dollars apiece that simulate cursor movement on a screen, creating the illusion of an active worker at their desk. The disclosures, filed with the Financial Industry Regulatory Authority, described their transgression as “simulation of keyboard activity creating impression of active work.” A Wells Fargo spokesperson told Bloomberg that the company “holds employees to the highest standards and does not tolerate unethical behaviour.”

The incident became a flashpoint. Not because the employees were blameless, but because it exposed the architecture of suspicion that now undergirds the modern workplace. These workers were not stealing money or falsifying accounts. They were gaming a system designed to reduce their entire working day to a stream of keystrokes, mouse movements, and activity scores. The fact that such a system existed, and that circumventing it was treated as a fireable offence, tells you more about the state of employer-employee relations in 2026 than any corporate mission statement ever could.

Across the industrialised world, millions of remote and hybrid workers now operate under what researchers and labour advocates have come to call “bossware”: a sprawling ecosystem of software tools that log keystrokes, capture screenshots at random intervals, track application usage, monitor website visits, record webcam footage, score activity levels in real time, and in some cases analyse facial expressions to determine whether someone is paying attention. According to industry surveys, 80 per cent of US companies now track employee performance digitally, and 74 per cent use online tracking tools of some kind. Sixty-one per cent use AI-powered analytics to measure employee productivity or behaviour, signalling a shift from simple time tracking to algorithm-driven performance evaluation. The employee monitoring software market, valued at approximately 587 million US dollars in 2024, is projected to reach 1.4 billion dollars by 2031. Some market analyses place it significantly higher, with estimates ranging up to 4.59 billion dollars in 2026 depending on scope. However you measure it, the trajectory is unmistakable. The business of watching workers is booming.

And yet, a growing body of research from institutions including MIT, Stanford, and the US Government Accountability Office suggests that these tools are not accomplishing what they promise. They are not making workers more productive. In many cases, they are making them more anxious, more disengaged, and more likely to leave. Some evidence links intensive productivity monitoring to increased physical injury rates. The question that emerges is not simply whether this technology works, but what its continued adoption reveals about the distribution of power between employers and the people who work for them.

The Machinery of Ambient Scoring

To understand what bossware does, it helps to examine the tools themselves. The market is crowded, but a handful of names dominate: Teramind, Hubstaff, ActivTrak, Time Doctor, Veriato, and Kickidler, among others. Their capabilities vary, but the general architecture is consistent. Each tool sits silently on an employee's device, often installed by IT departments without detailed explanation, collecting behavioural data and feeding it into management dashboards that convert a working day into graphs, percentages, and colour-coded scores.

Teramind, one of the more comprehensive platforms, offers keystroke logging, screen recording, application and website monitoring, email surveillance, file transfer tracking, chat monitoring, clipboard capture, and even printing activity logs. Hubstaff provides screenshot capturing at set intervals, keyboard and mouse activity tracking, GPS location monitoring for mobile workers, and application usage analytics. These tools run continuously, and their data collection is often invisible to the worker. There is no blinking light, no notification, no moment when the system asks permission. It simply watches.

Some systems go further still. Fujitsu Laboratories developed an AI model that detects small changes in facial muscle movement using “action units”, the standard units for coding facial expressions. The system claims to determine whether someone is concentrating by tracking muscular micro-movements every few seconds, capturing both short-term changes such as a tense mouth and longer-term patterns such as a sustained stare. Fujitsu reported an 85 per cent accuracy rate based on a study of 650 participants across the United States, China, and Japan, and has targeted applications including teleconferencing support and employee engagement measurement. The Victorian parliamentary inquiry into workplace surveillance in Australia specifically cited this kind of facial analysis technology as an example of the expanding frontier of worker monitoring, and heard evidence about wearable devices that monitor conversations, including how enthusiastically someone is speaking.
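
Fujitsu has not published the model's internals, so any reconstruction is guesswork. But the general shape of action-unit scoring can be sketched in a few lines of Python: per-frame intensities for a handful of facial muscle movements are averaged over a short window and compared against a cut-off. The action units, intensity values, and threshold below are illustrative assumptions, not Fujitsu's.

```python
# Purely illustrative: Fujitsu has not published its model. The action units,
# intensity values, and threshold below are hypothetical; the sketch only shows
# the shape of the approach, in which per-frame facial "action unit" intensities
# are averaged over a short window and compared against a cut-off.
from statistics import mean

# Hypothetical per-second samples. AU4 is brow lowering and AU23 lip tightening,
# two of the muscle movements this kind of system might read as signs of focus.
frames = [
    {"AU4": 0.62, "AU23": 0.48},
    {"AU4": 0.70, "AU23": 0.51},
    {"AU4": 0.55, "AU23": 0.40},
]


def concentration_flag(window: list[dict[str, float]], threshold: float = 0.5) -> bool:
    """Label the window 'concentrating' if mean action-unit intensity beats the threshold."""
    return mean(mean(frame.values()) for frame in window) >= threshold


print("concentrating" if concentration_flag(frames) else "not concentrating")
```

What matters in the sketch is not the arithmetic but the premise: a continuous numerical judgement about a worker's inner state, derived from their face and refreshed every few seconds.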

The data these tools generate is then fed into dashboards that score employees on productivity metrics, often in real time. Managers can view who is “active” and who is “idle,” which applications are being used, and how time is distributed across tasks. In some implementations, these scores feed directly into performance reviews, promotion decisions, and disciplinary processes. The worker rarely sees the same dashboard the manager sees. They experience the outputs of the system, in the form of warnings, performance ratings, or termination, without access to the inputs that produced those outcomes.
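
None of the major vendors publish their scoring formulas, but the basic arithmetic is easy to reconstruct. The sketch below assumes the simplest plausible model, in which a minute counts as “active” if it contains at least one keyboard or mouse event and the activity score is the share of active minutes; every name and number in it is hypothetical rather than drawn from any vendor's product.

```python
# Illustrative only: no vendor publishes its scoring logic. This sketch assumes
# the simplest possible model, in which each minute of the monitored window is
# marked "active" if it contains at least one keyboard or mouse event, and the
# activity score is the share of active minutes. Real products layer far more
# on top: screenshots, application categories, per-app weightings.
from datetime import datetime, timedelta


def activity_score(event_times: list[datetime],
                   window_start: datetime,
                   window_end: datetime) -> float:
    """Fraction of one-minute buckets in the window that contain any input event."""
    total_minutes = int((window_end - window_start).total_seconds() // 60)
    if total_minutes <= 0:
        return 0.0
    active_buckets = {
        int((t - window_start).total_seconds() // 60)
        for t in event_times
        if window_start <= t < window_end
    }
    return len(active_buckets) / total_minutes


if __name__ == "__main__":
    start = datetime(2025, 1, 6, 9, 0)
    # A worker who types for ten minutes, then spends twenty reading a document:
    typing = [start + timedelta(minutes=m) for m in range(10)]
    score = activity_score(typing, start, start + timedelta(minutes=30))
    print(f"Activity score: {score:.0%}")  # 33% -- the reading time counts as idle
```

Even this toy version makes the distortion visible: twenty minutes spent reading a document, thinking, or talking to a colleague scores exactly the same as twenty minutes away from the desk.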

The core premise is straightforward: if you can measure activity, you can optimise it. What the research increasingly shows is that the premise is wrong.

What the Evidence Actually Shows

In February 2025, MIT Technology Review published a detailed investigation by Rebecca Ackermann into how opaque algorithms designed to analyse worker productivity have been rapidly spreading through workplaces. The piece argued that these algorithmic tools are less about efficiency than about control, and that workers have less and less recourse to challenge the decisions made on the basis of their data. There are few laws, Ackermann noted, requiring companies to offer transparency about what data goes into their productivity models or how decisions are derived from them. Labour groups, the article reported, were pushing back against this shift in power by seeking to make the algorithms that fuel management decisions more transparent.

The evidence against the effectiveness of monitoring has been building for years. A meta-analysis published in Computers in Human Behavior Reports examined the impact of electronic monitoring on job satisfaction, stress, performance, and counterproductive work behaviour. The findings were stark: electronic monitoring showed a near-zero correlation with performance improvement (r = -0.01) while showing positive correlations with stress and counterproductive behaviour. In other words, monitoring does not make people work better. It makes them more stressed and, in some cases, more likely to act out. The study also found that performance targets and feedback, when combined with monitoring, could further exacerbate these negative effects.

A 2024 study published in Social Currents by Paul Glavin, Alex Bierman, and Scott Schieman, based on a nationally representative sample of 3,508 Canadian workers, found that perceptions of workplace surveillance were indirectly associated with increased psychological distress and lower job satisfaction. The mechanism, the researchers found, ran through what they termed “stress proliferation”: surveillance increased job pressures, reduced autonomy, and heightened feelings of privacy violation, all of which compounded into measurable psychological harm. The study used a novel measurement approach that captured overall surveillance perceptions across all types of work, rather than focusing narrowly on specific monitoring technologies.

The American Psychological Association's 2024 Work in America Survey, conducted by The Harris Poll among more than 2,000 employed adults, found that 56 per cent of workers who reported being monitored also reported feeling tense or stressed at work, compared with 40 per cent of those who were not monitored. Just over a third of respondents said they worried that their employer used technology to spy on them during work hours. The prevalence of monitoring was notably higher among Black and Hispanic workers (55 per cent and 47 per cent respectively) than among White workers (38 per cent), and higher among those doing manual labour (55 per cent) than among office workers (44 per cent). These disparities point to an equity dimension that is rarely discussed in the productivity optimisation conversation. The people bearing the heaviest burden of surveillance are disproportionately those who already occupy the most precarious positions in the labour market.

The US Government Accountability Office weighed in with a comprehensive report, GAO-25-107126, published in September 2025 and reissued with revisions in December 2025. The GAO reviewed 122 studies published between 2020 and 2024 on the effects of digital surveillance on workers' physical health and safety, mental health, and employment opportunities. The report concluded that while surveillance can in some contexts alert workers to potential health problems and increase their sense of physical safety, it can also increase anxiety and, critically, increase the risk of injury by pushing workers to move faster to meet productivity targets. The report further noted that several federal agencies that had previously provided guidance to employers about digital surveillance had, by mid-2025, rescinded those efforts or were reassessing their alignment with current administration priorities. The Department of Labor, for instance, removed a relevant resource from its website in June 2025 as part of a broader review.

When Productivity Scores Cause Injuries

The starkest illustration of how productivity tracking can cause physical harm comes from Amazon's warehouse operations. In December 2024, the US Senate Committee on Health, Education, Labor and Pensions published a 160-page report following an 18-month investigation led by Chairman Bernie Sanders. The investigation examined Amazon's internal systems for tracking worker speed, including the so-called “Time Off Task” metric that penalises workers for any period of inactivity, including time spent using the bathroom or waiting for equipment.

The Senate report cited an internal Amazon study, Project Soteria, which found a direct relationship between the speed at which workers performed tasks and their rate of injury. In each of the prior seven years, Amazon workers were nearly twice as likely to be injured as workers at other warehouses. More than two-thirds of Amazon's fulfilment centres had injury rates exceeding the industry average. The investigation concluded that Amazon had studied this connection for years but refused to implement changes that might reduce productivity, even when its own internal data showed those changes would reduce injuries. The report further alleged that Amazon manipulated workplace injury data to make its facilities appear safer than they were, and prevented injured workers from receiving needed medical care.

The report also found that Amazon's disciplinary systems, powered by automated tracking, forced workers into an impossible choice: follow safety procedures such as requesting help to move heavy objects, or risk discipline and potential termination for not maintaining sufficient speed. The system was, in effect, using surveillance and automated scoring to compel workers to choose between their physical safety and their employment.

Amazon contested the report's findings, insisting that injury rates had declined and that the investigation distorted the data. But the pattern the Senate investigation described, in which automated monitoring creates pressure that leads to physical harm, is not confined to warehouses. It is the logical endpoint of any system that reduces work to quantified activity and then optimises for speed.

The Panopticon Has a Subreddit

If you want to understand what it feels like to work under constant surveillance, the academic literature is illuminating. But Reddit may be more revealing.

A 2024 study published on arXiv and later in the Proceedings of the ACM on Human-Computer Interaction, titled “It's Always a Losing Game: How Workers Understand and Resist Surveillance Technologies on the Job,” analysed posts from nine work-related subreddits, including r/antiwork, r/remotework, r/WorkersStrikeBack, and r/overemployed, alongside ten in-depth semi-structured interviews with employees and managers from industries including operations, customer service, marketing, and food and beverage. The researchers found that workers consistently identified surveillance technologies as causing significant stress, reducing their productivity, and increasing their risk of disciplinary action. Workers also reported that these technologies fostered paranoia and distrust, not just between employee and employer, but among colleagues who feared that their peers might be reporting monitored data to management.

The resistance tactics the researchers documented included commiseration (sharing frustrations with fellow workers), obfuscation (using tools like mouse jigglers to game activity trackers), soldiering (deliberately slowing down work in protest), and quitting. Search queries for “mouse mover” and “mouse jiggler” have remained consistently elevated since March 2020, when the mass shift to remote work began. Approximately 16 per cent of employees, according to industry surveys, now use some form of device or software to circumvent inactivity tracking, while roughly 7 to 8 per cent use automation specifically to fake productivity metrics.

The psychological weight described in these communities is consistent with the formal research. Workers describe the sensation of being permanently watched not as an inconvenience but as a persistent source of anxiety that colours every aspect of their working day. The knowledge that a screenshot might be taken at any moment, that an idle period might be flagged, that a bathroom break might register as a productivity dip, creates a state of hypervigilance that is functionally indistinguishable from chronic low-level stress. These accounts are anecdotal, but they are also numerous, spanning thousands of posts across multiple communities, and they align precisely with what peer-reviewed studies have documented.

Industry-level surveys reinforce the picture. Seventy-two per cent of monitored employees say that monitoring has not improved their productivity. Forty-two per cent of monitored workers plan to leave their employer within a year, compared with 23 per cent of those who are not monitored. Fifty-nine per cent report that digital tracking damages workplace trust, and in some surveys as many as eight in ten say it erodes trust outright. Fifty-four per cent say they would consider quitting if their employer increased surveillance. The tools designed to keep workers productive are, by workers' own accounts, driving them away.

A Regulatory Patchwork Full of Gaps

The legal landscape governing workplace surveillance is, to put it charitably, fragmented. In the United States, there is no comprehensive federal law regulating employers' use of electronic monitoring. New York requires employers to provide advance written notice if they monitor employees' phone and internet use, a requirement that has been in force since May 2022, but this is a notification requirement, not a consent mechanism. Workers must be informed, but they cannot refuse. Illinois enforces the Biometric Information Privacy Act, one of the more stringent biometric protection statutes in the world, requiring written consent before employers collect fingerprints, facial scans, or retinal data. Violations carry penalties of 1,000 to 5,000 US dollars per incident. California's Consumer Privacy Act extends some data rights to employees, including the right to know what personal information is being collected. But these are state-level provisions, inconsistent in scope and enforcement, and they leave the vast majority of American workers without meaningful protection.

The EU AI Act, which entered into force on 1 August 2024, represents the most significant regulatory intervention to date. Its risk-based framework explicitly classifies AI used for performance evaluation and other employment-related decision-making as high-risk. Emotion recognition in workplaces was banned outright in February 2025. Starting in August 2026, any AI tool used in recruitment, screening, or performance assessment will require mandatory risk assessments, technical documentation, bias testing, human oversight, transparency disclosures, and continuous monitoring. Penalties for violations can reach 35 million euros or 7 per cent of global annual turnover for prohibited practices. In November 2025, the European Parliament advanced a further call for the European Commission to launch a dedicated legislative initiative regulating AI in the workplace. That same month, the EU AI Office introduced a dedicated whistleblower tool, enabling employees, contractors, and external stakeholders to report breaches of the AI Act anonymously through a secure platform.

In Australia, the Victorian parliamentary inquiry that reported in May 2025 made 29 findings and 18 recommendations. The committee concluded that workers were increasingly being subjected to surveillance through optical, listening, tracking, and data-recording devices, often without their knowledge or consent. It found widespread examples of biometric surveillance in practice, including the collection of retinal, finger, hand, and facial data from nurses and construction workers. The committee recommended dedicated workplace surveillance legislation requiring employers to demonstrate that any monitoring is “reasonable, necessary and proportionate to achieve a legitimate objective.” It called for the prohibition of selling worker data to third parties and severe restrictions on the collection of biometric data. The Victorian government subsequently provided in-principle support for 15 of the 18 recommendations.

In July 2025, the National Employment Law Project in the United States published “When 'Bossware' Manages Workers,” a policy report arguing that employers' expanding use of digital surveillance and automated decision-making systems had intensified a range of existing job quality problems, including harmful disciplinary practices, job precarity, lack of autonomy, exploitative pay, unfair scheduling, barriers to benefits, discrimination, and the suppression of collective action. NELP called for a two-pronged approach: updating existing workplace protections to account for bossware-related harms, and directly regulating the tools themselves.

The picture that emerges is one of significant regulatory activity, but mostly at the margins. In the jurisdictions where the largest number of workers are subject to monitoring, particularly the United States, the legal framework remains permissive. Employers can, in most states, monitor virtually everything an employee does on a company device without explicit consent. The gap between what the research shows and what the law permits is enormous.

The Power Question

If workplace surveillance does not reliably improve productivity, increases worker stress and anxiety, drives higher turnover, may contribute to physical injuries, and erodes the trust that functional employment relationships require, then why is the market for these tools growing at double-digit rates? The question is not rhetorical. It has an answer, and the answer has less to do with productivity than with power.

Part of the explanation lies in a perception gap that the data makes visible. According to industry surveys, 68 per cent of employers believe that monitoring improves work output. Meanwhile, 72 per cent of the workers being monitored say it does not improve their productivity, and 59 per cent report feeling stress or anxiety as a result of surveillance. The two sides of the employment relationship are looking at the same technology and reaching opposite conclusions. But only one side gets to decide whether the tools stay installed. The employer's belief that monitoring works is sufficient for continued adoption, regardless of whether the employees' experience confirms or contradicts that belief. This is not a failure of communication. It is the predictable outcome of a relationship in which one party holds unilateral decision-making authority over the terms of the other's working conditions.

Merve Hickok and Nestor Maslej, writing in AI and Ethics in 2023, published a policy primer examining assumptions embedded in workplace surveillance and productivity scoring technologies. Their central finding was that, in the absence of legal protections and strong collective action capabilities, workers are in a structurally imbalanced power position to challenge the use of these tools. The tools, they argued, undermine human dignity and human rights. Employers adopt them because they can, and because the technology offers a sense of control and visibility that managers find appealing, regardless of whether it translates into measurable performance gains. The tools serve a managerial appetite for legibility rather than any demonstrated improvement in output.

This dynamic explains the otherwise puzzling disconnect between evidence and adoption. Companies are not purchasing bossware because the data shows it works. They are purchasing it because it satisfies an organisational desire to see what employees are doing, to quantify their effort, and to possess a mechanism for discipline and justification. In a labour market shaped by years of remote and hybrid work arrangements, where physical presence can no longer serve as a proxy for productivity, surveillance software fills the gap. It is not a productivity tool. It is a control tool marketed as a productivity tool.

The asymmetry runs deeper than individual employer-employee interactions. The employees most heavily monitored tend to be those with the least bargaining power: warehouse workers, call centre operators, gig economy participants, and remote workers in competitive labour markets. The APA survey data showing disproportionate monitoring of Black and Hispanic workers suggests that existing social inequalities are being replicated and potentially amplified through the architecture of digital surveillance. The workers most likely to be watched are also the workers least likely to have the resources or institutional support to push back.

Can Workers Ever Trust Workplace AI?

If the current model of workplace AI is fundamentally about surveillance and control, the question remains: is there an alternative? Can artificial intelligence be deployed in the workplace in a way that workers would actually choose to use?

The answer, according to some emerging research and practice, is conditionally yes, but only if the architecture of the technology is rebuilt around entirely different principles. The distinction that matters is between surveillance-oriented monitoring and what researchers call developmental monitoring. A meta-analysis of electronic performance monitoring studies found that when monitoring data is used developmentally, meaning it is shared transparently with employees, used to provide constructive feedback, and oriented towards growth rather than discipline, the negative effects on wellbeing and counterproductive behaviour are significantly reduced. The tool is the same; the governance model is different. Supervisors who return performance monitoring data to employees in a constructive, developmental way can buffer the negative relational consequences that electronic monitoring would otherwise produce.

Broader surveys of workplace AI tell a similar story. A 2025 study cited by Wiley found that employees who understood how AI tools functioned, how they would affect their roles, and how they could contribute to shaping their deployment reported significantly higher trust and engagement. Sixty-seven per cent of employees reported increased efficiency from AI integration, 61 per cent reported improved information access, and 59 per cent cited greater innovation. But these gains tracked almost exclusively with organisations that had communicated clearly about how AI was being used. Where communication was absent, trust collapsed. Between May and July 2025, employee trust in company-provided generative AI tools fell 31 per cent, and trust in agentic AI systems that act autonomously dropped 89 per cent. Only 34 per cent of employees reported that their organisations had clearly explained how AI affected their roles and skill requirements. The pattern is consistent: productivity gains alone do not build confidence or engagement. Workers want to understand how AI fits into their work today and how it shapes opportunity tomorrow.

The pattern is not complicated. Workers do not inherently distrust AI. They distrust opacity. They distrust tools deployed without their input, governed without their participation, and used for purposes they cannot see or challenge. The EU AI Act's transparency and human oversight requirements for high-risk employment AI represent one structural answer to this problem. The Victorian inquiry's recommendation that employers demonstrate surveillance is “reasonable, necessary and proportionate” represents another. Both approaches share a common logic: the legitimacy of workplace technology depends on the extent to which the people subject to it have meaningful knowledge of and voice in how it operates.

There are practical models that point in this direction. ActivTrak, one of the larger workforce analytics platforms, has explicitly positioned itself as a “privacy-first” alternative that analyses productivity patterns at the team level rather than conducting individual keystroke surveillance. It does not offer keystroke logging or screen recording, and its analytics are designed to surface patterns such as burnout risk and collaboration bottlenecks rather than to generate individual compliance scores. Whether one believes ActivTrak's marketing claims is a separate question. But the fact that a monitoring company sees market advantage in positioning itself against surveillance suggests that the appetite for a different model exists, both among workers and among employers who recognise that trust is a precondition for sustained performance.
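
What that different model might look like in practice can be sketched simply. The example below is a hypothetical illustration of team-level aggregation rather than a description of any vendor's product: individual records never reach the dashboard, and no figure is reported for a group too small to preserve anonymity.

```python
# A hypothetical sketch of the aggregation-first model, not any vendor's actual
# implementation: individual records stop at this function, and team figures are
# reported only when the group is large enough that no one person's pattern can
# be inferred from them (a k-anonymity-style floor). MIN_GROUP_SIZE is assumed.
from statistics import mean

MIN_GROUP_SIZE = 5


def team_report(weekly_hours_by_person: dict[str, float]) -> dict | None:
    """Return team-level figures, or nothing if the team is too small to anonymise."""
    if len(weekly_hours_by_person) < MIN_GROUP_SIZE:
        return None  # refuse to report rather than expose individuals
    hours = list(weekly_hours_by_person.values())
    return {
        "team_size": len(hours),
        "avg_weekly_hours": round(mean(hours), 1),
        "share_over_50_hours": sum(h > 50 for h in hours) / len(hours),  # burnout signal
    }


print(team_report({"a": 41, "b": 55, "c": 38, "d": 62, "e": 44}))
```

The design choice doing the work here is the refusal to report on small groups at all; everything else is ordinary averaging.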

What Comes Next

The current trajectory of workplace surveillance is not sustainable in either a practical or a political sense. Practically, the evidence base for its effectiveness is thin and getting thinner. Tools that increase stress, drive turnover, and damage trust impose real costs on the organisations that use them, even if those costs do not appear on the dashboards that justify the software's purchase. Politically, the regulatory tide is turning. The EU has moved from general principles to specific prohibitions. Australia's Victorian inquiry has produced actionable recommendations with government backing. The GAO has documented the harms. Labour advocates and legal scholars are building the frameworks for broader reform.

But the pace of regulatory action remains slow relative to the pace of technological adoption. The employee monitoring market continues to grow. New tools are entering the market with increasingly granular capabilities. And in the jurisdictions where the regulatory environment is most permissive, particularly the United States, there is little immediate prospect of comprehensive federal legislation.

What the continued adoption of surveillance tools tells us, in the face of contrary evidence, is something uncomfortable but important. It tells us that the employment relationship, in its current form, is not fundamentally structured around mutual benefit. It is structured around control. When an employer can install software that monitors every keystroke, captures random screenshots, and scores an employee's activity minute by minute, and the employee has no legal right to refuse, challenge, or even fully understand what is being collected, that is not a partnership. It is an asymmetry of power expressed through technology.

The conversation about workplace AI needs to begin from this recognition. The problem is not that the technology is too powerful or too imprecise. The problem is that it is deployed within a relationship that gives one party near-total discretion over its use and the other party near-zero recourse. Fixing the technology without fixing the relationship will produce, at best, more sophisticated forms of the same dysfunction.

A version of workplace AI that workers could genuinely trust would require, at minimum, transparency about what data is collected and how it is used; meaningful consent, not the kind buried in paragraph 47 of an employment contract; worker participation in the governance of monitoring systems; clear limitations on the purposes for which collected data can be used; independent auditing of algorithmic decision-making; and enforceable rights of challenge and appeal. These are not radical proposals. They are the basic conditions under which any reasonable person would agree to be monitored. The fact that they describe almost no workplace surveillance system currently in operation is the most important thing to understand about where we are.

The tools exist. The evidence exists. The regulatory models exist. What does not yet exist, in most of the world, is the political will to force the rebalancing that workers deserve and that, if the research is to be believed, productivity actually requires.


References

  1. Bloomberg, “Wells Fargo Fires Over a Dozen for 'Simulation of Keyboard Activity,'” June 2024.
  2. MIT Technology Review, Rebecca Ackermann, “How AI Is Used to Surveil Workers,” February 2025.
  3. Glavin, P., Bierman, A., and Schieman, S., “Private Eyes, They See Your Every Move: Workplace Surveillance and Worker Well-Being,” Social Currents, Vol. 11, No. 4, pp. 327-345, August 2024.
  4. American Psychological Association, “2024 Work in America Survey: Psychological Safety in the Changing Workplace,” 2024.
  5. US Government Accountability Office, “Digital Surveillance: Potential Effects on Workers and Roles of Federal Agencies,” GAO-25-107126, September 2025.
  6. US Senate Committee on Health, Education, Labor and Pensions, “The Injury-Productivity Trade-off: How Amazon's Obsession with Speed Creates Unprecedented Danger for Workers,” December 2024.
  7. Parliament of Victoria, Economy and Infrastructure Committee, “Inquiry into Workplace Surveillance,” May 2025.
  8. Victorian Government, “Victorian Government Response to the Inquiry into Workplace Surveillance Report,” November 2025.
  9. National Employment Law Project, “When 'Bossware' Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses,” July 2025.
  10. Hickok, M. and Maslej, N., “A Policy Primer and Roadmap on AI Worker Surveillance and Productivity Scoring Tools,” AI and Ethics, Springer, 2023.
  11. Sum et al., “It's Always a Losing Game: How Workers Understand and Resist Surveillance Technologies on the Job,” arXiv:2412.06945 / Proceedings of the ACM on Human-Computer Interaction (CSCW), 2024-2025.
  12. Fujitsu, “Fujitsu Develops AI Model to Determine Concentration During Tasks Based on Facial Expression,” Press Release, March 2021.
  13. EU AI Act, “Regulatory Framework for Artificial Intelligence,” European Commission, entered into force August 2024.
  14. Crowell and Moring LLP, “Artificial Intelligence and Human Resources in the EU: A 2026 Legal Overview,” 2026.
  15. Fortune Business Insights, “Employee Surveillance and Monitoring Software Market,” 2024-2034.
  16. APA, “Electronically Monitoring Your Employees? It's Impacting Their Mental Health,” 2024.
  17. ADM+S Centre, “Being Monitored at Work? A New Report Calls for Tougher Workplace Surveillance Controls,” 2025.
  18. Wiley, “How Employee Trust in AI Drives Performance and Adoption,” 2025.
  19. High5Test, “Employee Monitoring Statistics in the US (2024-2025): Surveillance and AI Tracking,” 2025.
  20. Computers in Human Behavior Reports, “The Impact of Electronic Monitoring on Employees' Job Satisfaction, Stress, Performance, and Counterproductive Work Behavior: A Meta-Analysis,” 2022.
  21. Teramind, “ActivTrak vs Hubstaff: Features, Pros, Cons and Pricing,” 2025.
  22. European Parliament, Resolution on AI in the Workplace, November 2025.
  23. Biometric Update, “Australian State Launches Inquiry into Workplace Surveillance,” August 2024.
  24. Corrs Chambers Westgarth, “Victorian Government Backs Landmark Workplace Surveillance Reforms,” November 2025.
  25. IT Pro, “The Rise of 'Bossware' Means Workers Have Nowhere to Hide from Management,” 2025.

Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
