
AI at Work: Psychosocial Risks and the Regulatory Gap

Elena Markov
May 3, 2026
14 min read
algorithmic-management · psychosocial-risks · AI-accountability · worker-surveillance · occupational-health


Disclaimer

This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.

Last Updated: May 3, 2026

Your workplace already has an algorithmic manager. It does not sign the payroll or appear on the organizational chart, but it assigns tasks, tracks time, evaluates performance and can trigger disciplinary proceedings. According to recent survey data, 74% of firms across six high-income countries already use at least one algorithmic management tool to instruct, monitor or evaluate employees. And 27% of the mid-level managers who oversee those tools acknowledge that the tools often fail to adequately protect the physical and mental health of their staff.

These figures are not drawn from a union manifesto or a sectoral advocacy report. They come from ILO Working Paper 170: AI Systems at Work — A Changing Psychosocial Work Environment, published in April 2026 by the International Labour Organization and authored by Tahmina Karimova, a Law Research Specialist in the ILO's Research Department. The full text is available open access at ilo.org. It constitutes the ILO's first systematic analysis of the psychosocial risks — hereafter PSRs — generated specifically by AI systems in workplace settings, and it advances a thesis that deserves doctrinal attention: existing occupational safety and health (OSH) frameworks are structurally unfit to capture the psychosocial risks that AI creates in employment, and the regulatory response cannot come from any single instrument but must integrate labour law, AI regulation, data protection and anti-discrimination frameworks simultaneously.

An Algorithm as De Facto Employer

The transformation documented in Working Paper 170 is not merely technological. It is structural. The deployment of AI systems in the workplace does not only change which tasks workers perform; it reshapes how authority over those workers is exercised. Algorithmic management of work — defined as the use of algorithmic procedures to coordinate labour input in an organization, or more precisely, the delegation of managerial functions to algorithms — fundamentally alters the legal relationship that underlies the traditional employment contract.

The depth of this shift deserves emphasis. Labour law as a discipline — from ILO instruments to the most developed national frameworks — was built on an assumption of human-to-human authority: an employer who gives instructions and a worker who executes them, with rights and obligations flowing from that interaction. When the directive function is delegated to an algorithmic system, questions arise for which the existing legal order has no clear answer. Who is responsible for the algorithm's decisions? Does the worker have the right to know the parameters by which their performance is being measured? Can they contest a decision made by an automated system with no discernible human involvement?

The Working Paper categorizes the relevant AI systems into three broad families: advanced robotics, algorithmic management systems, and smart digital systems — which integrate sensors, IoT devices, wearables, augmented reality and drones to monitor and manage workplace safety and health risks. In all three, AI is not a passive instrument. It processes data in real time, generates predictions, and issues recommendations or decisions affecting working conditions as sensitive as pay, scheduling, performance evaluation, access to training and career progression. This depth of penetration across the full employment lifecycle is what qualitatively distinguishes the current moment from previous waves of technological change in the workplace.

The Psychosocial Dimension: The Overlooked Link in Workplace Safety

Psychosocial risks at work — defined by the ILO as any element in the design or management of work that increases the risk of work-related stress — have a long history in occupational medicine and psychology, but have been systematically undervalued in regulatory frameworks. Karimova documents this deficit with precision: the majority of OSH legislation worldwide continues to focus on the physical dimensions of safety, and references to mental health are not always operationalized into concrete preventive measures.

The taxonomy of psychosocial factors developed in the 1986 Joint ILO/WHO Committee on Occupational Health report remains the canonical reference: job content and task design, workload and work pace, job control, work environment and equipment, organizational culture, interpersonal relationships at work, role in the organization, career development and the home-work interface. What Working Paper 170 does — with methodological rigour — is map how each of these factors is exacerbated or reconfigured by the introduction of AI systems.

The taxonomic table the document proposes is one of its most valuable conceptual contributions. It identifies, for instance, that "workload and work pace" is affected by the cognitive overload and time pressure generated by automated management systems capable of increasing the number of tasks within shorter working periods and imposing unreachable or arbitrary key performance indicators. "Job control" acquires an entirely new dimension when decisions about hiring, task assignment, evaluation or dismissal are taken by an opaque algorithm. And "organizational culture" mutates into an environment of permanent surveillance when AI systems continuously collect data on workers' activity, communications and behavioural patterns.

Three emerging PSR factors are identified as specifically attributable to AI and as fitting poorly into any pre-existing category: intensive and intrusive surveillance, loss of job autonomy, and excessive data collection combined with a lack of transparency. Their examination reveals both the inadequacy of current conceptual frameworks and the urgency of purpose-built regulatory responses.

Permanent Surveillance: When the Panopticon Goes Algorithmic

Jeremy Bentham designed the panopticon as an architecture of control in which the observed never knows precisely when they are being watched, but acts at all times as though they are. Algorithmic management systems have perfected that model beyond anything Bentham could have imagined. A US-based survey cited in Working Paper 170, conducted with 1,273 workers, found that 46% of those whose productivity was monitored "all the time" agreed that they worked too fast, compared with just 15% of those never monitored electronically. Among the constantly monitored group, 53% reported experiencing workplace anxiety all or most of the time.

What is legally significant here is not only the empirical data but its normative implication. Constant surveillance is not a neutral management preference: it is a psychosocial risk factor with measurable consequences for mental health. And yet, with notable exceptions, mainstream labour legislation does not treat it as such.

Australia offers the most advanced example. The Work Health and Safety (Managing Psychosocial Hazards at Work) Code of Practice 2024 classifies intrusive surveillance as a specific workplace hazard and includes within that category both disproportionate supervision and activity tracking through keystroke loggers, webcams, GPS and similar devices. Australia's Comcare has further developed a list of workplace practices qualifying as intrusive surveillance, including unreasonable oversight, tracking via keyboard activity trackers, monitoring emails and internet use, covert webcam surveillance, tracking calls and movements via CCTV, and GPS monitoring of workers in company vehicles for performance rather than safety purposes.

The European picture is more fragmented. The European Parliament's Resolution of 5 July 2022 on mental health in the digital world of work acknowledges the significant risks that AI management tools pose to workers' privacy and dignity, and notes that approximately 40% of human resources departments in international companies now use AI applications, with 70% regarding this as a high priority. But political recognition has not yet translated into broadly applicable, binding legal obligations.

In Switzerland, Article 26 of Ordinance 3 to the Employment Act prohibits monitoring or control systems designed to observe employee behaviour in the workplace, and requires that any necessary monitoring system be designed and installed so as not to affect workers' health or freedom of movement. France mandates prior notification to employees before any monitoring tool is deployed. Portugal, Greece, Cyprus and Bulgaria have incorporated comparable restrictions in the context of telework, though with a more limited scope of application.

The regulatory dispersion is precisely the problem. Awareness of the risk is growing, but there is no coherent, binding framework that systematically prevents it.

Autonomy as a Health Right: What Occupational Medicine Knows and Law Ignores

Occupational psychology and medicine have documented a robust correlation between job autonomy and workers' mental health for decades. The Hackman and Oldham model (1976) identifies autonomy — understood as the freedom, independence and discretion a worker has in scheduling and executing their work — as one of five core job dimensions that determine satisfaction and well-being. The WHO Guidelines on Mental Health at Work (2022) confirm that low control over one's work is associated with symptoms of mental health conditions, emotional exhaustion and an increased probability of mental health-related sick leave.

But what happens when a worker's autonomy is eroded not by an authoritarian human supervisor, but by an AI system that assigns tasks, determines pace, evaluates outputs and makes decisions with no discernible human involvement? Working Paper 170 responds with evidence: a German survey of 8,000 employees (the Digitalisierung und Wandel der Beschäftigung, or DiWaBe, study) concluded that a minimum level of autonomy is a relevant factor for employees' mental health and that its absence represents a long-term risk of mental illness. The study's results indicated a worsening of working conditions — not an improvement — when more decisions were delegated to technology.

What is striking is that labour law generally lacks mechanisms to capture this dimension. The concept of "job control" as a psychosocial factor does not include specific references to AI systems, and OSH risk assessment frameworks have not been updated to incorporate algorithmic erosion of job autonomy. As the Working Paper itself observes, occupational psychology and medicine may be unable to "fully explain specific changes regarding the digitalization of tasks," suggesting that the risk of low job control is being structurally underestimated by current diagnostic instruments.

Research additionally suggests a nuanced relationship between automation levels and worker performance. One study found that performance was best under low-to-intermediate levels of automation, while higher levels were negatively associated with performance and workload: high automation increased mental workload and had a detrimental effect on situational awareness, the feeling of control and task variability. Other studies distinguish according to task sophistication, emphasising that control over execution is more critical for tasks requiring expertise, while simple and repetitive tasks can be performed by fully autonomous systems without negatively affecting worker autonomy. The policy implication is clear: preserving worker autonomy and job control must become an explicit design and governance criterion for workplace AI systems.

Data Opacity: Privacy, Inference and Disproportionality

The third analytical axis concerns the mass collection of data on workers and the absence of transparency in its processing. The ILO's 1997 Code of Practice on the Protection of Workers' Personal Data already recognized that employers legitimately collect personal information for compliance, selection, safety and quality control purposes. But the scale and granularity of data collection enabled by current AI systems exceeds any historical precedent, generating an informational power asymmetry with no equivalent in the employment relationship model for which existing law was designed.

AI systems deployed in work management do not merely record productivity metrics. They can monitor communication patterns, activity on collaborative platforms, physical movements, facial expressions, vocal tones and — through affective computing systems — inferred emotional states. The absence of transparency regarding what data is collected, for what purpose, for how long and by whom creates what the Working Paper terms a perceived lack of fairness and trust, which is itself a psychosocial risk factor. As one formulation cited in the document puts it: "overcollection is overexposure," and this can erode the foundational trust that any employment relationship requires to function.

The EU AI Act prohibits emotion inference systems in workplace and educational settings, with exceptions for medical or safety purposes, and classifies as high-risk those systems that monitor the behaviour of persons in work-related contractual relationships. But the prohibition is partial: it does not cover all modalities of biometric surveillance or all forms of data collection that current systems enable. And implementation is being delayed. The European Commission's Digital Omnibus package, announced in November 2025, postponed obligations for high-risk AI systems from August 2026 to December 2027, in the name of "innovation-friendly AI rules."

This delay matters. It means that for at least another two years, the most comprehensive AI governance framework in force will impose no binding obligations on deployers of high-risk workplace AI systems — precisely at the moment when their deployment is accelerating.

The Regulatory Gap and the Case for an Integrated Framework

The central conclusion of Working Paper 170 is that existing regulatory frameworks are insufficient to address AI-generated PSRs at work — not because they are fundamentally misconceived, but because of a structural design limitation: they are technologically neutral, built for risks that are predictable at the design phase, not for risks that emerge dynamically during the deployment of opaque and adaptive systems. Researchers Cefaliello et al. make this point precisely: neither the EU OSH Framework Directive nor ISO 45003 (Guidelines for Managing Psychosocial Risks) accounts for the dynamic emergence of previously unknown OSH risks from the ongoing deployment of AI. All existing instruments follow the safety-by-design paradigm, which assumes that all risks can be identified and mitigated before a system is put into operation. Generative and adaptive AI systems falsify that assumption.

The document's normative contribution is to propose not a new sectoral instrument, but an integrated approach that simultaneously activates four regulatory planes.

The first is labour and employment law, which must update its worker protection mechanisms to address the algorithmic authority relationship, including obligations of information, consultation and participation in AI deployment decisions. Spain's so-called Rider Decree offers a model: it introduced algorithmic transparency obligations on employers, requiring works councils to be informed of the parameters, rules and instructions of algorithms affecting working conditions, including profiling. Germany incorporated AI into the information duties toward works councils under its Betriebsverfassungsgesetz (Works Constitution Act). Bulgaria's 2024 Labour Code amendments regulate the application of AI in telework management specifically, requiring employers to provide written information on the decision-making logic of algorithmic management systems and — crucially — obliging them to review algorithmic decisions upon written request from the worker and to communicate the final human decision.

The second plane is equality and non-discrimination. The Working Paper emphasises that AI systems used in recruitment, evaluation, promotion or dismissal may perpetuate historical discrimination patterns against women, older workers, persons with disabilities, and individuals of certain racial or ethnic origins or sexual orientations. Existing anti-discrimination frameworks were not designed with algorithmic processes in mind, and the challenge of attributing discriminatory outcomes to specific decisions in opaque automated systems remains largely unresolved.

The third plane is OSH properly understood. The framework must broaden its conception of risk to include AI-specific PSRs, and develop adapted risk assessment tools. The ongoing lifecycle nature of AI risk — which emerges dynamically and cannot be fully anticipated at design stage — requires continuous and iterative risk monitoring mechanisms rather than point-in-time assessments. Brazil's draft AI legislation (PL 2338/2023) takes a notably different approach from the EU's product-safety orientation: it grounds AI governance in respect for rights, including social rights and "the valorisation of human work," and establishes the right to a human review of AI decisions as a baseline protection independent of the risk classification of the system involved.

The fourth plane is privacy and data protection. This means establishing clear limits on the collection and processing of labour data in algorithmic environments, and updating the purpose limitation and data minimisation principles to account for the feedback loops that AI deployment creates — where adoption of AI systems necessitates greater collection of employee data, which in turn feeds further AI development.

Emerging Rights in the Algorithmic Workplace

The Working Paper devotes specific attention to what it calls "new OSH rights" in the context of algorithmic management. These are, more precisely, recalibrations of pre-existing rights — the right to information, the right to participate in decisions affecting health and safety, the right to refuse dangerous work — adapted to the specificity of the algorithmic environment.

Four of these rights merit particular attention. The right to information about AI deployment requires that workers be informed in accessible, comprehensible terms of the systems applied to them, their automated nature and their impact on working conditions. The EU AI Act incorporates this in Article 26(7), requiring deployers to inform workers' representatives and affected workers before putting a high-risk AI system into service.

The right to explanation of algorithmic decisions recognises the worker's entitlement to receive an understandable explanation of any decision an AI system makes that affects them. The EU AI Act addresses this in Article 86, while the EU General Data Protection Regulation establishes in Article 22 the right not to be subject to decisions based solely on automated processing that produce legally significant effects. Brazil's draft legislation goes further, establishing the right to challenge and request review of AI system decisions and the right to human review taking into account context, risk and the state of technological development.

The right to meaningful human review is connected to the principle of human oversight that underlies the AI Act, but in the employment context it acquires specific dimensions: the review cannot be merely formal, must be capable of modifying the outcome, and must consider contextual factors the algorithm was not designed to capture.

The right to disconnect, finally, is analysed by the Working Paper as a complementary protection against permanent surveillance and algorithmic intrusion into private life. Its implementation varies considerably — Australia, Belgium, France, Greece and Bulgaria have incorporated it into their respective legal frameworks — but its specific function in the AI context is to limit the capacity of monitoring systems to extend beyond working hours. As one analytical framing notes, the right to disconnect may be understood as allowing workers to disengage from constant AI-based workplace surveillance, an issue of direct concern from the perspective of psychosocial health.

What Remains Unresolved: Autonomy, Dignity and Transparency

The conclusion of Working Paper 170 is honest about what the law has not yet addressed. Regulations on surveillance and intrusiveness are taking shape, however dispersed. Privacy protections at work are advancing, however slowly. But the loss of job autonomy as an AI-attributable psychosocial risk factor lacks, in general terms, any articulated normative response. The same applies to dignity at work — understood as the worker's capacity not to be reduced to a set of algorithmically evaluated metrics without the possibility of contestation — and to organizational transparency in the broader sense, which implies not only knowing that an algorithm exists, but understanding how it operates, what data it uses and what consequences it produces.

These three deficits raise a question that ultimately transcends labour law or AI regulation: what kind of world of work are we choosing to build? The ILO, since its founding, has maintained that working conditions must be humane. The mass deployment of AI systems that assume directive functions, evaluate performance and allocate opportunities forces us to ask whether that principle remains operative — and which institutions, legislative, judicial or supervisory, are positioned to guarantee it.

Working Paper 170 does not offer a definitive answer to that question. What it does, with academic rigour and solid empirical grounding, is demonstrate that the question is urgent, and that the answers can no longer afford to be technologically neutral.


The full document — ILO Working Paper 170: AI Systems at Work — A Changing Psychosocial Work Environment, by Tahmina Karimova (ILO, Geneva, 2026) — is available open access under Creative Commons Attribution 4.0 at ilo.org.
