AI Adoption and Psychosocial Risk in Quebec Workplaces: The Policy Gap Under Law 27
Introduction
Under Quebec’s current occupational health and safety regime, employers are required to identify, analyze, and prevent psychosocial risks at work. The Act respecting occupational health and safety states that prevention programs and action plans must eliminate risks to workers’ physical and mental well-being at the source. The Commission des normes, de l’équité, de la santé et de la sécurité du travail (CNESST) and the Institut national de santé publique du Québec (INSPQ) now clearly frame psychosocial risks as workplace hazards that must be identified and managed (Act respecting occupational health and safety, CQLR c S-2.1; CNESST, n.d.-b; INSPQ, n.d.).
At the same time, Quebec workplaces are rapidly adopting artificial intelligence. AI is now embedded in scheduling, screening and hiring, performance analytics, productivity monitoring, generative copilots, and decision-support tools. These systems do not just change efficiency. They can also change workload, autonomy, fairness, communication, and the lived experience of work itself, which is precisely the terrain of psychosocial risk (CNESST, n.d.-a).
This is the gap. Quebec’s existing framework for psychosocial risk is broad enough to cover many of the harms that AI can create or intensify, but the policy guidance has not yet caught up to AI as a distinct source of those harms. The problem is not that Law 27 failed. The problem is that employers can comply on paper while missing one of the fastest-moving changes in the conditions of work (Act respecting occupational health and safety, CQLR c S-2.1; Loi modernisant le régime de santé et de sécurité du travail, 2021).
A framework that can see the problem, but does not yet name it
INSPQ defines psychosocial risks as factors tied to work organization, management practices, employment conditions, and social relations that increase the probability of adverse physical and psychological health effects (INSPQ, n.d.; Vézina et al., 2011). CNESST’s guidance identifies ten core psychosocial risk factors that employers are expected to assess and address: psychological demands and workload, decision latitude and autonomy, recognition, social support, organizational justice, job insecurity, information and communication, psychological harassment, workplace violence, and work-life balance (CNESST, n.d.-a).
CNESST also emphasizes that these factors should be considered together, rather than one by one, because their interaction matters. High demands combined with low autonomy, for example, are more harmful than either factor alone (CNESST, n.d.-a; Vézina et al., 2011).
Quebec therefore already has a strong conceptual map. What it does not yet have is a clear AI layer on top of that map. That matters because AI often enters organizations looking like a technology decision, not a work-design decision. A new tool is purchased, a pilot is launched, a team is expected to adapt. But in psychosocial terms, the real question is not whether the tool is impressive. It is whether it changes demands, control, support, fairness, recognition, and security in ways that could harm workers.
What follows is an analysis of how AI adoption intersects with the factors the CNESST framework already requires employers to assess.
Workload and psychological demands
One of the clearest pathways is workload. AI is often introduced with the promise of reducing effort, but in practice it frequently raises expectations. A customer service team with AI drafting tools may be expected to handle more tickets. A professional using a generative copilot may be expected to produce more output in the same time. And even when manual effort drops, cognitive work can rise. People still have to review outputs, catch errors, explain inconsistencies, and carry responsibility when the system is wrong (CNESST, n.d.-a).
The evidence here is still emerging, but it is already enough to justify caution. A longitudinal study of workers in Germany found no sizeable overall negative effect of occupational AI exposure on well-being or mental health in its main measure, but it did find small negative effects on life and job satisfaction when using a more granular self-reported AI exposure measure (Giuntella et al., 2025). This is not evidence of broad harm, but it does suggest that the lived experience of direct AI use may matter more than high-level exposure categories indicate.
Decision latitude and autonomy
Decision latitude is one of the classic pillars of psychosocial risk research. When people have little control over how and when they do their work, stress risk rises (Karasek, 1979). AI can quietly erode that control. Scheduling systems can optimize shifts with little worker input. Task-allocation systems can dictate priorities. Decision-support tools can produce recommendations that workers are expected, implicitly or explicitly, to follow. A tool presented as assistance can operate, in practice, as a constraint (CNESST, n.d.-a).
This is one of the places where current guidance feels incomplete. CNESST’s materials on decision latitude are useful, but they were not written with algorithmic management and generative systems at the center of the picture. That leaves employers with a familiar checklist and a new class of workplace change that does not map neatly onto it.
Recognition and social support
Recognition protects against burnout, disengagement, and the feeling that one’s effort has become invisible (Siegrist, 1996). AI complicates recognition in subtle ways. When outputs are produced with a generative tool, attribution becomes blurry. The thinking, editing, judgment, and verification done by the worker can disappear behind the apparent speed of the machine. Organizations may then raise expectations without raising recognition, pay, or support. The result is a new version of an old problem: effort-reward imbalance, now dressed in the language of innovation (CNESST, n.d.-a).
Social support can also thin out at exactly the moment it is needed most. AI systems can replace human touchpoints with chatbots, dashboards, automated prompts, and algorithm-mediated feedback. Workers who are struggling with a tool may hesitate to ask for help, especially in cultures where AI fluency is treated as the new baseline. A workplace can end up with more technical mediation and less actual support, which is the opposite of what the existing psychosocial framework identifies as protective (CNESST, n.d.-a; Vézina et al., 2011).
Organizational justice and communication
Organizational justice is about whether decisions feel fair, processes feel legitimate, and people feel they are treated with respect (Colquitt, 2001). AI raises justice issues almost by default. Hiring tools, performance scoring systems, and productivity analytics can rely on criteria that workers do not understand, cannot challenge, and do not experience as fully reflective of their contribution. Even when a system is statistically defensible, opacity itself can erode trust (CNESST, n.d.-a).
Communication is equally important. AI adoption is organizational change, but many organizations still communicate it as a software rollout rather than a redesign of work. Workers are told what the tool does, but not what it may change in accountability, pace, judgment, or monitoring. When the human implications of AI are underexplained, uncertainty fills the gap, and uncertainty is itself part of psychosocial risk (CNESST, n.d.-a; Vézina et al., 2011).
Job insecurity, harassment, and work-life boundaries
AI also interacts with the remaining CNESST risk factors through more diffuse but still important pathways. Job insecurity can rise long before any role disappears, simply because workers begin to wonder whether their work is becoming more replaceable, more measurable, or more easily reorganized around the technology. The uncertainty itself can be harmful. Kim and Lee (2024) found, in a three-wave study of South Korean professionals, that organizational AI adoption increased job stress and, through job stress, burnout, while self-efficacy in AI learning helped buffer that effect.
AI-enabled monitoring can also create climates that feel coercive, contributing to conditions in which harassment dynamics intensify. Always-on tools can lengthen the workday without anyone formally extending it. Content moderation and review work can expose people to concentrated harmful material. These are not all new hazards, but AI can intensify them, scale them, or make them harder to notice early.
Worker consultation: a procedural gap
Law 27 does more than require employers to identify psychosocial risks. It also strengthens prevention and participation mechanisms within Quebec’s health and safety regime, including mechanisms for worker involvement in prevention planning and workplace health and safety processes (Act respecting occupational health and safety, CQLR c S-2.1; Loi modernisant le régime de santé et de sécurité du travail, 2021).
In practice, AI adoption rarely includes workers in the decision. Technology is selected, piloted, and deployed through IT procurement, executive strategy, or operational efficiency workstreams, none of which typically involve the health and safety committee or the workers whose conditions of work will change. This is not just a psychosocial risk issue. It is also a procedural gap. If AI adoption changes the psychosocial conditions of work, and workers are not consulted in the process, the employer may already be falling short of the participatory spirit of the current regime.
That creates a practical compliance risk. An employer can have a prevention program, a psychosocial risk assessment, and an active AI rollout underway, yet still fail to connect them.
The real policy gap
The policy gap in Quebec is not the absence of a psychosocial framework. It is the absence of explicit guidance connecting that framework to AI adoption. Employers can run a psychosocial risk assessment and still fail to ask whether a new scheduling algorithm is reducing autonomy, whether a performance dashboard is undermining justice, or whether a generative copilot is intensifying workload through higher output expectations. CNESST guidance currently provides the categories. It does not yet give organizations enough help in seeing how AI fits inside them (CNESST, n.d.-a; CNESST, n.d.-b).
In my work supporting psychosocial risk management across multinational employers, I observe this pattern consistently. Organizations invest in AI deployment and psychosocial risk compliance as parallel workstreams, with no integration between them. The AI team does not consult the OHS team. The psychosocial risk assessment does not mention AI. The prevention program does not address the psychosocial consequences of the technology the organization is simultaneously introducing.
That gap matters because the two workstreams are not parallel at all. They are deeply intertwined. AI adoption is reshaping the very conditions that psychosocial risk assessments are supposed to measure.
Conclusion
Quebec does not need to start from scratch. Law 27 and the broader occupational health and safety regime already provide a strong foundation for psychosocial risk prevention.
The next step is more precise policy guidance: when AI deployment should trigger psychosocial risk reassessment, how AI-related psychosocial hazards should be identified within the existing INSPQ/CNESST factor model, what worker consultation should look like during AI implementation, and how employers should document the preventive measures they put in place (Act respecting occupational health and safety, CQLR c S-2.1; CNESST, n.d.-a).
The challenge now is whether the institutions responsible for healthy work are ready to see AI clearly.
References
Act respecting occupational health and safety, CQLR c S-2.1. https://www.legisquebec.gouv.qc.ca/en/document/cs/s-2.1
Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
Commission des normes, de l’équité, de la santé et de la sécurité du travail. (n.d.-a). Facteurs de risques psychosociaux liés au travail. Retrieved April 15, 2026, from https://www.cnesst.gouv.qc.ca/fr/prevention-securite/sante-psychologique/facteurs-risques-psychosociaux-lies-au-travail
Commission des normes, de l’équité, de la santé et de la sécurité du travail. (n.d.-b). Risques psychosociaux liés au travail. Retrieved April 15, 2026, from https://www.cnesst.gouv.qc.ca/fr/prevention-securite/sante-psychologique/risques-psychosociaux-lies-au-travail
Giuntella, O., König, J., & Stella, L. (2025). Artificial intelligence and the wellbeing of workers. Scientific Reports, 15(1), 20087. https://doi.org/10.1038/s41598-025-98241-3
Institut national de santé publique du Québec. (n.d.). Risques psychosociaux du travail et promotion de la santé des travailleurs et travailleuses. Retrieved April 15, 2026, from https://www.inspq.qc.ca/risques-psychosociaux-du-travail-et-promotion-de-la-sante-des-travailleurs
Karasek, R. A. (1979). Job demands, job decision latitude, and mental strain: Implications for job redesign. Administrative Science Quarterly, 24(2), 285–308. https://doi.org/10.2307/2392498
Kim, B.-J., & Lee, J. (2024). The mental health implications of artificial intelligence adoption: The crucial role of self-efficacy. Humanities and Social Sciences Communications, 11, 1561. https://doi.org/10.1057/s41599-024-04018-w
Loi modernisant le régime de santé et de sécurité du travail, LQ 2021, c 27.
Siegrist, J. (1996). Adverse health effects of high-effort/low-reward conditions. Journal of Occupational Health Psychology, 1(1), 27–41. https://doi.org/10.1037/1076-8998.1.1.27
Vézina, M., Cloutier, E., Stock, S., Lippel, K., Fortin, É., Delisle, A., St-Vincent, M., Funes, A., Duguay, P., Vézina, S., & Prud’homme, P. (2011). Enquête québécoise sur des conditions de travail, d’emploi et de santé et de sécurité du travail (EQCOTESST). Institut de recherche Robert-Sauvé en santé et en sécurité du travail. https://www.irsst.qc.ca/publications-et-outils/publication/i/100584