Beyond the Prompt – Decoding AI Compliance at Work
Over the past couple of years, there has been an explosion of generative artificial intelligence (AI) use in the workplace. AI can be used in a multitude of ways, including automating routine tasks, analyzing data sets, creating training content, and screening job applications, to name a few. However, the use of AI comes with inherent risks, and employers must take proactive measures to minimize these risks within their organizations.
Regulating AI in the Workplace
Canada is currently in the early phases of regulating AI usage. Previously, the Government of Canada introduced Bill C-27, which included legislation titled the “Artificial Intelligence and Data Act.” This bill was drafted to ensure that AI systems are developed safely and responsibly, primarily through a risk-based approach to address potential harms associated with AI use. However, Bill C-27 died on the Order Paper when Parliament was prorogued in January 2025 and has not been reintroduced.
The majority of provinces and territories in Canada lack comprehensive legislation governing AI use, with a few notable exceptions:
- Ontario: As of January 1, 2026, changes to the Employment Standards Act, 2000 require employers with 25 or more employees to disclose in publicly advertised job postings whether they use AI to screen, assess, or select applicants.
- Quebec: An Act respecting the protection of personal information mandates that individuals must be informed if a decision is made based solely on automated processing of their personal information.
On December 3, 2025, Accessibility Standards Canada announced the publication of the country’s first National Standard focused on accessible AI, titled CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems (AI Standard). The standard aims to ensure that AI use is equitable and accessible, particularly for individuals with disabilities. At present, compliance with the standard is voluntary for employers.
What Employers Need to Know
While AI may seem like a new legal frontier with minimal regulation, existing legal frameworks still apply to its use, including human rights and privacy obligations.
Human Rights Considerations
Many AI systems generate outputs based on patterns in pre-existing data, which can embed historical bias and produce inaccurate or discriminatory results. Employers may be held liable for these outcomes. For example, a screening program might inadvertently eliminate candidates based on a protected ground under human rights legislation, constituting a discriminatory practice.
Privacy Considerations
Employers must also be aware of intellectual property and privacy concerns related to AI use. Inputting information into AI programs can put intellectual property rights at risk, as a provider’s terms of service may grant it broad rights to use or retain submitted content. Additionally, entering employees’ personal or sensitive information into AI systems creates risks of privacy violations and security breaches.
How Employers Can Mitigate Risk
To minimize the risks associated with improper AI use, employers should ensure that employees understand those risks and should implement measures to manage them.
An effective AI Use Policy is a proactive step toward minimizing risk and communicating organizational expectations for AI use. Important provisions to include in an AI policy are:
- Requirements for employees to disclose and receive pre-approval for AI use.
- A clear outline of what the organization considers acceptable AI use.
- Consequences for improper AI usage.
For organizations interested in learning more about AI and risk mitigation, consulting with experienced legal professionals can provide vital support in complying with legal frameworks and establishing effective AI policies.
Note: This article is of a general nature and is not exhaustive of all possible legal rights or remedies. Laws may change over time and should be interpreted in specific contexts; therefore, it is advisable to consult a legal professional for tailored advice.