AI Disclosure in Ontario Job Postings: What Employers Need to Know

On March 21, 2024, Ontario’s Bill 149, the Working for Workers Four Act, 2024, received Royal Assent, introducing significant amendments to the Employment Standards Act, 2000 (ESA). One of its key elements is a mandate for employers to disclose the use of artificial intelligence (AI) in publicly advertised job postings. Starting January 1, 2026, employers with twenty-five or more employees must include a statement in any publicly advertised job posting if AI is used to screen, assess, or select applicants.

As employers prepare for this change, it is crucial that they align their hiring practices with broader legal and ethical standards. Transparency and fairness are paramount considerations in the responsible use of AI, and Ontario’s mandatory disclosure requirements are designed with these factors in mind. By prioritizing transparency and fairness, employers can mitigate the risks associated with non-compliance while demonstrating a commitment to ethical hiring practices in an increasingly automated landscape.

AI Disclosure Requirements Under Bill 149

Bill 149 amends the ESA by introducing several new requirements regarding AI, which is defined broadly as “a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”

Employers must disclose, in any publicly advertised job posting, whether AI is used to screen, assess, or select applicants for the position.

Scope and Exceptions

These rules apply specifically to external job postings accessible to the public. Notably, Bill 149 exempts:

  • Internal postings (for current employees only).
  • General “help wanted” signs.
  • Recruitment for work performed outside Ontario.

Employers with fewer than twenty-five employees are currently not subject to these obligations.

Legal and Human Rights Considerations

Automated decision-making is increasingly being utilized by organizations to attract top talent and streamline the hiring process. However, the potential impact of algorithmic bias and the necessity for transparency remain critical considerations for the fair and responsible use of AI.

The Ontario Human Rights Commission (OHRC) has highlighted the use of AI in employment as a growing risk, particularly due to the potential for indirect discrimination stemming from algorithmic bias. For instance, AI tools trained on historical hiring data could inadvertently replicate unfair requirements and biased language in job advertisements. This could favor certain demographics while excluding others, potentially infringing upon rights under the Ontario Human Rights Code.

Failure to comply with Bill 149’s AI disclosure requirements may not only result in enforcement actions under the ESA but could also lead to human rights complaints if AI-driven decisions yield discriminatory outcomes.

Practical Risk Mitigation Strategies

Although a regulation accompanying Bill 149 was released in late 2024, clarifying the definition of AI, it does not specify which tools, systems, or processes fall under this definition and would therefore trigger the disclosure requirement for publicly advertised job postings. The current ambiguity regarding what constitutes AI use in job postings may lead to inconsistent reporting, increasing the risk of non-compliance.

In light of the AI disclosure requirements taking effect on January 1, 2026, employers should use the intervening months to prepare for compliance by considering the following strategies:

  • Conduct Algorithmic Impact Assessments: Regular audits can help identify biased outputs or data gaps in AI tools. Employers should document AI decision logic and test for discriminatory patterns.
  • Develop Transparent Disclosure Practices: Job postings should clearly state whether and how AI is used to screen, assess, or select applicants, including at which stages of the recruitment process it is employed.
  • Integrate Human Oversight: Employers should consider adopting a “human-in-the-loop” approach in hiring practices to ensure that final hiring decisions are reviewed by HR professionals, limiting overreliance on algorithmic judgment.
  • Train HR and Legal Teams: Staff should have a clear understanding of how AI is deployed and be well-versed in their legal disclosure and privacy obligations; a gap analysis can help identify where current practices fall short of Bill 149’s new requirements.
  • Review Vendor Agreements: Employers using third-party AI solutions should confirm that these solutions comply with Bill 149, addressing any compliance gaps through contractual amendments.
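The algorithmic impact assessment step above can be sketched as a simple disparate-impact check, assuming the AI tool can report group-level selection counts. The group names and counts below are illustrative, and the four-fifths (80%) ratio is a widely used auditing benchmark, not a threshold prescribed by Bill 149 or the ESA:

```python
# Hypothetical sketch of a disparate-impact audit for an AI screening tool.
# Inputs are (selected, total) counts per applicant group; the 4/5ths rule
# flags any group whose selection rate falls below 80% of the highest rate.

def selection_rates(outcomes):
    """Compute the selection rate for each group from (selected, total) counts."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest-rate group.

    A ratio below 0.8 is a common flag for potential disparate impact and
    a prompt for closer human review of the tool's decision logic.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Illustrative counts: (candidates advanced by the AI screen, candidates screened)
screened = {
    "group_a": (45, 100),
    "group_b": (30, 100),
}

ratios = adverse_impact_ratios(screened)
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
print(ratios)   # group_b's ratio is 0.30 / 0.45, well below the 0.8 benchmark
print(flagged)  # ['group_b']
```

A check like this only surfaces statistical disparities; documenting the findings and routing flagged results to human reviewers remains essential, consistent with the human-in-the-loop approach above.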

Key Takeaways

Ontario’s Bill 149 introduces new AI disclosure requirements that reflect the growing integration of AI in hiring practices. As these regulations come into effect, businesses must begin evaluating their use of AI in recruitment and take proactive steps to align with evolving legal expectations. Emphasizing transparency and fairness is not merely a best practice but now carries legal significance across employment, human rights, and privacy frameworks. This is particularly critical in the context of hiring, where the implications of AI decisions can significantly impact individuals. Early preparation can assist organizations in navigating these changes responsibly and effectively.
