AI Disclosure in Ontario Job Postings: What Employers Need to Know

On March 21, 2024, Ontario’s Bill 149, the Working for Workers Four Act, 2024, received Royal Assent, introducing significant amendments to the Employment Standards Act, 2000 (ESA). One of the key elements of this legislation is the mandate for employers to disclose the use of artificial intelligence (AI) in publicly advertised job postings. Starting January 1, 2026, employers with twenty-five or more employees must include a statement in a publicly advertised job posting if AI is used to screen, assess, or select applicants for the position.

As employers prepare for this change, it is crucial that they align their hiring practices with broader legal and ethical standards. Transparency and fairness are paramount considerations in the responsible use of AI, and Ontario’s mandatory disclosure requirements are designed with these factors in mind. By prioritizing transparency and fairness, employers can mitigate the risks associated with non-compliance while demonstrating a commitment to ethical hiring practices in an increasingly automated landscape.

AI Disclosure Requirements Under Bill 149

Bill 149 amends the ESA to introduce several new requirements regarding AI, which an accompanying regulation defines broadly as “a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”

Employers must include a statement in every publicly advertised job posting disclosing whether AI is used to screen, assess, or select applicants for the position.

Scope and Exceptions

These rules apply specifically to external job postings accessible to the public. Notably, Bill 149 exempts:

  • Internal postings (for current employees only).
  • General “help wanted” signs.
  • Recruitment for work performed outside Ontario.

Employers with fewer than twenty-five employees are currently not subject to these obligations.

Legal and Human Rights Considerations

Organizations are increasingly using automated decision-making to attract top talent and streamline the hiring process. However, the potential impact of algorithmic bias and the need for transparency remain critical considerations for the fair and responsible use of AI.

The Ontario Human Rights Commission (OHRC) has highlighted the use of AI in employment as a growing risk, particularly due to the potential for indirect discrimination stemming from algorithmic bias. For instance, AI tools trained on historical hiring data could inadvertently replicate unfair requirements and biased language in job advertisements. This could favor certain demographics while excluding others, potentially infringing upon rights under the Ontario Human Rights Code.

Failure to comply with Bill 149’s AI disclosure requirements may not only result in enforcement actions under the ESA but could also lead to human rights complaints if AI-driven decisions yield discriminatory outcomes.

Practical Risk Mitigation Strategies

Although the regulation accompanying Bill 149, released in late 2024, defines AI, it does not specify which tools, systems, or processes fall within that definition and would therefore trigger the disclosure requirement for publicly advertised job postings. This ambiguity over what constitutes AI use in job postings may lead to inconsistent disclosure, increasing the risk of non-compliance.

In light of the AI disclosure requirements taking effect on January 1, 2026, employers should use the intervening months to prepare for compliance by considering the following strategies:

  • Conduct Algorithmic Impact Assessments: Regular audits can help identify biased outputs or data gaps in AI tools. Employers should document AI decision logic and test for discriminatory patterns (a minimal audit sketch follows this list).
  • Develop Transparent Disclosure Practices: Job postings should clearly state how AI is used in the recruitment process, including at which stages it is employed. For example, a posting might state that “artificial intelligence is used to screen and assess applications submitted for this position” (illustrative wording only; the legislation does not prescribe a particular form of statement).
  • Integrate Human Oversight: Employers should consider adopting a “human-in-the-loop” approach in hiring practices to ensure that final hiring decisions are reviewed by HR professionals, limiting overreliance on algorithmic judgment (see the second sketch below).
  • Train HR and Legal Teams: Staff should have a clear understanding of how AI is deployed and be well-versed in their legal disclosure and privacy obligations; a gap analysis can help identify where current practices fall short of the new obligations under Bill 149.
  • Review Vendor Agreements: Employers using third-party AI solutions should confirm that these solutions comply with Bill 149, addressing any compliance gaps through contractual amendments.
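
As a concrete illustration of the first strategy, the following Python sketch computes one common audit metric: each demographic group’s selection rate relative to the highest-rate group. The group labels, sample data, and the 0.8 review threshold (borrowed from the widely cited “four-fifths” rule of thumb) are illustrative assumptions; Bill 149 does not mandate any particular metric.

```python
# Minimal sketch of one check an algorithmic impact assessment might
# include: comparing selection rates across demographic groups. Group
# labels, data, and the 0.8 threshold are illustrative assumptions,
# not requirements drawn from Bill 149.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group."""
    best = max(rates.values())
    if best == 0:
        return {g: 0.0 for g in rates}  # nobody selected; nothing to compare
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
                ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        # 0.8 mirrors the common "four-fifths" rule of thumb
        flag = "flag for review" if ratio < 0.8 else "ok"
        print(f"{group}: rate {rates[group]:.2f}, ratio {ratio:.2f} -> {flag}")
```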
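For the human-oversight strategy, one minimal “human-in-the-loop” pattern is to let the screening tool order the review queue while requiring a named reviewer to record every final decision. The Candidate fields, reviewer name, and log format below are hypothetical, and the sketch assumes an upstream tool has already produced an ai_score.

```python
# Minimal "human-in-the-loop" sketch: an AI score may order the review
# queue, but no candidate is advanced or declined without a named human
# reviewer recording the decision. All names and fields are illustrative.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float  # produced upstream by a screening tool; ordering only

def review_queue(pool):
    """The AI score prioritizes review order but removes no one from it."""
    return sorted(pool, key=lambda c: c.ai_score, reverse=True)

def record_decision(candidate, reviewer, advance):
    """Log the reviewer with each outcome so decisions remain auditable."""
    return {"candidate": candidate.name, "reviewer": reviewer,
            "advance": advance, "ai_score": candidate.ai_score}

if __name__ == "__main__":
    pool = [Candidate("Applicant A", 0.91), Candidate("Applicant B", 0.34)]
    # In practice the advance/decline choice comes from the reviewer;
    # the values below simply stand in for that human input.
    log = [record_decision(c, reviewer="hr_lead", advance=(c.name == "Applicant A"))
           for c in review_queue(pool)]
    for entry in log:
        print(entry)
```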

Key Takeaways

Ontario’s Bill 149 introduces new AI disclosure requirements that reflect the growing integration of AI in hiring practices. As these regulations come into effect, businesses must begin evaluating their use of AI in recruitment and take proactive steps to align with evolving legal expectations. Emphasizing transparency and fairness is not merely a best practice but now carries legal significance across employment, human rights, and privacy frameworks. This is particularly critical in the context of hiring, where the implications of AI decisions can significantly impact individuals. Early preparation can assist organizations in navigating these changes responsibly and effectively.
