AI Disclosure in Ontario Job Postings: What Employers Need to Know

On March 21, 2024, Ontario’s Bill 149, the Working for Workers Four Act, 2024, received Royal Assent, introducing significant amendments to the Employment Standards Act, 2000 (ESA). One of the key elements of this legislation is the mandate for employers to disclose the use of artificial intelligence (AI) in publicly advertised job postings. Starting January 1, 2026, employers with twenty-five or more employees must include a statement in job postings if AI is used to screen, assess, or select applicants.

As employers prepare for this change, it is crucial that they align their hiring practices with broader legal and ethical standards. Transparency and fairness are paramount considerations in the responsible use of AI, and Ontario’s mandatory disclosure requirements are designed with these factors in mind. By prioritizing transparency and fairness, employers can mitigate the risks associated with non-compliance while demonstrating a commitment to ethical hiring practices in an increasingly automated landscape.

AI Disclosure Requirements Under Bill 149

Bill 149 amends the ESA by introducing several new requirements regarding AI, which is defined broadly as “a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”

Under these provisions, employers that use AI to screen, assess, or select applicants must disclose that use in any publicly advertised job posting.

Scope and Exceptions

These rules apply specifically to external job postings accessible to the public. Notably, Bill 149 exempts:

  • Internal postings (for current employees only).
  • General “help wanted” signs.
  • Recruitment for work performed outside Ontario.

Employers with fewer than twenty-five employees are currently not subject to these obligations.

Legal and Human Rights Considerations

Organizations are increasingly turning to automated decision-making to attract top talent and streamline the hiring process. However, the potential for algorithmic bias and the need for transparency remain critical considerations in the fair and responsible use of AI.

The Ontario Human Rights Commission (OHRC) has highlighted the use of AI in employment as a growing risk, particularly due to the potential for indirect discrimination stemming from algorithmic bias. For instance, AI tools trained on historical hiring data could inadvertently replicate unfair requirements and biased language in job advertisements. This could favor certain demographics while excluding others, potentially infringing upon rights under the Ontario Human Rights Code.

Failure to comply with Bill 149’s AI disclosure requirements may not only result in enforcement actions under the ESA but could also lead to human rights complaints if AI-driven decisions yield discriminatory outcomes.

Practical Risk Mitigation Strategies

A regulation accompanying Bill 149, released in late 2024, clarifies the definition of AI but does not specify which tools, systems, or processes fall within that definition and would therefore trigger the disclosure requirement for publicly advertised job postings. This ambiguity about what constitutes AI use in hiring may lead to inconsistent disclosure and increases the risk of non-compliance.

In light of the AI disclosure requirements taking effect on January 1, 2026, employers should use the intervening months to prepare for compliance by considering the following strategies:

  • Conduct Algorithmic Impact Assessments: Regular audits can help identify biased outputs or data gaps in AI tools. Employers should document AI decision logic and test for discriminatory patterns (see the illustrative sketch following this list).
  • Develop Transparent Disclosure Practices: Job postings should clearly state how AI is used in the recruitment process, including at which stages it is employed; for example, a posting might note that AI is used to screen and assess applications for the position.
  • Integrate Human Oversight: Employers should consider adopting a “human-in-the-loop” approach in hiring practices to ensure that final hiring decisions are reviewed by HR professionals, limiting overreliance on algorithmic judgment.
  • Train HR and Legal Teams: Staff should have a clear understanding of how AI is deployed and be well-versed in their legal disclosure and privacy obligations; a gap analysis can help identify where current practices fall short of the new obligations under Bill 149.
  • Review Vendor Agreements: Employers using third-party AI solutions should confirm that these solutions comply with Bill 149, addressing any compliance gaps through contractual amendments.
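
To make the first strategy more concrete, the sketch below shows one simple metric an algorithmic impact assessment might compute: the ratio of each candidate group’s AI screen-in rate to the highest group’s rate. Everything in it is a hypothetical illustration; the group labels and counts are invented, and the 0.8 “four-fifths” threshold is a heuristic borrowed from U.S. employment-testing guidance, not an Ontario legal standard.

```python
# Illustrative only: a minimal adverse-impact check on AI screening outcomes.
# All data below is hypothetical, and the 0.8 "four-fifths" threshold is a
# common heuristic from U.S. employment-testing guidance, used here purely
# as an example metric rather than an Ontario legal standard.

from dataclasses import dataclass

@dataclass
class GroupOutcome:
    group: str        # demographic group label (hypothetical)
    applicants: int   # candidates the AI tool evaluated
    advanced: int     # candidates the AI tool screened in

def selection_rate(g: GroupOutcome) -> float:
    return g.advanced / g.applicants if g.applicants else 0.0

def adverse_impact_ratios(groups: list[GroupOutcome]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(selection_rate(g) for g in groups)
    if best == 0:
        return {g.group: 0.0 for g in groups}
    return {g.group: selection_rate(g) / best for g in groups}

if __name__ == "__main__":
    # Hypothetical audit data exported from an AI screening tool.
    outcomes = [
        GroupOutcome("Group A", applicants=200, advanced=80),  # 40% screened in
        GroupOutcome("Group B", applicants=180, advanced=45),  # 25% screened in
    ]
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths heuristic
        print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

Running the sketch flags Group B (impact ratio 0.62) for review. A real audit would go much further, examining intersectional groups, the tool’s training data, and the language of the postings themselves, but even a basic report like this gives HR and legal teams a concrete artifact to review and retain.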

Key Takeaways

Ontario’s Bill 149 introduces new AI disclosure requirements that reflect the growing integration of AI in hiring practices. As these regulations come into effect, businesses must begin evaluating their use of AI in recruitment and take proactive steps to align with evolving legal expectations. Emphasizing transparency and fairness is not merely a best practice but now carries legal significance across employment, human rights, and privacy frameworks. This is particularly critical in the context of hiring, where the implications of AI decisions can significantly impact individuals. Early preparation can assist organizations in navigating these changes responsibly and effectively.
