AI Regulation in Australian Workplaces: Balancing Innovation and Employee Rights

As artificial intelligence (AI) and automated decision-making (ADM) technologies rapidly transform the world of work, Australian employers face a complex and shifting regulatory environment.

Employer and Union Perspectives

Employers argue that AI creates opportunities for increased productivity, efficiency, and innovation. Unions, for their part, highlight risks ranging from potential discrimination in hiring to concerns over workplace surveillance and job displacement.

This article considers employers’ existing obligations regarding the use of AI and ADM technologies in the workplace and how they should navigate the evolving regulatory environment moving forward.

Different Perspectives on AI

AI was a hot topic leading into the Productivity Roundtables in August 2025, where union representatives pushed for a strong regulatory framework and greater worker voice in AI adoption. If union proposals are adopted, they may impede employers’ ability to implement productivity measures.

While unions continue to push for reforms, the Australian Government has not yet committed to a position, instead announcing a regulatory gap analysis to determine whether legislative change is necessary, including a review of workplace laws. This forms part of the government’s National AI Plan, currently under consultation.

The outcome of this review and the National AI Plan is expected by year-end. Meanwhile, key government figures have endorsed union calls for greater worker voice in AI adoption, marking this as a significant emerging industrial relations issue.

The Existing Regulatory Framework

Despite union concerns, Australia’s current workplace laws already provide protections relevant to AI and ADM use.

Unfair dismissal laws, anti-discrimination statutes, adverse action provisions, and work health and safety (WHS) legislation all safeguard employees. For instance, if an algorithmic decision results in termination without human oversight, the employer remains liable under unfair dismissal laws. The Fair Work Commission (FWC) requires a valid reason and a fair process for dismissal.

Discrimination law is more nuanced. While intent is not required for a finding of discrimination, the law generally contemplates human actions. This raises questions about liability when decisions are made solely by algorithms, such as where ADM is used to vet candidates who may be rejected for discriminatory reasons inadvertently embedded in the technology.

This may appear to be a “regulatory gap,” but the general protections under the Fair Work Act 2009 arguably cover cases where candidates are rejected for discriminatory reasons, regardless of whether the decision was made by an algorithm or a human. Employers relying exclusively on ADM for recruitment would face a challenging burden of proof.

A Patchwork of Surveillance, Health, and Safety Laws

Workplace surveillance is governed by various state and territory laws along with WHS obligations. Though these laws may need modernization to keep up with technology, they do provide protections against unreasonable monitoring and data collection.

Consultation requirements are another important area. Most employees are covered by modern awards or enterprise agreements mandating consultation when major changes, such as new technologies, are likely to significantly affect employees.

These obligations encompass AI and ADM, ensuring employees and their representatives are involved in discussions about technological changes.

Recent reports and union submissions claim that consultation duties are sometimes “obviated by employers” and that AI is sometimes deployed without transparency, creating uncertainty about whether its introduction triggers formal consultation. Although evidence is limited, this argument is gaining traction in Federal Cabinet.

Federal Minister for Industry, Innovation and Science, Senator Tim Ayres, the government’s AI lead, has publicly endorsed greater union voice in AI adoption. Similarly, Assistant Treasury Minister Dr. Andrew Leigh MP stated the union movement argued at the Productivity Roundtable that “workers must be partners in shaping how AI is deployed, not passive recipients of decisions made in corporate boardrooms.”

Where to From Here?

Recent developments show specific AI workplace regulation is already emerging. The statutory Digital Labour Platform Deactivation Code for gig economy platforms and proposed amendments to the Workers Compensation Act 1987 (NSW) signal increased oversight of automated systems.

The NSW workers compensation amendments before Parliament are particularly novel and may serve as a blueprint for other jurisdictions. They link WHS risks with workplace surveillance and “discriminatory” decision-making, granting union officials specific rights to inspect “digital work systems” to investigate suspected breaches.

These reforms aim to ensure human oversight in key decisions, prevent unreasonable performance metrics and surveillance, and empower unions.

Federally, the Australian Council of Trade Unions (ACTU) advocates for mandatory “AI Implementation Agreements” requiring employers to consult staff before introducing AI technologies. These agreements would guarantee job security, skills development, retraining, and transparency.

Other union proposals include rights to refuse AI use in some cases, mandated training, surveillance law reforms, and expanded bargaining rights related to AI adoption.

Responding to the Productivity Commission’s interim AI report, the ACTU criticized its “let it rip” stance and called for a “dedicated AI Act and a well-resourced regulator,” opposing copyright changes that would diminish workers’ rights or allow their work to be used to train AI without consent.

Though the government appears to be moving away from a dedicated AI Act, recent comments from key officials indicate employers should prepare for legislative changes giving workers and unions a greater voice in AI adoption.

Proactive Steps to Using AI in the Workplace

In this evolving context, employers should proactively manage legal compliance and workforce relations by:

  • Ensuring human oversight: Maintain human involvement in significant employment decisions using AI or ADM, especially in hiring, firing, promotion, and performance management.
  • Conducting AI risk assessments: Evaluate bias, privacy, WHS, and discrimination risks before implementation.
  • Consulting with employees: Engage in timely, meaningful consultation with employees and representatives when introducing new technologies, as required by awards or agreements.
  • Developing clear policies: Establish and communicate policies on AI use, workplace surveillance, and data handling to ensure transparency and trust.
  • Investing in skills development: Provide upskilling and retraining to help employees safely and effectively use AI, adapt to technological change, and maintain workforce capability.
  • Monitoring legal developments: Track federal and state reforms and emerging best practices to ensure ongoing compliance and readiness.

Preparing for the Future of Work

With government, unions, and business groups all engaged in shaping the future of AI regulation, employers must understand the current legal framework and anticipate future reforms.

Those who act now will be better prepared to meet upcoming reforms and maintain the trust crucial for successful AI adoption.
