Lessons for Australia from the EU’s AI Regulations

What Lessons Can Australia Learn from the EU Artificial Intelligence Act?

As the popularity of artificial intelligence (AI) grows, the need to legislate and develop a framework for its use becomes increasingly pressing. The European Union is at the forefront of this movement: political agreement on the Artificial Intelligence Act (AI Act) was reached in late 2023, the Act was published in the Official Journal of the European Union on 12 July 2024, and it came into force on 1 August 2024, with its provisions taking effect in stages.

Prohibitions Under the AI Act

One of the most significant aspects of the AI Act is the set of prohibitions outlined under Article 5, which will take effect on 2 February 2025. These prohibitions cover a range of practices, including:

  • The use of manipulative or deceptive techniques
  • Social scoring
  • Predicting criminality based on profiling
  • Untargeted scraping of facial images from the internet or CCTV footage
  • The use of “real-time” biometric identification systems

The list of prohibited practices will be reviewed annually, and exceptions allow for specific use cases.

Complexity of the Regulation

The regulation is notably complex, as experts in the field have highlighted. A significant portion of it concerns high-risk and prohibited systems. For high-risk systems, technical standards or common specifications are needed so that developers can comply, particularly with respect to safety thresholds and the protection of human rights.

Among the prohibitions, real-time facial recognition is one of the most contentious. Although there is a push to ban its use entirely, scenarios such as locating a lost child or responding to an imminent threat complicate the debate.

Subliminal Exploitation of Vulnerabilities

Another notable prohibition addresses the subliminal exploitation of vulnerabilities. For instance, targeted ads could exploit a person’s gambling tendencies at the moments they are most vulnerable, an example that highlights the challenges of implementing and monitoring the Act.

Exceptions and Workarounds

Despite the prohibitions, there are circumstances in which exceptions may apply. For example, a judge assessing the likelihood that a person will commit another offense could use AI to examine relevant factors directly related to that person’s criminal record; such a system may not be classified as prohibited if it does not involve profiling.

Is There a Need for AI Regulation?

There is a general consensus among advanced liberal democracies that some degree of risk-based regulation of AI is necessary. The insights gained from the EU Act suggest that certain practices, such as using AI to exploit vulnerabilities, should be universally prohibited without debate.

A Stable Regulatory Regime

A stable regulatory regime is crucial for developers. The EU AI Act provides clarity, allowing developers to understand what is expected of them and to access a market of around 450 million people. For Australia, interoperability is vital: its market is smaller, so its rules will need to be compatible with frameworks developed elsewhere.

Penalties for Non-Compliance

It’s important to note that penalties for non-compliance with Article 5 of the EU AI Act will commence on 2 August 2025, with fines reaching up to €35 million or up to 7% of a company’s total worldwide annual turnover, whichever is higher.
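
As a simple illustration of how the “whichever is higher” cap plays out, here is a minimal sketch in Python; the turnover figure and the function name are hypothetical, used only to show the arithmetic behind the €35 million and 7% thresholds in the Act.

  # Penalty cap for breaches of the Article 5 prohibitions:
  # the higher of EUR 35 million or 7% of total worldwide annual turnover.
  def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
      return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

  # A hypothetical company with EUR 1 billion in annual turnover faces a cap
  # of EUR 70 million, since 7% of its turnover exceeds the EUR 35 million floor.
  print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000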

This overview of the EU Artificial Intelligence Act emphasizes the significant lessons Australia can learn as it considers its regulatory approach to AI. The establishment of clear, comprehensive legislation will be essential as the landscape of artificial intelligence continues to evolve.
