Lessons for Australia from the EU’s AI Regulations

What Lessons Can Australia Learn from the EU Artificial Intelligence Act?

As the popularity of artificial intelligence (AI) grows, the need to legislate and develop a framework for its use becomes pressing. The European Union is at the forefront of this movement: political agreement on the Artificial Intelligence Act (AI Act) was reached in late 2023, and the Act was formally adopted in 2024. It was published in the Official Journal of the European Union on 12 July 2024 and came into force on 1 August 2024, with its provisions taking effect in stages.

Prohibitions Under the AI Act

One of the most significant aspects of the AI Act is the set of prohibitions outlined in Article Five, which will take effect on 2 February 2025. These cover a range of practices, including:

  • Using manipulative or deceptive techniques
  • Using AI for social scoring
  • Predicting criminality based solely on profiling
  • Untargeted scraping of facial images from the internet or CCTV footage
  • Using “real-time” remote biometric identification systems in publicly accessible spaces

While the list of prohibited practices will be reviewed annually, there are exceptions that allow for narrowly defined use cases.
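
To make the scope of these categories concrete, the sketch below shows how a development team might record them as a simple checklist and flag a proposed use case for legal review. The category labels and the screen_use_case helper are illustrative assumptions, not terms from the Act or any official tooling.

    # Illustrative only: a hypothetical pre-screening checklist based on the
    # Article Five categories summarised above. Real compliance work requires
    # legal review against the text of the Act and its exceptions.
    PROHIBITED_PRACTICES = {
        "manipulative_or_deceptive_techniques",
        "social_scoring",
        "criminality_prediction_from_profiling_alone",
        "untargeted_facial_image_scraping",
        "realtime_remote_biometric_identification",
    }

    def screen_use_case(declared_practices: set[str]) -> str:
        """Flag a proposed use case that touches any prohibited category."""
        hits = declared_practices & PROHIBITED_PRACTICES
        if hits:
            # Some categories carry exceptions, so escalate rather than decide here.
            return "Escalate to legal review: " + ", ".join(sorted(hits))
        return "No Article Five category declared; continue normal risk classification."

    print(screen_use_case({"social_scoring"}))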

Complexity of the Regulation

The regulation is notably complex, as experts in the field have highlighted. A significant portion of it deals with high-risk systems and prohibited systems. For high-risk systems, technical standards or common specifications are needed so that developers can comply, particularly with respect to safety thresholds and the protection of human rights.
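
As a small illustration of why such specifications matter, the fragment below assumes a standard that publishes numeric safety thresholds and checks a system's evaluation results against them. The metric names, limit values, and meets_thresholds helper are invented for the example; the harmonised standards that will actually support the Act are still being developed.

    # Hypothetical example: comparing evaluation results against limits that a
    # harmonised standard or common specification might define. All metric
    # names and numbers here are assumptions made for illustration.
    REQUIRED_LIMITS = {
        "false_positive_rate": 0.05,      # must not exceed 5%
        "demographic_parity_gap": 0.02,   # must not exceed 2 percentage points
    }

    def meets_thresholds(results: dict[str, float]) -> bool:
        """Return True only if every required metric is within its limit."""
        return all(
            results.get(metric, float("inf")) <= limit
            for metric, limit in REQUIRED_LIMITS.items()
        )

    print(meets_thresholds({"false_positive_rate": 0.03, "demographic_parity_gap": 0.01}))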

Among the prohibitions, real-time facial recognition is one of the most complicated issues. Although there is a push to prohibit its use entirely, scenarios such as locating a lost child or responding to an imminent threat complicate the debate.

Subliminal Exploitation of Vulnerabilities

Another notable prohibition addresses the subliminal exploitation of vulnerabilities. Targeted ads, for example, could exploit a person’s gambling tendencies at the moment they are most vulnerable, which highlights how challenging the Act will be to implement and monitor.

Exceptions and Workarounds

Despite the prohibitions, there are circumstances in which exceptions may apply. For example, a judge assessing the likelihood that a person will reoffend could use AI to examine objective factors directly related to that person’s criminal record; such a system may fall outside the prohibition because it does not rely on profiling.
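
A minimal sketch of the distinction this example turns on, assuming hypothetical feature names: inputs derived from profiling are dropped, while objective facts directly related to a person’s criminal record are kept.

    # Illustrative only: separating profiling-style inputs (which would push a
    # reoffending-assessment tool toward the prohibited category) from objective
    # facts about a person's criminal record. Feature names are assumptions.
    PROFILING_FEATURES = {"personality_score", "social_media_activity", "neighbourhood_risk_index"}
    OBJECTIVE_RECORD_FEATURES = {"prior_convictions", "offence_type", "sentence_history"}

    def filter_features(requested: set[str]) -> set[str]:
        """Keep only objective record facts; drop profiling-derived inputs."""
        return requested & OBJECTIVE_RECORD_FEATURES

    print(filter_features({"prior_convictions", "personality_score"}))
    # -> {'prior_convictions'}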

Is There a Need for AI Regulation?

There is a general consensus among advanced liberal democracies that some degree of risk-based regulation of AI is necessary. The experience with the EU Act suggests that certain practices, such as using AI to exploit people’s vulnerabilities, should be prohibited outright, with little room for debate.

A Stable Regulatory Regime

A stable regulatory regime is crucial for developers. The EU AI Act provides clarity, allowing developers to understand what is expected of them and to access a market of around 450 million people. For Australia, interoperability is vital: its market is smaller, so its rules will need to be compatible with regimes elsewhere.

Penalties for Non-Compliance

It’s important to note that penalties for non-compliance with Article Five of the EU AI Act will apply from 2 August 2025, with fines of up to €35 million or 7% of a company’s total worldwide annual turnover, whichever is higher.
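
The “whichever is higher” rule is straightforward to illustrate with a short calculation; the turnover figure below is made up for the example.

    # The maximum fine for an Article Five breach is the greater of a fixed
    # amount and a share of total worldwide annual turnover. The turnover
    # figure used below is illustrative only.
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07  # 7% of total worldwide annual turnover

    def max_fine(annual_turnover_eur: float) -> float:
        """Return the applicable maximum: whichever amount is higher."""
        return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

    print(f"{max_fine(2_000_000_000):,.0f}")  # a firm with EUR 2bn turnover: 140,000,000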

This overview of the EU Artificial Intelligence Act highlights the lessons Australia can draw as it considers its own regulatory approach to AI. Establishing clear, comprehensive legislation will be essential as the artificial intelligence landscape continues to evolve.

