AI Regulations Transforming the Automotive Landscape

AI Act and the Automotive Industry – Where Does the Road Lead?

The AI Act introduces a transformative regulatory framework with far-reaching consequences for the integration of artificial intelligence in the automotive industry. As AI technologies advance, they enhance the capabilities of automated and assisted driving systems, but they also introduce risks to individual rights and to safety on the road.

Current Legal Framework for AI in the Automotive Industry

Autonomous and automated vehicles (AVs) in the EU are governed by various regulatory frameworks. Key regulations include the Type-Approval Framework Regulation (TAFR), which focuses on automotive standards, and the General Product Safety Regulation (GPSR), which addresses product safety concerns.

Under TAFR, vehicles must undergo a type-approval process to ensure compliance before market entry. Additionally, the United Nations Economic Commission for Europe (UNECE) standards are integrated into EU law to enhance safety and compliance.

While existing regulations address traditional safety issues, they do not explicitly cover AI-specific risks. However, AI use in the automotive sector also falls under regulations such as the General Data Protection Regulation (GDPR), particularly when personal data is processed by connected vehicles.

What Does the AI Act Regulate?

The AI Act entered into force on August 1, 2024, and introduces a risk-based approach to regulation, with its obligations applying in stages. AI systems classified as posing unacceptable risks, such as those used for social scoring or real-time biometric surveillance in public spaces, are prohibited outright.

High-risk AI systems (HRAI) must adhere to stringent requirements, including:

  • Data quality and governance
  • Transparency
  • Human oversight
  • Robustness, accuracy, and cybersecurity

These systems must undergo mandatory conformity assessments before they can be placed on the market, with ongoing monitoring throughout their lifecycle. Applications in critical infrastructure, such as energy supply, and in sensitive areas such as the administration of justice exemplify the use cases where the HRAI rules are crucial.

AI systems posing only minimal risk are governed by general guidelines and voluntary best practices, without additional legal requirements.
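
Taken together, these tiers lend themselves to a simple illustration. Below is a minimal Python sketch of how a provider might encode the Act's risk tiers and gate market entry on a completed conformity assessment; the tier names, class fields, and the market-entry check are illustrative assumptions made for this article, not terminology or logic prescribed by the Act.

    from dataclasses import dataclass
    from enum import Enum

    # Illustrative risk tiers loosely mirroring the AI Act's risk-based approach.
    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # prohibited outright
        HIGH = "high"                   # conformity assessment plus lifecycle monitoring
        LIMITED = "limited"             # transparency obligations
        MINIMAL = "minimal"             # guidelines and best practices only

    @dataclass
    class AISystem:
        name: str
        tier: RiskTier
        conformity_assessed: bool = False  # hypothetical flag for a completed assessment

    def may_enter_eu_market(system: AISystem) -> bool:
        """Very simplified market-entry gate based on risk tier (illustrative only)."""
        if system.tier is RiskTier.UNACCEPTABLE:
            return False
        if system.tier is RiskTier.HIGH:
            return system.conformity_assessed
        return True  # limited or minimal risk: no prior assessment modelled here

    # Example: a highway pilot function treated as high-risk.
    pilot = AISystem("highway_pilot", RiskTier.HIGH, conformity_assessed=True)
    print(may_enter_eu_market(pilot))  # True once the assessment is complete

In practice, the high-risk branch would also have to account for ongoing post-market monitoring, but even this toy gate shows how the risk classification drives every downstream obligation.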

Impact on the Use of AI in the Automotive Industry

AI is integral to many products and services within the automotive sector, particularly in autonomous and driver-assistance systems. It enhances vehicle intelligence by analyzing vast amounts of data to make informed driving decisions and monitor driver fitness and alertness.

Given the critical nature of in-vehicle decision-making, most AI-driven autonomous systems are expected to be classified as HRAI. However, non-safety-related AI applications may be classified as low-risk AI (LRAI).

The AI Act mandates that certain AI systems be classified as HRAI when they are governed by specific harmonisation legislation, such as TAFR and GPSR. This ensures that automotive-specific AI use remains primarily regulated by existing legislation, while the HRAI provisions serve as overarching standards.

Compliance Requirements for AI Use Cases

While specifics regarding the revised TAFR and GPSR regulations are still pending, the HRAI and LRAI provisions in the AI Act provide an early indication of the regulatory demands facing the industry.

Under the high-risk classification, businesses must establish extensive documentation and monitoring processes, including:

  • Rigorous testing to identify and mitigate potential biases
  • Transparency logs for decision traceability (see the sketch below)
  • Comprehensive quality assurance throughout the vehicle’s lifecycle

Dedicated teams, including external auditors, will be necessary to conduct risk assessments and ensure compliance, making adherence to the AI Act technically complex and resource-intensive.
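
As a rough illustration of the decision-traceability point, the sketch below logs each automated driving decision together with its inputs and the model version in an append-only record; the file format, field names, and helper function are assumptions made for this example, not a format mandated by the AI Act or TAFR.

    import json
    import time
    import uuid
    from pathlib import Path

    LOG_PATH = Path("decision_trace.jsonl")  # hypothetical append-only trace file

    def log_decision(model_version: str, inputs: dict, decision: str, confidence: float) -> str:
        """Append one traceable record per automated driving decision (illustrative only)."""
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,   # ties the decision to a specific model build
            "inputs": inputs,                 # summary of the sensor data behind the decision
            "decision": decision,
            "confidence": confidence,
        }
        with LOG_PATH.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return record["id"]

    # Example: record a lane-change decision taken by a driver-assistance function.
    trace_id = log_decision(
        model_version="lane_assist-2.3.1",
        inputs={"speed_kmh": 112, "lead_vehicle_distance_m": 34.5},
        decision="delay_lane_change",
        confidence=0.93,
    )
    print(f"logged decision {trace_id}")

An append-only, versioned record of this kind is one way to support the traceability and lifecycle quality-assurance expectations outlined above, although the exact technical format will ultimately depend on the revised sectoral rules.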

Impact on Non-EU Companies

The AI Act applies broadly to AI systems introduced or used in the EU or European Economic Area (EEA), regardless of the provider’s location. This includes non-EU companies whose AI outputs are utilized within these regions.

For instance, a U.S.-based company must comply with the EU’s stringent high-risk AI requirements while also adhering to potentially less stringent regulations in its home country. This extraterritorial scope necessitates that automotive manufacturers and service providers entering the EU market address overlapping regulatory requirements.

Many businesses may opt to adopt the EU’s high standards globally, potentially leading to a harmonization of AI safety standards worldwide as other major markets align with EU principles.
