Rethinking High-Risk Labels for Autonomous Vehicles

Why Autonomous Vehicles Should Not Be Labeled ‘High-Risk AI’

Even though fully driverless robotaxis are not yet commonplace, supervised autonomous vehicles have become a familiar sight on roads, particularly in South Korea. Tesla, a U.S.-based company, dominates the global market for autonomous vehicles, generating profits while gathering extensive driving data to improve its AI systems. Notably, Tesla plans to offer its autonomous driving capability as a monthly subscription starting in mid-February.

Korea’s leadership in industrial robot density owes much to its robust electronics and automotive sectors. No other country deploys more robots per manufacturing worker, and automation is deeply woven into automobile production. The automotive industry is now shifting to a model in which AI not only builds vehicles but also drives them, while humans pay to use that AI. Tesla has redefined the automobile for the AI era as software delivered as a service.

Korea’s AI Basic Act

Korea’s AI Basic Act, in force since January 22, has set a global precedent for regulating high-risk AI operators. Although penalties for violations have been deferred, the law assigns AI providers the responsibility of managing risks, mirroring rules proposed in the European Union (EU). The EU, however, has recently postponed its own high-risk AI obligations, fearing that a regulation-centered approach may hinder its global competitiveness.

The EU’s initial draft sought to manage the risks posed by high-risk AI systems, with a focus on protecting human life and fundamental rights. That plan, however, was complicated by the emergence of general-purpose AI systems such as ChatGPT, which blur regulatory definitions and objectives.

The Regulatory Dilemma

The rapid evolution of AI, especially since the introduction of ChatGPT, has dramatically changed how AI is embedded in daily life. Korea’s regulatory framework for high-risk AI, however, remains closely aligned with the EU’s 2021 draft, making Korea the first country to fully implement such measures and placing it under international scrutiny.

Under Korea’s AI law, the Level 4 autonomous vehicles now being piloted, as well as robotaxis already commercialized abroad, would fall under high-risk regulation. While the intent of classifying autonomous vehicles as high-risk is to protect human safety, the label risks undermining national competitiveness. The inconsistency is especially stark when humanoid robots are celebrated as innovative while autonomous vehicles are relegated to a high-risk category.

Competitive Landscape

Tesla continues to sell vehicles at competitive prices while amassing driver behavior data around the world at no extra cost, which accelerates its technological progress. To remain competitive against such companies, Korea must reassess its definition of high-risk AI and adjust its regulatory approach to better reflect domestic realities.

One alternative is a system similar to Britain’s, in which insurers compensate victims of autonomous vehicle accidents and later seek recourse from manufacturers, easing the immediate liability burden on automakers. Moreover, policies that promote the development of high-risk AI applications could prove more beneficial than early, restrictive regulation.

In conclusion, while safety and innovation must coexist in the realm of autonomous vehicles, labeling them as high-risk AI without careful consideration may hinder progress and competitiveness in the rapidly evolving technological landscape.
