Assessing the High-Risk Landscape of AI in Insurtech

Is Insurtech a High-Risk Application of AI?

The landscape of Insurtech is rapidly evolving, particularly with the integration of artificial intelligence (AI) into various processes. However, as AI becomes more prevalent in the insurance sector, the question arises: Is Insurtech considered a high-risk application of AI?

Understanding AI Regulations in Insurtech

Numerous AI regulations may apply to companies operating within the Insurtech space, but these laws are often inconsistent in the obligations they impose. Because different regulations emphasize different constructs, the obligations that apply depend on a company's particular activities. For instance, jurisdictions such as Colorado and the European Union have enacted AI laws specifically targeting "high-risk AI systems," imposing heightened burdens on companies deploying AI models that fall into this category.

What is a “High-Risk AI System”?

The definition of a “high-risk AI system” varies across jurisdictions. In Europe, the Artificial Intelligence Act (EU AI Act) adopts a risk-based approach to compliance obligations. The more risk associated with an AI deployment, the stricter the requirements imposed on the organization. According to Article 6 of the EU AI Act, an AI system is deemed “high risk” if it meets specific conditions outlined in subsection (1) or is included in the list of high-risk AI systems in Annex III. This includes AI systems that handle biometric data, evaluate eligibility for benefits and services, assess creditworthiness, or are involved in risk evaluation and pricing related to life or health insurance.

Similarly, the Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, also takes a risk-based approach, with a particular focus on the potential for "algorithmic discrimination." Under the CAIA, a "high-risk" AI system is one that makes, or is a substantial factor in making, a "consequential decision," which in the insurance context includes decisions that materially affect the provision or cost of insurance.

Regulatory Trends in Insurtech

Both the CAIA and the rejected Virginia AI Bill specify that an AI system making consequential decisions regarding insurance is classified as “high-risk.” This suggests a regulatory trend toward categorizing AI use in Insurtech as high-risk. However, it does not imply that all AI implementations in the Insurtech sector will automatically be labeled as high-risk. For example, under the CAIA, an AI system is only classified as high-risk if it actively makes or is a significant factor in making a consequential decision.

The failed Virginia AI Bill defined “high-risk AI” narrowly, focusing on systems intended for autonomous decision-making. Thus, the scope of regulated AI usage varies based on applicable laws and specific business applications. While various AI use cases in insurance could lead to consequential decisions affecting insured individuals, such as enhancing underwriting and fraud detection, other internal uses, like improving marketing strategies or streamlining client onboarding, may not fall under high-risk classifications.
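As a rough illustration only (not legal advice), the CAIA-style distinction drawn above can be sketched as a predicate: a system is treated as high-risk only when it makes, or is a substantial factor in making, a consequential decision. The `AIUseCase` type and the example use cases below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """Hypothetical description of how an insurer uses an AI system."""
    name: str
    makes_consequential_decision: bool  # e.g., denies coverage or sets price
    substantial_factor: bool            # materially influences such a decision


def is_high_risk_caia_style(use: AIUseCase) -> bool:
    # Under a CAIA-style rule, high-risk status attaches only when the
    # system makes, or is a substantial factor in making, a consequential
    # decision (one materially affecting the provision or cost of insurance).
    return use.makes_consequential_decision or use.substantial_factor


underwriting = AIUseCase("automated underwriting", True, True)
marketing = AIUseCase("marketing copy generation", False, False)

print(is_high_risk_caia_style(underwriting))  # True
print(is_high_risk_caia_style(marketing))     # False
```

The point the sketch makes is the same one the statute makes: classification turns on the decision the system influences, not on the mere presence of AI.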

Implications of the EU AI Act

Though the EU AI Act does not categorize Insurtech as high-risk wholesale, it provides a framework for analysis. AI systems used in the EU to assess creditworthiness or to perform risk assessment and pricing for life or health insurance likely meet the high-risk threshold under Annex III. That said, Article 6(3) of the EU AI Act provides a derogation: a system falling within an Annex III category is not considered high-risk if the provider assesses and documents that it does not pose a significant risk of harm to individuals' health, safety, or fundamental rights.

In other words, a favorable and documented risk assessment can keep a system out of the high-risk category despite its Annex III listing.
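The two-step analysis described above (an Annex III listing, followed by a significant-risk assessment) can be sketched as follows. This is a simplified illustration under stated assumptions, not a compliance tool; the category names are paraphrases of Annex III entries, not statutory language.

```python
# Paraphrased examples of Annex III categories relevant to insurance
# (illustrative labels, not the official text of the EU AI Act).
ANNEX_III_EXAMPLES = {
    "biometric_identification",
    "essential_services_eligibility",
    "creditworthiness_assessment",
    "life_health_insurance_risk_pricing",
}


def is_high_risk_eu_style(category: str, poses_significant_risk: bool) -> bool:
    # Step 1: a system outside every Annex III category is not caught
    # by this route at all.
    if category not in ANNEX_III_EXAMPLES:
        return False
    # Step 2: even a listed system may escape high-risk status if a
    # documented assessment concludes it poses no significant risk to
    # health, safety, or fundamental rights (the derogation).
    return poses_significant_risk
```

For example, a creditworthiness model whose documented assessment finds a significant risk would be high-risk, while the same model with a well-founded finding of no significant risk would not be, despite its listing.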

Obligations for High-Risk AI Systems

When developing or deploying a high-risk AI system, various obligations emerge, particularly under the CAIA. Developers must disclose any high-risk AI systems on their websites, explaining their risk management strategies regarding algorithmic discrimination. They are also required to notify the Colorado Attorney General of any potential discriminatory outcomes within 90 days of discovery.

Deployers have specific obligations when utilizing high-risk AI systems. They must notify consumers when such systems will significantly influence consequential decisions and provide comprehensive information about the AI system’s purpose, the nature of the decision, and the consumer’s rights concerning personal data processing.

Additionally, if a decision adverse to the consumer occurs, deployers must explain the reasoning behind it, the AI system’s involvement, and the data used, allowing consumers to correct inaccuracies and appeal the decision through human review.
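As a hypothetical sketch of the deployer duties just described, the following assembles the elements a CAIA-style adverse-decision notice would need to cover. The function and field names are illustrative, not statutory language.

```python
def adverse_decision_notice(
    principal_reason: str,
    ai_system_role: str,
    data_relied_on: list[str],
) -> dict:
    # A CAIA-style adverse-decision notice must explain the principal
    # reason for the decision, describe the AI system's involvement,
    # identify the data relied on, and inform the consumer of the right
    # to correct inaccurate data and to appeal to a human reviewer.
    return {
        "principal_reason": principal_reason,
        "ai_system_role": ai_system_role,
        "data_relied_on": data_relied_on,
        "right_to_correct_data": True,
        "right_to_human_appeal": True,
    }


notice = adverse_decision_notice(
    principal_reason="risk score exceeded pricing threshold",
    ai_system_role="substantial factor in the pricing decision",
    data_relied_on=["claims history", "credit report"],
)
```

Treating the notice as structured data like this also makes it straightforward to log and audit each disclosure, which supports the record-keeping posture these laws encourage.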

Conclusion

The Insurtech sector is expanding concurrently with the evolving regulatory landscape of AI in the U.S. Companies must remain vigilant about these legal dynamics and ensure compliance with emerging regulations. Understanding the classification of AI systems as high-risk is crucial for prudent growth in the industry and for safeguarding consumers against potential risks associated with algorithmic decision-making.
