Assessing the High-Risk Landscape of AI in Insurtech

Is Insurtech a High-Risk Application of AI?

The landscape of Insurtech is rapidly evolving, particularly with the integration of artificial intelligence (AI) into various processes. However, as AI becomes more prevalent in the insurance sector, the question arises: Is Insurtech considered a high-risk application of AI?

Understanding AI Regulations in Insurtech

Numerous AI regulations may apply to companies operating in the Insurtech space, but these laws are not uniform in the obligations they impose. Different regulations emphasize different constructs, and how a company uses AI determines which obligations attach. For instance, jurisdictions such as Colorado and the European Union have enacted AI laws specifically targeting “high-risk AI systems,” imposing heightened burdens on companies deploying AI models that fall into this category.

What is a “High-Risk AI System”?

The definition of a “high-risk AI system” varies across jurisdictions. In Europe, the Artificial Intelligence Act (EU AI Act) adopts a risk-based approach to compliance obligations: the more risk associated with an AI deployment, the stricter the requirements imposed on the organization. Under Article 6 of the EU AI Act, an AI system is deemed high-risk if it meets the conditions in Article 6(1) or falls within a category listed in Annex III. Annex III covers, among other things, AI systems that process biometric data, evaluate eligibility for essential benefits and services, assess creditworthiness, or perform risk assessment and pricing for life or health insurance.

Similarly, the Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, takes a risk-based approach, with particular attention to the risk of “algorithmic discrimination.” Under the CAIA, a “high-risk” AI system is one that makes, or is a substantial factor in making, a “consequential decision,” meaning a decision that materially affects, among other things, the provision, cost, or terms of insurance.

Regulatory Trends in Insurtech

Both the CAIA and the vetoed Virginia AI Bill specify that an AI system making consequential decisions regarding insurance is classified as “high-risk.” This suggests a regulatory trend toward categorizing AI use in Insurtech as high-risk. It does not mean, however, that every AI implementation in the Insurtech sector will automatically be labeled high-risk. Under the CAIA, for example, an AI system is classified as high-risk only if it makes, or is a substantial factor in making, a consequential decision.

The vetoed Virginia AI Bill defined “high-risk AI” narrowly, focusing on systems intended to operate autonomously in making consequential decisions. The scope of regulated AI usage thus varies with the applicable law and the specific business application. AI use cases that can drive consequential decisions affecting insureds, such as underwriting and fraud detection, are likely candidates for high-risk treatment, while internal uses, like refining marketing strategies or streamlining client onboarding, may fall outside high-risk classifications.
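To make the classification test concrete, here is a minimal Python sketch of the CAIA-style analysis described above. It is illustrative only: the field names, the list of decision domains, and the examples are assumptions made for this sketch, not statutory terms, and none of this is legal advice.

```python
from dataclasses import dataclass

# Illustrative sketch of the CAIA high-risk trigger; the domain names are
# assumptions loosely based on the statute's consequential-decision areas.
CONSEQUENTIAL_DOMAINS = {
    "insurance", "education", "employment", "financial_services",
    "government_services", "health_care", "housing", "legal_services",
}

@dataclass
class AISystemProfile:
    decision_domain: str      # e.g., "insurance" for underwriting or pricing
    makes_decision: bool      # the system itself outputs the decision
    substantial_factor: bool  # the output materially informs a human decision

def is_high_risk_under_caia(system: AISystemProfile) -> bool:
    """True if the system makes, or is a substantial factor in making,
    a consequential decision -- the CAIA trigger described above."""
    in_scope = system.decision_domain in CONSEQUENTIAL_DOMAINS
    return in_scope and (system.makes_decision or system.substantial_factor)

# An underwriting model that materially informs pricing is likely high-risk;
# an internal marketing tool likely is not.
underwriting = AISystemProfile("insurance", makes_decision=False, substantial_factor=True)
marketing = AISystemProfile("marketing", makes_decision=False, substantial_factor=False)
print(is_high_risk_under_caia(underwriting))  # True
print(is_high_risk_under_caia(marketing))     # False
```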

Implications of the EU AI Act

Though the EU AI Act does not categorize Insurtech as high-risk wholesale, it provides a framework for analysis. AI systems used in the EU to assess creditworthiness or to assess risk and set pricing for life or health insurance likely meet the high-risk threshold under Annex III. Under Article 6(3) of the EU AI Act, however, even if an AI system falls within an Annex III category, the provider may assess whether the system actually poses a significant risk of harm to individuals’ health, safety, or fundamental rights.

If the provider determines, and documents, that the system does not pose such a significant risk, the system may fall outside the high-risk classification despite its Annex III listing.
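The Article 6 analysis described above can be summarized as a simple decision flow. The sketch below renders that flow with hypothetical field names standing in for the statutory conditions; in practice the assessment is a documented legal analysis, not a boolean check.

```python
from dataclasses import dataclass

@dataclass
class EUDeployment:
    meets_article_6_1_conditions: bool  # safety-component / Annex I product criteria
    listed_in_annex_iii: bool           # e.g., creditworthiness, life/health insurance pricing
    poses_significant_risk: bool        # outcome of the provider's documented assessment

def classify_under_eu_ai_act(d: EUDeployment) -> str:
    """Simplified Article 6 decision flow; illustrative only."""
    if d.meets_article_6_1_conditions:
        return "high-risk"
    if d.listed_in_annex_iii:
        # Article 6(3) derogation: an Annex III system that does not pose a
        # significant risk of harm to health, safety, or fundamental rights
        # may fall outside the high-risk category (the provider documents this).
        if d.poses_significant_risk:
            return "high-risk"
        return "not high-risk (documented derogation)"
    return "not high-risk"

# A health-insurance pricing model listed in Annex III that is found to
# pose a significant risk is classified as high-risk.
pricing_model = EUDeployment(
    meets_article_6_1_conditions=False,
    listed_in_annex_iii=True,
    poses_significant_risk=True,
)
print(classify_under_eu_ai_act(pricing_model))  # high-risk
```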

Obligations for High-Risk AI Systems

When developing or deploying a high-risk AI system, various obligations arise, particularly under the CAIA. Developers must publish a statement on their websites summarizing the high-risk AI systems they have developed and describing how they manage known or reasonably foreseeable risks of algorithmic discrimination. They must also notify the Colorado Attorney General of any known or reasonably foreseeable risk of algorithmic discrimination within 90 days of discovering it.

Deployers have specific obligations when utilizing high-risk AI systems. They must notify consumers when such systems will significantly influence consequential decisions and provide comprehensive information about the AI system’s purpose, the nature of the decision, and the consumer’s rights concerning personal data processing.

Additionally, if a decision adverse to the consumer occurs, deployers must explain the reasoning behind it, the AI system’s involvement, and the data used, allowing consumers to correct inaccuracies and appeal the decision through human review.
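One way to operationalize these deployer disclosures is to treat them as a structured record assembled before a notice goes out. The sketch below is a hypothetical data model based on the obligations described above; every field name and example value is an assumption made for illustration, not statutory language.

```python
from dataclasses import dataclass

@dataclass
class ConsequentialDecisionNotice:
    system_purpose: str              # what the high-risk AI system is for
    nature_of_decision: str          # e.g., "coverage eligibility determination"
    data_categories_used: list[str]  # personal data processed by the system
    consumer_data_rights: str        # how the consumer can manage personal data

@dataclass
class AdverseDecisionDisclosure(ConsequentialDecisionNotice):
    reasons_for_decision: str        # principal reasons for the adverse outcome
    ai_involvement: str              # how the system contributed to the decision
    correction_procedure: str        # how the consumer can fix inaccurate data
    human_review_appeal: str         # how to appeal to a human reviewer

notice = AdverseDecisionDisclosure(
    system_purpose="Automated underwriting risk scoring",
    nature_of_decision="Denial of a coverage application",
    data_categories_used=["claims history", "credit attributes"],
    consumer_data_rights="Data access and correction rights per the privacy notice",
    reasons_for_decision="Risk score exceeded the underwriting threshold",
    ai_involvement="The score was a substantial factor in the denial",
    correction_procedure="Submit corrected records through the consumer portal",
    human_review_appeal="Request human review of the decision",
)
```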

Conclusion

The Insurtech sector is expanding concurrently with the evolving regulatory landscape of AI in the U.S. Companies must remain vigilant about these legal dynamics and ensure compliance with emerging regulations. Understanding the classification of AI systems as high-risk is crucial for prudent growth in the industry and for safeguarding consumers against potential risks associated with algorithmic decision-making.
