Assessing the High-Risk Landscape of AI in Insurtech

Is Insurtech a High-Risk Application of AI?

The Insurtech landscape is evolving rapidly, particularly through the integration of artificial intelligence (AI) into core processes. As AI becomes more prevalent in the insurance sector, a question arises: Is Insurtech considered a high-risk application of AI?

Understanding AI Regulations in Insurtech

Numerous AI regulations may apply to companies operating in the Insurtech space, but their obligations are far from uniform. Different laws emphasize different constructs, so which obligations apply turns on how a company actually uses AI. For instance, jurisdictions such as Colorado and the European Union have enacted AI laws specifically targeting "high-risk AI systems," imposing heightened burdens on companies that deploy AI models falling into that category.

What is a “High-Risk AI System”?

The definition of a "high-risk AI system" varies by jurisdiction. In Europe, the Artificial Intelligence Act (EU AI Act) takes a risk-based approach to compliance obligations: the more risk associated with an AI deployment, the stricter the requirements imposed on the organization. Under Article 6 of the EU AI Act, an AI system is deemed high risk if it meets the conditions set out in Article 6(1) or falls within the list of high-risk AI systems in Annex III. That list covers, among other things, AI systems that process biometric data, evaluate eligibility for benefits and essential services, assess creditworthiness, or perform risk assessment and pricing in relation to life or health insurance.

Similarly, the Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, takes a risk-based approach focused on the potential for "algorithmic discrimination." Under the CAIA, a "high-risk" AI system is one that makes, or is a substantial factor in making, a "consequential decision," that is, a decision with a material effect on matters such as the provision or cost of insurance.

Regulatory Trends in Insurtech

Both the CAIA and the vetoed Virginia AI Bill specify that an AI system making consequential decisions about insurance is classified as "high-risk." This suggests a regulatory trend toward treating AI use in Insurtech as high-risk. It does not mean, however, that every AI implementation in the Insurtech sector will automatically be labeled high-risk. Under the CAIA, for example, an AI system is classified as high-risk only if it actually makes, or is a substantial factor in making, a consequential decision.

The vetoed Virginia AI Bill defined "high-risk AI" narrowly, focusing on systems intended to make decisions autonomously. The scope of regulated AI use therefore varies with the applicable law and the specific business application. Some insurance use cases, such as AI-assisted underwriting and fraud detection, can drive consequential decisions affecting insureds, while internal uses, such as refining marketing strategies or streamlining client onboarding, may fall outside high-risk classifications, as the sketch below illustrates.
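
To make that line-drawing concrete, here is a minimal Python sketch of how a compliance team might triage use cases against a CAIA-style "consequential decision" test. The use cases, flags, and function names are illustrative assumptions, not statutory text or legal advice.

```python
from dataclasses import dataclass

# Illustrative triage of insurance AI use cases against a CAIA-style
# test: a system is high-risk only if it makes, or is a substantial
# factor in making, a consequential decision (e.g., one affecting the
# provision or cost of insurance). The use cases and flags below are
# assumptions for illustration, not statutory text or legal advice.

@dataclass
class UseCase:
    name: str
    makes_consequential_decision: bool  # system itself decides provision/cost
    substantial_factor: bool            # system materially influences the decision

def is_high_risk(uc: UseCase) -> bool:
    """CAIA-style test: makes, or is a substantial factor in making,
    a consequential decision."""
    return uc.makes_consequential_decision or uc.substantial_factor

use_cases = [
    UseCase("AI-assisted underwriting", True, True),
    UseCase("Fraud detection feeding claim denials", False, True),
    UseCase("Marketing copy generation", False, False),
    UseCase("Client onboarding FAQ chatbot", False, False),
]

for uc in use_cases:
    verdict = "likely high-risk" if is_high_risk(uc) else "likely not high-risk"
    print(f"{uc.name}: {verdict}")
```

The point of the sketch is the shape of the test: classification turns on the decision the system influences, not on the underlying technology.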

Implications of the EU AI Act

Though the EU AI Act does not explicitly single out Insurtech as high-risk, it provides a framework for the analysis. AI systems used in the EU to assess creditworthiness or determine eligibility for coverage likely meet the high-risk threshold. Article 6(3) of the EU AI Act, however, provides a derogation: even if an AI system falls within an Annex III category, it is not considered high-risk where it does not pose a significant risk of harm to individuals' health, safety, or fundamental rights.

If the provider determines, and documents, that the AI system does not pose such a risk, the system may escape the high-risk classification despite its Annex III categorization. One important caveat: an Annex III system that performs profiling of natural persons is always considered high-risk, which matters in insurance, where underwriting and pricing models frequently profile applicants.
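
Read together, Article 6(2) and the Article 6(3) derogation amount to a short decision procedure. The sketch below encodes that reading under stated assumptions: ANNEX_III_INSURANCE lists only the insurance-relevant Annex III categories named earlier (the real Annex III is broader), and significant_risk stands in for the outcome of the documented Article 6(3) assessment. It is a simplified illustration, not the Act's text.

```python
from dataclasses import dataclass

# Simplified reading of EU AI Act Article 6(2)-(3) for insurance use
# cases, as described in the text above. ANNEX_III_INSURANCE lists only
# the insurance-relevant Annex III categories named earlier; the real
# Annex III is broader. Illustrative sketch only, not legal advice.

ANNEX_III_INSURANCE = {
    "biometric_identification",
    "eligibility_for_benefits_and_services",
    "creditworthiness_assessment",
    "life_or_health_insurance_risk_and_pricing",
}

@dataclass
class EUAssessment:
    annex_iii_category: str | None  # matching Annex III category, if any
    performs_profiling: bool        # profiling of natural persons
    significant_risk: bool          # outcome of the documented Art. 6(3) assessment

def classify(a: EUAssessment) -> str:
    if a.annex_iii_category not in ANNEX_III_INSURANCE:
        return "outside this sketch; assess under Article 6(1) and full Annex III"
    if a.performs_profiling:
        return "high-risk (profiling of natural persons defeats the derogation)"
    if not a.significant_risk:
        return "not high-risk (Article 6(3) derogation; assessment must be documented)"
    return "high-risk"

print(classify(EUAssessment("life_or_health_insurance_risk_and_pricing", True, True)))
print(classify(EUAssessment("creditworthiness_assessment", False, False)))
```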

Obligations for High-Risk AI Systems

Developing or deploying a high-risk AI system triggers a range of obligations, particularly under the CAIA. Developers must publish a statement on their websites describing the high-risk AI systems they offer and how they manage known or reasonably foreseeable risks of algorithmic discrimination. They must also notify the Colorado Attorney General, within 90 days of discovery, of any known or reasonably foreseeable risks of algorithmic discrimination arising from a system's intended uses.

Deployers have their own obligations when using high-risk AI systems. They must notify consumers when such a system will make, or be a substantial factor in making, a consequential decision about them, and must provide information about the AI system's purpose, the nature of the decision, and the consumer's rights concerning the processing of personal data.

Additionally, if the decision is adverse to the consumer, the deployer must disclose the principal reasons for it, the degree of the AI system's involvement, and the data used, and must give the consumer an opportunity to correct inaccurate personal data and to appeal the decision through human review.
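
Because these duties amount to a checklist of required disclosures, one way to operationalize them is as a record type that is validated before any notice goes out. The following is a hedged sketch of such a record for a CAIA-style adverse-decision notice; the field names are our own shorthand for the elements described above, not statutory language.

```python
from dataclasses import dataclass, fields

# Sketch of an adverse-decision notice record mirroring the CAIA-style
# elements described above. Field names are illustrative shorthand,
# not statutory language.

@dataclass
class AdverseDecisionNotice:
    principal_reasons: str        # why the decision was adverse
    ai_contribution: str          # degree to which the AI system was involved
    data_categories_used: str     # types and sources of data processed
    correction_instructions: str  # how the consumer can fix inaccurate data
    appeal_instructions: str      # how to appeal to human review

def missing_elements(notice: AdverseDecisionNotice) -> list[str]:
    """Return any required elements left blank before the notice is sent."""
    return [f.name for f in fields(notice) if not getattr(notice, f.name).strip()]

notice = AdverseDecisionNotice(
    principal_reasons="Credit-based insurance score below threshold",
    ai_contribution="Score was a substantial factor in the premium decision",
    data_categories_used="Credit bureau data; application data",
    correction_instructions="",
    appeal_instructions="Submit an appeal form to request human review",
)
print(missing_elements(notice))  # -> ['correction_instructions']
```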

Conclusion

The Insurtech sector is expanding alongside a fast-evolving U.S. regulatory landscape for AI. Companies must stay alert to these legal dynamics and ensure compliance with emerging regulations. Understanding when an AI system qualifies as high-risk is crucial both for prudent growth in the industry and for safeguarding consumers against the risks of algorithmic decision-making.
