Is Insurtech a High-Risk Application of AI?
The landscape of Insurtech is rapidly evolving, particularly with the integration of artificial intelligence (AI) into its processes. As AI becomes more prevalent in the insurance sector, a question arises: is Insurtech considered a high-risk application of AI?
Understanding AI Regulations in Insurtech
Numerous AI regulations may apply to companies operating in the Insurtech space, but their obligations are far from uniform. Different laws are built around different constructs, and which obligations apply depends on where a company operates and how it deploys AI. For instance, jurisdictions like Colorado and the European Union have enacted AI laws specifically targeting “high-risk AI systems,” imposing heightened burdens on companies deploying AI models that fall into this category.
What is a “High-Risk AI System”?
The definition of a “high-risk AI system” varies across jurisdictions. In Europe, the Artificial Intelligence Act (EU AI Act) adopts a risk-based approach to compliance obligations: the more risk associated with an AI deployment, the stricter the requirements imposed on the organization. Under Article 6 of the EU AI Act, an AI system is deemed “high risk” if it meets the conditions in Article 6(1) or falls within the list of high-risk AI systems in Annex III. That list includes AI systems that process biometric data, evaluate eligibility for essential benefits and services, assess creditworthiness, or perform risk assessment and pricing for life or health insurance.
Similarly, the Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, takes a risk-based approach aimed at preventing “algorithmic discrimination.” Under the CAIA, a “high-risk” AI system is one that makes, or is a substantial factor in making, a “consequential decision,” meaning a decision with a material effect on, among other things, the provision or cost of insurance.
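To make the statutory test concrete, here is a minimal sketch, assuming a simplified model of the CAIA analysis, of how the classification question might be expressed in code. The AISystem fields and the is_high_risk_under_caia helper are hypothetical constructs for illustration, not terms from the statute.

```python
from dataclasses import dataclass

# Illustrative sketch of the CAIA classification test. Field and
# function names are assumptions for this example, not statutory terms.
@dataclass
class AISystem:
    name: str
    makes_consequential_decision: bool    # e.g., approves or denies coverage
    substantial_factor_in_decision: bool  # materially influences the outcome

def is_high_risk_under_caia(system: AISystem) -> bool:
    # High-risk only if the system makes, or is a substantial factor in
    # making, a consequential decision (e.g., provision or cost of insurance).
    return (system.makes_consequential_decision
            or system.substantial_factor_in_decision)

# An underwriting model that sets premiums would likely qualify;
# an internal ad-copy generator would not.
print(is_high_risk_under_caia(AISystem("pricing-model", True, True)))        # True
print(is_high_risk_under_caia(AISystem("ad-copy-generator", False, False)))  # False
```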
Regulatory Trends in Insurtech
Both the CAIA and the vetoed Virginia AI bill specify that an AI system making consequential decisions about insurance is classified as “high-risk,” which suggests a regulatory trend toward treating AI use in Insurtech as high-risk. It does not follow, however, that every AI implementation in the Insurtech sector will automatically be labeled high-risk. Under the CAIA, for example, an AI system is classified as high-risk only if it makes, or is a substantial factor in making, a consequential decision.
The vetoed Virginia bill defined “high-risk AI” more narrowly still, reaching only systems specifically intended to make consequential decisions autonomously. The scope of regulated AI use therefore varies with the applicable law and the specific business application. Some AI use cases in insurance, such as underwriting and fraud detection, can drive consequential decisions affecting insured individuals, while other internal uses, like refining marketing strategies or streamlining client onboarding, may fall outside high-risk classifications.
Implications of the EU AI Act
Though the EU AI Act does not explicitly single out Insurtech as high-risk, it provides a framework for the analysis. AI systems used in the EU to assess creditworthiness or to perform risk assessment and pricing for life or health insurance likely meet the high-risk threshold under Annex III. Article 6(3) of the EU AI Act, however, provides a derogation: a system that falls within an Annex III category is not considered high-risk if it does not pose a significant risk of harm to individuals’ health, safety, or fundamental rights.
If the provider documents an assessment reaching that conclusion, the system may avoid high-risk classification despite its Annex III categorization.
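As a rough illustration of how the Annex III listing and the Article 6(3) derogation interact, the sketch below encodes the two-step analysis for an Insurtech system; the category labels and function name are assumptions made for this example, not terms from the Act.

```python
# Hypothetical sketch of the Article 6 / Annex III analysis for an
# Insurtech system; category labels and names are illustrative only.
ANNEX_III_INSURANCE_CATEGORIES = {
    "creditworthiness_assessment",         # Annex III, point 5(b)
    "life_health_insurance_risk_pricing",  # Annex III, point 5(c)
}

def is_high_risk_under_eu_ai_act(category: str, significant_risk: bool) -> bool:
    # Step 1: systems in an Annex III category are high-risk by default.
    # Step 2: under the Article 6(3) derogation, a listed system that poses
    # no significant risk of harm (as documented by the provider) falls
    # outside the classification. Article 6(1) is not modeled here.
    if category not in ANNEX_III_INSURANCE_CATEGORIES:
        return False
    return significant_risk

# A health-insurance pricing model posing significant risk stays high-risk;
# the same model, with a documented no-significant-risk assessment, may not.
print(is_high_risk_under_eu_ai_act("life_health_insurance_risk_pricing", True))   # True
print(is_high_risk_under_eu_ai_act("life_health_insurance_risk_pricing", False))  # False
```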
Obligations for High-Risk AI Systems
Developing or deploying a high-risk AI system triggers a range of obligations, particularly under the CAIA. Developers must make publicly available a statement summarizing the types of high-risk AI systems they have developed and how they manage known or reasonably foreseeable risks of algorithmic discrimination. They must also notify the Colorado Attorney General within 90 days of discovering that a system has caused, or is reasonably likely to cause, algorithmic discrimination.
Deployers have specific obligations when utilizing high-risk AI systems. They must notify consumers when such systems will significantly influence consequential decisions and provide comprehensive information about the AI system’s purpose, the nature of the decision, and the consumer’s rights concerning personal data processing.
Additionally, if a high-risk AI system contributes to a decision adverse to the consumer, the deployer must explain the principal reasons for the decision, the degree of the AI system’s involvement, and the types of data used, and must give the consumer an opportunity to correct any inaccurate personal data and to appeal the decision through human review.
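One way to picture these deployer disclosures is as a checklist gathered into a single record. The sketch below is a hypothetical illustration; the AdverseDecisionNotice fields paraphrase the obligations described above rather than quote the CAIA.

```python
from dataclasses import dataclass

# Hypothetical record of the disclosures a deployer owes a consumer after
# an adverse consequential decision; field names are illustrative, not
# statutory language.
@dataclass
class AdverseDecisionNotice:
    decision: str                  # e.g., "coverage denied"
    principal_reasons: list[str]   # why the decision was adverse
    ai_involvement: str            # degree of the AI system's contribution
    data_categories_used: list[str]
    correction_process: str        # how to correct inaccurate personal data
    appeal_process: str            # how to request human review

notice = AdverseDecisionNotice(
    decision="coverage denied",
    principal_reasons=["claims history", "credit-based insurance score"],
    ai_involvement="substantial factor in the underwriting decision",
    data_categories_used=["claims records", "credit data"],
    correction_process="Submit corrections through the insurer's portal.",
    appeal_process="Request human review of the decision.",
)
print(notice.appeal_process)
```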
Conclusion
The Insurtech sector is expanding concurrently with the evolving regulatory landscape of AI in the U.S. Companies must remain vigilant about these legal dynamics and ensure compliance with emerging regulations. Understanding the classification of AI systems as high-risk is crucial for prudent growth in the industry and for safeguarding consumers against potential risks associated with algorithmic decision-making.