The AI Regulation Challenge: How Insurers Can Ensure Ethical Compliance
Insurance companies are increasingly integrating AI into underwriting, pricing, claims handling, and customer service. This integration speeds up data processing, sharpens risk assessment, and improves interactions with policyholders. Alongside these technological advantages, however, regulatory concerns are mounting.
Regulatory Pressure Shaping New Rules
The widespread adoption of artificial intelligence technologies has drawn the attention of regulators. Notably, in the United States, several states, including California, Colorado, and New York, have begun implementing laws or recommendations to regulate the use of AI in insurance. Furthermore, 24 states have adopted versions of the 2023 National Association of Insurance Commissioners (NAIC) Model Bulletin on the Use of Artificial Intelligence Systems by Insurers.
The primary aim of these new regulations is to minimize risks of unfair discrimination and to ensure fairness, transparency, and accountability in the use of intelligent systems. Key requirements include:
- Creating systems for internal testing of AI;
- Implementing corporate governance and control structures;
- Mandatory written policies and procedures;
- Transparency with consumers;
- Certification and quality control requirements for algorithms.
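To make the internal-testing requirement above concrete, the following is a minimal, hypothetical sketch of one common screening check: a disparate impact ratio comparing favorable-outcome rates across groups. The function name, the sample data, and the 0.8 screening threshold (borrowed, for illustration only, from the "four-fifths rule" used in US employment contexts) are assumptions, not anything prescribed by the NAIC bulletin; real insurer testing programs are far broader.

```python
from collections import Counter

def disparate_impact_ratio(decisions, groups, favorable="approve"):
    """Ratio of favorable-outcome rates between the least- and
    most-favored groups. Values near 1.0 indicate parity; 0.8 is an
    illustrative screening threshold, not a regulatory standard."""
    totals = Counter(groups)
    favorable_counts = Counter(
        g for g, d in zip(groups, decisions) if d == favorable
    )
    rates = {g: favorable_counts[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample: model decisions with a group label
decisions = ["approve", "approve", "deny", "approve", "deny", "deny"]
groups    = ["A",       "B",       "A",    "A",       "B",    "B"]

ratio, rates = disparate_impact_ratio(decisions, groups)
# Group A approves 2/3 of the time, group B 1/3, so ratio ≈ 0.5,
# which would fall below the illustrative 0.8 screen.
```

A check like this would be only one item in a written testing program; the point is that "internal testing" can be expressed as repeatable, auditable code rather than ad hoc review.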
These measures are designed to ensure that AI technologies align with public interests and adhere to established insurance regulations.
Fair and Unfair Discrimination in AI
While the use of AI opens new possibilities for insurers in risk assessment, the fundamental principles of insurance regulation remain intact. The NAIC emphasizes that insurance is founded on the principle of objective risk discrimination, which permits insurers to differentiate among policyholders based on sound data about the likelihood of insured events.
However, AI introduces the risk of unfair discrimination, which can occur when algorithms base decisions on data linked to protected characteristics such as race, gender, age, or ethnicity. Even when protected characteristics are not used directly, seemingly neutral variables can act as proxies for them; such correlations can produce results deemed unfair and violate principles of equal access to insurance products.
The “AI Principles” established by the NAIC in 2020 guide entities using AI in insurance, stressing the importance of:
- Fair and ethical decision-making with AI;
- Minimizing algorithmic bias;
- Ensuring model transparency;
- Accountability for AI system performance.
Ultimately, the Unfair Trade Practices Act remains the key regulatory benchmark for assessing the legality of AI use in insurance, ensuring that the technology serves to enhance fairness and protect public interests.
Corporate Governance and AI Literacy
The integration of AI into insurance processes necessitates a thorough review of corporate governance systems. One critical expectation is that boards of directors acquire AI literacy: the skills and knowledge needed to understand the opportunities, limitations, and risks of AI in insurance.
Key requirements include aligning AI use with organizational goals and values, considering both economic feasibility and compliance with core principles such as client interest protection and regulatory adherence. Furthermore, enhancing the technological competence of board members is essential for effective risk management and informed decision-making.
Companies must also develop clear criteria to evaluate the effectiveness of AI systems, assessing how these technologies contribute to organizational goals while meeting expectations for transparency, fairness, and accuracy. Strategic integration of AI into long-term business plans is vital, treating intelligent technologies as durable elements of corporate strategy amid digital transformation.
Establishing a written program for the responsible use of AI, known as an AIS Program (Artificial Intelligence Systems Program), is becoming a mandatory requirement for insurers. This program should govern the development, implementation, monitoring, and audit of AI systems, ensuring transparency, fairness, and accountability, and assigning responsibility for AI system management to top-level executives.