States Adopt NAIC Model Bulletin on Insurers’ Use of AI
State regulators are taking action on the use of artificial intelligence in insurance. To date, nearly a dozen states have adopted some form of the National Association of Insurance Commissioners (NAIC) Model Bulletin on the Use of Artificial Intelligence (AI) Systems by Insurers. We expect many states to follow suit with similar standards for use of AI in insurance.
The NAIC adopted the model bulletin in December 2023. Since then, eleven states have adopted the bulletin with little or no customization, including:
- Alaska, adopted February 1, 2024
- Connecticut, adopted February 26, 2024
- Illinois, adopted March 13, 2024
- Kentucky, adopted April 16, 2024
- Maryland, adopted April 22, 2024
- Nevada, adopted February 23, 2024
- New Hampshire, adopted February 20, 2024
- Pennsylvania, adopted April 6, 2024
- Rhode Island, adopted March 15, 2024
- Vermont, adopted March 12, 2024
- Washington, adopted April 22, 2024
Maryland Insurance Commissioner Kathleen A. Birrane, who chaired the NAIC committee, noted “this initiative represents a collaborative effort to set clear expectations for state Departments of Insurance regarding the utilization of AI by insurance companies, balancing the potential for innovation with the imperative to address unique risks” and explained that the model bulletin “provides a robust foundation to safeguard consumers, promote fairness, and uphold the highest standards of integrity within the industry.”
The NAIC model bulletin is fairly prescriptive and outlines principles that are emerging as best practices in developing AI law and industry guidance, including:
Written AI program required
Insurers must develop, implement, and maintain a written program for responsible use of AI systems that make or support decisions related to regulated insurance practices, including mitigating adverse consumer outcomes and addressing governance, risk management, and internal audit functions.
Robust governance, risk management controls, internal audit functions, and written policies and procedures are core elements of an AI governance program in mitigating risk and managing oversight at each stage of an AI system’s lifecycle.
Clear governance framework driven by transparency, fairness, and accountability
Insurers should have a clear governance accountability structure composed of representatives from appropriate disciplines and units (e.g., business units, product specialists, actuarial, data science and analytics, underwriting, claims, compliance, and legal), each with a defined scope of responsibility and authority, chains of command, and decisional hierarchies.
Consumer notice
Consumers should receive notice that AI systems are in use and should have access to appropriate levels of information based on the phase of the insurance life cycle in which the AI systems are deployed.
Risk management and internal controls
Controls and processes in the AI program should be reflective of, and commensurate with, insurers’ assessment of the degree and nature of risk posed to consumers by the AI systems considering: (1) the nature of the decisions being made, informed, or supported using the AI system; (2) the type and degree of potential harm to consumers resulting from the use of AI systems; (3) the extent to which humans are involved in the final decision-making process; (4) the transparency and explainability of outcomes to the impacted consumer; and (5) the extent and scope of the insurer’s use or reliance on data, predictive models, and AI systems from third parties.
Controls should address: (1) oversight and approval process for development, adoption, or acquisition of AI systems and identification of considerations and controls; (2) data practices and accountability, including data currency, lineage, quality, integrity, bias, minimization, and suitability; (3) validating, testing, and retesting to assess generalization of outputs upon implementation; (4) privacy of non-public information; and (5) data and record retention.
Third-party vendor management
Insurers are responsible for vendor diligence, including processes to assess acquiring, using, and relying on: (1) third-party data to develop AI systems and (2) AI systems developed by third parties. Insurers should implement contract terms in third-party agreements allowing audit rights and requiring cooperation with regulatory inquiries when appropriate. Regulators may request information on insurers' vendor diligence as part of regulatory oversight.
Prepare for regulatory inquiry about AI program
The model bulletin notes that, in the context of an investigation or market conduct action, insurers may receive inquiries (including requests for document production) about their development and use of AI, including governance, risk management, and internal controls.
Quarles is continuing to track adoption of the NAIC model bulletin. While a number of states are adopting the bulletin, keep in mind that certain states had their own AI regulatory approaches in place before the bulletin was issued. Insurers operating in multiple states should be prepared for varying requirements, and all insurers should be prepared for evolving requirements as AI laws and regulations continue to develop.
For inquiries about developing an AI governance program or the specific requirements under your state's insurance laws, please contact your health information privacy and security attorney or:
- Meghan O'Connor: 414-277-5423 / meghan.oconnor@quarles.com
- John Hintz: 414-277-5620 / john.hintz@quarles.com
- Bill Toman: 608-283-2434 / william.toman@quarles.com