NYDFS Proposes Strict AI and Third-Party Data Rules
The New York Department of Financial Services (NYDFS) has released a proposed circular letter outlining stringent new rules for insurers that use artificial intelligence (AI) systems and external consumer data and information sources (ECDIS) in underwriting and pricing. The proposed rules aim to address concerns about potential discrimination, unfairness, and insufficient transparency and oversight in insurers' use of AI and third-party data.

The proposed rules would require insurers to establish robust governance frameworks, with board and senior management oversight of AI and third-party data use. Insurers would need to implement formal policies, procedures, risk management, and internal controls around the development and use of AI and data, and both data and models would have to be actuarially sound and tested for proxy discrimination and unfair discrimination.

The rules also mandate several transparency and consumer protection measures. Insurers would have to disclose to consumers when AI or third-party data are used in underwriting and pricing decisions. Consumers who are denied insurance or subjected to coverage limits would have the right to an explanation specifying the AI- or ECDIS-derived information underlying the decision. The proposal also clarifies the disclosures required when consumers are excluded from accelerated underwriting programs.

Insurers relying on third-party vendors for AI and data would remain responsible for ensuring models comply with laws and regulations. Vendor contracts would need to include oversight provisions, and vendors could not avoid providing model transparency to insurers by citing proprietary processes. 

These are fairly strict rules, and insurers will likely lobby against them or try to loosen the most demanding parts. As we've seen with data privacy regulations, however, insurers may ultimately end up having to comply with the strictest set of rules that takes effect.

It's also worth acknowledging that many of the proposed rules, particularly those requiring clear governance and transparency, are generally sound guidance. These practices may not yet be widespread enough to qualify as industry best practice, but they are the sorts of measures insurers benefit from adopting anyway, even when they're difficult.

Regardless, these rules offer a valuable glimpse into the thinking of regulators attempting to address valid consumer concerns about fairness and accountability in the use of AI and external data. Insurers should start planning now, ideally for the strictest scenario, so they're prepared if the most stringent forms of these rules are enacted. If you need help planning your AI and data governance, don't hesitate to reach out to us.