NY DFS Raises the Bar: New AI Cybersecurity Guidance Signals Shift in Regulatory Expectations

As artificial intelligence transforms the insurance industry, cybersecurity risks are evolving at an unprecedented pace. The New York Department of Financial Services (NY DFS) has responded with groundbreaking guidance on AI-related cybersecurity risks, setting new expectations that will likely influence regulators nationwide. 

While NY DFS directly regulates only entities operating under New York banking, insurance, and financial services laws, its cybersecurity regulation has historically served as a model for other state regulators and the National Association of Insurance Commissioners (NAIC). This latest guidance offers a preview of how regulators are likely to approach AI security risks across the industry. 

The AI Threat Evolution: More Than Just Another Tech Risk 

The guidance highlights a sobering reality: AI isn't just creating new efficiencies; it's also transforming cyber threats. Threat actors are leveraging AI to execute more sophisticated, personalized, and scalable attacks, and the most concerning development is AI-enabled social engineering. 

Consider this: In early 2024, a single AI-generated video conference led to a $25 million fraudulent transfer at a Hong Kong firm. The perpetrators used deepfake technology to impersonate multiple executives, including the CFO. This isn’t science fiction—it’s the new reality. 

Strategic Requirements: Beyond Checkbox Compliance 

NY DFS's guidance integrates AI considerations into existing cybersecurity frameworks rather than creating new requirements. Risk assessments must now specifically address AI threats, and boards and senior leadership need sufficient AI security knowledge to provide meaningful oversight. Incident response plans must evolve to account for AI-specific scenarios. 

Authentication presents particular challenges in an AI world. Traditional methods may no longer suffice as deepfake technology advances. By November 2025, multi-factor authentication requirements will expand significantly. Biometric authentication, while convenient, needs additional safeguards against AI spoofing. The guidance suggests considering authentication with liveness detection or multiple biometric factors. 

Third-party risk management becomes even more critical with AI. Insurers must enhance due diligence for vendors using AI, implement specific contractual protections around AI deployment, and ensure prompt notification of AI-related incidents. This extends beyond traditional cybersecurity concerns to include oversight of AI training data and model security. These third-party exposures have grown significant enough that cyber carriers now offer endorsements covering risks such as data poisoning and usage rights infringement. 

Data management complexity also increases substantially. The vast amounts of data required to train and operate AI systems become potential security risks in their own right. The guidance emphasizes data minimization and introduces new inventory requirements. Organizations must balance AI effectiveness with security risks in their data governance approaches. 

The Regulatory Horizon and Strategic Response 

While NY DFS regulates a specific subset of insurers, this guidance reflects broader regulatory trends we’re seeing in federal AI initiatives, NAIC model laws, and international frameworks. Smart insurers, regardless of their regulator, will use this guidance as a roadmap for strengthening their AI security posture. 

The most pressing need is to review current AI usage and related security controls while assessing authentication methods for vulnerability to AI-enabled attacks. Organizations should be developing comprehensive AI security strategies that include enhanced training programs and necessary technology upgrades. Looking further ahead, building AI security expertise within cybersecurity teams and establishing robust AI governance frameworks will be crucial. 

The reality is clear: AI security isn’t just a compliance issue—it’s a business imperative. As AI becomes more deeply embedded in insurance operations, the ability to secure AI systems while defending against AI-enabled threats will become a critical differentiator. 

To learn more about how AI is impacting your organization and the broader insurance industry, visit our website to explore our recent research reports.