Injecting Intelligence into Underwriting Through Automation

Insurer CIOs are actively exploring ways to streamline underwriting with technology.

As insurers look to reduce underwriting friction and drive straight-through processing, automation powered by advanced technologies like natural language AI is taking center stage. Based on a recent roundtable discussion with insurance CIOs, held at our annual CIO Council meeting in Boston, carriers are actively exploring opportunities to streamline underwriting by intelligently extracting and enriching data from submissions.

The Data Deluge Challenge

One of the biggest bottlenecks in underwriting is the manual effort required to ingest and parse information from PDFs, loss runs, emails, and other unstructured data sources that flow in through submissions. As one insurer stated, “A lot of the pain is in the ingestion.”

However, traditional approaches that rely on advanced OCR, natural language processing (NLP), and other AI/machine learning (ML) techniques are struggling to keep up with the diversity of data formats. There was a sentiment that large language models (LLMs) could offer a more sustainable approach to intelligent document processing and enrichment.

Validating the AI Underwriting Engine

Several companies discussed their experiences evaluating AI vendors and building custom solutions to intelligently read submissions. One insurer found most off-the-shelf options to be cost-prohibitive at $5-10 per document processed.

Instead, they are having success leveraging modern LLMs to extract and validate critical data points from sources like loss runs in a consistent way. The key is combining the language model with business rules, prompt engineering, and fine-tuning as needed.
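To make the pattern concrete, here is a minimal sketch of the "LLM plus rules" idea: the LLM does the messy extraction, and a deterministic validation layer checks the result before it enters the underwriting workbench. All field names, schemas, and rules here are hypothetical illustrations, not any vendor's actual API.

```python
from datetime import date

# Hypothetical schema for one loss-run entry extracted by an LLM upstream.
REQUIRED_FIELDS = {"policy_number", "loss_date", "paid_amount"}

def validate_loss_run_entry(entry: dict) -> tuple[bool, list[str]]:
    """Apply deterministic rules to an LLM-extracted loss-run record."""
    errors = []

    # Rule 1: every required field must be present.
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")

    # Rule 2: paid amounts must be non-negative numbers.
    paid = entry.get("paid_amount")
    if paid is not None and (not isinstance(paid, (int, float)) or paid < 0):
        errors.append("paid_amount must be a non-negative number")

    # Rule 3: dates must be ISO-formatted; normalize them on the way in.
    loss_date = entry.get("loss_date")
    if isinstance(loss_date, str):
        try:
            entry["loss_date"] = date.fromisoformat(loss_date)
        except ValueError:
            errors.append(f"unparseable loss_date: {loss_date!r}")

    return (not errors, errors)

# One clean record, one with typical LLM transcription slips.
ok, errs = validate_loss_run_entry(
    {"policy_number": "CP-1002", "loss_date": "2023-06-14", "paid_amount": 12500}
)
bad_ok, bad_errs = validate_loss_run_entry(
    {"policy_number": "CP-1003", "loss_date": "14/06/2023", "paid_amount": -1}
)
```

Records that fail validation would be routed back for re-extraction or human review rather than flowing into rating, which is what keeps the LLM's occasional misreads from contaminating downstream decisions.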

Driving Straight-Through Processing

With intelligent data ingestion and enrichment in place, the goal is to then enable more straight-through underwriting on smaller, lower-risk commercial submissions without human touchpoints. One insurer has achieved quoting and binding in under 5 minutes with no human involvement for risks under $50K by explicitly codifying all underwriting rules and decisions.
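"Explicitly codifying all underwriting rules and decisions" can be as plain as a deterministic rule chain that either auto-binds or refers to a human. The thresholds and class codes below are illustrative placeholders, not the insurer's actual appetite.

```python
# Hypothetical, simplified appetite: values are illustrative only.
AUTO_BIND_LIMIT = 50_000
ELIGIBLE_CLASSES = {"office", "retail"}

def evaluate_submission(sub: dict) -> str:
    """Return 'auto_bind' when every codified rule passes, else 'refer'."""
    if sub["total_insured_value"] >= AUTO_BIND_LIMIT:
        return "refer"  # above the straight-through threshold
    if sub["class_of_business"] not in ELIGIBLE_CLASSES:
        return "refer"  # outside automated appetite
    if sub["losses_past_3_years"] > 0:
        return "refer"  # any recent loss needs underwriter eyes
    return "auto_bind"

small_clean_risk = {
    "total_insured_value": 40_000,
    "class_of_business": "office",
    "losses_past_3_years": 0,
}
decision = evaluate_submission(small_clean_risk)
```

Because every path through the function is an explicit, inspectable rule, quote-and-bind can run unattended in minutes while anything the rules cannot confidently accept falls out to a human underwriter.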

There’s substantial desire to leverage LLMs and other machine learning models in this regard; however, the need to ensure compliance and auditability is an impediment. As one attendee stated, “If you apply AI or LLMs, you need explainability. The NAIC says you need to be able to explain how decisions are made: you can’t just say the third-party vendor made the decision.”
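One practical way to satisfy that explainability expectation is to persist, for every automated decision, exactly which rules fired and in what order. A minimal sketch (the field names and engine identifier are hypothetical):

```python
import json
from datetime import datetime, timezone

def record_decision(submission_id: str, decision: str, reasons: list[str]) -> str:
    """Serialize an auditable record of which rules drove a decision."""
    record = {
        "submission_id": submission_id,
        "decision": decision,
        "reasons": reasons,  # every rule that fired, in evaluation order
        "decided_at": datetime.now(timezone.utc).isoformat(),
        # Name the specific engine and version, never just "the vendor".
        "decided_by": "rules_engine_v1",
    }
    return json.dumps(record)

log_line = record_decision(
    "SUB-881", "refer", ["TIV exceeds auto-bind limit"]
)
```

A log like this lets the carrier answer a regulator's "how was this decision made?" with the concrete rule trail rather than pointing at an opaque model or a third-party black box.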

CIOs acknowledged that underwriter adoption and change management are among the biggest hurdles, as seasoned underwriters can be reluctant to cede control to automation. However, several participants pointed to direct pressure from distribution partners and leadership’s desire for faster turnaround times as catalysts for adoption.

Architecting for Evolving AI Capabilities

On the technology architecture front, the advice was to design automation solutions with a flexible “veneer” layer. This allows core AI/LLM capabilities to be updated or swapped out without refactoring the applications and processes that integrate with them.
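In code, the “veneer” amounts to having applications depend on a narrow interface rather than a specific model or vendor. A minimal sketch of the pattern, with stub extractors standing in for real LLM calls (all class names are hypothetical):

```python
from typing import Protocol

class TextExtractor(Protocol):
    """The veneer: the only surface applications code against."""
    def extract(self, document: str) -> dict: ...

class VendorAExtractor:
    def extract(self, document: str) -> dict:
        # In practice this would call vendor A's LLM API.
        return {"engine": "vendor_a", "text": document.strip()}

class VendorBExtractor:
    def extract(self, document: str) -> dict:
        # A newer model, with slightly different behavior.
        return {"engine": "vendor_b", "text": document.strip().lower()}

class SubmissionPipeline:
    """Depends only on the interface, so the model can be swapped freely."""
    def __init__(self, extractor: TextExtractor):
        self.extractor = extractor

    def ingest(self, document: str) -> dict:
        return self.extractor.extract(document)

pipeline = SubmissionPipeline(VendorAExtractor())
# When a better LLM ships, switching is one line; the pipeline is untouched:
pipeline = SubmissionPipeline(VendorBExtractor())
result = pipeline.ingest("  ACORD 125 Submission  ")
```

The design choice is deliberately boring: the pipeline, its integrations, and its tests never mention a vendor by name, so the fast-moving LLM layer can churn underneath a stable contract.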

As one participant summarized: “Where are we using new APIs and LLMs in the model? Bake it in so when a new LLM comes out, you can switch over.”

With data being the foundational fuel, underwriters in short supply, and customer expectations evolving, intelligently automating underwriting processes is becoming an imperative for many insurers. Those making investments now could be well-positioned to realize significant future gains in operational efficiency.

We’ve opened registration for our Q4 CIO Council meeting, which will be held on October 15 at ITC. You can register here to join the conversation.