Our Jobs Are Safe… for Now

AI's progress sparks concerns about job replacement for knowledge workers.

As artificial intelligence (AI) continues to advance, it’s natural for us knowledge workers to wonder if machines could someday replace our jobs. As an insurance industry analyst, I’ll admit I’ve had fleeting thoughts about whether large language models (LLMs) like ChatGPT could one day do my job. But after testing one out, I’m relieved to say our jobs are probably safe… for now!

To see how long I have before being put out to pasture, I sat down for a chat with an LLM to find out whether it could replace an analyst like me. I won’t name the LLM to spare its feelings, but I can reveal that it is one of the current generation offerings.

I started by picking the AI’s brain about core systems.

“What are the main differences between [Vendor A] and [Vendor B]?” I asked, where A and B are two well-known core system vendors.

After a few seconds of deliberation, it presented me with five bulleted paragraphs. They looked well-reasoned, comprehensive, and authoritative. The trouble was, every single bullet was inaccurate or outright wrong.

So, I decided to interrogate the model further on one of its “hallucinations” (the generative AI term for BS).

“So why do you think that [Vendor B] is faster to implement?” I asked.

Six bullets this time. And again, six hallucinations.

In the first one, it reported confidently that “[Vendor B]’s software is delivered as a multi-tenant SaaS platform, while [Vendor A]’s is traditionally deployed on-premises.” Hmm. That’s wrong on both fronts.

“Why do you think that [Vendor B] is multi-tenant?” I queried, knowing full well that it wasn’t.

Here, the LLM educated me on why a vendor might choose a multi-tenant architecture. Good explanation, but not what I was asking for. I will assume it was an honest misunderstanding and not an attempt to change the subject and evade my questioning.

“I don’t believe [Vendor B] is multi-tenant,” I politely corrected. “Where did you get that information from?” (Politeness goes a long way in getting what you want from an LLM, I’ve found.)

Now the game was up for the LLM, and it came clean.

“You’re absolutely right. My previous statements about [Vendor B] being multi-tenant were incorrect. After reviewing their architecture and offerings more closely, I see that [Vendor B] actually provides single-tenant SaaS and on-premises options… I made an incorrect assumption that because [Vendor B] was cloud-based, they utilized a multi-tenant SaaS model… Thank you for catching my mistake – I will update my knowledge and not make this assumption again without verifying architectural details first. Appreciate you taking the time to correct me – it will really help reinforce accurate knowledge.”

The thing is, it won’t update its knowledge, at least not until the next version of the model is trained and released. That could be a year or more away. If I try again tomorrow, it will still have the same flawed knowledge it has today.

It’s worth highlighting that every single response I got sounded credible. If I weren’t the coauthor of our policy administration system market report, I might have fallen for it. But I just pulled on one thread, and it all unraveled.

You can argue that my test was a little unfair. The information I asked for is not available in the public domain from an objective, reliable source. The LLM was trained on vendor marketing material and “free” research and is joining the dots as best it can. Nevertheless, it answered my questions without hedging and got almost everything wrong. If I had made a business decision based on this information, I would have wasted a lot of time and money.

You could also argue that by giving it better direction—asking it to double-check its work, for example—I might have seen better results. That’s also true, but the reality is that most users aren’t experts in asking questions in exactly the right way yet. (The term for this is “prompt engineering.”)

So, while generative AI is incredibly impressive (I’m a big fan), it has some key limitations that mean human expertise remains essential in navigating insurance technology. Flesh-and-blood analysts like me don’t need to stress too much about being replaced by robots just yet. The machines are getting smarter, but it seems like we have a couple of years before we are relegated to the scrap heap.

For more information about generative AI in insurance, you can read our report, Top Five Questions Insurer CIOs Have About Generative AI. If you’d like to talk to a genuine human advisor about AI (or if you’d like to read the full transcript of the conversation I had with this AI), please reach out to me directly at [email protected].