
Artificial General Intelligence: What to Expect While We’re Expecting

AGI will be so transformative that any prediction we make today is likely based on a flawed understanding of the future.

Disclaimer: This will read like science fiction, and for now, it is. But don’t be too quick to dismiss it. If you are a Gen Xer or a Millennial, AI researchers believe there is a 50% chance you will see this in your lifetime.

Artificial General Intelligence (AGI) refers to autonomous systems that can perform at or beyond human level across general tasks. Unlike the AI we have today, which is designed for specific tasks, AGI would have the ability to understand, learn, and apply its intelligence to any problem or situation, much like a human but potentially at a superior level.

The timeline for AGI development is a subject of debate among experts. Estimates range widely, from as early as the end of this decade to as late as the end of the century or beyond. A survey conducted in 2022 found that AI researchers believed there was a 50% chance of AGI being developed by 2061. These predictions are informed speculation, however; AGI could arrive much sooner, or much later.

A few years ago, I read a book by philosopher Nick Bostrom called “Superintelligence: Paths, Dangers, Strategies” (Bostrom is better known as the originator of the simulation argument, which holds that we may well be living in a computer simulation, but that’s another story for a different day).

Bostrom argues that the development of AGI will quickly lead to superintelligence, which he defines as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

This struck me hard. We’ve been conditioned to picture AGI as human-like, whereas the reality is that it would be vastly superior to us almost immediately. We will not be peers, and it may view us the way we view our pets, or worse, the ants in our backyard.

Why will it advance so quickly? Bostrom identifies several factors: self-improvement (an AGI finding ways to enhance its own intelligence, leading to what he terms an “intelligence explosion”), growth in computing power (Moore’s law), and data availability (an AGI could rapidly process and learn from the entirety of human knowledge).

The Implications of AGI Are Profound

Unlike human intelligence, which is bound to our physical brains, AI systems can be copied and distributed rapidly. This means that once AGI is developed, it will likely become widespread very quickly. Once we have one operational AGI physician, millions of human physicians could be displaced with little to no delay, barring regulatory intervention.

The ability to easily deploy AGI means its effects could be felt globally in an incredibly short time frame, potentially leading to rapid and drastic changes in economic, political, and social structures. There is real potential for extreme inequality or power imbalances if AGI development is not managed with global cooperation and ethical considerations in mind.

The first entity to develop AGI, whether a company or a nation, stands to gain an enormous competitive advantage. It could quickly deploy its AGI across various sectors of the economy and the military, potentially outperforming every human-led or narrow-AI-led competitor. This could lead to a “winner takes all” scenario in which that entity becomes our “omni power”. We can only hope that they are benevolent, whoever they turn out to be.

Assuming we can navigate this geopolitical doomsday scenario, what would AGI mean for the insurance industry? And is there anything we can do to prepare for it?

The Future of Insurance

There will be plenty of uncertainty in a time of AGI, and people will still want and need financial protection. But the nature of the risks and uncertainties themselves will change; certain types (e.g., human error in driving) will be reduced, while new types (e.g., autonomous car failures) will emerge. Pooling risk, however, may become less significant. Insurance works by pooling many uncertain risks so that losses average out across the group; if an AGI can process vast amounts of data and predict each individual’s losses with high accuracy, there is far less residual uncertainty left to pool, and pricing can converge on each person’s expected loss, making pools smaller and highly individualized pricing possible (see the toy sketch below). The ability to predict and prevent losses before they happen could also increase dramatically, potentially shifting the focus from compensation to prevention.
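Here is a minimal sketch of that shift, in Python, with invented numbers; the claim probabilities and loss amount are illustrative assumptions, not actuarial data.

# Toy illustration: pooled vs. fully individualized premiums.
# All figures are made up for the example.

loss_if_claim = 10_000  # cost of one claim, in dollars

# Hypothetical annual claim probabilities for five policyholders.
# Today an insurer can only partially distinguish these risks; a
# highly predictive model could, in principle, estimate each one.
claim_probs = [0.01, 0.02, 0.05, 0.10, 0.20]

# Classic pooling: everyone pays the pool's average expected loss.
pooled_premium = sum(p * loss_if_claim for p in claim_probs) / len(claim_probs)

# Individualized pricing: each member pays their own expected loss.
individual_premiums = [p * loss_if_claim for p in claim_probs]

print(f"Pooled premium for everyone: ${pooled_premium:,.0f}")
for p, premium in zip(claim_probs, individual_premiums):
    print(f"  risk {p:.0%} -> individual premium ${premium:,.0f}")

Under pooling, the low-risk members subsidize the high-risk ones at a flat $760 apiece; perfect prediction replaces that with premiums ranging from $100 to $2,000, removing the cross-subsidy and, with it, much of the rationale for large, undifferentiated pools.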

AGI could create changes so profound that most current business models, including insurance, are rendered obsolete; our current understanding of risk, liability, and financial protection may become irrelevant. “Insurance” may look very different in the future from how we conceive of it today.

More troubling is the way AGI will transform the structure and operations of insurance companies. Most operational roles, including underwriting, claims processing, and customer service, can be automated entirely, and without regulation they will be. Leadership, sales, and relationship management may be less affected initially, although no human role will be entirely safe if regulators ever define AI “personhood”, recognizing AIs as legal persons with rights, responsibilities, and protections like those afforded to human beings. (This might seem far-fetched, but corporations already share many rights with humans.) That said, in the short term some humans will still be necessary, if only to remain accountable to regulators.

The regulatory landscape would need to evolve significantly to address AGI’s role in insurance, potentially requiring new forms of oversight and consumer protection. The biggest open question is how that regulation will play out. The only thing we can say for sure is that it will happen slowly.

Is There Anything We Can Do to Prepare?

The impact of AGI will be so transformative and unpredictable that our attempts to prepare will likely be based on a flawed vision of the future. The development of AGI, especially if it leads to an “intelligence explosion” as described by Bostrom, could happen so rapidly that even the most prepared companies might find themselves overwhelmed and unable to adapt quickly enough.

There are, however, a couple of things that the most forward-thinking insurance companies can do to ease the transition to a post-AGI world.

For example, instead of attempting to future-proof their business models, insurance companies with significant resources might better serve their long-term interests by investing in AI alignment research, which aims to ensure that AGI, when developed, is aligned with human values and interests. Companies and individuals can also contribute to the crucial work of defining rights, responsibilities, and ethical frameworks for AGI, work that could shape the post-AGI world.

Looking to the Future

As an industry analyst, I find this a hard topic to write about. Blogs typically need a happy ending and a call to action, yet the potential advent of AGI challenges the very foundations of our industry and, indeed, our society.

It has the potential to solve our most pressing problems, from curing diseases to reversing climate change, but it also carries the risk of rendering human labor obsolete, exacerbating inequality, and even posing an existential threat to humanity.

Thankfully, it’s still science fiction, because if it were to happen today, we’d be woefully unprepared for it. The best we can do as an industry is recognize this and lend our support to organizations working to ensure AGI development is conducted responsibly, ethically, and with humanity’s best interests at heart. The insurance industry, with its deep understanding of risk and its role in safeguarding society, has a unique perspective to offer in this ongoing conversation.