Scott Lawson is an Attorney at The Lawson Firm, providing legal counsel and related legal services to the insurance industry. Scott was interviewed by Michael Fiedel, Co-Founder at InsurTech Ohio and Co-Founder at PolicyFly, Inc.
Scott, what are some regulatory tenets that are important to keep in mind when creating new artificial intelligence (AI) tooling?
“There are certain things that are part of insurance regulation, and they are more or less set in stone. There tends to be an assumption among people who work in tech that as technology changes, the world changes. In fact, some things, especially the law, tend to move much more slowly. Even slower to change are the principles that underlie what the laws are attempting to do. At its core, insurance regulation is meant to protect the consumer. Underlying that is the idea that insurance is good for society: it spreads risk, and it helps people avoid devastating financial consequences for whatever might happen to them as they move through life. As part of that, there are two main principles to bear in mind:
The first is that insurance should be available, accessible and affordable. When we're talking about deploying new technology or coming up with new processes in the insurance industry, we need to make sure that it's good for the consumer. The consumer needs to benefit and not be unfairly penalized, and the outcomes we're seeing must not be unfair from a regulatory or risk standpoint.
The second is that insurance should be fair. As we keep those things in mind, we also have to avoid unfairly discriminatory outcomes, where certain classes of people are uprated or denied insurance. Artificial intelligence has the capacity to develop data biases that, over time, could create those types of outcomes. As we move along, we're going to find that insurance companies, MGAs and large brokerages trying to deploy AI technology in their environments are going to be under increasing pressure to make sure those outcomes don't occur.”
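To make the regulatory risk Lawson describes concrete, here is a minimal Python sketch of one widely used screen for disparate impact, the "four-fifths" ratio. The column names, data and 0.8 threshold are illustrative assumptions on our part, not anything Lawson or a regulator prescribes.

```python
# Minimal sketch: the "four-fifths" disparate impact screen on underwriting
# decisions. Column names, data and the 0.8 threshold are illustrative
# assumptions, not requirements quoted from the interview or the NAIC.
import pandas as pd

def disparate_impact_ratios(decisions: pd.DataFrame,
                            group_col: str = "group",
                            outcome_col: str = "approved") -> pd.Series:
    """Each group's approval rate divided by the best-treated group's rate."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical decision log:
log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
ratios = disparate_impact_ratios(log)
flagged = ratios[ratios < 0.8]   # approved at under 80% of the top group's rate
print(flagged.to_dict())         # {'B': 0.375} -> worth a human review
```

In practice, a flag like this is a prompt for human review of the rating factors involved, not a verdict on its own.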
What are some of your concerns with AI self-governance that companies need to consider when balancing the pressure to scale?
“We're seeing AI being deployed throughout many aspects of the enterprise - not only underwriting and pricing, but also service, anti-fraud and claims. It's important to keep the principles we spoke about - fairness, equity, accessibility and affordability - in mind from the outset, because as you scale up, trying to fix problems later becomes much harder. We know that as AI is deployed, you may be starting from a particular data set that is fair, equitable and passes regulatory muster. Further into deployment, however, the algorithms and AI's ability to learn may unintentionally develop biases that could cause bad outcomes. That's something to keep in mind early on. Make sure that it's managed at the outset and throughout the whole deployment process.”
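One way to act on that warning is to recompute the same fairness metric periodically and compare it against the value validated at launch. The sketch below is ours, not Lawson's; the metric values and the 0.05 tolerance are hypothetical.

```python
# Minimal sketch of bias-drift monitoring: recompute a fairness metric on
# recent decisions and compare it to the value validated at launch. The
# metric values and the 0.05 tolerance are hypothetical assumptions.
def fairness_drift(baseline_ratio: float,
                   current_ratio: float,
                   tolerance: float = 0.05) -> bool:
    """True if the disparate-impact ratio has slipped more than
    `tolerance` below the level measured before deployment."""
    return (baseline_ratio - current_ratio) > tolerance

baseline = 0.92   # ratio validated when the model shipped
current = 0.81    # ratio recomputed on last quarter's decisions
if fairness_drift(baseline, current):
    print("Bias drift detected - pause, review and possibly retrain.")
```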
Do you have any examples of how an AI-driven startup should go about reviewing AI-based results so they don't allow that creep into discriminatory territory?
“Developing the right guardrails right out of the gate is the best way to do it. Then test, test, test. Keep testing and keep validating as you move along. Think about what you're finding out and what you don't know. Do some gap analysis between what you're seeing and what you think you need to see, and keep managing it early on and continuing through the process. You can't just let it run and have free rein throughout the deployment process.”
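The "test, test, test" loop Lawson describes can be as simple as a recurring gap analysis between expected and observed outcomes. Here is a minimal sketch; the metric names and targets are hypothetical placeholders.

```python
# Minimal sketch of the "test, test, test" loop as a gap analysis between
# the outcomes you expect and the outcomes you observe. Metric names and
# targets are hypothetical placeholders.
EXPECTED = {"approval_rate": 0.70, "disparate_impact_ratio": 0.90}

def gap_analysis(observed: dict, expected: dict = EXPECTED) -> dict:
    """Return metric -> (expected, observed, gap) for every tracked metric."""
    return {m: (want, observed[m], round(want - observed[m], 3))
            for m, want in expected.items()}

# Run after every retraining cycle, not just at launch:
report = gap_analysis({"approval_rate": 0.66, "disparate_impact_ratio": 0.84})
for metric, (want, got, gap) in report.items():
    print(f"{metric}: expected {want}, observed {got}, gap {gap}")
```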
What are the regulators you’ve spoken to saying about the efficacy of AI technology in its current state?
“Regulators tend to get bad press, and people assume that they're going to say ‘no’ to everything. In fact, many of the regulators I’ve spoken with in recent years are supportive of the potential efficiencies and innovation that AI could bring to the table, but they realize that there are going to be players who are not as attentive to making sure that it's producing the kinds of outcomes they need to see. The big news on the regulatory front is that the National Association of Insurance Commissioners (NAIC) developed and promulgated what they call a ‘model bulletin’ on AI. It puts the entire industry on notice as to how the collective regulatory world sees all of this and the steps they expect insurance companies, large brokerages and whoever else is deploying AI within the industry to take.
Essentially, this affirms what we’ve discussed today, which is that they expect the players in the industry to comply with existing laws. They expect companies to be mindful of existing laws and to use those laws as guardrails as they develop these technologies. Regulators want to make sure that outcomes are fair, and they're concerned about transparency. They want insurance companies to be able to explain the outcomes that they're seeing. That's going to be a big challenge for companies as they move along. Overall, what regulators are looking for is that companies develop governance frameworks and controls to make sure they meet certain guideposts as they're deploying and acquiring AI technology. Companies are then expected to use those guardrails to make sure the technology passes certain tests out of the gate and that outcomes are going to be fair, accurate, transparent and free from unfair bias. This is the first shot across the bow from the regulatory world saying, in effect, ‘Look, we're not going to let this get away from us. We want to make sure you understand this, and this is what we expect.’”
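The bulletin's expectations lend themselves to a concrete gate in the release process. Below is a minimal sketch of what "passing certain tests out of the gate" could look like in code; every check name and threshold is a hypothetical placeholder of ours, not language from the bulletin.

```python
# Minimal sketch of a pre-deployment gate reflecting the bulletin's themes:
# a model ships only if it passes fairness, accuracy and transparency
# checks. Every check and threshold here is a hypothetical placeholder.
from typing import Callable

CHECKS: dict[str, Callable[[dict], bool]] = {
    "fairness":     lambda m: m["disparate_impact_ratio"] >= 0.80,
    "accuracy":     lambda m: m["auc"] >= 0.75,
    "transparency": lambda m: m["has_reason_codes"],  # can outcomes be explained?
}

def deployment_gate(metrics: dict) -> bool:
    """Report which checks failed, if any, and allow deployment only if none did."""
    failures = [name for name, check in CHECKS.items() if not check(metrics)]
    if failures:
        print("Blocked:", ", ".join(failures))
    return not failures

# Passes all three hypothetical checks:
print(deployment_gate({"disparate_impact_ratio": 0.85,
                       "auc": 0.78,
                       "has_reason_codes": True}))  # True
```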
How do you see AI technology and regulations evolving into the future?
“Artificial intelligence is here to stay. It's going to continue to be deployed aggressively by the insurance industry. The tension between how companies want to deploy technology - in ways that are economically efficient - and the outcomes the regulatory world wants to see is going to continue. AI is going to develop fast. The laws are going to move very slowly, if at all, in response to it. There's going to be this constant push and pull as we move along.”