Michael Fiedel

InsurTech Ohio Spotlight with Thomas Holmes

Thomas Holmes is the Chief Actuary for the US Region at Akur8, a co-author of the upcoming publication ‘Penalized Regression and Lasso Credibility’ and a Fellow of the CAS. In his role at Akur8, he works directly with clients to share modeling best practices and performs outreach to familiarize the industry with advanced modeling methodologies. Thomas is working to enable insurers to leverage the power of Machine Learning and predictive analytics to inject game-changing speed and accuracy into their pricing processes while maintaining full transparency, auditability and control over the models created. Thomas was interviewed by Michael Fiedel, Co-Founder at InsurTech Ohio and Co-Founder at PolicyFly, Inc.



Thomas, how fast do you see the insurance industry adopting Artificial Intelligence (AI) and Machine Learning (ML), and what do you believe are the driving factors?


“The insurance industry is going to adopt AI and ML extremely fast, but I don't think it’s going to be adopted very smoothly. Everyone is excited about it, and every leader wants to be the first one to adopt it - for good reason. It has absolutely amazing potential. The algorithms available today are extremely powerful, and we have access to an abundance of data. You can look up geographic or demographic data by zip code, and there are thousands and thousands of different characteristics readily available. We also have the compute power to use all this data now. With cloud computing, people can apply these algorithms in practice on even the largest of datasets. I say that the adoption of AI and ML won't happen smoothly because it's very easy to misuse the algorithms or to not fully understand exactly what’s coming out of them.


A lot of new AI/ML algorithms will give you powerful-looking signal, but they are black boxes. While a model’s test statistics may appear strong, it can be difficult to understand what’s going on when you get into its inner workings. It’s very hard, if not impossible, to apply actuarial judgment and move the model into production. There have been notable examples in the news of failures to fully understand a model, such as a lawyer submitting fake citations generated by ChatGPT and an insurer implementing a fraud model that came under investigation for racial bias. There are going to be a lot of bumps in the road where people weren't anticipating trouble.


I do expect a very strong rebound that will be focused on transparent AI and on fully recognizing the limitations of black box AI. Most actuarial work will focus on transparent algorithms that can be easily traced from data to algorithm to score. Black box models are going to be highly limited in actuarial analysis. To use a black box algorithm, an insurance company must be comfortable not knowing exactly how it got to the answer; a rough approximation must be sufficient. Certainly for pricing applications, insureds and regulators will want something more exact. For less scrutinized or less material analyses, we're going to see some large language models (LLMs) and other black box models being used in the insurance industry. However, I do believe the industry is headed toward transparent AI for the majority of use cases, driven by challenges that will be encountered in the near future.”
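To make “traced from data to algorithm to score” concrete, here is a minimal sketch of how a transparent, multiplicative rating model can be audited step by step. The base rate and relativities below are hypothetical illustrations, not values from Akur8 or any real rating plan.

```python
# A hypothetical, transparent rating calculation: every step from
# input characteristic to final price is visible and auditable.

base_rate = 500.0  # hypothetical annual base premium

# Multiplicative relativities, as a log-link GLM would produce
relativities = {
    "territory_B": 1.15,    # hypothetical territory factor
    "vehicle_age_5": 0.95,  # hypothetical vehicle-age factor
    "prior_claims_1": 1.30, # hypothetical prior-claims factor
}

premium = base_rate
print(f"base rate: {premium:8.2f}")
for factor, rel in relativities.items():
    premium *= rel
    print(f"after {factor} (x{rel:.2f}): {premium:8.2f}")
```

A black box model offers no equivalent step-by-step trace, which is exactly the property at issue for pricing.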


How should the industry see the role of actuarial scientists as stewards in using these technologies?


“Actuaries will be involved in understanding the assumptions behind each of these technologies and in governing how they should be used. Some major questions that actuaries should be asking of every model are: How do you apply actuarial judgment to the model itself? Can you apply it during the model selection process? Can you only apply actuarial judgment to the final output of the model? In addition to tweaking the model, can an actuary assess the overall actuarial soundness of the scoring process? An easy example of the application of judgment is in a generalized linear model (GLM). There’s often a statistically best GLM that someone has built that’s not the actuarially best model for implementation. The only reason this is visible is that GLMs are transparent algorithms. There can be many black box models that are statistically good, but without transparency, actuaries will not be able to identify the situations where a statistically superior model is not the actuarially best model for implementation. For some of these new technologies, applying actuarial adjustments may become far more complicated or impossible. Actuaries need to focus on explainability, implementation and the ability to apply actuarial judgment.
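As a sketch of why a GLM leaves room for actuarial judgment, the toy model below exposes every fitted coefficient for review. The dataset and column names are hypothetical, and the example assumes the statsmodels library.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data (toy values for illustration only)
df = pd.DataFrame({
    "claim_count": [0, 1, 0, 2, 0, 1, 0, 0, 1, 3],
    "exposure":    [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 1.0, 0.3, 1.0, 1.0],
    "territory":   ["A", "B", "A", "B", "A", "B", "A", "A", "B", "B"],
})

# Poisson claim-frequency GLM with a log link and an exposure offset
model = smf.glm(
    "claim_count ~ territory",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["exposure"]),
).fit()

# With a log link, each relativity is exp(coefficient), so an actuary
# can inspect, cap or smooth it before it enters a rating plan.
print(np.exp(model.params))
```

The point is not the fit itself but that the output is a small set of named factors an actuary can defend or override; a black box model offers no equivalent handle.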


In the end, actuaries may also have to say no to some of these technologies in certain applications, specifically in insurance pricing. Actuaries and insurance companies have to be very transparent with their pricing process. Actuaries aren’t going to ask ChatGPT what your insurance costs should be; that would be a fantastically bad use of the algorithm. This is a bit of a straw man argument, but it illustrates that there are some very powerful technologies that should not be used for certain actuarial applications because of the lack of clarity on how they arrived at a given outcome. Actuarial data scientists will really be stewards of the application, interpretation, adjustment and governance of these models to make sure they are applied soundly.”


What examples have you seen of early failures to properly understand this emerging space?


“One of the best examples of an early failure is related to the difference between black box and transparent algorithms. In Europe, insurance pricing is significantly less regulated, so you can use more complex algorithms without needing to explain them. European insurers had been using black box algorithms in pricing, and then COVID happened. The question on everyone's mind was, ‘Why are our models doing poorly?’ They wanted to react as quickly as they could. Unfortunately, with black box algorithms, that's a hard question to answer, and it’s not clear how to adjust an underperforming model. When you face a shock and need to adjust quickly, a wrongly chosen algorithm leaves you unable to see exactly why it’s wrong. Insurers in Europe have already encountered problems with black box pricing, and some companies have adopted more transparent methodologies for a significant portion of their analyses.


One of the more salient examples is the gap between actuarial exams and the data science knowledge that actuaries need to gain to understand and apply these methodologies appropriately. A very simple example of this is the setup of a weather model. An inexperienced modeler might split their data randomly into training and testing sets. An experienced modeler would know that this results in a poor model, because the same severe weather events would appear in both the training and testing sets. This mistake is very easy to make, and avoiding it requires merging actuarial knowledge with the data science concept of target leakage.
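Here is a minimal sketch of the split being described, assuming each claim record carries a hypothetical storm_event_id column and using scikit-learn's grouped splitter.

```python
import pandas as pd
from sklearn.model_selection import train_test_split, GroupShuffleSplit

# Hypothetical weather-claim records; several rows share one storm event
df = pd.DataFrame({
    "storm_event_id": [1, 1, 1, 2, 2, 3, 3, 3, 4, 4],
    "wind_speed":     [90, 95, 88, 60, 65, 110, 105, 108, 70, 72],
    "loss":           [10000, 12000, 9500, 2000, 2500,
                       30000, 28000, 31000, 3000, 3200],
})

# Naive random split: rows from the same storm land in both sets,
# so test performance is inflated by target leakage.
train_bad, test_bad = train_test_split(df, test_size=0.3, random_state=0)

# Grouped split: each storm event falls entirely in train or in test,
# which honestly measures performance on unseen events.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["storm_event_id"]))
print("held-out storms:", sorted(df.iloc[test_idx]["storm_event_id"].unique()))
```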


I don't think that actuaries should try to learn every single new algorithm. Instead, they should learn the principles behind them well enough to reconcile those principles with actuarial appropriateness.


Alongside the multitude of algorithms, there's a failure stemming from attempts to implement a one-size-fits-all governance approach to artificial intelligence and machine learning. These algorithms are vastly different and can’t be scrutinized with the same questions. The governance structure needs to be agile enough to accommodate different types and uses of advanced models. It should avoid placing unnecessary burdens on data scientists who are focused on building models, since forcing them to answer questions that don't apply yields poor answers. However, it also needs to ensure that all necessary questions are being addressed. Identifying these essential questions has been challenging for the industry and will continue to be difficult as AI and ML techniques evolve.”


What are some of the critical considerations for using this technology properly?


“For a lot of applications, I would consider the algorithm carefully and lean toward transparent solutions for actuarial work. If you're giving Netflix recommendations, maybe you don't need transparent AI; if you give somebody an incorrect recommendation, they’ll probably just laugh about it and joke to their friends. If you're pricing someone's insurance product or conducting a fraud investigation and get it wrong, the public will have a vastly different reaction. I highly recommend transparent algorithms as the north star for actuarial AI and machine learning.


There are three key pillars in operationalizing these models. The first is data science expertise, which involves building and analyzing models from the ground up, transforming raw data into a finished model. The second pillar is industry knowledge, ensuring that a strong statistical model performs well in practice. Lastly, you need an understanding of the implementation process to ensure it aligns with the assumptions behind the model.


Members of your team are usually specialized in one of these three pillars. Maybe they know a lot about the industry - and only a little bit about data science and a little bit about implementation. Or, they know all about how to implement these models - and just a little bit about how they're built and where actuaries fit in. Actuaries possess a well-rounded knowledge of these three fields. However, instead of owning the entire process, actuaries should facilitate communication among available experts in each area. Clear communication is essential between those who build the models, those with industry expertise and those who understand implementation and its real-world impact. This communication, supported by transparent algorithms, will be crucial for the successful implementation of AI and ML.”

 

  
