Anthony Habayeb is the CEO and Co-Founder of Monitaur, helping carriers and insurance related companies build, manage and scale responsible and ethical model governance. Anthony was interviewed by Andrew Daniels, Founder and Managing Director at InsurTech Ohio.
Anthony, why does the insurance industry have a head start when it comes to adopting governance?
“Governance is the process of achieving business objectives; you implement it to achieve those objectives and to manage risks. Insurance knows risk management; it's a risk-centric industry. The idea of having good governance in place to mitigate risks and achieve business objectives is core to how insurance works.
Model governance is crucial in the insurance industry because it enables comprehensive risk management, objective reviews and distribution of responsibilities across critical decisions being made by modeling systems in underwriting, claims, pricing and fraud detection. Holistic model governance organizes data quality, privacy and permissions, security, fairness and continuous monitoring with objectivity across the entire project lifecycle.
At some point, the insurance industry is going to have to insure AI, and to do so, it will want to see governance and risk management by companies building AI. Therefore, insurance as an industry has broader commercial rewards for building internal governance around their own AI. By governing and managing risks of their own use of models and AI, they learn how to underwrite and back our future AI economy.”
Does proper governance stop a company from innovating?
“Totally the opposite! Good governance accelerates innovation and establishes repeatable patterns, transparency and alignment that more consistently achieve business objectives.
Governance can be this word that's automatically associated with regulation, risk and compliance. We joke that there's big G and little g governance. We work really hard with our customers to appreciate and realize the improvement in business achievement Monitaur delivers, not just the risk mitigation.”
How can you ensure that AI is used for good?
“The idea of knowing whether or not your AI is any good first comes down to how you define good. Every company needs to incorporate their corporate values and commitments to stakeholders like consumers, employees, regulators and shareholders in their AI strategy. Then good governance creates definition and measurement of achieving the objectives. Governance helps to go beyond good intentions.
That said, here are a few tactical examples of ways companies can help to ensure their AI is used for good:
Make sure the teams and individuals involved in building AI are educated about the risks and challenges with fairness of AI.
Require every AI project team to define their objectives, goals and awareness of key risks and what steps are being taken to mitigate those risks.
Realize that the majority of AI failures and adverse outcomes originate from issues with training data quality, robustness, accuracy and appropriateness, so be thorough and intentional in the data preparation phase of an AI project.
Ensure objective review and input throughout the project lifecycle; require people independent of the model development to perform reviews of key governance steps.
Be comfortable posing challenges like, ‘Could we solve this problem just as well with a less complex modeling type?’ or ‘Just because we can do this, should we?’ Remember that AI should be well aligned to corporate values.
Good is in the eye of the beholder, and it's the responsibility of every company building or implementing AI to have an understanding of what good means and to have a process through governance that helps to ensure it's achieving that outcome.”
How do you ensure governance across partnerships with carriers and solution providers? Who’s responsible for that AI governance?
“Responsibility for AI governance lies with the company that’s using the software, whether it’s built, bought or partnered with. It’s essential to ensure that partners and vendors meet the company's goals, objectives and requirements for good governance.
Commensurate with the risk and intended use of AI, it’s crucial that every company using AI establishes a way to gain assurances regarding the quality, safety, fairness, compliance and robustness of the AI application. There’s a growing regulatory and consumer expectation of what good, responsible and ethical AI looks like. Companies should incorporate these concerns into their internal policies.
Ultimately, the company causing the impact, the one whose final decision is automated or influenced by AI, is responsible and accountable. As a result, we will start to see requirements and accountability pushed equally across data providers, solution providers, consultants and reinsurers to establish better governance throughout the value chain. There will not be space for companies building or selling AI to deflect responsibility.”