How do insurers assess model risk in AI-based underwriting systems?

Insurers face growing model risk when adopting AI for underwriting because complex algorithms can fail in ways that traditional statistical models do not. Common causes include limited or biased training data, opaque model architectures, feedback loops with premium pricing, and changing external conditions that cause data drift. Consequences range from underwriting losses and regulatory sanctions to damaged trust among policyholders and distribution partners. Regulatory and standards bodies emphasize governance and evidence-based validation as the central mitigants: guidance from the Board of Governors of the Federal Reserve System highlights the need for independent model validation and documentation, and the International Association of Insurance Supervisors sets out expectations for insurer risk management when deploying AI.

Model validation and governance

Insurers implement structured validation programs that mirror regulatory expectations: independent review teams, a clear model inventory, documented development histories, and pre-deployment performance testing. Validation typically includes backtesting against historical outcomes, stress and sensitivity testing to probe failure modes, and benchmarking against simpler models to avoid unnecessary complexity. These practices reflect the recommendations of SR 11-7, the model risk management guidance issued jointly by the Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency, and of the International Association of Insurance Supervisors issues paper on AI, both of which stress independence and documentation. Explainability methods and human-in-the-loop checkpoints are used not only to satisfy auditors but to detect economic or ethical surprises that pure performance metrics can miss.
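The benchmarking step described above can be sketched in a few lines. This is a minimal illustration, not any insurer's actual process: the historical outcomes, model scores, and approval margin (`min_lift`) are invented for the example, and the discrimination metric is the standard rank-based AUC.

```python
def auc(scores, labels):
    """Rank-based AUC: probability that a randomly chosen positive case
    (a claim) outranks a randomly chosen negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def benchmark(complex_scores, baseline_scores, outcomes, min_lift=0.02):
    """Approve the complex model only if its backtested AUC beats a
    simpler benchmark by a documented margin, so that added complexity
    has to earn its keep."""
    auc_complex = auc(complex_scores, outcomes)
    auc_baseline = auc(baseline_scores, outcomes)
    return {
        "auc_complex": auc_complex,
        "auc_baseline": auc_baseline,
        "approved": auc_complex - auc_baseline >= min_lift,
    }

# Illustrative historical data: 1 = claim occurred, 0 = no claim
outcomes = [1, 0, 1, 0, 0, 1, 0, 0]
complex_scores = [0.9, 0.2, 0.8, 0.3, 0.1, 0.7, 0.4, 0.2]
baseline_scores = [0.6, 0.4, 0.7, 0.5, 0.3, 0.5, 0.4, 0.3]
result = benchmark(complex_scores, baseline_scores, outcomes)
```

In a real validation program the same comparison would be run on out-of-time holdout data and documented in the model inventory alongside the stress and sensitivity results.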

Data stewardship, fairness, and monitoring

Robust assessment extends beyond algorithms to data quality and ongoing monitoring. Data lineage and provenance checks establish where inputs come from, while bias testing looks for proxy discrimination, in which seemingly neutral inputs correlate with protected characteristics. The National Institute of Standards and Technology's AI Risk Management Framework recommends continuous monitoring, incident response plans, and lifecycle governance to manage emergent risks.

Insurers must also adapt these controls to cultural and territorial variation: consumer privacy norms and anti-discrimination rules differ between the European Union, where EIOPA guidance is influential, and jurisdictions such as the United States, affecting both allowable inputs and remediation strategies. Environmental considerations matter as well for large models: energy usage and infrastructure resilience can influence operational risk and costs in regions with constrained power systems.

Together, validation, data stewardship, governance, and contextual adaptation form the practical toolkit insurers use to assess and mitigate model risk in AI-based underwriting.
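As one concrete example of the continuous monitoring discussed above, a population stability index (PSI) check is a common way to detect data drift between training-time and scoring-time input distributions. This is a minimal sketch: the feature (applicant age), the sample data, and the 0.1 / 0.25 alert thresholds are illustrative rules of thumb, not a regulatory requirement.

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index: compares the distribution of a feature
    at scoring time against its training-time distribution. Larger PSI
    means more drift; PSI is always non-negative."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bin v falls into
            counts[idx] += 1
        # floor each fraction to avoid log(0) on empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative: applicant ages at training time vs. current applicants
train_ages = [25, 30, 35, 40, 45, 50, 55, 60, 65, 70]
current_ages = [24, 29, 33, 39, 44, 49, 54, 61, 66, 71]
drift = psi(train_ages, current_ages)
status = "investigate" if drift > 0.25 else "monitor" if drift > 0.1 else "stable"
```

In practice the same check would run on every model input on a schedule, with PSI breaches feeding the incident response process the NIST framework calls for.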