TY - JOUR
T1 - Bridging transparency in insurance claims prediction
T2 - A comparative study of explainable AI and traditional linear models using vehicle telematics data
AU - McDonnell, Kevin
AU - Sheehan, Barry
AU - Murphy, Finbarr
N1 - Publisher Copyright:
© 2025
PY - 2026/2
Y1 - 2026/2
N2 - The proliferation of vehicle telematics data and advanced analytical Artificial Intelligence (AI) models presents an opportunity for transportation risk stakeholders. For example, within non-life insurance lines of business, traditional methods for claims prediction, such as the Generalised Linear Model (GLM), are limited by their inherent parametric properties. Conversely, AI methods are suited to high-dimensional datasets like vehicle telematics. However, within the highly regulated environment of non-life insurance, inherently interpretable decision-making models such as the GLM remain dominant over black-box AI methods. Explainable AI (XAI) methods can provide interpretable solutions to AI models, promoting regulatory adherence and reducing adoption barriers. However, selecting the most appropriate method to interpret model predictions in the context of insurance is challenging. This research uses an extensive naturalistic vehicle telematics dataset containing 14,642 vehicles with 125 million driver trip observations. Given this unique dataset, this research provides a real-world comparison of XAI tools (SHAP, LIME, ExplainerDashboard and Dalex) with AI methods (XGBoost, Random Forest, Decision Trees, Support Vector Machine, Deep Neural Network and TabNet) against traditional GLMs for claim prediction. Our results indicate varying levels of interpretability between each XAI and AI model tested. XGBoost with Dalex or ExplainerDashboard presents the most suitable model for claim prediction over traditional methods.
AB - The proliferation of vehicle telematics data and advanced analytical Artificial Intelligence (AI) models presents an opportunity for transportation risk stakeholders. For example, within non-life insurance lines of business, traditional methods for claims prediction, such as the Generalised Linear Model (GLM), are limited by their inherent parametric properties. Conversely, AI methods are suited to high-dimensional datasets like vehicle telematics. However, within the highly regulated environment of non-life insurance, inherently interpretable decision-making models such as the GLM remain dominant over black-box AI methods. Explainable AI (XAI) methods can provide interpretable solutions to AI models, promoting regulatory adherence and reducing adoption barriers. However, selecting the most appropriate method to interpret model predictions in the context of insurance is challenging. This research uses an extensive naturalistic vehicle telematics dataset containing 14,642 vehicles with 125 million driver trip observations. Given this unique dataset, this research provides a real-world comparison of XAI tools (SHAP, LIME, ExplainerDashboard and Dalex) with AI methods (XGBoost, Random Forest, Decision Trees, Support Vector Machine, Deep Neural Network and TabNet) against traditional GLMs for claim prediction. Our results indicate varying levels of interpretability between each XAI and AI model tested. XGBoost with Dalex or ExplainerDashboard presents the most suitable model for claim prediction over traditional methods.
KW - Artificial intelligence
KW - Explainable AI (XAI)
KW - Generalised linear model
KW - Insurance
KW - Telematics
UR - https://www.scopus.com/pages/publications/105021355824
U2 - 10.1016/j.techfore.2025.124418
DO - 10.1016/j.techfore.2025.124418
M3 - Article
AN - SCOPUS:105021355824
SN - 0040-1625
VL - 223
JO - Technological Forecasting and Social Change
JF - Technological Forecasting and Social Change
M1 - 124418
ER -