EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) APPROACHES FOR CYBER RISK ASSESSMENT IN FINANCIAL SERVICES

Authors

  • Saba Ashfaq, MS IT (Software Design and Management), Washington University of Science and Technology, USA
  • Tonoy Kanti Chowdhury, Master of Science in Information Technology, Washington University of Science and Technology, USA

DOI:

https://doi.org/10.63125/3gjcb322

Keywords:

Explainable AI, Cybersecurity, Financial Risk, Interpretability, Detection Performance

Abstract

This study investigated the quantitative influence of explainable artificial intelligence on cyber risk assessment within financial systems by evaluating how interpretability metrics contributed to model performance, alert clarity, and operational decision outcomes. Using three large-scale financial cybersecurity datasets consisting of 1.2 million fraud records, 4.8 million network intrusion events, and 930,000 authentication logs, the study analyzed both interpretable and non-interpretable machine learning models. Detection performance metrics, including precision, recall, F1-score, and AUC, were examined alongside interpretability measures such as fidelity, stability, and explanation complexity. Results showed that fidelity demonstrated strong positive correlations with performance metrics, ranging from r = .69 to r = .76 across datasets, while stability showed moderate to strong correlations (r = .64 to r = .72). Explanation complexity exhibited negative correlations with detection performance (r = –.49 to –.57), indicating that more complex explanations corresponded with weaker classification behavior. Multiple regression models revealed that fidelity significantly predicted improvements in F1-scores (β = .41, p < .001) and AUC (β = .47, p < .001), while stability also contributed positively but with smaller effects (β = .33 and β = .29). Complexity negatively predicted both outcomes (β = –.26 to –.31). Alert-quality analysis showed that higher interpretability reduced ambiguous alerts by 18–27% and increased explanation-assisted analyst accuracy by 22–31%. Cross-validation and bootstrapped reliability tests demonstrated low performance variability (SD range: .016–.028) and high stability for explanation metrics among interpretable models. Validity assessments confirmed strong construct, convergent, and criterion validity across all interpretability measures. 
Overall, the study provided robust numerical evidence that explainability was a significant operational factor in financial cyber risk modeling. Interpretability metrics were shown to enhance detection effectiveness, increase alert clarity, and improve the practical usability of machine learning outputs, particularly for complex black-box models used in high-stakes cybersecurity environments.

Published

2023-09-28

How to Cite

Saba Ashfaq, & Tonoy Kanti Chowdhury. (2023). EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) APPROACHES FOR CYBER RISK ASSESSMENT IN FINANCIAL SERVICES. American Journal of Interdisciplinary Studies, 4(03), 96-135. https://doi.org/10.63125/3gjcb322