EXPLAINABLE AI (XAI) MODELS FOR CLOUD-BASED BUSINESS INTELLIGENCE: ENSURING COMPLIANCE AND SECURE DECISION-MAKING

Authors

  • Md Muzahidul Islam, B.Sc. in Computing Science and Technology, Jiangxi Normal University, Jiangxi, China
  • Md Mohaiminul Hasan, Project Analyst, Quantanite, Dhaka, Bangladesh

DOI:

https://doi.org/10.63125/5etfhh77

Keywords:

Explainable AI, Cloud BI, Compliance, Security, Governance

Abstract

This study investigated Explainable AI (XAI) models within cloud-based business intelligence (BI) systems to determine how explainability quality supports regulatory compliance assurance and secure decision-making. A quantitative comparative design was implemented in a simulated cloud BI pipeline, in which 210 cross-functional participants completed four BI tasks (demand forecasting, fraud/risk scoring, anomaly detection, and resource optimization) under intrinsic, post-hoc, and hybrid XAI conditions. Descriptive results showed that hybrid XAI produced the strongest Explainability Quality (EQ) profile, with higher fidelity (M=4.31, SD=0.49), stability (M=4.18, SD=0.52), and human agreement (M=4.12, SD=0.50) than the intrinsic and post-hoc conditions, while post-hoc methods displayed the widest stability dispersion (SD=0.70). Compliance Assurance (CA) was strongest under hybrid XAI for audit traceability (M=4.24, SD=0.50) and decision reproducibility (M=4.16, SD=0.54), whereas post-hoc models showed weaker reproducibility (M=3.61, SD=0.69) and higher fairness deviation (M=2.98, SD=0.73). Secure Decision-Making (SDM) outcomes followed the same direction: hybrid XAI yielded higher robust decision integrity (M=4.20, SD=0.51) and adversarial detection (M=4.05, SD=0.58), while post-hoc explanations showed higher leak risk (M=3.42, SD=0.71). Regression analysis indicated that EQ significantly predicted CA (R²=.534, ΔR²=.318), with stability (β=.31, p<.001) and fidelity (β=.24, p<.001) as the dominant contributors. EQ also predicted SDM (R²=.460), led by stability (β=.28, p<.001) and fidelity (β=.21, p<.001), while sparsity/complexity showed a small negative SDM effect (β=−.09, p=.041). Mediation testing confirmed a partial indirect pathway through CA (β_indirect=.17, p=.003). Overall, high-quality XAI functioned as a measurable governance mechanism that improved compliance readiness and secure BI decision reliability in cloud environments.
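The mediation pathway reported above (EQ → CA → SDM, with a partial indirect effect through CA) can be illustrated with a minimal regression sketch. The code below is not the study's analysis: the data are synthetic, the coefficients are arbitrary placeholders, and the variable names (`eq`, `ca`, `sdm`) are assumed stand-ins for the composite scores described in the abstract. It shows only the mechanics of estimating paths a (EQ → CA), b (CA → SDM controlling for EQ), and the indirect effect a×b via ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 210  # matches the study's sample size; everything else is illustrative

# Synthetic composite scores; true path coefficients chosen arbitrarily.
eq = rng.normal(4.0, 0.5, n)                        # explainability quality
ca = 0.6 * eq + rng.normal(0.0, 0.4, n)             # compliance assurance
sdm = 0.3 * eq + 0.5 * ca + rng.normal(0.0, 0.4, n) # secure decision-making

def ols_slopes(y, *predictors):
    """Least-squares fit with an intercept; returns the slope coefficients."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(ca, eq)[0]                 # path a: EQ -> CA
b = ols_slopes(sdm, eq, ca)[1]            # path b: CA -> SDM, controlling for EQ
c_prime = ols_slopes(sdm, eq, ca)[0]      # direct effect of EQ on SDM
indirect = a * b                          # mediated effect through CA

print(f"a={a:.2f}  b={b:.2f}  direct={c_prime:.2f}  indirect={indirect:.2f}")
```

In practice the study's significance test for the indirect path (β_indirect=.17, p=.003) would use a bootstrap or Sobel-type procedure rather than the point estimate alone.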

Published

2023-09-30

How to Cite

Md Muzahidul Islam, & Md Mohaiminul Hasan. (2023). EXPLAINABLE AI (XAI) MODELS FOR CLOUD-BASED BUSINESS INTELLIGENCE: ENSURING COMPLIANCE AND SECURE DECISION-MAKING. American Journal of Interdisciplinary Studies, 4(03), 208–249. https://doi.org/10.63125/5etfhh77
