Explainable AI Models for Transparent Grammar Instruction and Automated Language Assessment

Authors

  • Fahimul Habib, Master of Arts in Applied Linguistics and EL, Chittagong Independent University, Bangladesh

DOI:

https://doi.org/10.63125/wttvnz54

Keywords:

Explainable AI (XAI), Automated language assessment, Actionable grammar feedback, Transparency and trust, Technology acceptance

Abstract

This study addresses the problem that cloud-hosted AI grammar feedback and automated scoring tools are often experienced as opaque, which can weaken transparency, trust, and perceived fairness and ultimately reduce learning value and adoption in real institutional, enterprise-managed deployments. The purpose was to quantify how explainability features shape user outcomes and to test whether explanation clarity, actionability, and consistency predict perceived transparency, trust, fairness, perceived learning effectiveness, and acceptance or intention to use. The design was quantitative, cross-sectional, and case-based, using a five-point Likert instrument, with hypotheses tested through associations and prediction models. The sample comprised N = 210 end users from a single case setting with meaningful system exposure (2–4 weeks: 29.5%; 5–8 weeks: 44.8%; 9+ weeks: 25.7%), providing a realistic cloud or enterprise usage context for perceptions of explainable feedback and scoring. Key variables were operationalized as Explanation Clarity, Explanation Actionability, Explanation Consistency, Perceived Transparency, Trust in AI Outputs, Perceived Fairness, Perceived Learning Effectiveness, and Acceptance or Intention. The analysis plan applied descriptive statistics to profile construct levels, internal consistency reliability testing, Pearson correlations to evaluate hypothesized relationships, and multiple regression to estimate unique predictor effects while controlling for overlap among constructs. Headline findings showed consistently positive perceptions above the neutral midpoint, including Clarity (M = 3.98, SD = 0.62), Actionability (M = 3.87, SD = 0.66), Transparency (M = 3.81, SD = 0.64), Trust (M = 3.76, SD = 0.68), Fairness (M = 3.69, SD = 0.73), Learning Effectiveness (M = 3.85, SD = 0.65), and Acceptance (M = 3.90, SD = 0.63). Reliability was strong across constructs (α range .83 to .90). Correlations supported the mechanism that clearer explanations strengthen transparency and that transparency supports trust, for example Clarity–Transparency r = .62 and Transparency–Trust r = .63 (p < .001), while Actionability–Learning Effectiveness r = .58 and Trust–Acceptance r = .59 (p < .001). In regression, the learning model achieved R² = .56, with Actionability as the strongest predictor (β = .36, p < .001), followed by Transparency (β = .21, p = .002) and Clarity (β = .17, p = .009); the acceptance model achieved R² = .59, led by Trust (β = .29, p < .001) and Fairness (β = .22, p = .001), with Transparency and Actionability also contributing. These findings imply that cloud and enterprise deployments should prioritize explanation designs that are not only understandable but also concretely actionable, and that governance and communication features which enhance transparency and fairness are central to calibrated trust and sustained adoption.
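The analysis plan summarized above (descriptive statistics, internal consistency reliability, Pearson correlations, and multiple regression on construct scores) can be illustrated with a minimal Python sketch. The file name, item columns, and construct-to-item mapping below are hypothetical placeholders rather than the study's instrument, and the code is not the author's analysis script; standardized β coefficients, as reported in the abstract, would additionally require z-scoring the construct scores before fitting the regression.

# Illustrative sketch only: descriptives, Cronbach's alpha, Pearson correlations,
# and multiple regression for construct scores from a five-point Likert survey.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the item columns measuring one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical survey file: one row per respondent, one column per Likert item.
df = pd.read_csv("survey_responses.csv")

# Hypothetical construct-to-item mapping (names are placeholders).
constructs = {
    "clarity": ["clarity_1", "clarity_2", "clarity_3"],
    "actionability": ["action_1", "action_2", "action_3"],
    "transparency": ["transp_1", "transp_2", "transp_3"],
    "trust": ["trust_1", "trust_2", "trust_3"],
    "fairness": ["fair_1", "fair_2", "fair_3"],
    "learning": ["learn_1", "learn_2", "learn_3"],
    "acceptance": ["accept_1", "accept_2", "accept_3"],
}

# Construct scores are item means; report M, SD, and alpha per construct.
scores = pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in constructs.items()})
for name, cols in constructs.items():
    print(f"{name}: M={scores[name].mean():.2f}, "
          f"SD={scores[name].std(ddof=1):.2f}, "
          f"alpha={cronbach_alpha(df[cols]):.2f}")

# Pearson correlation for a hypothesized pair, e.g. clarity and transparency.
r, p = stats.pearsonr(scores["clarity"], scores["transparency"])
print(f"Clarity-Transparency: r={r:.2f}, p={p:.4f}")

# Multiple regression: acceptance on trust, fairness, transparency, actionability.
X = sm.add_constant(scores[["trust", "fairness", "transparency", "actionability"]])
model = sm.OLS(scores["acceptance"], X).fit()
print(model.summary())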

Published

2023-04-29

How to Cite

Fahimul Habib. (2023). Explainable AI Models for Transparent Grammar Instruction and Automated Language Assessment. American Journal of Interdisciplinary Studies, 4(01), 27-54. https://doi.org/10.63125/wttvnz54
