A SYSTEMATIC REVIEW OF HUMAN-AI COLLABORATION IN IT SUPPORT SERVICES: ENHANCING USER EXPERIENCE AND WORKFLOW AUTOMATION
DOI: https://doi.org/10.63125/0fd1yb74

Keywords: Human-AI Collaboration, Workflow Automation, User Experience, IT Support Services, Service Performance

Abstract
This study addresses a gap in enterprise IT support: service desks increasingly embed AI assistants in support portals and ticket workflows, yet many organizations lack quantitative evidence on whether human oversight and automation quality jointly improve user experience and service performance. The purpose was to test a quantitative, cross-sectional, case-study-based model linking Human-AI Collaboration (HAC) and Workflow Automation Effectiveness (WAE) to User Experience (UX) and perceived IT Support Service Performance (SP). Survey data were collected from enterprise support cases; 320 questionnaires were distributed, 259 were returned, and 247 valid responses were analyzed (77.2% usable response rate; 71.7% end users and 28.3% IT support personnel; 54.3% used AI support weekly or more). Constructs were measured with multi-item five-point Likert scales and showed favorable perceptions: HAC M = 3.91 (SD = 0.64), WAE M = 3.84 (SD = 0.69), UX M = 3.88 (SD = 0.62), and SP M = 3.79 (SD = 0.66), with good to excellent reliability (Cronbach's alpha 0.86 to 0.91). The analysis applied descriptive statistics, reliability testing, Pearson correlations, multiple regression, and bootstrapped mediation (5,000 resamples). Associations were positive and significant (HAC with UX, r = 0.62; UX with SP, r = 0.63; both p < .001). Regression indicated that HAC (beta = 0.41) and WAE (beta = 0.33) together explained 49% of the variance in UX (R² = 0.49, p < .001), while WAE (beta = 0.38), HAC (beta = 0.21), and UX (beta = 0.29) explained 56% of the variance in SP (R² = 0.56, p < .001). UX partially mediated the HAC to SP relationship (indirect beta = 0.29, 95% CI [0.19, 0.40]). The findings suggest that AI-enabled IT support should be governed as a hybrid workflow with clear escalation rules and reliable automation, and evaluated continuously with joint metrics that track experience alongside efficiency outcomes.
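To make the reported analysis plan concrete, the sketch below illustrates how the reliability, correlation, regression, and bootstrapped mediation steps could be carried out in Python. It is not the study's actual code: the DataFrame `df`, its column names (hac, wae, ux, sp), and the simulated scores are assumptions introduced only for illustration, with the sample size and resample count matching the figures reported above.

```python
# Illustrative sketch only; the paper reports the analysis plan but not its code.
# Assumes a hypothetical DataFrame `df` whose columns hac, wae, ux, sp hold
# composite (mean) scores of each multi-item Likert construct.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(42)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the item columns of one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def indirect_effect(data: pd.DataFrame) -> float:
    """a*b indirect effect of HAC on SP through UX (simple mediation)."""
    a = sm.OLS(data["ux"], sm.add_constant(data[["hac"]])).fit().params["hac"]
    b = sm.OLS(data["sp"], sm.add_constant(data[["hac", "ux"]])).fit().params["ux"]
    return a * b

def bootstrap_ci(data: pd.DataFrame, n_boot: int = 5000, alpha: float = 0.05):
    """Percentile bootstrap CI for the indirect effect (5,000 resamples)."""
    estimates = [
        indirect_effect(data.sample(n=len(data), replace=True, random_state=int(s)))
        for s in rng.integers(0, 2**31 - 1, size=n_boot)
    ]
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return indirect_effect(data), (lo, hi)

# Example usage with simulated composite scores standing in for the survey (n = 247).
n = 247
hac = rng.normal(3.91, 0.64, n)
ux = 0.6 * hac + rng.normal(0, 0.5, n)
sp = 0.4 * ux + 0.3 * hac + rng.normal(0, 0.5, n)
df = pd.DataFrame({"hac": hac, "wae": rng.normal(3.84, 0.69, n), "ux": ux, "sp": sp})

r, p = stats.pearsonr(df["hac"], df["ux"])                          # Pearson correlation
ols = sm.OLS(df["ux"], sm.add_constant(df[["hac", "wae"]])).fit()   # multiple regression
est, ci = bootstrap_ci(df)                                          # bootstrapped mediation
print(f"r = {r:.2f} (p = {p:.3f}), R2 = {ols.rsquared:.2f}, "
      f"indirect = {est:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Note that this sketch uses unstandardized coefficients and percentile bootstrap intervals; the standardized betas and CI method used in the study may differ.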
