Quantitative Benchmarking of ERP Analytics Architectures: Evaluating Cloud vs On-Premises ERP Using Cost-Performance Metrics

Authors

  • B. M. Taslimul Haque, Bachelor of Science in Computer Science & Engineering, American International University-Bangladesh, Dhaka, Bangladesh
  • Md. Arifur Rahman, Bachelor of Science (B.Sc.) in Computer Science & Engineering, Bangladesh University of Business & Technology, Bangladesh

DOI:

https://doi.org/10.63125/y05j6m03

Keywords:

Cloud ERP analytics, On-premises ERP, Cost-performance benchmarking, Workload normalization, Analytics effectiveness

Abstract

This study addresses the problem that organizations choose between cloud and on-premises ERP analytics without workload-normalized evidence, which materially increases cost and performance risk. The purpose was to benchmark ERP analytics architectures and test how architecture type relates to analytics effectiveness. Using a quantitative, cross-sectional, case-based design, objective cost and performance indicators were extracted over a fixed four-week window and paired with 5-point Likert survey responses from cloud and on-premises cases (n = 152 valid responses; cloud-exposed n = 79, on-premises-exposed n = 73). Key variables included architecture type (cloud = 1), total monthly analytics-related cost and cost per active user, latency (median and 95th percentile), throughput under 50-user concurrency, availability, incident rate and recovery time, and the perceptual constructs System Quality (SQ), Information Quality (IQ), Service Quality (ServQ), User Satisfaction (US), and Analytics Effectiveness (AE). The analysis used descriptive statistics, reliability testing (Cronbach’s alpha: SQ = .89, IQ = .86, ServQ = .84, US = .88, AE = .90), Pearson correlations, and multiple regression with controls for usage frequency, experience, and role. Results show that cloud delivered lower cost ($48,200 vs $61,750 per month; $214 vs $289 per user) and stronger performance (median latency 2.3 s vs 3.7 s; 95th percentile 6.9 s vs 10.8 s; throughput 1,420 vs 1,050 queries/hour; availability 99.91% vs 99.62%; incidents 3 vs 6 per month; mean time to recovery 38 vs 64 minutes). Perceptions aligned, with higher AE for cloud (M = 4.14, SD = 0.60) than for on-premises (M = 3.49, SD = 0.73). AE correlated most strongly with US (r = .71) and with measured performance (r = .55), and the regression explained substantial variance (adjusted R² = .49), with performance (β = .34, p < .001), architecture (β = .21, p = .002), and cost efficiency (β = .17, p = .009) as significant predictors. The implication is that ERP analytics selection should rely on repeatable workload definitions, percentile-based SLAs, and total cost of ownership (TCO) accounting that includes support and downtime, because performance stability and cost efficiency jointly drive decision value.
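The following Python sketch illustrates how the kind of workload-normalized indicators reported above (cost per active user, median and 95th-percentile latency, availability, incident rate, and mean time to recovery) could be computed for one case over a fixed four-week window. It is not the authors' instrumentation; the function names, field names, and input figures are hypothetical placeholders used only to make the metric definitions concrete.

    # Illustrative sketch only; not the study's actual measurement code.
    from statistics import median

    def percentile(values, p):
        """Nearest-rank percentile of a non-empty list (p in 0..100)."""
        ordered = sorted(values)
        rank = max(1, round(p / 100 * len(ordered)))
        return ordered[rank - 1]

    def cost_performance_summary(latencies_s, monthly_cost_usd, active_users,
                                 uptime_minutes, total_minutes,
                                 incident_recovery_minutes):
        # Workload-normalized cost and performance indicators for one window.
        return {
            "cost_per_active_user_usd": monthly_cost_usd / active_users,
            "latency_median_s": median(latencies_s),
            "latency_p95_s": percentile(latencies_s, 95),
            "availability_pct": 100 * uptime_minutes / total_minutes,
            "incidents_per_month": len(incident_recovery_minutes),
            "mttr_minutes": (sum(incident_recovery_minutes) / len(incident_recovery_minutes)
                             if incident_recovery_minutes else 0.0),
        }

    # Hypothetical four-week window for a single case (all values illustrative):
    print(cost_performance_summary(
        latencies_s=[1.8, 2.1, 2.4, 2.9, 6.5],       # sampled query latencies (s)
        monthly_cost_usd=48_200, active_users=225,    # assumed cost ledger and user count
        uptime_minutes=40_283, total_minutes=40_320,  # four weeks = 28 * 24 * 60 minutes
        incident_recovery_minutes=[30, 41, 43],       # per-incident recovery times
    ))

Under this reading, a percentile-based SLA would be stated against the latency_p95_s figure rather than the median, and the monthly cost ledger would fold in support and downtime before dividing by active users.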


Published

2020-12-26

How to Cite

B. M. Taslimul Haque, & Md. Arifur Rahman. (2020). Quantitative Benchmarking of ERP Analytics Architectures: Evaluating Cloud vs On-Premises ERP Using Cost-Performance Metrics. American Journal of Interdisciplinary Studies, 1(04), 55-90. https://doi.org/10.63125/y05j6m03
