A Meta-Analysis of Deep Reinforcement Learning for Dynamic Project Scheduling in Engineering Systems

Authors

  • Chapal Barua, Master of Science in Administration, Engineering Management, Central Michigan University, Mount Pleasant, MI, USA

DOI:

https://doi.org/10.63125/t5h1bc22

Keywords:

Deep Reinforcement Learning, Project Scheduling, Engineering Systems, Optimization, Meta-Analysis

Abstract

This study conducted a quantitative meta-analysis to evaluate the effectiveness of deep reinforcement learning (DRL) for dynamic project scheduling in engineering systems. The analysis synthesized data from 52 empirical studies across multiple domains, including manufacturing (38.5%), construction (23.1%), logistics (17.3%), and infrastructure systems (11.5%). The findings demonstrated that DRL-based scheduling models significantly outperformed traditional deterministic, heuristic, and classical reinforcement learning approaches across key performance indicators. The aggregated results indicated an average makespan reduction of 18.7%, a resource utilization improvement of 14.2%, a cost efficiency gain of 11.6%, a tardiness reduction of 15.3%, and a throughput improvement of 12.8%. Statistical analysis confirmed that these improvements were significant, with 84.6% of studies reporting p-values below 0.05. Effect size evaluation showed moderate to large effects, with makespan reduction achieving a standardized mean difference of 0.91 and resource utilization 0.84, indicating strong practical significance. Subgroup analysis revealed that hybrid DRL models achieved the highest overall improvement (21.5%), followed by Deep Q-Network (17.0%), Actor-Critic (16.8%), and Policy Gradient approaches (14.0%). Domain-specific results indicated more consistent improvements in manufacturing systems, while construction and infrastructure projects showed higher variability due to increased uncertainty. Heterogeneity analysis produced an I² value of 61.3%, reflecting moderate to high variability across studies, while meta-regression indicated that dataset size, domain, and algorithm type explained 47.8% of the variance in outcomes. Visual analysis supported these findings, showing consistent positive effect distributions and minimal publication bias.
Overall, the study provided robust quantitative evidence that DRL-based scheduling models enhance efficiency, adaptability, and performance in complex engineering environments, particularly under dynamic and uncertain conditions.
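The pooling and heterogeneity statistics cited in the abstract (a pooled standardized mean difference and an I² value) follow standard meta-analysis arithmetic. The sketch below illustrates inverse-variance fixed-effect pooling with Cochran's Q and I², using hypothetical effect sizes and variances; it is not the study's actual data or analysis pipeline.

```python
def pool_effects(effects, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I²."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations of each study from the pooled effect
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I²: share of total variability beyond sampling error, floored at 0
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, i_squared

# Hypothetical standardized mean differences and their sampling variances
effects = [0.91, 0.84, 0.65, 1.10, 0.40]
variances = [0.04, 0.05, 0.06, 0.03, 0.08]
pooled, i2 = pool_effects(effects, variances)
print(f"pooled SMD = {pooled:.2f}, I² = {i2:.1f}%")
```

An I² near 61%, as reported in the study, would indicate that well over half of the observed between-study variability reflects genuine heterogeneity rather than chance, motivating the subgroup and meta-regression analyses described above.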

Published

2026-04-05

How to Cite

Chapal Barua. (2026). A Meta-Analysis of Deep Reinforcement Learning for Dynamic Project Scheduling in Engineering Systems. American Journal of Interdisciplinary Studies, 7(01), 578-616. https://doi.org/10.63125/t5h1bc22
