Optimizing Distributed Energy Resource Integration Using Deep Reinforcement Learning for Post-Disaster Recovery
International Journal of Development Research
Received 11th December, 2024; Received in revised form 29th December, 2024; Accepted 17th January, 2025; Published online 27th February, 2025
Copyright © 2025, Raed Alotaibi and Mohamed A. Zohdy. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Distributed Energy Resources (DERs) have emerged as a significant advancement in modern power distribution systems, enabling the integration of renewable energy sources and energy storage solutions to enhance network performance. However, natural disasters introduce substantial management challenges for DERs, often resulting in infrastructure failures, resource overutilization, and disruptions to critical services. Overcoming these challenges requires optimal, self-aware architectures capable of managing DER integration and dispatch operations in real time. To address this issue, this paper proposes a Deep Reinforcement Learning (DRL) framework based on Deep Q-Networks (DQN) to enhance post-disaster recovery in power distribution systems. The proposed framework optimally allocates power to critical loads, reconfigures the network structure, and minimizes restoration time. Extensive simulations, conducted using OpenDSS and a Python-based platform, were evaluated across various disaster scenarios to assess the efficacy of the proposed framework. The results show that the proposed DRL framework outperforms traditional heuristic-based approaches, achieving a 20% reduction in recovery time and delivering 15% more critical load under a 50% reduction in DER capacity. The framework's scalability and potential for integration into existing grid systems are supported by key features such as self-organizing, reconfigurable microgrids and dynamic resource management. Progressive learning further improves the DRL agent's decision-making, demonstrating its value as a smart, adaptable, and scalable solution for disaster-stricken power systems. Future research will focus on integrating the proposed framework into existing grid structures and exploring alternative DRL architectures to enhance grid robustness.
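To make the restoration decision loop concrete, the sketch below shows the core idea in a deliberately tiny toy setting: an agent learns which prioritized critical loads to energize from limited surviving DER capacity. Everything here (three loads, capacity of two, the priority weights, and the reward shaping) is an illustrative assumption, not the paper's actual environment, and a tabular Q-function stands in for the DQN's neural network so the example stays self-contained; a full DQN would replace the table with a network trained on replayed transitions.

```python
import random
import numpy as np

# Hypothetical toy environment (not the paper's): 3 critical loads,
# surviving DERs can supply at most 2 of them. Priorities are reward weights.
N_LOADS = 3
CAPACITY = 2
PRIORITY = np.array([4.0, 2.0, 1.0])  # load 0 is most critical

def step(state, action):
    """Energize load `action` if capacity remains; reward = priority served."""
    new_state = state.copy()
    if new_state.sum() < CAPACITY and new_state[action] == 0:
        new_state[action] = 1
        reward = float(PRIORITY[action])
    else:
        reward = -0.5  # penalty for an infeasible or redundant action
    done = new_state.sum() == CAPACITY
    return new_state, reward, done

# Tabular Q-values stand in for the DQN's network in this sketch.
Q = {}
def q(state):
    key = tuple(state)
    if key not in Q:
        Q[key] = np.zeros(N_LOADS)
    return Q[key]

alpha, gamma, eps = 0.1, 0.95, 0.1
rng = random.Random(0)
for _ in range(2000):
    state = np.zeros(N_LOADS, dtype=int)
    done = False
    while not done:
        # epsilon-greedy action selection
        a = rng.randrange(N_LOADS) if rng.random() < eps else int(np.argmax(q(state)))
        nxt, r, done = step(state, a)
        target = r + (0.0 if done else gamma * q(nxt).max())
        q(state)[a] += alpha * (target - q(state)[a])
        state = nxt

# Greedy rollout: the learned policy should serve the two highest-priority loads.
s = np.zeros(N_LOADS, dtype=int)
order, done = [], False
while not done:
    a = int(np.argmax(q(s)))
    s, _, done = step(s, a)
    order.append(a)
print(sorted(order))  # the set of loads 0 and 1, the two highest priorities
```

In the paper's setting, the state would additionally encode network topology and DER output limits, and actions would include switching operations for microgrid reconfiguration; the learning update shown here is the same temporal-difference target a DQN optimizes.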