Open access publication

Article, 2024

Data-Driven hierarchical energy management in multi-integrated energy systems considering integrated demand response programs and energy storage system participation based on MADRL approach

Sustainable Cities and Society, ISSN 2210-6707, 2210-6715, Volume 103, Page 105264, 10.1016/j.scs.2024.105264

Contributors

Khodadadi, Amin [1] [2]; Adinehpour, Sara [1] [2]; Sepehrzad, Reza (ORCID 0000-0001-7911-581X, corresponding author) [3]; Al-Durra, Ahmed (ORCID 0000-0002-6629-5134) [4]; Anvari-Moghaddam, Amjad (ORCID 0000-0002-5505-3252) [5]

Affiliations

  1. Department of Electrical Engineering, Arman Niroo Hormozgan Company, Bandar Abbas, Iran [NORA names: Iran; Asia, Middle East]
  2. Islamic Azad University, Science and Research Branch [NORA names: Iran; Asia, Middle East]
  3. Politecnico di Milano [NORA names: Italy; Europe, EU; OECD]
  4. Khalifa University of Science and Technology [NORA names: United Arab Emirates; Asia, Middle East]
  5. Aalborg University [NORA names: AAU Aalborg University; University; Denmark; Europe, EU; Nordic; OECD]

Abstract

This study presents an intelligent, data-driven hierarchical energy management approach for a multi-integrated energy system (MIES) that considers the optimal participation of renewable energy resources (RERs) and energy storage systems (ESSs) and the execution of integrated demand response (IDR) programs based on wholesale and retail market signals. The proposed objective function spans four levels: minimizing operating costs, minimizing environmental pollution costs, minimizing risk costs, and reducing the destructive effects of cyberattacks such as false data injection (FDI). The approach is implemented in a structure of central and local controllers and is based on multi-agent deep reinforcement learning (MADRL). The MADRL model is formulated with Markov decision process equations and solved by multi-agent soft actor-critic and deep Q-learning algorithms in two phases: offline training and online operation. Results across different scenarios show reductions of 19.51 % in operating cost, 19.69 % in risk cost, 24 % in cyber-security cost, and 20.24 % in pollution cost. The proposed approach takes an important step toward meeting smart-city challenges and requirements, offering fast response and high accuracy while reducing computational time and burden.
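The two-phase scheme the abstract describes (offline training of a reinforcement-learning agent, then greedy online operation) can be illustrated with a deliberately simplified sketch. This is not the paper's implementation: the paper uses multi-agent soft actor-critic and deep Q-learning over the full MIES model, whereas the toy below is a single tabular Q-learning agent dispatching one storage unit against a hypothetical price signal. All prices, capacities, and hyperparameters here are illustrative assumptions.

```python
import random

ACTIONS = [-1, 0, 1]          # discharge, idle, charge (energy units per step)
PRICES = [3, 3, 8, 8, 2, 2]   # toy retail price signal over a 6-step horizon
CAPACITY = 3                  # toy storage capacity in energy units
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# State = (time step, state of charge); Q-table initialised to zero.
Q = {(t, soc): {a: 0.0 for a in ACTIONS}
     for t in range(len(PRICES) + 1) for soc in range(CAPACITY + 1)}

def step(t, soc, a):
    """Apply an action; reward is revenue from discharging minus the
    cost of charging at the current price."""
    soc2 = min(max(soc + a, 0), CAPACITY)
    reward = -PRICES[t] * (soc2 - soc)   # pay to charge, earn to discharge
    return t + 1, soc2, reward

random.seed(0)
for episode in range(5000):              # offline training phase
    t, soc = 0, 0
    while t < len(PRICES):
        if random.random() < EPS:        # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(Q[(t, soc)], key=Q[(t, soc)].get)
        t2, soc2, r = step(t, soc, a)
        target = r + GAMMA * max(Q[(t2, soc2)].values())
        Q[(t, soc)][a] += ALPHA * (target - Q[(t, soc)][a])
        t, soc = t2, soc2

# Online operation phase: greedy rollout of the learned policy.
t, soc, profit = 0, 0, 0.0
while t < len(PRICES):
    a = max(Q[(t, soc)], key=Q[(t, soc)].get)
    t, soc, r = step(t, soc, a)
    profit += r
print(f"greedy-policy profit over the horizon: {profit:.1f}")
```

The trained policy learns to charge while prices are low and discharge at the price peak, mirroring the price-signal-driven ESS participation described above; the paper's multi-agent, deep-network formulation handles the far larger continuous state and action spaces of a real MIES.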

Keywords

Multi-integrated energy system, hierarchical energy management, multi-agent deep reinforcement learning, Markov decision process, soft actor-critic, deep Q-learning, integrated demand response, energy storage system, renewable energy resources, cyberattacks, false data injection, risk cost, pollution cost, offline training, online operation, smart cities

Funders

  • Khalifa University of Science and Technology

Data Provider: Digital Science