Our paper "Reinforcement learning for demand response: A review of algorithms and modeling techniques", led by IEL's PhD student Jose, has been published in Applied Energy. We're thrilled that the editors have selected it for inclusion in the Special Section Progress in Applied Energy.
J.R. Vazquez-Canteli and Z. Nagy, Reinforcement learning for demand response: A review of algorithms and modeling techniques, Applied Energy, Vol. 235, 2019, pp. 1072-1089
- Review of the application of reinforcement learning (RL) for demand response (DR).
- DR is relevant for integrating renewable energy sources into the smart grid.
- RL for DR is considered across the chain from energy generation and storage to supply and user satisfaction.
- Typical algorithms and open research directions are discussed.
- Most articles focus on single-agent systems in stationary environments.
Buildings account for about 40% of global energy consumption. Renewable energy resources are one way to mitigate the dependence of residential buildings on the electrical grid. However, their integration into the existing grid infrastructure must be done carefully to avoid instability and to guarantee availability and security of supply. Demand response, or demand-side management, improves grid stability by increasing demand flexibility: by providing consumers with economic incentives, it shifts peak demand towards periods of peak renewable energy generation. This paper reviews the use of reinforcement learning, a machine learning technique, for demand response applications in the smart grid. Reinforcement learning has been utilized to control diverse energy systems such as electric vehicles, heating, ventilation and air conditioning (HVAC) systems, smart appliances, and batteries. The future of demand response greatly depends on its ability to prevent consumer discomfort and to integrate human feedback into the control loop. Reinforcement learning can be model-free and can adapt both to its environment and to human preferences by directly integrating user feedback into its control logic. Our review shows that, although many papers consider human comfort and satisfaction, most of them focus on single-agent systems with demand-independent electricity prices and a stationary environment. However, when electricity prices are modelled as demand-dependent variables, there is a risk of shifting the peak demand rather than shaving it. We identify a need to further explore reinforcement learning for coordinating multi-agent systems that can participate in demand response programs under demand-dependent electricity prices. Finally, we discuss directions for future research, e.g., quantifying how RL could adapt to changing urban conditions such as building refurbishment and urban or population growth.
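The core idea in the abstract, an agent learning when to consume under time-varying prices, can be illustrated with tabular Q-learning. The sketch below is our own toy example, not code from the paper: the price profile, the discomfort penalty for never running the appliance, and all hyperparameters are invented for illustration.

```python
import random

# Hypothetical toy example (not from the paper): tabular Q-learning that
# learns when to run a single deferrable appliance over a 24-hour horizon,
# given a fixed, demand-independent price profile. All prices, penalties,
# and hyperparameters are made up for illustration.

HOURS = 24
PRICES = [0.10] * HOURS          # flat base tariff (invented)
for h in range(17, 21):
    PRICES[h] = 0.40             # evening peak
for h in range(22, 24):
    PRICES[h] = 0.05             # cheap late-night window

def train(episodes=3000, alpha=0.1, gamma=1.0, epsilon=0.1, seed=0):
    """Learn Q[hour][action]; action 0 = wait, 1 = run the appliance now.

    gamma = 1.0 because all costs fall within a single day.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(HOURS)]
    for _ in range(episodes):
        for hour in range(HOURS):
            if rng.random() < epsilon:
                a = rng.randrange(2)                     # explore
            else:
                a = 1 if Q[hour][1] > Q[hour][0] else 0  # exploit
            if a == 1:
                target = -PRICES[hour]          # pay the current price
            elif hour + 1 < HOURS:
                target = gamma * max(Q[hour + 1])  # bootstrap from next hour
            else:
                target = -1.0                   # never ran: comfort penalty
            Q[hour][a] += alpha * (target - Q[hour][a])
            if a == 1:
                break                           # appliance ran; episode ends
    return Q

def run_hour(Q):
    """First hour at which the greedy policy runs the appliance."""
    for h in range(HOURS):
        if Q[h][1] > Q[h][0]:
            return h
    return HOURS - 1  # forced to run in the last hour
```

Trained this way, the agent learns to wait through the evening peak and run the appliance in the cheap window. Note that if many such agents faced the same tariff, they would all shift their load to the same cheap hours, which illustrates why the review highlights coordinating multi-agent systems under demand-dependent electricity prices.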