
Intelligent Environments Laboratory

The University of Texas at Austin

November 19, 2018, Filed Under: Publication

Review paper published & selected for special section in APEN

Our paper Reinforcement learning for demand response: A review of algorithms and modeling techniques, led by IEL PhD student José Ramón Vázquez-Canteli, has been published in Applied Energy. We're thrilled that the editors have selected it for inclusion in the Special Section Progress in Applied Energy.

J.R. Vazquez-Canteli and Z. Nagy, Reinforcement learning for demand response: A review of algorithms and modeling techniques, Applied Energy, Vol. 235, 2019, pp. 1072–1089.

DOI: https://doi.org/10.1016/j.apenergy.2018.11.002

Highlights

  • Review of the application of reinforcement learning (RL) for demand response (DR).
  • DR is relevant for integrating renewable energy sources into the smart grid.
  • RL/DR is considered across the chain from energy generation and storage to supply and user satisfaction.
  • Typical algorithms and open research directions are discussed.
  • Most articles focus on single-agent systems in stationary environments.

Abstract
Buildings account for about 40% of global energy consumption. Renewable energy resources are one possibility to mitigate the dependence of residential buildings on the electrical grid. However, their integration into the existing grid infrastructure must be done carefully to avoid instability and to guarantee availability and security of supply. Demand response, or demand-side management, improves grid stability by increasing demand flexibility, and shifts peak demand towards periods of peak renewable energy generation by providing consumers with economic incentives. This paper reviews the use of reinforcement learning, a machine learning technique, for demand response applications in the smart grid. Reinforcement learning has been used to control diverse energy systems such as electric vehicles, heating, ventilation and air conditioning (HVAC) systems, smart appliances, and batteries. The future of demand response greatly depends on its ability to prevent consumer discomfort and integrate human feedback into the control loop. Reinforcement learning is a potentially model-free algorithm that can adapt to its environment, as well as to human preferences, by directly integrating user feedback into its control logic. Our review shows that, although many papers consider human comfort and satisfaction, most of them focus on single-agent systems with demand-independent electricity prices and a stationary environment. However, when electricity prices are modelled as demand-dependent variables, there is a risk of shifting the peak demand rather than shaving it. We identify a need to further explore reinforcement learning to coordinate multi-agent systems that can participate in demand response programs under demand-dependent electricity prices. Finally, we discuss directions for future research, e.g., quantifying how RL could adapt to changing urban conditions such as building refurbishment and urban or population growth.
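To give a flavor of the setting the review surveys, here is a minimal toy sketch (not from the paper — the tariff, parameters, and reward design are all illustrative assumptions) of tabular Q-learning applied to demand response: an agent learns in which hour to run a single flexible load over a day under a fixed, hypothetical time-of-use price signal.

```python
import random

random.seed(0)  # reproducibility of the toy training run

HOURS = 24
# Hypothetical time-of-use tariff in $/kWh (24 hourly entries, illustrative only)
PRICES = [0.10] * 7 + [0.30] * 4 + [0.15] * 6 + [0.35] * 4 + [0.10] * 3
ACTIONS = (0, 1)  # 0 = defer the flexible load, 1 = run it this hour

def train(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning over states (hour, load_still_pending)."""
    q = {(h, p): {a: 0.0 for a in ACTIONS} for h in range(HOURS) for p in (0, 1)}
    for _ in range(episodes):
        pending = 1  # one flexible load must run at some point today
        for h in range(HOURS):
            s = (h, pending)
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            if pending and a == 1:
                reward, pending_next = -PRICES[h], 0   # pay the hourly price
            elif pending and h == HOURS - 1:
                reward, pending_next = -1.0, pending   # penalty: load never ran
            else:
                reward, pending_next = 0.0, pending
            s_next = ((h + 1) % HOURS, pending_next)
            best_next = max(q[s_next].values()) if h < HOURS - 1 else 0.0
            q[s][a] += alpha * (reward + gamma * best_next - q[s][a])
            pending = pending_next
    return q

q = train()
# First hour in which the greedy policy chooses to run the pending load;
# after training this should land in a cheap-tariff hour.
run_hour = next(h for h in range(HOURS)
                if max(ACTIONS, key=lambda a: q[(h, 1)][a]) == 1)
print(run_hour, PRICES[run_hour])
```

Note that the agent learns the tariff structure purely from reward feedback, without a model of the environment — the model-free property the abstract highlights. The sketch is single-agent with demand-independent prices, i.e., exactly the common case the review argues should be extended to multi-agent settings with demand-dependent prices.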


