
Intelligent Environments Laboratory

The University of Texas at Austin

November 25, 2018, Filed Under: Research

Reinforcement learning for urban energy systems & demand response

Demand response, or demand-side management, improves grid stability by increasing demand flexibility: it provides consumers with economic incentives to shift their peak demand towards periods of peak renewable energy generation. Reinforcement learning has been used to control diverse energy systems such as electric vehicles, HVAC systems, smart appliances, and batteries. The future of demand response depends greatly on its ability to prevent consumer discomfort and to integrate human feedback into the control loop. Reinforcement learning can be model-free, so a controller can adapt to its environment, and to human preferences, by integrating user feedback directly into its control logic.
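As a toy sketch of the idea above, the following tabular Q-learning agent learns to charge a small battery when a time-of-use price is low and discharge it during the evening peak. All names, the discretization, and the price signal are illustrative assumptions, not the controllers studied in the review.

```python
import random

# Minimal tabular Q-learning sketch for price-responsive battery control.
# State: (hour of day, battery state of charge); actions: discharge, idle, charge.
HOURS = 24
LEVELS = 5            # discretized state of charge: 0 (empty) .. 4 (full)
ACTIONS = (-1, 0, 1)  # discharge, idle, charge

# Toy time-of-use tariff: cheap at night, expensive in the evening peak.
price = [0.10 if h < 7 or h > 21 else (0.40 if 17 <= h <= 20 else 0.20)
         for h in range(HOURS)]

Q = {(h, s, a): 0.0 for h in range(HOURS) for s in range(LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(h, soc, a):
    """Apply an action; the reward is the negative cost of grid purchases."""
    soc2 = max(0, min(LEVELS - 1, soc + a))
    bought = 1 + (soc2 - soc)          # fixed 1-unit demand plus charging
    reward = -price[h] * max(0, bought)
    return (h + 1) % HOURS, soc2, reward

random.seed(0)
for _ in range(2000):                  # episodes, one simulated day each
    h, soc = 0, 2
    for _ in range(HOURS):
        if random.random() < eps:      # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(h, soc, x)])
        h2, soc2, r = step(h, soc, a)
        best_next = max(Q[(h2, soc2, x)] for x in ACTIONS)
        Q[(h, soc, a)] += alpha * (r + gamma * best_next - Q[(h, soc, a)])
        h, soc = h2, soc2
```

Because the update uses only observed transitions and rewards, the agent needs no model of the battery or the tariff, which is the model-free property the paragraph refers to; user feedback could enter the same loop as an additional reward term.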

We reviewed the literature on the use of reinforcement learning in urban energy systems and for demand response applications in the smart grid [1]. Our review shows that although many papers consider human comfort and satisfaction, most focus on single-agent systems with demand-independent electricity prices and a stationary environment. However, when electricity prices are modeled as demand-dependent variables, there is a risk of shifting the peak demand rather than shaving it. There is therefore a need to further explore the applicability of reinforcement learning in multi-agent systems that can participate in demand response. Only a small fraction of the articles we reviewed tested reinforcement learning controllers in physical systems; to establish the reliability and adaptability of these algorithms, more real-world experiments with state-of-the-art reinforcement learning methods are needed. We also observed that most studies are not easily reproducible, which makes it challenging to compare the performance of the controllers. Further standardization is needed, both in the control problems investigated and in the methods and simulation tools used. We have proposed a basic framework to help with this standardization.

[1] Vázquez-Canteli, J.R., and Nagy, Z., “Reinforcement Learning for Demand Response: A Review of Algorithms and Modeling Techniques”, Applied Energy 235, 1072–1089, 2019 (published in Progress in Applied Energy, a special section reserved for the top 3% of articles).
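The peak-shifting risk mentioned above can be shown with a small numeric illustration: if many independent agents all move their flexible load to the same low-price hour, the aggregate peak is displaced and even amplified, whereas a coordinating scheme spreads the same load. The load profile and the greedy coordination rule below are illustrative assumptions, not a method from the review.

```python
# Aggregate inflexible load per hour, and 20 agents with 1 shiftable unit each.
base_load = [3, 3, 4, 6, 8, 6, 4, 3]
flexible = [1.0] * 20

# Independent agents: each picks the hour with the lowest base load
# (price proportional to demand), so everyone piles into the same hour.
cheapest_hour = min(range(len(base_load)), key=lambda h: base_load[h])
naive = base_load[:]
for load in flexible:
    naive[cheapest_hour] += load

# Coordinated (multi-agent) allocation: fill the currently least-loaded
# hour one unit at a time, which water-fills the profile flat.
coordinated = base_load[:]
for load in flexible:
    h = min(range(len(coordinated)), key=lambda h: coordinated[h])
    coordinated[h] += load

print(max(base_load), max(naive), max(coordinated))  # → 8 23.0 8
```

The uncoordinated schedule nearly triples the peak (23 units at the formerly cheapest hour), while coordination keeps the peak at its original 8 units, which is the argument for multi-agent formulations of demand response.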


About Us

The Intelligent Environments Laboratory (IEL), led by Prof. Zoltán Nagy, is an interdisciplinary research group within the Building Energy & Environments (BEE) and Sustainable Systems (SuS) Programs of the Department of Civil, Architectural and Environmental Engineering (CAEE) in the Cockrell School of Engineering of the University of Texas at Austin.

The aim of our research is to rethink the built environment and define Smart Buildings and Cities as spaces that adapt to their occupants and reduce their energy consumption.

We combine data science with building science, applying machine learning at the building and urban scale.

Take a look at our projects!
