Summary: | Nowadays, we are surrounded by electric appliances, whether at home (washing machine, kettle, oven) or at work (computer, cellphone, printer). Such devices help us daily, but their popularization has increased energy consumption to concerning levels. In an attempt to curb this consumption, governments started enforcing energy-education policies to teach homeowners how to reduce wastage on the demand side. One such policy was the deployment of smart meters, which allow consumers to see how much energy is being consumed at any given time through a display on the household energy meter. Even though this measure was well received, studies show that the best energy-conservation results are obtained through real-time, appliance-level feedback. To obtain such feedback, one can either meter every outlet in a household, which is not viable for broad deployment, or disaggregate the energy recorded by the smart meter. Non-Intrusive Load Monitoring (NILM) is the name given to the second option, in which the aggregated readings of a household are used to estimate the energy consumed by each appliance. Many approaches have been proposed to solve NILM, ranging from hidden Markov models (HMMs) to graph signal processing (GSP), with deep learning models achieving remarkable, state-of-the-art results. With the intent of creating a complete NILM solution, Withus partnered with the University of Aveiro and proposed this dissertation. The initial objective was to develop a machine learning model to solve NILM; however, during the background analysis, we found the need to create a new dataset, which expanded the initial proposal to include dataset preprocessing and conversion. Regarding NILM, we proposed three new deep learning models: a convolutional neural network with residual blocks, a recurrent neural network, and a multilayer perceptron that uses discrete wavelet transforms as features. These models went through multiple iterations, being evaluated first on the simpler ON/OFF classification task and later modified and evaluated on the disaggregation task. We compared our models to the state-of-the-art ones proposed in NILMTK: they outperformed the real-time alternative, dAE, reducing the NRMSE by 49% on average, and came close to the best option that classifies with a 30-minute delay, Seq2Point, increasing the error by 17% on average. Besides that, we also analyzed the best models from this comparison regarding the benefit of transfer learning between datasets, where the results show a marginal performance improvement when using transfer learning. This document presents the definition of the solution outline, the multiple options considered for dataset processing and the one selected, the models' evolution and results, and the comparison with state-of-the-art models regarding generalization to different houses and under transfer learning.