
A Future Direction of Machine Learning for Building Energy Management: Interpretable Models / Gugliermetti, Luca; Cumo, Fabrizio; Agostinelli, Sofia. - In: ENERGIES. - ISSN 1996-1073. - 17:3 (2024), pp. 1-5. [10.3390/en17030700]

A Future Direction of Machine Learning for Building Energy Management: Interpretable Models

Gugliermetti, Luca (First author; Writing – Review & Editing); Cumo, Fabrizio (Second author; Supervision); Agostinelli, Sofia (Last author; Visualization)
2024

Abstract

Machine learning (ML) algorithms are now part of everyday life, embedded in many of the technological devices we use. The spectrum of applications is wide, and ML represents a revolution that may change almost every human activity. As with any innovation, however, it comes with challenges. One of the most critical is giving users an understanding of how a model’s output relates to its input data. This property is called “interpretability”, and it is concerned with explaining which features influence a model’s output. Some algorithms have a simple, easy-to-understand relationship between input and output, while other models are “black boxes” that return an output without telling the user what influenced it. The lack of this knowledge creates a trust issue when the output is inspected by a human, especially when the operator is not a data scientist. The building and construction sector is beginning to face this innovation, and its scientific community is working to define best practices and models. This work develops an in-depth analysis of how interpretable ML models could be among the most promising future technologies for energy management in built environments.
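
To make the distinction concrete, the sketch below contrasts the two kinds of model the abstract describes. It is an illustration, not part of the published article: the dataset is synthetic and the feature names (outdoor temperature, occupancy, solar gain) are hypothetical. A linear regression is interpretable because its coefficients directly state each feature's influence, while a gradient-boosted ensemble is a black box that needs a post-hoc tool such as scikit-learn's permutation importance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic building-energy data; all feature names and effects are hypothetical.
rng = np.random.default_rng(0)
n = 500
outdoor_temp = rng.normal(15, 8, n)   # °C
occupancy = rng.integers(0, 50, n)    # people in the building
solar_gain = rng.uniform(0, 1, n)     # normalized irradiance
X = np.column_stack([outdoor_temp, occupancy, solar_gain])
# Assumed ground truth: demand falls with temperature, rises with occupancy.
y = -1.5 * outdoor_temp + 0.4 * occupancy + 3.0 * solar_gain + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
features = ["outdoor_temp", "occupancy", "solar_gain"]

# Interpretable model: each coefficient is a direct, human-readable explanation.
linear = LinearRegression().fit(X_train, y_train)
print("linear coefficients:", dict(zip(features, linear.coef_.round(2))))

# Black-box model: the input-output mapping is opaque, so a post-hoc
# explanation (permutation feature importance) is attached after training.
black_box = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
imp = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: importance {score:.3f}")
```

Both models recover temperature and occupancy as the dominant drivers; the difference is that the linear model explains itself, whereas the black box requires the extra inspection step, which is exactly the gap interpretability research addresses.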
machine learning; energy efficiency; building and construction; machine learning interpretability
01 Journal publication::01a Journal article
Files attached to this product

File: Gugliermetti _A Future Direction_2024.pdf
Access: open access
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 3.67 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1701011