Trustworthy AI in Video Surveillance: The IMMAGINA Project / Ledda, Emanuele; Putzu, Lorenzo; Delussu, Rita; Fumera, Giorgio; Roli, Fabio. - Vol. 3486 (2023), pp. 371-376. (Paper presented at Ital-IA 2023: 3rd National Conference on Artificial Intelligence, held in Pisa).

Trustworthy AI in Video Surveillance: The IMMAGINA Project

Emanuele Ledda; Lorenzo Putzu; Rita Delussu; Giorgio Fumera; Fabio Roli
2023

Abstract

The increasing adoption of machine learning and deep learning models in critical applications raises the issue of ensuring their trustworthiness, which can be addressed by quantifying the uncertainty of their predictions. However, the black-box nature of many such models means that uncertainty can typically be quantified only through ad hoc superstructures, which require the model to be developed and trained in an uncertainty-aware fashion. For applications where previously trained models are already in operation, it is therefore desirable to develop uncertainty quantification approaches that act as lightweight “plug-ins”, applicable on top of such models without modifying or re-training them. In this contribution we present a research activity of the Pattern Recognition and Applications Lab of the University of Cagliari on a recently proposed post hoc uncertainty quantification method, named dropout injection, a variant of the well-known Monte Carlo dropout that requires neither re-training nor any further gradient-based optimization; this makes it a promising, lightweight solution for integrating uncertainty quantification into any already-trained neural network. We are investigating a theoretically grounded way to make dropout injection as effective as Monte Carlo dropout through a suitable rescaling of its uncertainty measure; we are also evaluating its effectiveness on the computer vision tasks of crowd counting and density estimation for intelligent video surveillance, thanks to our participation in a project funded by the European Space Agency.
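
To make the idea concrete, the following minimal sketch (our illustration, not code from the paper) shows, in PyTorch, how dropout layers could be injected into an already-trained network and how an uncertainty measure can be obtained from repeated stochastic forward passes. The helpers inject_dropout and predict_with_uncertainty, the layer placement, and the rescaling factor gamma are all hypothetical choices standing in for the rescaling step mentioned in the abstract.

    # Minimal sketch of the dropout-injection idea, assuming PyTorch.
    # Layer placement and the rescaling factor `gamma` are illustrative
    # assumptions, not the authors' exact method.
    import torch
    import torch.nn as nn

    def inject_dropout(model: nn.Module, p: float = 0.1) -> nn.Module:
        """Insert a Dropout layer before every Linear layer of a trained model."""
        for name, child in model.named_children():
            if isinstance(child, nn.Linear):
                setattr(model, name, nn.Sequential(nn.Dropout(p), child))
            else:
                inject_dropout(child, p)
        return model

    @torch.no_grad()
    def predict_with_uncertainty(model: nn.Module, x: torch.Tensor,
                                 n_samples: int = 30, gamma: float = 1.0):
        """Mean prediction and rescaled uncertainty from n stochastic passes."""
        model.eval()
        for m in model.modules():          # activate only the dropout layers,
            if isinstance(m, nn.Dropout):  # leaving e.g. batch-norm in eval mode
                m.train()
        outputs = torch.stack([model(x) for _ in range(n_samples)])
        mean = outputs.mean(dim=0)
        # Sample standard deviation as the raw uncertainty measure, multiplied
        # by a calibration factor gamma (standing in for the paper's rescaling).
        uncertainty = gamma * outputs.std(dim=0)
        return mean, uncertainty

For example, mean, unc = predict_with_uncertainty(inject_dropout(trained_net), x) would yield a prediction and a rescaled uncertainty estimate without any re-training or gradient-based optimization; only gamma would need to be calibrated, which corresponds to the rescaling the abstract refers to.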
2023
Ital-IA 2023: 3rd National Conference on Artificial Intelligence
trustworthy AI; uncertainty quantification; Monte Carlo dropout; dropout injection; crowd counting
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1690457
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: N/A
  • Scopus: 0
  • Web of Science (ISI): N/A