A Fully Automatic Visual Attention Estimation Support System for A Safer Driving Experience / Fiani, F.; Russo, S.; Napoli, C. - 3695:(2023), pp. 40-50. (Paper presented at the 9th Scholar's Yearly Symposium of Technology, Engineering and Mathematics, SYSTEM 2023, held in Italy).

A Fully Automatic Visual Attention Estimation Support System for A Safer Driving Experience

Fiani F. (co-first author, Investigation); Russo S. (co-first author, Investigation); Napoli C. (last author, Supervision)
2023

Abstract

Drivers’ attention is a key element of safe driving and accident avoidance. In this paper, we present a new approach to the task of Visual Attention Estimation in drivers. The model we introduce consists of two branches: one performs Gaze Point Detection to determine the exact point of focus of the driver, and the other performs Object Detection to recognize all relevant elements on the road (e.g., vehicles, pedestrians, and traffic signs). Combining the outputs of the two branches allows us to determine whether the driver is attentive and, if so, which element of the road they are focusing on. Two models are tested for the gaze detection task: the GazeCNN model and a CNN+Transformer model. The performance of both models is evaluated and compared against other state-of-the-art models to choose the best approach for the task. Finally, the results of Visual Attention Estimation performed on 3761 pairs of images (driver view and corresponding road view) from the DGAZE dataset are reported and analyzed.
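To make the two-branch pipeline concrete, here is a minimal Python sketch of how the branch outputs could be combined into an attention decision. The names gaze_model, detector, and estimate_attention are hypothetical stand-ins, not identifiers from the paper; the sketch assumes the gaze branch returns a point in road-view pixel coordinates and the detection branch returns labelled bounding boxes.

```python
# Minimal sketch of the two-branch attention check described in the abstract.
# Assumption: gaze_model and detector are hypothetical callables standing in
# for the gaze-estimation and object-detection branches, respectively.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "vehicle", "pedestrian", "traffic sign"
    box: tuple    # (x_min, y_min, x_max, y_max) in road-view pixels

def estimate_attention(driver_frame, road_frame, gaze_model, detector):
    """Return the detected road object the driver is looking at, if any."""
    gx, gy = gaze_model(driver_frame)        # gaze point in road-view coords
    for det in detector(road_frame):         # all relevant road elements
        x_min, y_min, x_max, y_max = det.box
        if x_min <= gx <= x_max and y_min <= gy <= y_max:
            return det                       # driver attends to this object
    return None                              # gaze falls on no detected object
```

A None result would indicate that the driver's gaze does not land on any detected road element, which is the inattentive case the system is meant to flag.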
2023
9th Scholar's Yearly Symposium of Technology, Engineering and Mathematics, SYSTEM 2023
ADAS (Advanced Driver Assistance Systems); DGAZE; GazeCNN; Visual Attention Estimation; Visual Transformers
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1714648
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science (ISI): n/a