
A Fully Automatic Visual Attention Estimation Support System for A Safer Driving Experience / Fiani, F.; Russo, S.; Napoli, C.. - 3695:(2023), pp. 40-50. (Intervento presentato al convegno 9th Scholar's Yearly Symposium of Technology, Engineering and Mathematics, SYSTEM 2023 tenutosi a Roma; Italia).

A Fully Automatic Visual Attention Estimation Support System for A Safer Driving Experience

Fiani F. (co-first author; Investigation); Russo S. (co-first author; Investigation); Napoli C. (last author; Supervision)

2023

Abstract

Drivers’ attention is a key element in safe driving and in avoiding possible accidents. In this paper, we present a new approach to the task of Visual Attention Estimation in drivers. The model we introduce consists of two branches: one performs Gaze Point Detection to determine the exact point of focus of the driver, and the other executes Object Detection to recognize all relevant elements on the road (e.g., vehicles, pedestrians, and traffic signs). Combining the outputs of the two branches allows us to determine whether the driver is attentive and, if so, which element of the road they are focusing on. Two models are tested for the gaze detection task: the GazeCNN model and a model consisting of a CNN+Transformer. The performance of both models is evaluated and compared with other state-of-the-art models to choose the best approach for the task. Finally, the results of the Visual Attention Estimation performed on 3761 pairs of images (driver view and corresponding road view) from the DGAZE dataset are reported and analyzed.
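The combination step described in the abstract can be sketched as a simple geometric test: the gaze branch yields a point in road-view coordinates, and the attention check asks whether that point falls inside any bounding box produced by the object detector. The following is an illustrative sketch, not the authors' implementation; the function name, coordinate convention, and box format are assumptions.

```python
# Hypothetical sketch of combining the two branch outputs:
# a gaze point (x, y) in road-view pixel coordinates and a list of
# detected objects, each as (label, (x1, y1, x2, y2)) bounding boxes.

def attended_object(gaze_point, detections):
    """Return the label of the detected object whose bounding box
    contains the gaze point, or None if the gaze falls on no object."""
    gx, gy = gaze_point
    for label, (x1, y1, x2, y2) in detections:
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return label
    return None

# Example: two detections in the road view.
boxes = [("pedestrian", (100, 50, 160, 200)),
         ("traffic_sign", (300, 20, 340, 80))]
print(attended_object((120, 100), boxes))  # pedestrian
print(attended_object((500, 300), boxes))  # None (gaze off all objects)
```

A real system would also need to handle overlapping boxes (e.g., by picking the nearest box center) and the calibration mapping from the driver-view gaze estimate to road-view coordinates, which this sketch omits.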
2023
9th Scholar's Yearly Symposium of Technology, Engineering and Mathematics, SYSTEM 2023
ADAS (Autonomous Driver Assistance Systems); DGAZE; GazeCNN; Visual Attention Estimation; Visual Transformers
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this record

File: Fiani_A-Fully_2023.pdf
Access: open access
Note: https://ceur-ws.org/Vol-3695/p06.pdf
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 8.63 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1714648
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: ND