
MX-LSTM: mixing tracklets and vislets to jointly forecast trajectories and head poses / Hasan, I.; Setti, F.; Tsesmelis, T.; Del Bue, A.; Galasso, F.; Cristani, M. - (2018), pp. 6067-6076. (Paper presented at the 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018, held in Salt Lake City, United States) [10.1109/CVPR.2018.00635].

MX-LSTM: mixing tracklets and vislets to jointly forecast trajectories and head poses

Galasso F.
2018

Abstract

Recent approaches to trajectory forecasting use tracklets to predict the future positions of pedestrians by exploiting Long Short-Term Memory (LSTM) architectures. This paper shows that adding vislets, i.e., short sequences of head pose estimates, significantly improves trajectory forecasting performance. We propose to use vislets in a novel framework called MX-LSTM, which captures the interplay between tracklets and vislets through a joint unconstrained optimization of full covariance matrices during LSTM backpropagation. At the same time, MX-LSTM predicts future head poses, extending the standard capabilities of long-term trajectory forecasting approaches. With standard head pose estimators and attention-based social pooling, MX-LSTM sets a new trajectory forecasting state of the art on all the considered datasets (Zara01, Zara02, UCY, and TownCentre), with a dramatic margin when pedestrians slow down, a case where most forecasting approaches struggle to provide an accurate solution.
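The "unconstrained optimization of full covariance matrices" mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming a log-Cholesky parameterization (the network emits unconstrained reals that are mapped to a valid covariance); the function names are ours for illustration, not from the paper.

```python
import numpy as np

def covariance_from_unconstrained(theta):
    """Map 3 unconstrained reals to a 2x2 full covariance matrix via a
    Cholesky factor L whose diagonal is exponentiated (log-Cholesky).
    Sigma = L @ L.T is symmetric positive definite for ANY theta, so the
    LSTM can be trained with plain backpropagation, no constraints needed."""
    l11, l21, l22 = theta
    L = np.array([[np.exp(l11), 0.0],
                  [l21,         np.exp(l22)]])
    return L @ L.T

def gaussian_nll(x, mu, theta):
    """Negative log-likelihood of a 2D point x under N(mu, Sigma(theta));
    a per-timestep loss of this form is what such models minimize."""
    sigma = covariance_from_unconstrained(theta)
    diff = np.asarray(x) - np.asarray(mu)
    _, logdet = np.linalg.slogdet(sigma)
    return 0.5 * (diff @ np.linalg.solve(sigma, diff)
                  + logdet + 2.0 * np.log(2.0 * np.pi))
```

With theta = (0, 0, 0) the factor L is the identity, Sigma is the 2x2 identity, and the loss at x = mu reduces to the Gaussian normalizing constant log(2*pi).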
2018
31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
computer vision; machine learning; forecasting; pose estimation
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this product
Hasan_MX-LSTM_2018.pdf
Access: open access
Type: Post-print (version after peer review, accepted for publication)
License: All rights reserved
Size: 2.55 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1341870
Citations
  • Scopus: 104
  • Web of Science (ISI): 80