
Deep learning based virtual point tracking for real-time target-less dynamic displacement measurement in railway applications / Shi, Dachuan; Šabanovič, Eldar; Rizzetto, Luca; Skrickij, Viktor; Oliverio, Roberto; Kaviani, Nadia; Ye, Yunguang; Bureika, Gintautas; Ricci, Stefano; Hecht, Markus. - In: CASE STUDIES IN MECHANICAL SYSTEMS AND SIGNAL PROCESSING. - ISSN 2351-9886. - 166:(2021), pp. 1-20. [10.1016/j.ymssp.2021.108482]

Deep learning based virtual point tracking for real-time target-less dynamic displacement measurement in railway applications

Luca Rizzetto; Roberto Oliverio; Nadia Kaviani; Stefano Ricci
2021

Abstract

In computer-vision-based displacement measurement, an optical target is usually required to provide the reference. If an optical target cannot be attached to the measured object, edge detection and template matching are the most common approaches in target-less photogrammetry. However, their performance depends significantly on parameter settings, which becomes problematic in dynamic scenes where complicated background texture exists and varies over time. We propose virtual point tracking for real-time target-less dynamic displacement measurement, incorporating deep learning techniques and domain knowledge to tackle this issue. Our approach consists of three steps: 1) automatic calibration for detection of the region of interest; 2) virtual point detection for each video frame using a deep convolutional neural network; 3) a domain-knowledge-based rule engine for point tracking across adjacent frames. The proposed approach can be executed on an edge computer in real time (i.e., over 30 frames per second). We demonstrate our approach in a railway application, where the lateral displacement of the wheel on the rail is measured during operation. Numerical experiments have been performed to evaluate our approach's performance and latency in a harsh railway environment with dynamic complex backgrounds. We make our code and data available at https://github.com/quickhdsdc/Point-Tracking-for-Displacement-Measurement-in-Railway-Applications.
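The three-step pipeline in the abstract can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the function names are hypothetical, and the stand-in logic (brightest-pixel "detector", nearest-neighbour rule) merely takes the place of the paper's CNN-based point detector and domain-knowledge rule engine.

```python
import numpy as np

def calibrate_roi(frame):
    """Step 1 stand-in: automatic calibration / ROI detection (here: full frame)."""
    h, w = frame.shape
    return (0, 0, h, w)

def detect_virtual_points(frame, roi):
    """Step 2 stand-in: candidate virtual points; the paper uses a deep CNN."""
    y0, x0, y1, x1 = roi
    sub = frame[y0:y1, x0:x1]
    y, x = np.unravel_index(np.argmax(sub), sub.shape)
    return [(y0 + y, x0 + x)]

def rule_engine(candidates, prev):
    """Step 3 stand-in: keep the candidate closest to the previous frame's point."""
    if prev is None:
        return candidates[0]
    return min(candidates, key=lambda p: (p[0] - prev[0]) ** 2 + (p[1] - prev[1]) ** 2)

def track(frames):
    """Run the pipeline over a frame sequence and return lateral (horizontal)
    displacement of the tracked point relative to the first frame."""
    roi = calibrate_roi(frames[0])
    prev, points = None, []
    for frame in frames:
        prev = rule_engine(detect_virtual_points(frame, roi), prev)
        points.append(prev)
    return [p[1] - points[0][1] for p in points]
```

In the paper's railway use case, the tracked virtual point would lie on the wheel, and the horizontal component of its motion gives the lateral wheel-on-rail displacement.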
Files attached to this product
File: Shi_Deep-learning-based_2021.pdf
Access: open access
Type: License (publishing contract)
License: All rights reserved
Size: 993.18 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1578000
Citations
  • Scopus: 3
  • Web of Science: 2