The aim of this work is to exploit Machine Learning (ML) for the analysis of Georadar (Ground Penetrating Radar, GPR) images. In particular, the objective is to apply a Deep Learning (DL) architecture to extract, from B-scan images of infinite buried Perfect Electric Conductor (PEC) cylinders, three quantities: the cylinder radius, the depth below the ground surface, and the relative dielectric permittivity εr of the medium in which the cylinder is immersed.

The chosen architecture is the DenseNet, whose main feature is that each layer is connected to all subsequent layers through the concatenation of the feature maps. A traditional convolutional network of L layers has L connections, one between each layer and the next, whereas a DenseNet has L(L+1)/2 direct connections. The DenseNet offers several advantages: it mitigates the vanishing-gradient problem, strengthens the propagation of features, encourages feature reuse, and substantially reduces the number of parameters.

The GPR images are generated with the gprMax simulation software by combining different values of the cylinder radius and depth and of the relative permittivity of the medium. The network is trained to extract 19 labels appropriately selected from the images (Table 1):

Table 1: Labels.
  radius [cm]:                            1, 2, 3, 4, 5
  depth [cm]:                             9, 10, 11, 12, 13, 14, 15
  εr (relative dielectric permittivity):  2, 3, 4, 5, 6, 7, 8

The input images, of initial size 3453×1772 pixels, are resized to 32×32 pixels in order to speed up the training phase. Multi-label classification is used to extract the features of the images. Since the data set is small, k-fold cross-validation is performed by dividing the data set into 10 parts: in each fold, 10% of the data constitutes the validation set and the remaining 90% the training set. The network is trained while varying the learning rate appropriately.
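The dense-connectivity pattern described above can be sketched numerically. The following snippet (our own illustration, not the authors' code; `k0` and `growth_rate` are assumed example values in the style of the original DenseNet paper) counts the direct connections in a dense block and the number of channels each layer receives when every layer appends its feature maps to the running concatenation:

```python
def dense_connections(num_layers):
    """Count direct connections in a dense block of `num_layers` layers.

    Layer l (1-based) receives l inputs: the block input plus the outputs
    of the l-1 preceding layers, so the total is sum_{l=1}^{L} l = L(L+1)/2.
    """
    return sum(l for l in range(1, num_layers + 1))


def input_channels(layer_index, k0=64, growth_rate=32):
    """Channels entering layer `layer_index` (1-based) when each layer
    appends `growth_rate` new feature maps to the concatenation."""
    return k0 + (layer_index - 1) * growth_rate


# A plain chain of L layers has only L connections; a dense block has
# L(L+1)/2, e.g. 10 direct connections for L = 4.
L = 4
assert dense_connections(L) == L * (L + 1) // 2
```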
The study has shown interesting results in terms of the ability of DenseNet to classify B-scan images, despite the small data set.
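The 10-fold splitting described above can be sketched as follows (a minimal illustration, assuming equal-sized folds over an example data set; the function name and sample count are not from the paper):

```python
def k_fold_indices(n_samples, k=10):
    """Return (train_indices, val_indices) for each of the k folds.

    The data set is divided into k parts; each fold holds one part
    (here 1/k of the data) out for validation and trains on the rest.
    """
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        folds.append((train, val))
        start += size
    return folds


# With 100 samples and k = 10, each validation fold holds 10% of the data.
folds = k_fold_indices(100, k=10)
assert all(len(val) == 10 and len(train) == 90 for train, val in folds)
```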

F. Ponti, F. Barbuto, P. P. Di Gregorio, F. Mangini, P. Simeoni, M. Troiano, F. Frezza, "Deep Learning for analysis of GPR images", Radar and Remote Sensing Workshop (RRSW) 2019, Rome, Italy, 30-31 May 2019.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1412516