
Fusion of aerial and satellite images for automatic extraction of building footprint information using deep neural networks / Haghighi Gashti, Ehsan; Bahiraei, Hanieh; Valadan Zoej, Mohammad Javad; Ghaderpour, Ebrahim. - In: INFORMATION. - ISSN 2078-2489. - 16:5(2025). [10.3390/info16050380]

Fusion of aerial and satellite images for automatic extraction of building footprint information using deep neural networks

Ghaderpour, Ebrahim
2025

Abstract

Detecting building footprints from aerial and satellite images is a major challenge in photogrammetry and remote sensing. Footprint information supports applications such as urban planning, disaster monitoring, and 3D city modeling, but the task is complicated by the diverse characteristics of buildings, including shape, size, and shadow interference. This study investigated the joint use of aerial and satellite images to improve the accuracy of deep learning models for building footprint detection. Aerial images with a spatial resolution of 30 cm and Sentinel-2 satellite imagery were employed, and several spectral indices were derived from the Sentinel-2 data. U-Net models with ResNet-18 and ResNet-34 backbones were then trained on these data. The U-Net model with a ResNet-34 backbone, trained on a dataset integrating the aerial images and satellite indices and referred to as RGB–Sentinel–ResNet34, achieved the best performance among the evaluated models, attaining an accuracy of 96.99%, an F1-score of 90.57%, and an Intersection over Union of 73.86%. Compared to the other models, RGB–Sentinel–ResNet34 showed a significant improvement in accuracy and generalization capability. The findings indicate that combining aerial and satellite data can substantially enhance the accuracy of building footprint detection.
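The three metrics reported in the abstract (pixel accuracy, F1-score, and Intersection over Union) can all be computed from a pixel-wise confusion count between a predicted binary building mask and the ground-truth mask. The sketch below is a minimal, self-contained illustration using NumPy and hypothetical small masks; it is not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, F1-score, and IoU for binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)      # building pixels correctly detected
    fp = np.sum(pred & ~truth)     # background pixels predicted as building
    fn = np.sum(~pred & truth)     # building pixels missed
    tn = np.sum(~pred & ~truth)    # background pixels correctly rejected
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return accuracy, f1, iou

# Toy 2x2 example: one true positive, one false positive.
acc, f1, iou = segmentation_metrics([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```

Note that IoU is always the strictest of the three: a model can score a high pixel accuracy on scenes dominated by background while still having a much lower IoU, which is why the paper reports all three.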
2025
deep learning; ResNet; semantic segmentation; Sentinel-2; U-Net
01 Journal publication::01a Journal article
Files attached to this product

Haghighi Gashti_Fusion_2025.pdf

Access: open access
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 2.92 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1745564
Citations
  • Scopus: 13
  • Web of Science (ISI): 13