Simulation of near infrared sensor in Unity for plant-weed segmentation classification / Carbone, Carlos; Potena, C.; Nardi, D. - In: Proceedings of the 10th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2020), vol. 1, pp. 81-90. (Paper presented at the 10th International Conference on Simulation and Modeling Methodologies, Technologies and Applications, SIMULTECH 2020, held in Paris, France) [DOI: 10.5220/0009827900810090].
Simulation of near infrared sensor in Unity for plant-weed segmentation classification
Carbone, Carlos; Potena, C. (Conceptualization); Nardi, D. (Supervision)
2020
Abstract
Weed spotting through image classification is one of the methods applied in precision agriculture to reduce crop damage more efficiently. These classifications are nowadays typically based on deep learning with convolutional neural networks (CNNs), where a main difficulty is gathering the large amounts of labeled data required to train these networks. Synthetic dataset sources have therefore been developed, including simulations based on graphics engines; however, some data inputs that can improve CNN performance, such as near infrared (NIR), have not been considered in these simulations. This paper presents a simulation in the Unity game engine that builds fields of sugar beets with weeds. Images are generated to create datasets that are ready to train CNNs for semantic segmentation. The dataset is tested by comparing classification results from the Bonnet CNN trained with synthetic images and trained with real images, both with RGB and RGBN (RGB + near infrared) as inputs. The preliminary results suggest that the NIR channel added to the simulation can be effectively exploited for plant-weed segmentation: including the NIR data in the Unity-generated dataset yields a difference of 5.75% in global mean IoU over 820 classified images.
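As context for the reported figure, the sketch below shows one common way to compute a per-image mean intersection-over-union (IoU) from predicted and ground-truth label maps and to average it over a test set. It is a minimal Python/NumPy illustration, assuming integer class maps with hypothetical class indices (0 = soil, 1 = crop, 2 = weed); it is not the evaluation code used in the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes=3):
    """Mean intersection-over-union between two integer label maps.

    Classes absent from both maps are skipped so they do not
    inflate the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Hypothetical usage: average the per-image score over a test set,
# once for the RGB-trained model and once for the RGBN-trained one,
# then compare the two global means.
# global_miou = np.mean([mean_iou(p, t) for p, t in zip(preds, targets)])
```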
File: Carbone_Simulation_2020.pdf
Access: open access
Note: DOI: 10.5220/0009827900810090
Type: Published version (publisher's layout)
License: Creative Commons
Size: 496.17 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.