
Perception and environment modeling in robotic agriculture contexts / Potena, Ciro. - (2020 Feb 28).

Perception and environment modeling in robotic agriculture contexts

POTENA, CIRO
28/02/2020

Abstract

Precision Agriculture (PA) is now a term used throughout the agricultural domain worldwide. It has gained popularity and increasing interest from the research community due to its wide range of potential benefits and to the availability of new off-the-shelf sensing technologies. PA methods promise to increase the quantity and quality of agricultural outputs while using fewer inputs (e.g., water, energy, fertilizers, pesticides). The aim is to save costs, reduce environmental impact, and produce more and better food. In this domain, a promising solution that is rapidly gaining momentum is robotic farming. By combining the aerial survey capabilities of Unmanned Aerial Vehicles (UAVs) with multi-purpose agricultural Unmanned Ground Vehicles (UGVs), a robotic system can survey a field from the air, perform a targeted intervention on the ground, and provide detailed information for decision support, all with minimal user intervention. In recent years, despite great progress in automating farming activities with robotic platforms, most existing systems still do not provide a sufficient level of autonomy. Making farming robots more autonomous brings the benefits of completing tasks faster and adapting to different purposes and farm fields, which makes them more useful and increases their profitability. However, making farming robots more autonomous requires increasing their perception and awareness of the surrounding environment. A typical agricultural scenario presents unique characteristics, such as highly repetitive visual and geometrical patterns and the lack of distinguishable landmarks. These features prevent the direct application of most state-of-the-art perception methods from other robotic domains. This thesis focuses on perception methods that enable robots to operate autonomously in farming environments, specifically a localization method and a collaborative mapping method between aerial and ground robots. These methods improve the robot's perception capabilities by exploiting the unique context-based characteristics of farm fields and by fusing several heterogeneous sensors. Additionally, this thesis addresses the problem of crop/weed mapping by employing end-to-end visual classifiers. This thesis also presents contributions in perception-based control methods; such approaches allow the robot to navigate the environment while taking the perception constraints into account. The following is a full list of contributions:

• Development of crop/weed detection and classification algorithms based on deep neural networks.
• A method to summarize a large dataset by information entropy maximization: manually annotating only the summarized dataset allows the trained network to reach a similar classification accuracy while considerably reducing the annotation effort (an illustrative sketch is given below).
• A model-based dataset generation method for crop and weed detection. The generated data can be used either to augment or to supplement a real-world training dataset. The synthetic data are made available as open source.
• A multi-cue positioning system for ground farming robots that fuses several heterogeneous sensors and incorporates context-based characteristics.
• A novel multimodal environment representation that enhances the key characteristics of the farm field while filtering out redundant information.
• A collaborative mapping method that registers maps acquired by aerial and ground vehicles.
• Perception-based control methods that steer the robot to the desired location while satisfying perception constraints.
• A novel temporal registration method that registers maps over time to monitor the evolution of the farm field (work in progress).

Moreover, another important outcome of this thesis is a set of released open-source software modules and generated datasets, which I hope the community will benefit from. The work developed in this thesis follows the operating scenario proposed by the Flourish project, in which Sapienza University of Rome participated as a consortium partner.
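The dataset summarization contribution selects, from a large pool of images, a small subset whose information entropy is maximized, so that the manual annotation effort is concentrated on the most diverse samples. The abstract does not detail the exact procedure, so what follows is only a minimal sketch under simple assumptions: each image is reduced to a normalized feature histogram (for instance colour or CNN-feature bins), and a greedy loop repeatedly adds the image that most increases the Shannon entropy of the running summary. All names (shannon_entropy, summarize_by_entropy, the synthetic histograms) are illustrative and not taken from the thesis.

import numpy as np

def shannon_entropy(hist):
    """Shannon entropy (in bits) of a normalized histogram."""
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

def summarize_by_entropy(features, budget):
    """Greedily pick `budget` samples whose aggregated histogram has
    maximal entropy.  `features` is an (N, D) array of non-negative
    per-image histograms; returns the indices of the selected samples."""
    # Normalize each sample's histogram so rows are comparable.
    feats = features / np.clip(features.sum(axis=1, keepdims=True), 1e-12, None)
    selected = []
    summary = np.zeros(feats.shape[1])  # running (unnormalized) histogram of the summary
    for _ in range(budget):
        best_idx, best_gain = -1, -np.inf
        for i in range(feats.shape[0]):
            if i in selected:
                continue
            candidate = summary + feats[i]
            gain = shannon_entropy(candidate / candidate.sum())
            if gain > best_gain:
                best_idx, best_gain = i, gain
        selected.append(best_idx)
        summary += feats[best_idx]
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_features = rng.random((200, 32))  # hypothetical per-image feature histograms
    subset = summarize_by_entropy(fake_features, budget=20)
    print("images to annotate manually:", subset)

In practice the raw histograms would be replaced by whatever descriptor the crop/weed classifier consumes; the greedy entropy-maximizing selection itself is independent of that choice.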
Files attached to this item

Tesi_dottorato_Potena.pdf
Access: open access
Type: Doctoral thesis
License: All rights reserved
Size: 25.76 MB
Format: Adobe PDF


Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1365111