Semantic interpretation and complexity reduction of 3D point clouds of vineyards

Lorenzo Comba; Shahzad Zaman; Alessandro Biglia; Davide Ricauda Aimonino; Fabrizio Dabbene; Paolo Gay
2020-01-01

Abstract

In precision agriculture, autonomous ground and aerial vehicles can lead to favourable improvements in field operations, extending crop scouting to large fields and performing field tasks in a timely and effective way. However, automated navigation and operations within complex scenarios require specific and robust path planning and navigation control. Thus, in addition to proper knowledge of their instantaneous position, robotic vehicles and machines require an accurate spatial description of their environment. An innovative modelling framework is presented to semantically interpret 3D point clouds of vineyards and to generate low-complexity 3D mesh models of vine rows. The proposed methodology, based on a combination of convex hull filtration and minimum-area c-gon design, reduces the number of instances required to describe the spatial layout and shape of vine canopies, compressing the data without losing relevant crop shape information. The algorithm is not hindered by complex scenarios, such as non-linear vine rows, as it is able to automatically process non-uniform vineyards. Results demonstrated a data reduction of about 98%: from the 500 MB ha⁻¹ required to store the original dataset to 7.6 MB ha⁻¹ for the low-complexity 3D mesh. Reducing the amount of data is crucial to reducing computational times for large original datasets, thus enabling the exploitation of 3D point cloud information in real time during field operations. In scenarios involving cooperating machines and robots, data reduction will allow rapid communication and data exchange between in-field actors.
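The full pipeline (semantic segmentation, convex hull filtration and minimum-area c-gon design) is detailed in the paper itself. As a rough illustration of why hull-based filtering compresses a canopy point cloud, the following Python sketch cuts a 3D cloud into slabs and keeps only the convex hull vertices of each slab, discarding interior points. It assumes numpy and scipy are available; the function name slice_hulls, the slicing axis and the slab thickness are illustrative choices, not the authors' parameters, and this is not their exact algorithm.

```python
import numpy as np
from scipy.spatial import ConvexHull, QhullError


def slice_hulls(points, axis=1, thickness=0.5):
    """Illustrative sketch: slab a 3D point cloud along one axis and keep
    only the 2D convex hull vertices of each slab. This mimics, in spirit,
    the hull-based complexity reduction described in the abstract."""
    lo, hi = points[:, axis].min(), points[:, axis].max()
    hulls = []
    for start in np.arange(lo, hi, thickness):
        mask = (points[:, axis] >= start) & (points[:, axis] < start + thickness)
        slab = points[mask]
        if len(slab) < 3:
            continue  # a 2D hull needs at least 3 points
        planar = np.delete(slab, axis, 1)  # project the slab onto the cutting plane
        try:
            hull = ConvexHull(planar)
        except QhullError:
            continue  # degenerate slab (e.g. collinear points)
        hulls.append(slab[hull.vertices])  # keep only the hull vertices in 3D
    return hulls


# Hypothetical usage on a random cloud standing in for a vine-row scan
cloud = np.random.rand(10000, 3) * [10.0, 50.0, 2.5]  # x, y (row direction), z in metres
reduced = slice_hulls(cloud, axis=1, thickness=0.5)
kept = sum(len(h) for h in reduced)
print(f"kept {kept} of {len(cloud)} points ({100 * kept / len(cloud):.1f}%)")
```

Because interior points contribute nothing to the outer canopy shape, keeping only hull vertices per slab preserves the envelope while discarding most of the data, which is consistent with the order-of-magnitude reduction (500 MB ha⁻¹ to 7.6 MB ha⁻¹) reported above.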
Year: 2020
Volume: 197
Pages: 216-230
https://www.sciencedirect.com/science/article/pii/S1537511020301264
Keywords: Precision agriculture, Photogrammetry, Big data, UAV remote sensing, Semantic interpretation, 3D point cloud segmentation
Files in this record:
1-s2.0-S1537511020301264-main.pdf — publisher PDF (open access), Adobe PDF, 4.36 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1746193
Citations
  • PMC: n/a
  • Scopus: 19
  • Web of Science: 18