The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B3-2020
https://doi.org/10.5194/isprs-archives-XLIII-B3-2020-1483-2020
22 Aug 2020

UAV IMAGES AND DEEP-LEARNING ALGORITHMS FOR DETECTING FLAVESCENCE DOREE DISEASE IN GRAPEVINE ORCHARDS

M. A. Musci, C. Persello, and A. M. Lingua

Keywords: Precision viticulture, Unmanned Aerial Vehicle (UAV), Flavescence dorée grapevine disease, Object Detection, Deep-Learning, Faster R-CNN

Abstract. One of the major challenges in precision viticulture in Europe is the detection and mapping of flavescence dorée (FD) grapevine disease in order to monitor and contain its spread. The lack of an effective cure and the need for sustainable preventive measures make this a crucial issue. Insecticides and the uprooting of infected plants are commonly employed to contain the infection, although these measures entail serious economic consequences and a strong environmental impact. A rapid strategy for identifying the disease is therefore needed, one that covers large portions of the crop and limits damage in a time-effective way. This paper investigates the use of Unmanned Aerial Vehicles (UAVs) as a cost-effective approach to the early detection of diseased areas. We address this task with an object detection deep network, Faster R-CNN, instead of a traditional pixel-wise classifier, and test its performance on this specific application through a comparative analysis with a pixel-wise classification algorithm (Random Forest). To take advantage of the full image resolution, the experimental analysis is performed on the original UAV imagery acquired in real conditions, rather than on the derived orthomosaic. The first contribution of this paper is the definition of a new dataset for FD disease identification from original UAV imagery at the canopy scale. Moreover, we demonstrate the feasibility of applying Faster R-CNN as a quasi-real-time alternative to semantic segmentation. The trained Faster R-CNN achieved an average precision of 82% on the test set.
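The reported average precision (AP) summarizes the precision-recall behaviour of an object detector for a single class. As a minimal, self-contained sketch of how such a figure is typically computed for box detections, here is a non-interpolated AP calculation in plain Python; the IoU threshold of 0.5 and the toy boxes are illustrative assumptions, not the paper's actual evaluation protocol:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def average_precision(detections, ground_truth, iou_thr=0.5):
    """Non-interpolated single-class AP.

    detections:   list of (score, box), one box per predicted detection
    ground_truth: list of boxes
    """
    # Process detections from most to least confident.
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = [False] * len(ground_truth)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for score, box in detections:
        # Greedily match each detection to the best unmatched ground truth.
        best, best_i = 0.0, -1
        for i, gt in enumerate(ground_truth):
            if not matched[i] and iou(box, gt) > best:
                best, best_i = iou(box, gt), i
        if best >= iou_thr:
            matched[best_i] = True
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / len(ground_truth)
        # Accumulate area under the raw precision-recall curve.
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Toy example: two ground-truth vines, two correct detections and one
# false alarm between them in the confidence ranking.
gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(0.9, (0, 0, 10, 10)),
        (0.8, (50, 50, 60, 60)),   # false positive
        (0.7, (20, 20, 30, 30))]
print(round(average_precision(dets, gt), 4))  # 0.8333
```

Benchmark suites differ in the details (11-point or all-point interpolation, multiple IoU thresholds), but the ranking-then-matching structure above is common to all of them.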