Abstract
Unmanned aerial systems (UAS) can deliver images of agricultural fields at high spatial and temporal resolution. It is, however, not trivial to extract quantitative information about weed infestations from such images. This study contributes to weed research by using state-of-the-art computer vision techniques to assess pre-harvest weed infestations in cereals based on true color (RGB) images from consumer-grade cameras mounted on UAS. The objective is to develop a fully automatic algorithm in an open programming language, Python, to discriminate and quantify weed infestations in cereals before harvest. Results are compared with an in-house image analysis procedure developed in the commercial eCognition Developer software. The importance of flight altitude and of robustness across fields is emphasised.
Image acquisition took place during the summers of 2013 and 2014 in a number of fields under different weather and lighting conditions in spring and winter cereals (barley, wheat and oats). Images were acquired at altitudes ranging from 10 to 50 m to give different image resolutions. All fields contained perennial weeds, with Cirsium arvense as the most frequent species.
In order to provide ground truth prior to the modeling phase in Python, a subset of 600 images was annotated by experts with 16,000 regions of weed or crop. Following this, images were segmented into regions of weed or crop by subdividing each image into 64 by 64 pixel patches and classifying each patch as either crop or weed. A collection of geo-referenced segmented images may subsequently be used to map weed occurrences in fields. To find a robust and fully automated assessment method, both texture and color information were used to build a number of competing weed-crop classifiers, including several variants of the excess green (2G-R-B) vegetation index and its normalizations. The performance of these classifiers was measured in terms of classification accuracy.
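To make the patch pipeline concrete, the following is a minimal sketch in Python (the language used for the study's algorithm) of how an RGB image can be reduced to per-patch excess green features. The chromatic normalization and the mean/standard-deviation feature pair are illustrative assumptions, not the study's exact feature set.

```python
import numpy as np

def excess_green(rgb):
    """Per-pixel excess green index, ExG = 2G - R - B.

    rgb: array of shape (H, W, 3), channel order R, G, B.
    Dividing each channel by R + G + B first is one plausible
    normalization variant; the study compared several.
    """
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                      # guard against all-black pixels
    r, g, b = np.moveaxis(rgb / s, 2, 0)
    return 2.0 * g - r - b

def patch_features(rgb, patch=64):
    """Mean and standard deviation of ExG for every 64 x 64 patch.

    Returns an array of shape (rows, cols, 2): one small feature
    vector per patch, ready to feed a weed/crop classifier. The
    feature choice here is an assumption made for illustration.
    """
    exg = excess_green(rgb.astype(np.float64))
    h, w = exg.shape
    rows, cols = h // patch, w // patch
    tiles = exg[:rows * patch, :cols * patch].reshape(
        rows, patch, cols, patch)
    return np.stack([tiles.mean(axis=(1, 3)),
                     tiles.std(axis=(1, 3))], axis=-1)
```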
Models were trained offline on the annotated ground truth data (held out from testing). For the texture-based methods in particular, this training is necessary to learn the statistical properties of filter responses from weed and crop patches.
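As a hedged illustration of such offline training, the sketch below fits a generic classifier to per-patch feature vectors and reports cross-validated classification accuracy, the metric used in the study. The random forest and the synthetic features are stand-ins: the abstract does not name the actual classifiers, and the real input would be features from the roughly 16,000 expert-annotated regions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: one feature vector per annotated 64 x 64 patch
# (e.g. ExG statistics or texture filter-bank responses); y encodes
# 0 = crop, 1 = weed. Both are synthetic stand-ins for this sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)

# A random forest is only one plausible choice; the abstract does
# not specify which classifiers were compared.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

Note that holding out whole fields, rather than random patches, during evaluation corresponds to the cross-field robustness test described in the results below.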
Results emphasise the importance of a broad training context. When models were trained and tested on images representing narrow ranges of color and illumination variation, it was possible to achieve more than 95% accuracy, which approaches the potential maximum and fully satisfies practical mapping requirements. However, when models were evaluated on images from fields not included in the training data, results varied and were unreliable in some fields. In general, the automated image analysis procedure based on color was not competitive with results achieved with eCognition, which provided accuracies in the range of 86% to 92%. Flight altitude and image resolution (3 to 15 mm/pixel) did not affect accuracy appreciably, and ortho-mosaicking had no clear impact. Models including texture-based methods were not fully evaluated because they required hours of computer time per image, and it is doubtful whether their performance can justify the computational expense. Results are discussed in a practical context, and the consequences of varying accuracies are evaluated in different scenarios.
| Translated title | Detection of perennial weeds in cereals from drone (UAS) images |
|---|---|
| Original language | English |
| Publication date | 2015 |
| Number of pages | 1 |
| Status | Published - 2015 |
Keywords
- Det Natur- og Biovidenskabelige Fakultet
- weed detection
- drone image analysis
- weed assessment
- image processing