Soil-borne plant-parasitic nematodes are present in many soils, and some can cause annual yield losses of 15 to 20 percent. Walnut has high economic value, and most edible walnuts in the US are produced in the fertile soils of the California Central Valley. Soil-dwelling nematode parasites are a significant threat there, causing severe root damage and reducing walnut yields. Early detection of plant-parasitic nematodes is critical for designing management strategies. In this study, we proposed the use of a new low-cost proximate radio-frequency three-dimensional sensor, the "Walabot," together with machine learning classification algorithms. Unlike remote sensing tools such as unmanned aerial vehicles (UAVs), this pocket-sized device is not limited by flight time or payload capacity; it can be used flexibly in the field and provide data more promptly and accurately than UAVs or satellites. Walnut leaves from trees with different nematode infestation levels were placed on the sensor to test whether the Walabot can detect small changes in infestation level. Hypothetically, the waveforms generated by different signals may be useful for estimating the damage caused by nematodes. Scikit-learn classification algorithms, such as neural networks (trained with the Adam optimizer), random forests, and Gaussian processes, were applied for data processing. Results showed that the Walabot predicted nematode infestation levels with an accuracy of 72% so far.
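A classifier comparison of the kind described above can be sketched with scikit-learn. This is a minimal illustration, not the study's actual pipeline: the data here is synthetic, and the feature dimension, number of infestation levels, and model hyperparameters are assumptions standing in for the real Walabot waveform features.

```python
# Hedged sketch: compare scikit-learn classifiers on synthetic stand-ins for
# Walabot waveform feature vectors labeled by nematode infestation level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))        # 200 synthetic waveform feature vectors
y = rng.integers(0, 3, size=200)      # 3 hypothetical infestation levels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "neural_net": MLPClassifier(hidden_layer_sizes=(64,), solver="adam",
                                max_iter=500, random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "gaussian_process": GaussianProcessClassifier(random_state=0),
}
scores = {name: clf.fit(X_train, y_train).score(X_test, y_test)
          for name, clf in classifiers.items()}
print(scores)
```

With real labeled waveforms in place of the random arrays, the same loop would reproduce the kind of accuracy comparison the abstract reports.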
In the last decade, technologies for unmanned aerial vehicles (UAVs) and small imaging sensors have improved significantly in terms of equipment cost, operating cost, and image quality. These low-cost platforms provide flexible access to high-resolution visible and multispectral images. As a result, many studies have applied them to precision agriculture, for example in water stress detection, nutrient status detection, and yield prediction. Unlike traditional low-resolution satellite images, high-resolution UAV-based images allow much more freedom in image post-processing. For example, the very first procedure in post-processing is pixel classification, or image segmentation, for extracting regions of interest (ROIs). At this very high resolution it becomes possible to classify pixels in a UAV-based image, yet it remains a challenge to do so using traditional remote sensing features such as vegetation indices (VIs), especially given the changes in light intensity, crop size, crop color, etc. over the growing season. Deep learning provides a general framework for solving this problem. In this study, we proposed to use deep learning methods for image segmentation. We created a data set of pomegranate trees by flying an off-the-shelf commercial camera at 30 meters above the ground around noon, across the whole growing season from the beginning of April to the middle of October 2017. We then trained and tested two convolutional-network-based methods, U-Net and Mask R-CNN, on this data set, and compared their performance on our aerial images of pomegranate trees.
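The traditional VI-based pixel classification that the abstract contrasts with deep learning can be sketched in a few lines: compute the Excess Green index (ExG) per pixel and threshold it to separate vegetation from soil. The image and threshold below are synthetic illustrations, not data from the study.

```python
# Hedged sketch of VI-based pixel classification: threshold the Excess Green
# index (ExG = 2G - R - B) on an RGB image to produce a vegetation mask.
import numpy as np

def exg_mask(rgb, threshold=0.1):
    """Return a boolean vegetation mask from an RGB image scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b          # Excess Green index per pixel
    return exg > threshold

# Synthetic 2x2 image: one green "canopy" pixel, three brownish "soil" pixels.
img = np.array([[[0.2, 0.8, 0.2], [0.5, 0.4, 0.3]],
                [[0.6, 0.5, 0.4], [0.5, 0.45, 0.35]]])
mask = exg_mask(img)
print(mask)  # only the green pixel is classified as vegetation
```

The fragility of a single fixed threshold under changing illumination and canopy color is exactly the limitation that motivates the learned segmentation approach.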
Many studies have shown that hyperspectral measurements can help monitor crop health status, such as water stress, nutrition stress, and pest stress. However, applications of hyperspectral cameras or scanners are still very limited in precision agriculture. The resolution of satellite hyperspectral images is too low to provide information at the desired scale, while field spectrometers and aerial hyperspectral cameras offer fairly high resolution but are too expensive for most growers. In this study, we asked whether the low-cost hyperspectral scanner SCIO can serve as a crop monitoring tool that provides crop health information for decision support. In an onion test site, three irrigation levels and four types of soil amendment were randomly assigned to 36 plots, with three replicates for each treatment combination. Each month, three onion plant samples were collected from the test site, and fresh weight, dry weight, root length, shoot length, etc. were measured for each plant. Meanwhile, three spectral measurements were made for each leaf of each sample plant using both a field spectrometer and the hyperspectral scanner. We applied dimension reduction methods to extract low-dimensional features, and from these features and their labels we built several classifiers to infer the field treatment of the onions. Tests on a validation data set (25 percent of the total measurements) showed that this low-cost hyperspectral scanner is a promising tool for crop water stress monitoring, though its performance is worse than that of the Apogee field spectrometer: the traditional field spectrometer achieved accuracies above 80%, whereas the best accuracy of the SCIO was around 50%.
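The dimension-reduction-plus-classifier pipeline described above can be sketched with scikit-learn. The spectra below are synthetic, and the band count, number of components, and choice of SVM classifier are illustrative assumptions rather than the study's actual settings.

```python
# Hedged sketch: reduce synthetic leaf spectra with PCA, then classify the
# (hypothetical) irrigation treatment, holding out 25% for validation as in
# the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
spectra = rng.normal(size=(300, 331))     # assumed 331 wavelength bands
treatment = rng.integers(0, 3, size=300)  # 3 irrigation levels (hypothetical)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, treatment, test_size=0.25, random_state=1)  # 25% validation

model = make_pipeline(PCA(n_components=10), SVC())
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(accuracy)
```

Swapping in real SCIO or Apogee spectra and treatment labels turns this sketch into the accuracy comparison the abstract reports.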
Thermal cameras have recently become widely used on small Unmanned Aerial Systems (sUAS). They translate the thermal energy emitted by an object into visible images and temperatures, and thermal imaging has great potential in agricultural applications: estimating soil water status, scheduling irrigation, estimating almond yields, estimating water stress, and evaluating crop maturity. Although their ability to measure temperature is good, there are still concerns about uncooled thermal cameras. Unstable outdoor environmental factors can cause serious measurement drift during flight missions, and post-processing such as mosaicking might introduce further measurement errors. To address these two concerns, we conducted three experiments to establish best practices for thermal image collection. The thermal cameras used in this paper are ICI 9640 P-Series models, which are common in many study areas; an Apogee MI-220 was used as ground truth. The first experiment measures how long the thermal camera needs to warm up to reach (or approach) thermal equilibrium and produce accurate data. The second varies the camera's view angle to determine whether view angle affects the measurements. The third examines whether stitching thermal images in Agisoft PhotoScan affects the temperature data.
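One plausible way to quantify the warm-up time in the first experiment is to log the camera's reading of a constant reference target and declare equilibrium once the reading stops drifting. The sliding-window rule, tolerance, and exponential drift curve below are all assumptions for illustration, not the paper's actual procedure or data.

```python
# Hedged sketch: estimate warm-up time as the first moment the reading of a
# constant target varies less than `tol` degC over `window` samples.
import numpy as np

def warmup_time(readings, times, window=5, tol=0.1):
    """Return the first time at which readings stay within `tol` degC
    over `window` consecutive samples, or None if never reached."""
    for i in range(len(readings) - window + 1):
        segment = readings[i:i + window]
        if max(segment) - min(segment) < tol:
            return times[i]
    return None

# Synthetic drift: reading decays toward the target's true 25 degC.
times = np.arange(0, 30, 1.0)                 # minutes since power-on
readings = 25.0 + 3.0 * np.exp(-times / 5.0)  # assumed exponential warm-up
print(warmup_time(list(readings), list(times)))
```

Applied to logged ICI 9640 readings against the Apogee MI-220 reference, the same rule would give a concrete warm-up figure.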
Thanks to the development of camera technologies and small unmanned aerial systems (sUAS), it is now possible to collect aerial images of a field with more flexible revisits, higher resolution, and much lower cost. Furthermore, the performance of object detection based on deep convolutional neural networks (CNNs) has improved significantly. In this study, we applied these technologies to melon production, using high-resolution aerial images to count melons in the field and predict the yield. The CNN-based object detection framework Faster R-CNN was applied to melon detection. Our results showed that sUAS plus CNNs were able to detect melons accurately in the late harvest season.
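Turning detector output into a melon count typically means filtering detections by confidence and applying non-maximum suppression (NMS) so each fruit is counted once. A minimal NMS sketch in plain NumPy follows; the boxes, scores, and both thresholds are illustrative assumptions, not output from the study's Faster R-CNN model.

```python
# Hedged sketch: count objects from detector output with greedy NMS.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def count_detections(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Keep highest-scoring boxes, drop low scores and overlapping duplicates."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if scores[i] < score_thr:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return len(keep)

# Two overlapping detections of one melon, plus one distinct melon.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(count_detections(boxes, scores))  # duplicate suppressed: counts 2
```

In practice the detection framework's own NMS would be used, but the greedy keep-the-best logic is the same.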