Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253901 (2023) https://doi.org/10.1117/12.2690505
This PDF file contains the front matter associated with SPIE Proceedings Volume 12539, including the Title Page, Copyright information, Table of Contents, and Conference Committee lists.
Machine-Learning Analysis of Plant Spectral Data from Autonomous Systems
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253902 (2023) https://doi.org/10.1117/12.2665768
Citrus black spot (CBS) is a quarantine fungal disease caused by Phyllosticta citricarpa that can limit market access for fruit. It causes lesions on fruit surfaces and may lead to premature fruit drop, reducing yield. Leaf symptoms are uncommon for CBS, although the fungus reproduces in leaf litter. Similarly, citrus canker, another serious disease caused by the bacterium Xanthomonas citri subsp. citri (syn. X. axonopodis pv. citri), leads to economic losses for growers from fruit drop and blemishes. Therefore, early detection and management of groves infected by CBS or canker via fruit and/or leaf inspection can benefit the Florida citrus industry. Manual inspection to classify disease symptoms on either fruit or leaves is a tedious and labor-intensive process. Hence, there is a need to develop computer vision systems for autonomous classification of fruit and leaves that can speed up disease management in the field. In this paper, we demonstrate the capability of convolutional neural network (CNN)-based deep learning, along with classical machine learning (ML)-based computer vision algorithms, to classify ‘Valencia’ orange fruit surfaces with CBS infection and four other conditions, and ‘Furr’ mandarin leaves with canker and four other conditions. Fruit with CBS and four other conditions (marketable, greasy spot, melanose, and wind scar) were classified using a custom shallow CNN with SoftMax and RBF SVM classifiers at overall accuracies of 89.8% and 92.1%, respectively. Similarly, a custom VGG16 network with SoftMax classified canker leaves, among four other conditions (control/healthy, greasy spot, melanose, and scab), with an F1-score of 85% and an overall accuracy of 82%. In addition, replacing SoftMax with an RBF SVM in the VGG16 network improved the overall classification accuracy to 93%, an increase of 11 percentage points (13.41%). These preliminary findings demonstrate the capability of a hyperspectral imaging (HSI) system for automated citrus fruit and leaf disease classification using shallow and deep CNN-generated features and ML classifiers.
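The core pattern reported here, CNN-generated features fed to an RBF SVM in place of a SoftMax head, can be sketched roughly as below. This is a minimal illustration under assumed details (ImageNet weights, global-average-pooled VGG16 features, placeholder arrays), not the authors' exact pipeline:

```python
# Hedged sketch: VGG16 as a fixed feature extractor, RBF SVM as the classifier.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Pretrained VGG16 without its classification head; pooled features per image.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), RGB, 0-255 range."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Placeholder data standing in for the five fruit/leaf condition classes
# (e.g., CBS, marketable, greasy spot, melanose, wind scar).
X_train = np.random.rand(20, 224, 224, 3).astype("float32") * 255
y_train = np.random.randint(0, 5, size=20)

# Replacing SoftMax with an RBF SVM, the substitution behind the reported 93%.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(extract_features(X_train), y_train)
```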
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253903 (2023) https://doi.org/10.1117/12.2666260
Early crop yield estimation aids growers of cotton and other crops in making in-season management decisions and estimating the crop’s market value. However, predicting yield early in the growing season is challenging for various reasons, including environmental factors and soil variability. Several techniques have been used to estimate cotton yield, and in the last decade machine learning (especially artificial neural networks, or ANNs) has been widely adopted. In a standard ANN model, all the input data are collated without considering the temporal characteristics of the data, such as when the data were collected relative to the growth stage of the plants. A modular network called the plus artificial neural network (ANN+) was devised to independently receive crop data collected at different stages of crop growth. This study evaluated the potential of adopting ANN+ for early estimation of cotton yield. For this purpose, a field experiment was conducted in central Texas in 2020 and 2021. The study site consisted of three different treatments: variable nitrogen rate, variable fertilizer, and irrigation × variety. An unmanned aerial vehicle (UAV) equipped with a five-band multispectral sensor was flown at various cotton growth stages to collect remote sensing data multiple times within 100 days after planting. The UAV was flown at 30 m above ground level, producing a spatial resolution of approximately 0.02 m. The multispectral imagery was used to extract crop spectral, textural, and structural information. Along with this information, weather data in terms of growing degree days, solar insolation, and precipitation were collected for yield estimation. The custom ANN+ model achieved an R² of 0.90 and a mean absolute percentage error of 12.29% for cotton yield estimation. Seasonal temperature data contributed the most information to the model, but crop structural and textural metrics from the image data also contributed strongly, suggesting that autonomous aerial systems can be an important part of providing cotton growers early predictions of yield.
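The modular idea behind ANN+, one input branch per growth stage rather than one collated input vector, can be sketched with the Keras functional API. The stage names and feature counts below are illustrative assumptions, not the authors' architecture:

```python
# Hedged sketch of a stage-modular yield network in the spirit of ANN+.
from tensorflow.keras import layers, Model

def stage_branch(n_features, name):
    """One independent branch per growth stage's data."""
    inp = layers.Input(shape=(n_features,), name=name)
    x = layers.Dense(16, activation="relu")(inp)
    return inp, x

# Hypothetical per-stage feature counts (spectral/textural/structural + weather).
inputs, branches = [], []
for stage, n in [("squaring", 12), ("flowering", 12), ("boll_opening", 12)]:
    inp, x = stage_branch(n, stage)
    inputs.append(inp)
    branches.append(x)

merged = layers.concatenate(branches)           # branches merge only after
merged = layers.Dense(32, activation="relu")(merged)  # stage-wise processing
yield_out = layers.Dense(1, name="lint_yield")(merged)

model = Model(inputs=inputs, outputs=yield_out)
model.compile(optimizer="adam", loss="mae")
```

The design point is that each stage's data keeps its temporal identity until the merge, which a standard fully collated ANN input discards.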
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253904 (2023) https://doi.org/10.1117/12.2664408
This paper presents the development and validation of machine learning models for locating strawberry plants and weeds and for determining the health of strawberry plants. TensorFlow Lite Model Maker was used for object detection and model training on a custom dataset of annotated images. The images in the dataset were collected from an unmanned aerial vehicle (UAV) and annotated using LabelImg, a popular tool for drawing bounding boxes over images. The locations of the weeds and strawberry plants were determined in both decimal latitude/longitude and Degrees, Minutes, Seconds (DMS) format by using the ground sample distance (GSD) formula. Greenness indices were computed by applying OpenCV image alignment to the multispectral bands. The developed machine learning models can accurately predict plant health, detect weeds, and determine their locations. The overall goal of the project is to use UAV-based remote sensing and machine learning techniques for precision farming, optimizing the use of water and chemicals through site-specific applications.
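The GSD-based localization step mentioned here follows a standard calculation. Below is a minimal sketch, assuming a nadir-pointing camera and a flat-earth approximation; the sensor parameters and helper names are illustrative, not from the paper:

```python
# Hedged sketch: ground sample distance and pixel-to-coordinate conversion.
import math

def gsd_m_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Ground sample distance in metres per pixel (standard formula)."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

def pixel_to_latlon(px, py, center_px, center_py, gsd, center_lat, center_lon):
    """Convert a detection's pixel offset from the image centre to lat/lon."""
    dx_m = (px - center_px) * gsd            # east-west offset in metres
    dy_m = (center_py - py) * gsd            # north-south offset (image y grows down)
    dlat = dy_m / 111_320.0                  # ~metres per degree of latitude
    dlon = dx_m / (111_320.0 * math.cos(math.radians(center_lat)))
    return center_lat + dlat, center_lon + dlon

gsd = gsd_m_per_px(sensor_width_mm=13.2, focal_length_mm=8.8,
                   altitude_m=30.0, image_width_px=5472)   # example values
print(pixel_to_latlon(3000, 1200, 2736, 1824, gsd, 27.95, -82.45))
```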
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253905 (2023) https://doi.org/10.1117/12.2663817
Crop nitrogen (N) content reflects crop nutrient status and is an important trait in crop management. Over the decades, non-destructive N estimation has greatly benefited from remote sensing and data-intensive computational approaches. However, previous studies mostly focused on estimation accuracy under a specific environment; few considered estimation robustness across varying growth conditions. As climate change intensifies, crops face more unexpected stresses, and it is critical to improve N estimation under changing environments with better model generalizability. Thus, we propose a novel hybrid method with the merits of both mechanistic and machine learning models, integrating in-situ data and simulated data for improved model training. The in-situ data were the canopy reflectance extracted from hyperspectral images collected by an unmanned aerial vehicle (UAV) and destructively sampled plant N content; the simulated data were the canopy reflectance simulated by a mechanistic model, PROSAIL-PRO. The performance of the hybrid method was compared with one of the most popular machine learning models (Gaussian Process Regression, GPR) across three study sites. Results showed that the hybrid method outperformed GPR, reducing RRMSE by up to 6.84% for canopy nitrogen content (CNC) estimation. It also achieved more stable performance across varying soil water and N availabilities. Altogether, we demonstrated an approach to estimate CNC under diverse soil and environmental conditions from remotely sensed spectral data with better accuracy and generalizability. It leverages the robustness of mechanistic models and the computational efficiency of machine learning models and has great potential to be transferred to other crops and many common crop traits.
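One plausible reading of the hybrid training idea, augmenting scarce in-situ reflectance/CNC pairs with mechanistic-model simulations before fitting GPR, can be sketched as follows. The arrays, kernel choice, and band count are placeholder assumptions; the real PROSAIL-PRO simulation is not reproduced here:

```python
# Hedged sketch: GPR trained on pooled in-situ + simulated spectra.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# In-situ: UAV hyperspectral canopy reflectance and destructively sampled CNC.
X_insitu, y_insitu = np.random.rand(30, 50), np.random.rand(30)
# Simulated: PROSAIL-PRO-style reflectance with known nitrogen inputs (stand-in).
X_sim, y_sim = np.random.rand(300, 50), np.random.rand(300)

# Pooling lets the sparse field data anchor the dense simulated manifold.
X_train = np.vstack([X_insitu, X_sim])
y_train = np.concatenate([y_insitu, y_sim])

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                               normalize_y=True)
gpr.fit(X_train, y_train)
cnc_pred, cnc_std = gpr.predict(np.random.rand(5, 50), return_std=True)
```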
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253906 (2023) https://doi.org/10.1117/12.2663287
Leaf area index (LAI) is one of the most effective biophysical parameters for characterizing vegetation dynamics and crop productivity. Acquiring a time series of accurately estimated LAI in rice canopies makes it possible to monitor and analyze growth dynamics during the crop season and contributes to a better understanding of photosynthesis, water use, biomass, and yield. Advances in technology platforms and navigation systems have enabled the acquisition of high-resolution images, offering new insights in an era when climate change imposes severe challenges on the agricultural sector. Field trials were conducted during two growing seasons, in 2021 and 2022, at the Nataima research center of Agrosavia in El Espinal, Tolima, Colombia. The trial consisted of three irrigation techniques applied to four replicates of the Fedearroz 67 rice variety. Multispectral and RGB images were taken from the UAV at 40 m (1.83 cm/0.49 cm GSD), 60 m (2.8 cm/0.75 cm GSD), and 80 m (3.77 cm/1.0 cm GSD) above the crop. Images were then processed using the ViCTool to compute vegetation indices. In addition, ground-truth LAI was determined indirectly by measuring fresh and dry weight. Comparative results show significant differences in specific indices and trends between the two growing seasons for the multispectral vegetation indices (NDRE, NDVI, GNDVI, GVI, SR, OSAVI, and SAVI). For the assessed RGB indices (ExG, GA, and GGA), there were no matching patterns or trends across flight heights over the season. These findings also reveal that, although significant differences are observed, no notable improvement is seen in the coefficients of determination (R²) for LAI estimation using linear regression at any height.
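For readers unfamiliar with the index-regression workflow, a minimal sketch follows: compute a few of the listed indices from band reflectances and regress them against ground-truth LAI. The band arrays and LAI values are placeholders; the ViCTool itself is not reproduced:

```python
# Hedged sketch: vegetation indices -> linear regression for LAI.
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder per-plot mean reflectances for four multispectral bands.
nir, red, green, red_edge = (np.random.rand(100) for _ in range(4))

ndvi  = (nir - red) / (nir + red)            # standard NDVI definition
gndvi = (nir - green) / (nir + green)
ndre  = (nir - red_edge) / (nir + red_edge)

X = np.column_stack([ndvi, gndvi, ndre])
lai = np.random.rand(100) * 6                # placeholder ground-truth LAI

reg = LinearRegression().fit(X, lai)
print("R^2:", reg.score(X, lai))             # the R² compared across flight heights
```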
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253907 (2023) https://doi.org/10.1117/12.2664334
Spatial information on plant water requirements is the most crucial input for designing an efficient site-specific irrigation system. In quantifying this spatial information, canopy temperature-derived crop water stress maps could provide a potential solution. With the support of modern, advanced, and cost-effective remote sensing platforms such as unmanned aerial vehicles (UAVs), aircraft, and satellites, remote sensing data can be systematically collected with varying degrees of efficiency for spatial canopy temperature assessment. However, each platform provides data at different spectral and spatial resolutions, which can affect the user’s ability to develop canopy temperature-based spatial water stress maps and implement precision irrigation systems. Therefore, the main goals of this study were 1) to assess the feasibility and accuracy of UAV-, aircraft-, and satellite-based imaging for crop canopy temperature and health mapping, and 2) to compare and contrast the resolution at which water-stressed regions can be identified for precision irrigation implementation. Thermal infrared (TIR) and multispectral images were obtained over a four-acre cornfield using a quadcopter (Matrice 100), aircraft (Ceres Imaging), and satellite (Landsat-8). Spatial maps of canopy temperature and NDVI were developed from these images and analyzed for their capacity to capture water requirements and crop health accurately. UAV imagery outperformed the other two platforms in providing detailed imagery and sensing changes in crop health throughout the field. For a sample area of 82 m × 44 m, the UAV imagery provided 683 distinct canopy temperature values, aircraft imagery provided 158 distinct values, and satellite imagery provided only 5-6 variations in canopy temperature to represent the same area. Moderate- and low-spatial-resolution imagery from aircraft (0.9-1.2 m/pixel) and satellite (30 m/pixel) was limited in detecting inter-row variability, instead outputting pixels that averaged the crop canopy and inter-row space, whereas high-resolution UAV imagery (1.5-6 cm/pixel) precisely distinguished inter-row gaps from plants and provided crop-only pixels unmixed with background soil. UAV imagery was precise and sensitive in detecting crop variability between two nozzles of an irrigation pivot, while aircraft imagery was less precise and sensitive, and satellite imagery was not able to capture variation at this small scale. Overall, UAV and aircraft imagery remain competitive in providing in-field crop health variability for site-specific management in agriculture, whereas satellite imagery is limited for designing site-specific irrigation, especially for small-scale farms.
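A quick back-of-the-envelope check makes the resolution gap concrete: the raw pixel count each platform contributes over the 82 m × 44 m sample area is an upper bound on how many distinct temperature values it can resolve there. The GSDs below are taken near the midpoints of the ranges in the abstract:

```python
# Hedged arithmetic sketch: pixels per 82 m x 44 m area at each platform's GSD.
area_m2 = 82 * 44
for platform, gsd_m in [("UAV", 0.03), ("aircraft", 1.0), ("satellite", 30.0)]:
    pixels = area_m2 / gsd_m**2
    print(f"{platform:9s} ~{pixels:,.0f} pixels over the sample area")
# UAV ~4 million, aircraft ~3,600, satellite ~4 pixels -- consistent with the
# 683 / 158 / 5-6 distinct canopy-temperature values reported above.
```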
Precise Crop Measurements from Ground-based Platforms
Xin Zhang, Thevathayarajh Thayananthan, Muhammad Usman, Wenbo Liu, Yue Chen
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253908 (2023) https://doi.org/10.1117/12.2663367
Blackberry production is an essential sector of high-value specialty crops. Blackberries are delicate and easily damaged during harvest, and because the berries in an orchard do not ripen at the same time, multiple harvesting passes are often needed. Production is therefore highly labor intensive and could be addressed with robotic solutions while maintaining the post-harvest berry quality needed for profitability. To further empower a previously developed tendon-driven soft robotic gripper designed specifically for berries, this study investigates the state-of-the-art deep learning model YOLOv7 for accurately detecting blackberries at multiple ripeness levels in field conditions. In-field blackberry localization is a challenging task, since blackberries are small objects that differ in color with ripeness and outdoor lighting varies with time of day and location. In total, 642 RGB images were acquired of plant canopies in several commercial orchards in Arkansas, and the images were augmented with various methods to increase the diversity of the dataset. Three ripeness levels of blackberries can be present simultaneously on an individual plant: ripe (black), ripening (red), and unripe (green). Differentiating ripeness levels allows the system to harvest only the ripe berries and to track the ripening/unripe berries in preparation for the next harvesting pass; aggregating the counts across all ripeness levels can also help growers estimate crop load. Seven configurations and six variants of the YOLOv7 model were trained and validated with 431 and 129 images, respectively. Overall, results on the test set (82 images) showed that YOLOv7-base was the best configuration, with a mean average precision (mAP) of 91.4% and an F1-score of 0.86. At an Intersection-over-Union (IoU) threshold of 0.5, YOLOv7-base achieved a mAP of 94% and a True Positive (TP) rate of 0.93 for ripe berries, 91% and 0.88 for ripening berries, and 88% and 0.86 for unripe berries. The average inference time for YOLOv7-base was 21.5 ms per image at 1,024×1,024 resolution.
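For reference, the F1-score quoted above is the harmonic mean of precision and recall. A tiny sanity-check sketch (the precision/recall pair below is illustrative, chosen only to land near the reported 0.86; the abstract does not give these two numbers):

```python
# Hedged sketch: F1 as the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.88, 0.84), 2))   # ~0.86, matching YOLOv7-base overall
```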
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 1253909 (2023) https://doi.org/10.1117/12.2664131
Graphical user interfaces (GUIs) that interact with hardware, data, and models are beneficial for accelerating the deployment and adoption of machine vision technology in precision agriculture. They are particularly important for end users without technical expertise in computer programming. Making GUIs open-source and public is further beneficial, enabling community efforts toward rapid iterations of prototyping and testing. Weed detection is important for realizing machine vision-based precision weeding, thereby protecting crops, reducing resource inputs, and managing herbicide resistance. Considerable research has been done on weed imaging and deep learning (DL) modeling for weed detection, but few GUI tools are publicly available for image collection, visualization, model deployment, and evaluation. This study therefore presents a simple, open-source, easy-to-use GUI, OpenWeedGUI, for weed imaging and DL-based weed detection, with the goal of bridging the gap between machine vision technology and users. The GUI was developed in Python with the aid of three major open-source libraries, PyQt, Vimba (for camera interfacing), and OpenCV, covering image collection, transformation, weed detection, and visualization. It features a window for live display of weed images with detection results highlighted by bounding boxes; supports flexible user control of imaging settings (e.g., exposure time, resolution, and frame rate) and the deployment of a large suite of trained YOLO object detection models for real-time weed detection; and allows users to save images and detection results to a local directory on demand. OpenWeedGUI was tested on a mobile machine vision platform for weed imaging and detection, and it can be adapted for other machine vision tasks in precision agriculture.
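The basic PyQt-plus-OpenCV pattern the abstract describes (a window showing live frames with detection overlays) can be sketched minimally as below. For portability this uses OpenCV's VideoCapture in place of the Vimba camera interface the authors used, and the detector is a stub, so this is not OpenWeedGUI itself:

```python
# Hedged sketch of a live-view GUI with detection overlays (PyQt5 + OpenCV).
import sys
import cv2
from PyQt5.QtWidgets import QApplication, QLabel
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtCore import QTimer

app = QApplication(sys.argv)
label = QLabel("OpenWeedGUI-style live view")
label.show()
cap = cv2.VideoCapture(0)          # stand-in for the Vimba machine-vision camera

def detect_weeds(frame):
    """Stub for a trained YOLO model; returns a list of (x, y, w, h) boxes."""
    return []

def update():
    ok, frame = cap.read()
    if not ok:
        return
    for (x, y, w, h) in detect_weeds(frame):   # draw detection bounding boxes
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    h, w, _ = rgb.shape
    img = QImage(rgb.data, w, h, 3 * w, QImage.Format_RGB888)
    label.setPixmap(QPixmap.fromImage(img))

timer = QTimer()
timer.timeout.connect(update)
timer.start(33)                    # ~30 fps refresh
sys.exit(app.exec_())
```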
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 125390A (2023) https://doi.org/10.1117/12.2663888
As the global population continues to increase, the demand for food production rises accordingly. Water availability has a significant impact on crop yield through the processes of photosynthesis and transpiration. Crops exchange carbon dioxide and water with the atmosphere through stomata; under water stress, they tend to close their stomata to reduce water loss, which can also depress the photosynthetic rate and carbon assimilation, leading to low yields. Stomatal conductance (SC) quantifies the rate of gas exchange between crops and the atmosphere and can indicate crop water status, but SC measurements require contact-type instruments, which is time-consuming and labor-intensive. This study examined the accuracy of multiple linear regression (MLR), support vector regression (SVR), and convolutional neural network (CNN) models for SC estimation in corn and soybean using RGB, near-infrared, and thermal-infrared images from a field phenotyping platform. The results show that the CNN model outperformed the other two models, with an R² value of 0.52. Furthermore, adding soil moisture as a model variable improved accuracy, decreasing RMSE from 0.147 to 0.137 mol/(m²·s). This study highlights the potential of estimating SC from remote sensing platforms to help growers assess crop water status and plan irrigation more effectively.
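The two classical baselines named here (MLR and SVR) can be compared in a few lines with scikit-learn. This sketch uses placeholder features standing in for the RGB/NIR/thermal-derived predictors plus soil moisture; it illustrates the comparison, not the authors' data:

```python
# Hedged sketch: MLR vs. RBF-kernel SVR for stomatal conductance regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 6)   # e.g., spectral indices, canopy temp, soil moisture
y = np.random.rand(200)      # stomatal conductance, mol/(m^2*s)

for name, model in [("MLR", LinearRegression()), ("SVR", SVR(kernel="rbf"))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```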
Robotic and Collaborative Air-Ground Plant Sensing
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 125390B (2023) https://doi.org/10.1117/12.2666389
Perception plays a significant role in agricultural robots: if a robot fails to detect a target in the perception step, it will not act on that target, even if the control and manipulation systems are highly effective. A robotic cotton harvester was tested in the field to evaluate the performance of its perception system. A ZED 2i stereo camera, in conjunction with YOLOv4-tiny, was utilized to detect and localize cotton bolls. To train the object detection network, image data were gathered in two steps. Adding a black background panel behind the target row in the second step eliminated cotton bolls from other rows in the image and helped improve object detection performance. The robot detected 78% of the cotton bolls on the plant and localized 70% of the detected bolls. Assessing the precision of the localization system showed that the mean absolute errors on the X, Y, and Z axes of the camera’s coordinate system were 5.8, 5.2, and 8.1 mm, respectively.
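One common way to deploy the YOLOv4-tiny network used here is via OpenCV's DNN module. The sketch below assumes that route and placeholder file paths; the authors' actual inference stack is not specified in the abstract:

```python
# Hedged sketch: YOLOv4-tiny inference with OpenCV's DNN module.
import cv2

# Placeholder config/weights paths for a trained cotton-boll detector.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("cotton_row.jpg")   # e.g., the left image of the ZED 2i pair
class_ids, confidences, boxes = model.detect(frame,
                                             confThreshold=0.4, nmsThreshold=0.4)
for (x, y, w, h), conf in zip(boxes, confidences):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 2)
# Detected boxes would then be matched with ZED depth to localize each boll
# in the camera's X/Y/Z coordinate system.
```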
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 125390C (2023) https://doi.org/10.1117/12.2664604
This paper presents the collaboration between unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) for site-specific application of chemicals (herbicides, pesticides, and others). The paper describes the experimental plot used in the project, methods for UAV-UGV collaboration in detecting and isolating weeds and strawberry plants using machine learning and remote sensing techniques, and methods for site-specific chemical application. It also discusses the method of communication between UAVs and UGVs for data sharing, as well as the hardware used in the project, including UAVs, UGVs, sensors, and communication devices. Experimental results are presented and discussed.
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 125390D (2023) https://doi.org/10.1117/12.2663661
The accuracy of data collected by unmanned aerial vehicles (UAVs) for phenotyping is limited by current sensor technology. Ground control points (GCPs) and calibrated reflectance panels can increase data accuracy, but with limitations: GCPs require additional labor and time to set up in large agricultural settings, and calibrated panels can generally only be imaged prior to takeoff, limiting their usefulness for calibrating large datasets through changing atmospheric conditions. An autonomous mobile ground control point (AMGCP) solves these problems by providing an efficient means of georeferencing and calibrating radiometric, height, and thermal data across large fields. Previous development of such a system included laboratory and limited field testing. Here, an improved version of an AMGCP was designed, constructed, and field tested. Among the improvements, certain suspension components were redesigned to provide a more robust vehicle with a broader range of motion over challenging terrain. Other improvements involved the thermal calibration panels on the AMGCP: target cooling temperatures were modified to be more in line with expected canopy temperatures, in order to enable ready measurement of canopy-temperature depression, which required a redesign of cooling components to increase the achievable temperature differential. Additionally, the panel materials were modified to raise the emissivity of the panel surface. These modifications, along with others, have made the AMGCP more effective in real-world conditions.
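To see why panel emissivity matters for thermal calibration, a first-order Stefan-Boltzmann correction is sketched below. This is illustrative physics, not the authors' calibration procedure, and it ignores reflected background radiation for simplicity:

```python
# Hedged sketch: first-order emissivity correction for a thermal panel.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp_K(apparent_temp_K, emissivity):
    """Recover surface temperature from the camera's blackbody-equivalent temp,
    assuming emitted radiance = emissivity * SIGMA * T_surface^4."""
    radiance = SIGMA * apparent_temp_K**4       # what the camera effectively reports
    return (radiance / (emissivity * SIGMA)) ** 0.25

print(surface_temp_K(300.0, 0.95))   # ~303.9 K: a low-emissivity panel reads cold,
                                     # which is why raising emissivity helps.
```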
Timothy Sellers, Tingjun Lei, Daniel Carruth, Chaomin Luo
Proceedings Volume Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, 125390F (2023) https://doi.org/10.1117/12.2665844
To meet the needs of today’s ever-growing population, the deployment of autonomous vehicles is a promising direction for precision agriculture. Autonomous vehicles (AVs) have been developed and deployed for various agricultural tasks such as field planting, harvesting, soil collection, and crop data collection. One method for achieving these tasks is complete coverage path planning (CCPP), which constructs a continuous path that covers a wide area of interest. However, on a large farm with multiple fields, these tasks become extremely complicated and computationally expensive for a navigation system when a single AV is utilized. We propose a heterogeneous system to sense the fields and solve the navigation and routing problem in multi-field path planning. We developed a deep learning-based routing scheme for unmanned aerial vehicles (UAVs) to sense mature crops for harvest. The scheme utilizes a goal embedding feature and a coordinate position feature to generate an optimal path for the UAVs, allowing them to find several candidate solutions. A deep learning-based complete coverage path planning (DL-CCPP) navigation scheme is also proposed for our unmanned ground vehicles (UGVs) to navigate through the fields and collect the mature crops within them. The DL-CCPP uses the UAVs’ images in its deep learning network to construct the CCPP path from the AV coordinates.
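For context, the classical baseline that coverage planners like DL-CCPP build upon is the boustrophedon (back-and-forth) pattern over a field. A minimal sketch for a rectangular field follows; the field dimensions and swath width are illustrative, and this is not the authors' learned planner:

```python
# Hedged sketch: boustrophedon complete-coverage waypoints for a rectangle.
def boustrophedon_path(width_m, height_m, swath_m):
    """Yield (x, y) waypoints covering a width x height field, lane by lane."""
    x, direction = swath_m / 2, 1
    while x < width_m:
        y0, y1 = (0, height_m) if direction > 0 else (height_m, 0)
        yield (x, y0)          # enter the lane
        yield (x, y1)          # traverse to its far end
        x += swath_m           # shift one swath over
        direction *= -1        # reverse travel direction for the next lane

waypoints = list(boustrophedon_path(100, 60, 5))
print(len(waypoints), "waypoints, first few:", waypoints[:4])
```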