In fallowed fields, the presence of broadleaf and grassy weeds poses a significant threat to crop yield and quality if left uncontrolled. Broadleaf weeds, characterized by their wide leaves, and grassy weeds, with their narrow blades, compete vigorously with crops for essential resources such as sunlight, water, and nutrients. Identifying and managing these weed species effectively is paramount for agricultural success. Traditional weed control methods often rely on broad-spectrum herbicides applied across entire fields, regardless of the specific weed composition. This approach not only contributes to environmental damage but also incurs unnecessary costs for farmers. In recent years, Vision Transformers (ViTs) have revolutionized the field of Computer Vision, offering unprecedented capabilities in image understanding and analysis. This technique can be applied as a powerful tool to automatically detect and classify both broadleaf and grassy weeds for pre-planting herbicide spraying (known as green-on-brown application). This study aims to develop a system to detect and classify broadleaf and grassy weeds in fallowed fields using a Transformer-based algorithm, YOLOS (You Only Look One Sequence). The dataset comprises 15,542 images collected from a real fallowed field. Images were split into three distinct subsets: training (10,879 images ≈ 70%), validation (2,798 images ≈ 18%), and test (1,865 images ≈ 12%) sets. The model achieved an overall precision of 90.7% (88.3% for broadleaf weeds and 93.0% for grassy weeds) and an average recall of 86.3% (85.3% for broadleaf weeds and 87.2% for grassy weeds). The results suggest that YOLOS presents a compelling alternative for distinguishing between broadleaf and grassy weeds in fallowed fields.
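As a concrete illustration of such a detection pipeline, the sketch below runs two-class weed detection with the YOLOS implementation from the Hugging Face transformers library. The checkpoint name, label map, and dummy input image are hypothetical placeholders; the study's fine-tuned weights are not assumed to be available.

```python
# Minimal sketch of YOLOS inference for two-class weed detection,
# using the Hugging Face `transformers` YOLOS implementation.
import torch
from PIL import Image
from transformers import YolosImageProcessor, YolosForObjectDetection

CKPT = "hustvl/yolos-small"  # illustrative stand-in for the fine-tuned model
LABELS = {0: "broadleaf", 1: "grassy"}  # assumed two-class label map

processor = YolosImageProcessor.from_pretrained(CKPT)
model = YolosForObjectDetection.from_pretrained(CKPT)

image = Image.new("RGB", (640, 480))  # stand-in for a real field photograph
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits and boxes to thresholded detections in image coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(
    detections["scores"], detections["labels"], detections["boxes"]
):
    print(f"{LABELS.get(int(label), int(label))}: {score:.2f} at {box.tolist()}")
```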
Keratoconus is a chronic degenerative disease which results in progressive corneal thinning and steepening, leading to irregular astigmatism and decreased visual acuity that in severe cases may cause debilitating visual impairment. In recent years, Machine Learning methods, especially Convolutional Neural Networks (CNNs), have been applied to classify images according to either presence or absence of the disease, based on different corneal maps. This study aims to develop a novel CNN architecture to classify axial curvature maps of the anterior corneal surface into five grades of disease (i: normal eye; ii: suspect eye; iii: subclinical keratoconus; iv: keratoconus; and v: severe keratoconus). The dataset comprises 3,832 axial curvature maps represented on a relative scale and labeled by ophthalmologists. The images were split into three distinct subsets: training (2,297 images ≈ 60%), validation (771 images ≈ 20%), and test (764 images ≈ 20%) sets. The model achieved an overall accuracy of 78.53%, a macro-average sensitivity of 74.53% (87.50% for normal eyes, 46.56% for suspect eyes, 65.41% for subclinical keratoconus, 93.42% for keratoconus, and 79.25% for severe keratoconus) and a macro-average specificity of 94.42% (92.14% for normal eyes, 95.30% for suspect eyes, 93.82% for subclinical keratoconus, 91.24% for keratoconus, and 99.58% for severe keratoconus). Additionally, the model achieved AUC scores of 0.97, 0.92, 0.90, 0.98, and 0.94 for normal eye, suspect eye, subclinical keratoconus, keratoconus, and severe keratoconus, respectively. The results suggest that the CNN exhibited notable proficiency in distinguishing between normal eyes and various stages of keratoconus, offering potential for enhanced diagnostic accuracy in ocular health assessment.
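For reference, a minimal five-class CNN classifier in PyTorch is sketched below. This is a generic baseline, not the novel architecture described above (which is not detailed here); the layer sizes and the 224×224 RGB input are assumptions.

```python
# Generic five-class CNN baseline for axial curvature maps (PyTorch).
import torch
import torch.nn as nn

class CurvatureMapCNN(nn.Module):
    """Illustrative stand-in; not the paper's architecture."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes),  # grades i through v
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = CurvatureMapCNN()
logits = model(torch.randn(1, 3, 224, 224))  # dummy RGB curvature map
print(logits.shape)  # torch.Size([1, 5])
```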
Keratoconus is a chronic degenerative disease which results in progressive corneal thinning and steepening, leading to irregular astigmatism and decreased visual acuity that in severe cases may cause debilitating visual impairment. In recent years, different Machine Learning methods have been applied to distinguish between normal and keratoconic eyes. These methods utilize both corneal curvature maps and their corresponding numeric indices to perform the classification. The main objective of this study is to evaluate the performance of features extracted with Histograms of Oriented Gradients (HOG) and with Convolutional Neural Networks (CNNs) in the classification of normal and keratoconic eyes, using the axial map of the anterior corneal surface. Two distinct models were trained using the same Multilayer Perceptron (MLP) architecture: one using the HOG features as input, and the other using the CNN features. The Topographic Keratoconus Classification index (TKC) provided by Pentacam™ was used as the label, and the KC2-labeled maps were defined as keratoconus. Each model was trained using 3,000 images of normal and 3,000 keratoconic eyes, and then validated and tested on 1,000 images of each label. The model trained with HOG features exhibited a sensitivity of 99.1% and a specificity of 98.7%, with an Area Under the Curve (AUC) of 0.999143. The model trained with CNN features showed both sensitivity and specificity of 99.5%, and an AUC of 0.999778. The results suggest that the performance of the classifier is similar for both types of features.
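The HOG branch of this comparison can be sketched as follows, using scikit-image for the HOG descriptor and scikit-learn's MLPClassifier as the MLP head. The HOG parameters, map resolution, and hidden-layer size are illustrative assumptions, and dummy arrays stand in for the labeled axial maps.

```python
# Sketch of the HOG-features branch: extract HOG descriptors from an
# axial map and train an MLP on them.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def hog_features(gray_map: np.ndarray) -> np.ndarray:
    """Flattened HOG descriptor for one grayscale curvature map."""
    return hog(
        gray_map,
        orientations=9,
        pixels_per_cell=(16, 16),
        cells_per_block=(2, 2),
        feature_vector=True,
    )

# Dummy stand-ins for the 3,000 normal + 3,000 keratoconic training maps.
rng = np.random.default_rng(0)
X = np.stack([hog_features(rng.random((128, 128))) for _ in range(20)])
y = np.array([0, 1] * 10)  # 0 = normal, 1 = keratoconus (KC2-labeled)

mlp = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X[:4]))
```

The CNN branch would differ only in the feature extractor: the flattened activations of a convolutional backbone replace the HOG vector, while the MLP head stays the same.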
Precision Agriculture stands out as one of the most promising areas for the development of new technologies around the world. Advances in this area include the mapping of productivity zones and the development of sensors for climate and soil analysis, improving the smart use of resources during crop management and helping farmers during decision-making. Among the problems of modern agriculture, the intensive and non-localized use of herbicides causes environmental issues, contributes to elevated costs in farmers’ budgets, and results in the application of chemical substances to non-target organisms. Although many selective herbicide spraying systems are available, most operate on the principle of chlorophyll detection and are therefore unable to distinguish crop plants from weeds with high accuracy in post-emergence herbicide applications (“green-on-green” application). The main objective of this study is to develop a multispectral camera system for in-crop weed recognition using Computer Vision techniques. The system was built with four monochromatic CMOS sensor cameras fitted with wavelength bandpass filters (green, red, near infrared, and infrared) and an RGB camera. Images of soybean and weed plants were captured in a controlled environment using an automated v-slot rail system to simulate the movement of a spray tractor in the field. Infrared images presented higher precision (90.5%) and recall (89.3%) than the other monochromatic bands, followed by RGB (87.0% and 86.1%, respectively) and near infrared images (83.6% and 87.9%), suggesting that infrared wavelengths play an important role in plant detection and classification. Our results indicate that combining Computer Vision with multispectral images of plants is a more efficient approach for targeting weeds among crop plants in post-emergence herbicide applications.
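As a small illustration of how the monochromatic captures might be combined downstream, the sketch below stacks co-registered single-band frames into one multispectral array. The file-naming scheme, band order, and 8-bit pixel range are assumptions for illustration, not the system's actual acquisition format.

```python
# Sketch: assemble four co-registered single-band captures into one
# multispectral cube for downstream plant classification.
import numpy as np
from PIL import Image

BANDS = ["green", "red", "nir", "ir"]  # assumed filter order on the four sensors

def load_multispectral(stem: str) -> np.ndarray:
    """Stack single-band frames into an (H, W, 4) cube in [0, 1]."""
    frames = [
        np.asarray(Image.open(f"{stem}_{band}.png").convert("L"), dtype=np.float32)
        for band in BANDS
    ]
    return np.stack(frames, axis=-1) / 255.0  # normalize 8-bit range

# Dummy single-band captures written to disk, standing in for real frames.
rng = np.random.default_rng(0)
for band in BANDS:
    Image.fromarray((rng.random((480, 640)) * 255).astype(np.uint8)).save(
        f"frame_001_{band}.png"
    )

cube = load_multispectral("frame_001")
print(cube.shape)  # (480, 640, 4)
```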
Dry eye is one of the most commonly reported eye health conditions and is characterized by dryness, decreased tear production, or increased tear film evaporation. Middle-aged and elderly people are most commonly affected because of the high prevalence of contact lens usage, systemic drug effects, autoimmune diseases, and refractive surgeries. Corneal topography images have recently been used for noninvasive assessment based on the Placido rings pattern: the rings in normal eyes are smooth and undistorted, whereas they are distorted in affected eyes. We developed an analysis method that processes the corneal topography image to determine the Tear Break-up Time (TBUT), using the Tear Film Surface Quality (TFSQ) measurement. To avoid distortions not caused by tear film break-up, the method dynamically removes eyelash shadows from the image processing area. The results show that the proposed analysis is able to determine the TBUT from the graphical analysis, and it can be used to help eye care specialists diagnose dry eye disease.
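The idea behind this TBUT estimation can be sketched as follows: score each topography frame with a ring-regularity measure and report the time at which the score first degrades past a threshold. The gradient-energy score, the threshold value, and the dummy frames below are illustrative stand-ins for the actual TFSQ formulation and the eyelash-masking step.

```python
# Hedged sketch of threshold-based TBUT estimation from frame scores.
import numpy as np

def tfsq_score(frame: np.ndarray) -> float:
    """Higher when Placido rings are sharp and regular; drops as the
    tear film breaks up and the rings distort or blur."""
    gy, gx = np.gradient(frame.astype(np.float32))
    return float(np.mean(np.hypot(gx, gy)))

def estimate_tbut(frames: list, fps: float, drop: float = 0.8):
    """Seconds until the score falls below `drop` times the baseline."""
    baseline = tfsq_score(frames[0])
    for i, frame in enumerate(frames):
        if tfsq_score(frame) < drop * baseline:
            return i / fps
    return None  # no break-up detected within the recording

# Dummy sequence: frames that progressively fade, mimicking ring blur.
rng = np.random.default_rng(1)
frames = [rng.random((256, 256)) * (1.0 - 0.05 * t) for t in range(20)]
print(estimate_tbut(frames, fps=10.0))  # break-up time in seconds
```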