This PDF file contains the front matter associated with SPIE Proceedings Volume 13036, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
The adaptation of deep network models to new environments, with distributions significantly different from the training data, has both theoretical interest and practical implications. Domain Adaptation (DA) aims to overcome the dataset-bias problem by closing the gap in classification performance between the source domain used for training and the target domain where testing takes place. In this talk, we present a new framework for Continual Domain Adaptation, where target-domain samples are acquired in small batches over time and adaptation takes place continually in changing environments. Our Continual Domain Adaptation approach combines concepts from DA and continual learning and demonstrates state-of-the-art results on various datasets under challenging conditions.
In the past two decades, numerous Compressive Imaging (CI) techniques have been developed to reduce the volume of acquired data. Recently, these CI methods have incorporated Deep Learning (DL) tools to optimize both the reconstruction algorithm and the sensing model. However, most DL-based CI methods have been developed by simulating the sensing process without considering the limitations associated with the optical realization of the optimized sensing model. Since the merit of CI rests on the physical realization of the sensing process, we revisit the leading DL-based CI methods. We present a preliminary comparison of their performance, focusing on practical aspects such as the realizability of the sensing matrix and robustness to measurement noise.
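As a toy illustration of why realizability matters (our own sketch, not from the paper), the following compares sparse recovery via orthogonal matching pursuit under an idealized Gaussian sensing matrix and under a binary {0,1} mask of the kind an optical modulator can implement; the problem sizes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most-correlated column
        support.append(j)
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

n, m, k = 256, 96, 8                          # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)   # idealized sensing matrix
A_bin = rng.integers(0, 2, (m, n)).astype(float)     # optically realizable mask
A_bin /= np.linalg.norm(A_bin, axis=0)               # normalization done in software

errs = {}
for name, A in [("gaussian", A_gauss), ("binary", A_bin)]:
    y = A @ x + 0.01 * rng.standard_normal(m)        # noisy measurements
    errs[name] = np.linalg.norm(omp(A, y, k) - x) / np.linalg.norm(x)
    print(name, "relative error:", round(float(errs[name]), 3))
```

The highly correlated columns of the binary mask typically degrade recovery relative to the Gaussian ideal, which is the gap the abstract's comparison is concerned with.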
Multispectral imagery is instrumental across diverse domains, including remote sensing, environmental monitoring, agriculture, and healthcare, as it offers a treasure trove of data over various spectral bands, enabling profound insights into our environment. However, with the ever-expanding volume of multispectral data, the need for efficient compression methods is becoming increasingly critical. Enhanced compression not only conserves precious storage space, but also facilitates rapid data transmission and analysis, ensuring the accessibility of vital information. In particular, in applications such as satellite imaging, where bandwidth constraints and storage limitations are prevalent, superior compression techniques are essential to minimize costs and maximize resource utilization.
Neural network-based compression methods are emerging as a solution to this escalating challenge. While autoencoders have become a common neural network approach to image compression, they typically rely on feature extraction alone and face limitations in generating customized quantization maps for training images. Integrating bespoke quantization maps alongside feature extraction can elevate compression performance to levels previously considered unattainable. The concept of end-to-end image compression, encompassing both quantization maps and feature extraction, offers a comprehensive approach to representing an image in its simplest form.
The proposed method considers not only the compression ratio and image quality but also the substantial computational costs associated with current approaches. Designed to capitalize on similarities within and across spectral channels, it ensures accurate reproduction of the original source information, promising a more efficient and effective solution for multispectral image compression.
Principal Component Analysis (PCA) is commonly used for dimensionality reduction, feature extraction, data denoising, and visualization. L1-PCA is known to confer robustness, i.e., resistance to outliers in the data. In this paper, a new method for L1-PCA using quantum annealing hardware is explored. To showcase performance gains over other PCA variants, results for a fault detection scenario are presented, and the speedup of L1-PCA using quantum annealing is demonstrated. Additionally, L1-PCA achieves better fault detection rates than L2-PCA in the presence of outliers.
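For intuition about what L1-PCA computes (a sketch of the classical combinatorial formulation, not the paper's quantum annealing method), the exact first L1 principal component of a small dataset can be found by exhaustive search over sign vectors, using the known result that the maximizer of the L1 projection objective is w* = Xb*/||Xb*|| for the best sign vector b*. The toy data and outlier below are invented.

```python
import itertools
import numpy as np

def l1_pca_exhaustive(X):
    """Exact first L1 principal component of a d x N data matrix X.

    Uses the combinatorial result that the maximizer of ||X^T w||_1 is
    w* = X b* / ||X b*||, where b* in {-1,+1}^N maximizes ||X b||_2
    (tractable only for small N; this is the problem annealers target).
    """
    N = X.shape[1]
    best_b, best_val = None, -np.inf
    for signs in itertools.product((-1.0, 1.0), repeat=N):
        b = np.asarray(signs)
        val = np.linalg.norm(X @ b)
        if val > best_val:
            best_val, best_b = val, b
    w = X @ best_b
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
# Eight inliers along the x-axis plus one gross off-axis outlier.
xs = np.linspace(-1, 1, 9)
X = np.vstack([xs, 0.05 * rng.standard_normal(9)])
X[:, 0] = [0.0, 3.0]                        # the outlier

w_l1 = l1_pca_exhaustive(X)
w_l2 = np.linalg.svd(X, full_matrices=False)[0][:, 0]   # L2-PCA component
print("L1 component:", np.round(np.abs(w_l1), 3))
print("L2 component:", np.round(np.abs(w_l2), 3))
```

On this toy set the L2 component snaps to the outlier direction while the L1 component stays closer to the inlier axis, illustrating the robustness the abstract exploits for fault detection.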
Lidar remote sensing systems are deployed across platforms such as satellites, airplanes, and drones. The platform plays a crucial role in determining the sampling characteristics of the imaging system it carries. For instance, low-altitude lidars offer high photon counts and spatial resolution but are limited to small, localized areas. In contrast, satellite lidars cover larger areas globally but suffer from lower photon counts and sparse sampling along swath-line trajectories. This paper presents current state-of-the-art approaches to addressing the limitations of satellite imaging systems using a novel class of satellite remote sensing lidars termed Compressive Satellite Lidars (CS-Lidars). CS-Lidars leverage compressive sensing and machine learning techniques to capture Earth's features from hundreds of kilometers above its surface, reconstructing 3D imagery with resolution and coverage akin to data collected from airborne platforms flying hundreds of meters above ground level. The paper also compares machine learning methods used to reconstruct compressive lidar measurements, aiming for high resolution, dense coverage, and a broad field of view per swath pass. Training data for these models is obtained from NASA's G-LiHT imaging missions.
Lithium-ion batteries (LIBs) play a central role in the vision of a net-zero-emission economy, yet it is commonly reported that only a small percentage of LIBs are recycled worldwide. An outstanding barrier to making LIB recycling economical throughout the supply chain is the uncertainty surrounding a battery's remaining useful life (RUL). How do operating conditions impact the initial useful life of a battery? We applied the sparse identification of nonlinear dynamics (SINDy) method to understand the life-cycle dynamics of LIBs with respect to sensor data observed for current, voltage, internal resistance, and temperature. A dataset of 124 commercial lithium iron phosphate/graphite (LFP) batteries was charged and cycled to failure under 72 unique policies. Charging policies were standardized, reduced to principal component (PC) scores, and clustered by a k-means algorithm. Sensor data from the first cycle were averaged within clusters, characterizing a "good as new" state. SINDy was then applied to discover the dynamics of this state, and the results were compared across clusters. This work contributes to the effort of defining a model that can predict the RUL of LIBs during degradation.
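The core of SINDy is sequentially thresholded least squares over a library of candidate functions. A minimal sketch on an invented exponential-fade signal (a stand-in, not the paper's battery data; the library and threshold are assumptions) might look like:

```python
import numpy as np

def stlsq(Theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares, the core SINDy regression."""
    xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0                    # prune weak library terms
        big = ~small
        if big.any():                      # refit on the surviving terms
            xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
    return xi

# Toy 'degradation' trajectory: exponential capacity fade, x' = -0.5 x.
t = np.linspace(0, 10, 1000)
x = 2.0 * np.exp(-0.5 * t)
dxdt = np.gradient(x, t)                   # numerical derivative of the data

# Candidate function library: [1, x, x^2].
Theta = np.column_stack([np.ones_like(x), x, x**2])
xi = stlsq(Theta, dxdt)
print("discovered coefficients for [1, x, x^2]:", np.round(xi, 3))
```

The regression recovers the sparse dynamics (only the linear term survives, with coefficient near -0.5); on real cycling data, the same machinery is run per cluster on the averaged first-cycle sensor signals.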
The Department of Defense (DoD), including the US Air Force, has seen increased physical security risks, such as gate runners and active-shooter situations, along with other use cases (such as drone ISR, rapid deployment, force protection, and security forces) requiring increased coordination and data cooperation. Additionally, technology and operational environments are becoming more complex, interconnected, and diverse. Recently, a team of developers within the US Air Force developed a plug-in that analyzes sensor data from a tactical edge device's (e.g., a cell phone's) onboard accelerometer and gyroscope to characterize a person's movement as they walk. The plug-in uses machine learning (ML) algorithms to create a model of that person's gait and then passes pertinent data through the associated gait model to authenticate the user. The novelty of our effort lies in enhancing this gait-based authentication with features extracted from spectral information of the smartphone's accelerometer and gyroscope signals, using a public human activity recognition dataset (WISDM) as a proof of concept, marking a previously unexplored approach. By leveraging spectral data, we seek to enhance the accuracy and robustness of authentication systems in military contexts. After feature extraction, Kernel Discriminant Analysis (KDA) is used to reduce the spectral features to 50 dimensions; after including the non-spectral features, the final feature count is 65. We then perform feature-level fusion using ML algorithms, and the performance is promising for authentication across 51 users. The SVM-RBF classifiers achieved a mean Equal Error Rate (EER) of 2% and a mean accuracy (ACC) of 97.3%, the GBM classifiers a mean EER of 0.4% and ACC of 99.1%, and the CNN classifiers a mean EER of 10% and ACC of 90.4%.
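The Equal Error Rate reported above is the operating point where false accepts and false rejects balance. A generic sketch of its computation from genuine and impostor score distributions (the Gaussian score model here is synthetic, not the paper's classifier outputs):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the point where false-accept rate equals false-reject rate.

    Scores are similarities: higher means 'more likely the enrolled user'.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = int(np.argmin(np.abs(far - frr)))                         # closest crossing
    return (far[i] + frr[i]) / 2.0

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 500)     # match scores for the true user
impostor = rng.normal(-2.0, 1.0, 500)   # match scores for everyone else
eer = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.3f}")
```

Well-separated score distributions, as in the GBM result above, correspond to a low EER; overlapping ones, as in the CNN result, push it up.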
Modern ubiquitous sensing produces immense information collections that offer unprecedented amounts of data for knowledge extraction, inference, and learning. Consequently, the significance of harnessing available artificial intelligence tools to boost human learning capabilities and accelerate the learning process is growing rapidly. Human learning relies on the activation of brain regions containing multi-level trees of knowledge, which can be effectively built into pretrained human knowledge libraries by asking key questions at each level. In this pursuit, Multiple Choice Questions (MCQs) are frequently used due to their efficiency in grading and providing feedback. In particular, well-designed MCQs can assess knowledge across the levels of Bloom's Taxonomy, a framework that classifies the cognitive skills and abilities that students use to learn. By asking such MCQs, we help learners activate neural pathways involved in perception, cognition, and high-level functions such as meta-cognition, analysis, evaluation, and synthesis, as well as those related to information encoding, retrieval, and long-term memory formation. This study explores an AI-driven approach to creating and evaluating MCQs in domain-independent scenarios. The methodology involves generating Bloom's Taxonomy-aligned questions through zero-shot prompting with GPT-3.5; validating question alignment with Bloom's Taxonomy using RoBERTa, a transformer-based language model; evaluating question quality using Item Writing Flaws (IWF), i.e., issues that can arise in the creation of test items; and validating questions with subject matter experts. Our research demonstrates GPT-3.5's capacity to produce higher-order thinking questions, particularly at the "evaluation" level. We observe alignment between GPT-generated questions and human-assessed complexity, albeit with occasional disparities. Question quality assessment reveals differences between human and machine evaluations, correlating inversely with Bloom's Taxonomy levels. These findings shed light on automated question generation and assessment, presenting the potential for advancements in AI-driven human-learning enhancement approaches.
This study explores an innovative approach to optimizing seaweed cultivation within Integrated Multi-Trophic Aquaculture (IMTA) systems at Harbor Branch Oceanographic Institute (HBOI) through the development of advanced sensor technologies and computational models. Building on the Pseudorandom Encoded Light for Evaluating Biomass (PEEB) sensor deployed at the seaweed tank in the HBOI IMTA system, we refine biomass estimation with a methodology that combines the Random Sample Consensus (RANSAC) algorithm for sensor-data refinement and non-linear regression models for predicting seaweed growth and biomass. The proposed framework adopts RANSAC to filter out data outliers and uses weekly non-linear regression analyses to predict seaweed biomass and optimize harvest timing. The results demonstrate the effectiveness of our polynomial regression model in estimating the daily-averaged seaweed biomass, and the potential of sensor-based biomass estimation in complex aquatic environments. We discuss the impact of data quality on prediction accuracy, the challenges posed by limited sensor calibration, and the effect of the short sensor-deployment duration on model reliability. Our study contributes to the sustainable management of IMTA systems by providing a data-driven foundation for automated seaweed cultivation, emphasizing the critical role of advanced technologies in the future of aquaculture.
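A minimal sketch of the RANSAC-plus-polynomial-regression pipeline, on an invented growth curve with injected sensor spikes (the curve shape, inlier tolerance, and iteration count are all assumptions, not HBOI data or settings):

```python
import numpy as np

def ransac_poly(t, y, degree=2, n_iter=200, tol=0.5, seed=0):
    """Fit a polynomial with RANSAC: fit random minimal samples, keep the
    model with the largest inlier set, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros_like(y, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(t), degree + 1, replace=False)   # minimal sample
        trial = np.polyfit(t[idx], y[idx], degree)
        inliers = np.abs(np.polyval(trial, t) - y) < tol      # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return np.polyfit(t[best_inliers], y[best_inliers], degree), best_inliers

rng = np.random.default_rng(1)
days = np.linspace(0, 14, 60)
biomass = 0.05 * days**2 + 0.2 * days + 1.0       # hypothetical growth curve
noisy = biomass + 0.1 * rng.standard_normal(60)   # measurement noise
noisy[::7] += 5.0                                 # spurious sensor spikes

coef, inliers = ransac_poly(days, noisy)
print("fitted coefficients (x^2, x, 1):", np.round(coef, 2))
print("inliers kept:", int(inliers.sum()), "of", len(days))
```

The outlier spikes are excluded from the consensus set, so the final polynomial tracks the underlying growth trend rather than the sensor glitches.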
Global food and water security are threatened by factors such as a changing climate, ballooning populations, stress on land and water, demographic changes, pandemics, and wars. The need to grow sufficient food and nutrition to feed the populations of the twenty-first century and beyond requires us to carefully understand, model, map, and monitor cropland dynamics over time and space. To achieve this, we have proposed and established the global food security support analysis data (GFSAD) project to develop multiple high-resolution agricultural cropland products encompassing the entire world. In this presentation, we will demonstrate the production of a Landsat-derived 30 m global cropland extent product, as well as an irrigated-versus-rainfed cropland product, using petabyte-scale big-data analytics and multiple machine learning algorithms, coded and computed on the Google Earth Engine (GEE) cloud. Accuracies, errors, and uncertainties of the products will also be discussed.
Edge computing in remote sensing often necessitates on-device learning due to bandwidth and latency constraints. However, the limited memory and computational power of edge devices pose challenges for traditional machine learning approaches with large datasets and complex models. Continual learning offers a potential solution by enabling models to adapt to evolving data streams. This paper explores leveraging a strategically selected subset of archival training data to improve performance in continual learning. We introduce a feedback-based intelligent data sampling method that uses a log-normal distribution to prioritize informative data points from the original training set, focusing on samples that the model struggled with during initial training. A simulation-based exploration investigates the trade-off between accuracy gains and resource utilization at different data inclusion rates, paving the way for deployment on real-world edge devices. The approach can lead to better decision-making in the field, improved operational efficiency through reduced reliance on cloud resources, and greater autonomy for remote sensing tasks, supporting robust and efficient edge-based learning systems that enable real-time, autonomous, data-driven decisions in remote locations.
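One possible reading of the sampling step, sketched with invented per-sample losses (the log-normal parameters, budget, and loss model are assumptions, not the paper's settings): rank archival samples by how much the model struggled with them, then weight ranks with a log-normal density so the hardest examples dominate the replay subset without excluding the rest.

```python
import numpy as np

def select_replay_subset(losses, budget, sigma=1.0, scale=50.0, seed=0):
    """Pick a replay subset: rank samples by initial-training loss
    (hardest first) and weight ranks with a log-normal density."""
    rng = np.random.default_rng(seed)
    order = np.argsort(losses)[::-1]            # sample indices, hardest first
    ranks = np.arange(1, len(losses) + 1)
    # Unnormalized log-normal density over ranks; its peak sits below
    # `scale`, favoring hard-but-not-extreme examples.
    w = np.exp(-0.5 * ((np.log(ranks) - np.log(scale)) / sigma) ** 2) / ranks
    w /= w.sum()
    return rng.choice(order, size=budget, replace=False, p=w)

rng = np.random.default_rng(2)
losses = rng.exponential(1.0, 1000)             # stand-in per-sample losses
subset = select_replay_subset(losses, budget=100)
print("mean loss overall:", round(float(losses.mean()), 2),
      "| mean loss in subset:", round(float(losses[subset].mean()), 2))
```

The selected subset skews toward high-loss samples, which is the feedback signal the abstract describes for choosing archival data worth replaying on a constrained device.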
The ability of sparse arrays to significantly reduce hardware cost and complexity relative to a uniform linear array (ULA) is advantageous for applications with large array sizes. While the hardware complexity is reduced, the optimum selection of active antennas for a sparse array involves iteratively solving an optimization problem. In a dynamic environment, such a solution is impractical, particularly under rapidly time-varying temporal and spatial characteristics of sources and interference. In essence, the computational complexity of the optimization algorithms impedes the fast perception-action cycle that cognitive sensing requires. In this regard, replacing traditional optimization algorithms with automatic, data-driven learning techniques offers a means toward real-time configuration design of sparse arrays and, as such, provides a prompt response to sudden changes in the operating environment. This paper examines optimum sparse array design using deep learning. We consider the case of two sources that must be separately isolated for signal recovery and classification. One source is fixed at broadside, whereas the direction of the other changes over a 0.5° grid between 0° and 179.5°. Multilayer perceptron (MLP) and convolutional neural network (CNN) architectures are both evaluated on datasets ranging from a few snapshots to unlimited snapshots and incorporating various SNR values. The machine learning approaches demonstrate strong agreement with the optimum array configurations.
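For intuition about the combinatorial problem the learning models replace, optimum selection can be posed as a brute-force search over antenna subsets, scoring each by the max-SINR metric of an MVDR beamformer. This sketch (grid size, angles, and interference power are our assumptions, not the paper's configuration) uses half-wavelength element spacing:

```python
import itertools
import numpy as np

def steering(positions, theta_deg):
    """Narrowband steering vector for half-wavelength element spacing."""
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * positions * np.cos(theta))

def best_subarray(n_total, k, src_deg, int_deg, inr=100.0):
    """Pick the k of n_total grid positions whose MVDR beamformer gives
    the highest output SINR for a source at src_deg with one interferer."""
    grid = np.arange(n_total)
    best, best_sinr = None, -np.inf
    for subset in itertools.combinations(range(n_total), k):
        p = grid[list(subset)]
        s = steering(p, src_deg)                 # desired-source steering vector
        v = steering(p, int_deg)                 # interferer steering vector
        # Interference-plus-noise covariance, then max-SINR metric s^H R^-1 s.
        R = inr * np.outer(v, v.conj()) + np.eye(k)
        sinr = float(np.real(s.conj() @ np.linalg.solve(R, s)))
        if sinr > best_sinr:
            best_sinr, best = sinr, subset
    return best, best_sinr

sel, sinr = best_subarray(n_total=10, k=4, src_deg=90.0, int_deg=60.0)
print("selected positions:", sel, "| SINR metric:", round(sinr, 2))
```

Enumerating all subsets scales combinatorially with array size, which is exactly why the abstract trains MLP and CNN models to predict the configuration instead.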
Radar-based sensing is emerging as a promising alternative to cameras and wearable devices for indoor human activity recognition. Unlike wearables, radar sensors offer non-contact, unobtrusive monitoring, and compared to cameras they are insensitive to lighting conditions and preserve privacy. This paper addresses continuous, sequential classification of daily-life activities, in contrast to classifying distinct motions in isolation. Upon acquiring raw radar data containing sequences of motions, an event detection algorithm, the Short-Time-Average/Long-Time-Average (STA/LTA) detector, recognizes the breaks between transitions from one motion type to another and isolates individual activity segments. To ensure consistent input shapes for activities of varying durations, image resizing and cropping techniques are employed. Furthermore, data augmentation techniques are applied to modify micro-Doppler signatures, enhancing the classification system's robustness and providing additional training data.
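A minimal STA/LTA sketch on a synthetic burst (window lengths, detection threshold, and sampling rate are assumptions; the paper applies the same idea to radar returns):

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Ratio of short-term to long-term average signal energy, aligned so
    both windows end on the same sample."""
    e = x.astype(float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(e)])
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta     # short energy window
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta     # long energy window
    return sta[n_lta - n_sta:] / (lta + 1e-12)       # ratio[i] ends at i+n_lta-1

rng = np.random.default_rng(0)
fs = 100                                    # assumed sampling rate (Hz)
x = 0.1 * rng.standard_normal(10 * fs)      # background noise
x[400:600] += np.sin(2 * np.pi * 5 * np.arange(200) / fs)  # a 'motion' burst

ratio = sta_lta(x, n_sta=int(0.2 * fs), n_lta=int(2.0 * fs))
# Map the first above-threshold ratio index back to a sample index.
onset = int(np.argmax(ratio > 4.0)) + int(2.0 * fs) - 1
print("detected onset near sample", onset)
```

The ratio spikes when short-term energy jumps relative to the recent background, which is how segment boundaries between activities are picked out of the continuous stream.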
Researchers are exploring how radio frequency (RF) sensors can be used to create new interfaces and smart environments that respond to human movement, with potential applications in gesture recognition and smart home systems. While several types of RF sensors can be used, this study focuses on Wi-Fi signals. The researchers collected data using a Raspberry Pi equipped with special software and analyzed it to determine whether different human activities could be identified, making their data and code publicly available so that others can build on the work. The study found that Wi-Fi signals could be used to identify activities with an accuracy of around 65%, suggesting that Wi-Fi has potential for indoor activity monitoring.
Traditional clinical vital-sign measurement methods are often contact-based, causing discomfort for patients and practitioners and making continuous monitoring inconvenient. Additionally, close proximity during measurement poses a risk of disease transmission and allows only one patient to be monitored at a time. To address these challenges, contactless measurement methods are being explored, with radar technology emerging as a promising alternative for vital-sign monitoring. The proposed design utilizes a MIMO radar system to remotely detect the subtle chest movements caused by breathing and heartbeat. The primary challenge lies in separating the weaker heartbeat movements from the stronger breathing motions, in the presence of body movements that mask the vital-sign-induced chest displacement. We employ filtering techniques and chirp averaging with slow-time oversampling to enable precise estimation of breathing and heartbeat patterns. We collect radar vital-sign data from individuals with different resting heart rates in a controlled lab environment, and evaluate the system's performance against ground truth from a pulse oximeter.
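The band separation step can be sketched with standard zero-phase Butterworth filtering over typical respiration and heart-rate bands (the slow-time signal, rates, and filter orders below are synthetic assumptions, not the paper's radar data or exact filters):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20.0                                   # slow-time sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
# Synthetic chest displacement: large breathing + small heartbeat components.
chest = 2.0 * np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)
chest += 0.05 * np.random.default_rng(0).standard_normal(len(t))

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth bandpass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, x)

breathing = bandpass(chest, 0.1, 0.5, fs)   # typical respiration band
heartbeat = bandpass(chest, 0.8, 2.0, fs)   # typical heart-rate band

def dominant_freq(x, fs):
    """Frequency of the largest spectral peak."""
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return f[np.argmax(np.abs(np.fft.rfft(x)))]

br = dominant_freq(breathing, fs)
hr = dominant_freq(heartbeat, fs)
print(f"breathing ~ {br:.2f} Hz, heartbeat ~ {hr:.2f} Hz")
```

Even though the heartbeat component is an order of magnitude weaker, the non-overlapping bands let each rate be read off its own filtered channel, which is the principle behind the separation described above.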
In computer vision, human pose estimation (HPE) through convolutional neural networks (CNNs) has emerged as a promising avenue with broad applicability. This study presents a novel application of HPE, targeting the early detection of Alzheimer's disease (AD), a condition expected to affect roughly 13.4 million Americans by 2026. Traditional AD diagnostic methodologies such as brain imaging, electroencephalography, and blood/neuropsychological tests are not only expensive and protracted but also require specialized medical expertise. Addressing these constraints, we introduce a cost-efficient and universally accessible AD detection system, harnessing conventional cameras and employing pose estimation, signal processing, and machine learning. Data were sourced from videos capturing a 10-meter curve walk of 73 cognitively healthy older adults (HC) and 34 AD patients. The recording apparatus was a camera offering a resolution of 1920×1080 pixels at 30 frames/second, stationed laterally to the walking path. Using OpenPose, a state-of-the-art bottom-up multi-person HPE method based on CNNs, we derived 25 distinctive body-joint coordinates from the footage. Subsequently, 48 gait parameters were extracted from these joints and subjected to statistical scrutiny. A noticeable difference was observed in 39 of the 48 gait parameters between the HC and AD groups. Classifying the data with a Support Vector Machine (SVM) further affirmed the distinctiveness of these gait markers: the system achieved an accuracy of 90.01% and an F-score of 86.20% for AD identification. In essence, our findings suggest that the combination of everyday cameras, HPE techniques, signal processing, and machine learning can pave the way for practical AD detection in non-specialized settings, including home environments.
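One simple gait parameter of the kind extracted here, stride time, can be sketched from a single keypoint trajectory. The ankle trace below is synthetic, and the peak-finding heuristic is a stand-in for the paper's actual parameter extraction from OpenPose output:

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0                                  # camera frame rate, as in the study
t = np.arange(0, 10, 1 / fps)
# Hypothetical horizontal ankle trajectory: steady forward walking with a
# step-cycle oscillation (~0.9 s per stride), plus keypoint jitter.
ankle_x = 50.0 * t + 30.0 * np.sin(2 * np.pi * t / 0.9)
ankle_x += np.random.default_rng(0).standard_normal(len(t))

# Remove the forward-progress trend; cycle extrema then mark stride events.
detrended = ankle_x - np.polyval(np.polyfit(t, ankle_x, 1), t)
peaks, _ = find_peaks(detrended, distance=int(0.5 * fps))

stride_times = np.diff(t[peaks])
print(f"strides: {len(stride_times)}, "
      f"mean stride time: {stride_times.mean():.2f} s, "
      f"cadence: {60 / stride_times.mean():.0f} strides/min")
```

Repeating this kind of extraction across joints and features (step length, cadence, variability, and so on) yields the gait-parameter vector that the SVM classifies.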