Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 1211501 (2022) https://doi.org/10.1117/12.2644358
This PDF file contains the front matter associated with SPIE Proceedings Volume XXXXX, including the Title Page, Copyright information, Table of Contents, and Committee Page.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 1211502 (2022) https://doi.org/10.1117/12.2618675
Efficiently and effectively processing LiDAR data at high speeds is a difficult problem that must be addressed if autonomous vehicles are to travel at speeds as high as their manned counterparts (i.e., over 80 mph for ground vehicles on highways). Many processing algorithms avoid this "high-throughput" problem by using a sensor with less spatial resolution or a more powerful (and more expensive) processor. This research is intended to help individuals who do not have these options when designing their obstacle detection pipeline.
The challenge of processing LiDAR data as quickly and efficiently as possible was addressed with the recently developed event map and importance map. These are images created from LiDAR scans that use biology-inspired principles to highlight areas in a scene that can be classified as obstacles. However, the importance map has three main flaws: there is no distinction among types of object movement, the output is extremely noisy, and static object tracking does not work well at high speeds.
This research reduces these three flaws by: implementing the constant-angle principle to identify motion towards the ego vehicle, using a recursive filter to remove noise, and deriving a new static object tracking algorithm to have consistent static object tracking. After implementing these changes, the new and old importance maps are compared using LiDAR data from the KITTI dataset. The importance maps are thresholded to create obstacle masks. Through comparison of true positive and false positive rates, the new importance map shows significant improvement over the previous implementation.
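The de-noising and thresholding steps described above can be sketched as follows. This is a minimal illustration assuming a first-order (exponential) recursive filter and a fixed threshold; the function names and parameter values are hypothetical, not the authors' implementation:

```python
def update_importance(prev, obs, alpha=0.3):
    """One step of a first-order recursive (IIR) filter, per pixel.

    `alpha` weights the new observation; smaller values suppress
    frame-to-frame noise more aggressively. (Illustrative parameter.)
    """
    return [[alpha * o + (1.0 - alpha) * p for p, o in zip(prow, orow)]
            for prow, orow in zip(prev, obs)]

def obstacle_mask(importance, thresh=0.5):
    """Threshold an importance map into a binary obstacle mask."""
    return [[1 if v >= thresh else 0 for v in row] for row in importance]
```

Feeding noisy frames of a constant scene through the recursive filter drives the map toward the true importance values before the mask is taken, which is the mechanism behind the noise-reduction claim above.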
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 1211503 (2022) https://doi.org/10.1117/12.2619430
In addition to providing convenience and improving safety, autonomous vehicle technologies offer an opportunity to reduce energy use by twenty percent or more. One strategy for reducing energy use is careful positioning of an autonomous vehicle, the ego vehicle, behind one or more lead vehicles. Most perception pipelines fit a bounding box around the center of mass of a detected object; that approach may not be accurate enough to allow precise positioning. Here we compare different methods of identifying vehicle boundaries and vehicle type using a combination of simulation and field testing. Approaches are compared based on required LiDAR resolution and algorithm complexity relative to potential improvement in energy efficiency.
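The distinction between a center-of-mass fix and a boundary fix can be shown with a toy 2-D example (hypothetical helper names; the paper's own boundary-identification methods are not shown here):

```python
def centroid(points):
    """Center of mass of a 2-D point cluster (x = longitudinal axis)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def rear_face(points):
    """Longitudinal coordinate of the lead vehicle's rear face: the
    closest return along the travel axis, which is what precise gap
    positioning actually needs, rather than the cluster centroid."""
    return min(p[0] for p in points)
```

For a cluster with returns from both the rear and the sides of a lead vehicle, the centroid sits well behind the rear face, so a follower positioned off the centroid would leave a systematically wrong gap.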
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 1211504 (2022) https://doi.org/10.1117/12.2618065
This paper describes results from an ongoing research and development effort to evaluate the potential for airborne long-range infrared target detection (i.e., infrared search and track, IRST) to meet critical requirements for small unmanned aircraft system (sUAS) airborne detect and avoid (DAA). Established sensors used for manned-aircraft airborne DAA are generally heavy, expensive, active, high-power devices that are difficult to scale to the size-weight-and-power/cost (SWaP/C) constraints of sUAS. Current low-SWaP sensors developed for sUAS DAA do not meet the Well Clear (safe separation) detection range and coverage requirements for avoiding non-cooperative aircraft. In this work, a low-SWaP staring IRST airborne DAA sensor payload system is being developed and tested to evaluate system performance against long-range small airborne threats (e.g., sUAS, birds) and to guide system design studies. This paper presents results and analyses from a recent initial data collection using low-SWaP LWIR microbolometers suitable for Group 1-2 sUAS DAA applications against Group 1-2 sUAS targets. The modeling and results to date suggest that low-SWaP IR sensors such as those evaluated in this paper can provide the detection range necessary to support Group 1-2 sUAS DAA operations. Wide-area coverage requirements, emerging from NASA and FAA UAS DAA studies, can be met through multiple cameras with decreasing angular resolution as they rotate aft, minimizing overall system SWaP. Because of the limited SWaP capacity of sUAS, an IRST DAA system without extensive mechanical stabilization is desired. Analysis of UAS non-stabilized IR sensor vibration data found that the effects of motion blur on range performance were not significant and could be largely mitigated through deconvolution filters. However, the UAS sensor vibration introduced excessive scene clutter that interfered with effective UAS target tracking. The clutter was due to a time-scale conflict between the platform vibration energy and the baseline detector algorithms. Algorithms are being developed and tested to mitigate these effects and extend the baseline approach. Additional data reduction and analysis are planned to corroborate and extend these findings.
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 1211505 (2022) https://doi.org/10.1117/12.2618521
Object detection provides information needed for target tracking and plays a core role in autonomous driving. In this work, we study the uncertainty in the estimation of the centroid (position) of a bounding box of the measurements from an object detected by the sensor of an autonomous vehicle (AV). The estimated centroid uncertainty is used in object tracking as the measurement noise variance, which is not available from the sensor manufacturer, for measurement association and target state estimation. When the (position) uncertainty that captures the noise inherent in the sensor observations is available for each detected point (this can be obtained using Bayesian deep learning), the bounding box centroid uncertainty is computed with a least-squares (LS) estimator. When the uncertainty for each detected point is not available, one can assume a uniform distribution of the clustered points within a single rectangular bounding box, and a maximum likelihood estimator is used for the bounding box centroid estimation. Experiments using real data are carried out to show the performance of the proposed methods for autonomous driving applications. A comparison with the sample-mean approach showed the superiority of the new algorithms.
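The two estimators can be sketched in one dimension (applied per coordinate). With known per-point variances, the LS estimate of a common center reduces to an inverse-variance weighted mean; under the uniform-in-a-rectangle assumption, the ML estimate of the center is the midrange of the observations. This is an illustrative scalar reduction under those assumptions, not the paper's full 2-D formulation:

```python
def ls_centroid_1d(xs, variances):
    """Weighted least-squares estimate of a common center from noisy
    points with known per-point variances (inverse-variance weights)."""
    w = [1.0 / v for v in variances]
    return sum(wi * x for wi, x in zip(w, xs)) / sum(w)

def ml_centroid_uniform_1d(xs):
    """ML estimate of the center when points are assumed uniformly
    distributed over an interval: the midrange of the observations."""
    return 0.5 * (min(xs) + max(xs))
```

Note the midrange ignores interior points entirely, which is exactly why it differs from the sample mean the paper compares against.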
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 1211506 (2022) https://doi.org/10.1117/12.2619031
PLX’s Tracking Laser Range Finder (T-LRF) uses PLX’s core Monolithic Optical Structure Technology™ (M.O.S.T.), combined with PLX’s active-optics precision technology, to create a compact, high-performance tracking solution in a highly integrated system. Once the target is acquired, the T-LRF locks onto it and feeds real-time three-dimensional bearing information to the host system, enabling further actions against the target. It can do this at long range against small, fast-moving, hard-to-track targets such as consumer and military drones. The T-LRF can be fitted onto a counter-UAS system by replacing the conventional LRF module. A prototype is available to demonstrate the performance of the T-LRF. PLX’s solution can provide sub-arcsecond accuracy in the harshest operating conditions, making the Tracking Laser Range Finder a game-changing technology in the security, defense, and combat arena.
Security and Defense Applications of Autonomous Vehicles I
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 1211509 (2022) https://doi.org/10.1117/12.2618558
With the global coronavirus pandemic still persisting, the repeated disinfection of large spaces and small rooms has become a priority and a matter of focus for researchers and developers. The use of ultraviolet (UV) light for disinfection is not new; however, there are new efforts to make the methods safer, more thorough, and automated. Indeed, continuous very-low-dose-rate far-UVC light in indoor public locations is a promising, safe, and inexpensive tool to reduce the spread of airborne-mediated microbial diseases. This paper investigates the problem of disinfecting surfaces using autonomous mobile robots equipped with UV light towers. To demonstrate the feasibility of our autonomous disinfection framework, we also present a teleoperated robotic prototype. It consists of a robotic rover base on which two separate UV light towers, carrying 254 nm UVC and 222 nm far-UVC lamps, are mounted. It also includes a live-feed camera for remote operation, as well as power and communication electronics for remote operation of the UV lamps. The 222 nm far-UVC light has recently been shown to be non-inflammatory and non-photocarcinogenic when radiated on mammalian skin, while still inactivating the coronavirus on irradiated surfaces. With far-UVC light, disinfection robots may no longer require the evacuation of spaces to be disinfected. The robot demonstrates promising disinfection performance and potential for future autonomous applications.
Security and Defense Applications of Autonomous Vehicles II
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150A (2022) https://doi.org/10.1117/12.2631524
This paper describes and illustrates the real-life performance of the Rendezvous and Proximity Operations (RPO) sensors used by Space Logistics LLC’s Mission Extension Vehicles (MEV), built by Northrop Grumman. MEV-1 launched in 2019 and performed rendezvous, proximity operations, and docking (RPOD) with the Intelsat 901 satellite in the GEO graveyard orbit, approximately 300 km above GEO, in February 2020. MEV-2 launched in 2020 and performed a similar RPOD sequence with the Intelsat 10-02 satellite directly in geostationary orbit in February and March of 2021. These vehicles use three dissimilar sensing phenomenologies to provide all required relative navigation data to enable the above RPOD capabilities: visible-spectrum imagers (narrow and wide field of view), long-wave infrared (LWIR) imagers (narrow and wide field of view), and active scanning LIDAR. This paper explores the performance of each of these sensors during these real-life missions at GEO and the potential implications for future Space Situational Awareness capabilities.
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150B (2022) https://doi.org/10.1117/12.2623017
Multicast services are a one-to-many communication model and an important user-service technique, used mostly in networks that are naturally broadcast. Nodes in the network must explicitly send join and leave messages in order to participate in multicast communication, using the well-known Internet Group Management Protocol (IGMP). This paper focuses on introducing a multicast paradigm into a hybrid multi-layer wireless system. The protocol used is Core Based Tree (CBT), extended with QoS requirements in order to guarantee the constraints required by the users. A specific scenario has been considered and, for the simulation campaigns, an ad hoc simulator developed in Java has been used. Finally, the results obtained for the considered scenario are described.
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150C (2022) https://doi.org/10.1117/12.2619250
In recent years, Unmanned Aerial Vehicles (UAVs) have seen significant technological advances, with a wide range of applications. However, their misuse continues to pose a serious threat to public safety and privacy. This has sparked the interest of the research community, which is developing solutions based on Artificial Intelligence (AI) to detect and track these unmanned flying objects in real time in sensitive areas. In this paper, we propose a vision-based Deep Reinforcement Learning (DRL) algorithm to track drones in various simulated scenarios within the Microsoft AirSim simulator. The proposed approach is promising and achieves high tracking accuracy in different realistic simulated environments: it processes video at high frame rates and achieves a mean average precision (mAP) above 80%.
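For reference, the mAP figure quoted is the mean over classes of average precision. A minimal single-class AP computation, in one common non-interpolated variant (not necessarily the evaluation protocol used in the paper), looks like:

```python
def average_precision(scored, num_gt):
    """Average precision for one class.

    scored: list of (confidence, is_true_positive) per detection.
    num_gt: number of ground-truth objects for this class.
    Averages the precision observed at each recall step.
    """
    ranked = sorted(scored, key=lambda s: -s[0])
    tp, precisions = 0, []
    for k, (_, hit) in enumerate(ranked, start=1):
        if hit:
            tp += 1
            precisions.append(tp / k)
    return sum(precisions) / num_gt if num_gt else 0.0
```

Missed ground-truth objects lower the score through `num_gt` even though they never appear in the ranked detection list.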
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150D (2022) https://doi.org/10.1117/12.2618011
The United States Army Corps of Engineers (USACE) Engineer Research and Development Center (ERDC) has developed a suite of computational tools called the Computational Test Bed (CTB) for advanced high-fidelity physics-based autonomous vehicle sensor and environment simulations. These tools provide insights into onboard navigation, image processing, and sensor fusion techniques, and enable rapid data generation for artificial intelligence and machine learning across the full spectrum (visible, NIR, MWIR, and LWIR) and for various sensor modalities (LiDAR, EO, radar). This paper presents ERDC’s CTB, which allows the community to design, develop, test, and evaluate the entire autonomy space, from machine learning algorithm development using augmented synthetic data to large-scale autonomous system testing.
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150E (2022) https://doi.org/10.1117/12.2618523
Sensors are prone to biases, which can lead to inaccurate associations and hence poor results in target tracking. The sensors used on autonomous vehicles (AVs) are placed together or very close (practically collocated), which makes bias estimation challenging. This work considers bias estimation for two collocated, synchronized sensors with slowly varying additive biases. The biases' observability condition is met when the two sensors' biases are Ornstein-Uhlenbeck stochastic processes with different time constants. The proposed bias estimation is independent of state estimation, and the bias models are identified from sample autocorrelations. With bias-compensated observations, the fused measurement can be obtained using maximum likelihood fusion. In experiments, two collocated lidars (different manufacturer models) are tested in real time. It is shown that the uncertainties of the biases are significantly reduced by the estimation algorithm presented: the observation error is reduced by up to 77% with bias-compensated measurement fusion, and the bias uncertainty (root mean square error) is reduced by up to 45% after fusion compared to the single-lidar scenario.
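Two building blocks of this approach can be sketched in scalar form: identifying an Ornstein-Uhlenbeck time constant from the lag-1 sample autocorrelation of a discretely sampled bias, and maximum-likelihood (inverse-variance) fusion of two bias-compensated measurements. The function names and the scalar reduction are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

def autocov(x, lag):
    """Biased sample autocovariance of a sequence at a given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n

def ou_time_constant(x, dt):
    """For an OU process sampled every dt, the lag-1 autocorrelation
    is exp(-dt / T); invert it to estimate the time constant T."""
    rho = autocov(x, 1) / autocov(x, 0)
    return -dt / math.log(rho)

def ml_fuse(z1, var1, z2, var2):
    """Maximum-likelihood fusion of two independent measurements."""
    w1 = var2 / (var1 + var2)
    fused = w1 * z1 + (1.0 - w1) * z2
    fused_var = var1 * var2 / (var1 + var2)
    return fused, fused_var
```

With two sensors whose biases have different time constants, it is the distinct lag-1 autocorrelations that make the two bias processes separable, which is the observability condition stated above.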
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150F (2022) https://doi.org/10.1117/12.2623376
As autonomous systems proliferate, empirical measurement of their fitness is paramount. Several frameworks have been developed that provide guidance on what should be measured; however, these frameworks require users to develop their own metrics. Additionally, they focus on the autonomous systems themselves rather than their enablers, such as the process used by developers. This research introduces novel techniques for analyzing the metrics used to measure the fitness of autonomy architectures from the developer's perspective. Crucially, these techniques are generalizable across autonomy measurement frameworks. The results are new techniques that acquisition professionals can use to make better development trade-offs among different architectures.
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150G (2022) https://doi.org/10.1117/12.2618846
Modeling and simulation (M&S) tools are used extensively throughout GVSC and the Army in order to perform analysis of ground vehicles more quickly and less expensively than through physical testing. The CREATE-GV project is one such M&S software effort that focuses on mobility and autonomous vehicle simulation and analysis, using physics-based 3-dimensional modeling to accurately calculate a variety of ground vehicle metrics and parameters of interest. However, because these simulations are high-fidelity, they often require a great deal of computational power and time. One approach to reducing simulation time that has proved effective in certain contexts is the creation of "surrogate models" through machine learning (ML) algorithms. However, it is often very challenging to accurately predict the mobility of a ground vehicle system in general, and there is no existing model that can predict the mobility of autonomous systems. A great deal of uncertainty exists in the mobility and autonomy area of physics-based simulation models, related to modeling assumptions, terrain conditions, and insufficient knowledge of the interactions between the vehicle and terrain. Understanding how the uncertainties inherent in autonomous mobility prediction affect model accuracy is still an open fundamental research question. In this work, we present a surrogate modeling approach leveraging machine learning algorithms to work with CREATE-GV in order to increase the speed of mobility assessments, while still considering the reliability of the mobility predictions under uncertainty.
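The surrogate idea, replacing the expensive simulator with a cheap learned approximation, can be illustrated with the simplest possible surrogate: a nearest-neighbor lookup over precomputed simulator runs. This is a deliberately naive stand-in for the paper's ML models:

```python
def build_surrogate(samples):
    """samples: list of (parameter_vector, simulated_mobility_metric).

    Returns a predictor that answers any query with the metric of the
    nearest precomputed run (a 1-nearest-neighbor surrogate model).
    """
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    def predict(query):
        return min(samples, key=lambda s: dist2(s[0], query))[1]

    return predict
```

After a one-time batch of high-fidelity runs, each surrogate query costs microseconds instead of minutes; the open question raised above is how to quantify the uncertainty such a surrogate inherits from the underlying physics model and terrain assumptions.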
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150H (2022) https://doi.org/10.1117/12.2619424
Michigan Tech’s unique climatology allows for relatively effortless collection of autonomous vehicle winter driving data featuring notionally severe winter weather. Over the past two years we have collected over twenty-five terabytes of winter driving data in suburban and rural settings. Year one focused on phenomenology of snowfall in the context of autonomous vehicle sensors, specifically LiDAR. Year two focused on more severe conditions, longer wavelength LiDAR, and first attempts at applying perception pipeline processing to the dataset. For year three we focus on simultaneous RADAR and LiDAR data collection in arctic-like conditions and LiDAR designs likely to be used in ADAS and production autonomous vehicles. We also introduce a point-wise labeled portion of our dataset to aid machine learning based autonomy and a snow removal filter to reduce clutter noise and improve existing object detection algorithms.
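A common family of snow-removal filters for LiDAR drops points with too few neighbors, since airborne snowflakes tend to return as isolated points. A minimal fixed-radius neighbor-count sketch in 2-D (the dataset's actual filter may differ, e.g. by scaling the radius with range, as range-adaptive variants do):

```python
import math

def snow_filter(points, min_neighbors=2, radius=0.3):
    """Keep 2-D points that have at least `min_neighbors` other points
    within `radius`; isolated returns (likely snowflakes) are dropped."""
    kept = []
    for i, p in enumerate(points):
        n = sum(1 for j, q in enumerate(points)
                if j != i and math.dist(p, q) <= radius)
        if n >= min_neighbors:
            kept.append(p)
    return kept
```

The quadratic neighbor search is fine for a sketch; a production filter over full scans would use a spatial index such as a k-d tree.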
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150I (2022) https://doi.org/10.1117/12.2619433
One approach to autonomous control of high-mobility ground vehicle platforms operating on challenging terrain is the use of predictive simulation. Using a simulated or virtual world, an autonomous system can optimize use of its control systems by predicting the interaction between the vehicle and the ground as well as the vehicle actuator state. Such a simulation allows the platform to assess multiple possible scenarios before attempting to execute a path. Physically realistic simulations covering all of these domains are currently computationally expensive and unable to provide fast execution times when assessing each individual scenario, due to the use of high simulation frequencies (> 1000 Hz). This work evaluates an Unreal Engine 4 vehicle model and virtual environment, leveraging its underlying PhysX library to build a simple unmanned vehicle platform. The simulation is demonstrated to run successfully at simulation frequencies down to a lower threshold of 190 Hz, with minimal average cross-track error and heading-angle deviation when performing multiple real off-road driving maneuvers. Real vehicle telemetry was used as input to drive the unmanned vehicle’s integrated Pure Pursuit and PID autonomous driving control algorithms within the simulation, and served as ground truth for comparison.
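The Pure Pursuit element of such a control stack computes a steering angle from a single look-ahead point on the path. A standard bicycle-model sketch (parameter names are illustrative, not taken from the paper):

```python
import math

def pure_pursuit_steer(pose, lookahead, wheelbase):
    """pose: (x, y, heading_rad); lookahead: (x, y) target on the path.

    Transforms the look-ahead point into the vehicle frame, computes
    the arc curvature kappa = 2 * y_l / L_d**2, and returns the
    front-wheel steering angle atan(wheelbase * kappa)."""
    x, y, th = pose
    dx, dy = lookahead[0] - x, lookahead[1] - y
    # rotate the offset into the vehicle frame
    xl = math.cos(th) * dx + math.sin(th) * dy
    yl = -math.sin(th) * dx + math.cos(th) * dy
    ld2 = xl * xl + yl * yl  # squared look-ahead distance
    kappa = 2.0 * yl / ld2
    return math.atan(wheelbase * kappa)
```

A point straight ahead yields zero steering; a point offset to the left yields a positive (left) steering angle. The PID loop mentioned above would handle the longitudinal (speed) channel separately.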
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150J (2022) https://doi.org/10.1117/12.2620502
Failures by autonomous ground vehicles (AGV) may be caused by many different factors in hardware, software, or integration. Effective safety and reliability testing for AGV is complicated by the fact that failures are not only infrequent but also difficult to diagnose. In this work, we will discuss the results of a three-phase project to develop a simulation-based approach to AGV architecture design, test implementation, and simulation integration. This approach features a modular AGV architecture, reliability testing with a physics-based simulator (the MSU Autonomous Vehicle Simulator, or MAVS), and validation with a limited number of field trials.
Sensing, Processing, and Safety for Ground Vehicles I: Joint Session with Conferences 12115 and 12124
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150K (2022) https://doi.org/10.1117/12.2617569
Autonomous vehicles commonly include light detection and ranging (LiDAR) scanners in their suite of sensors. LiDARs are usually mounted on the vehicle's roof to generate a point cloud of surrounding surfaces. In winter driving, falling snow casts randomly distributed shadow areas in the path of the LiDAR laser beam. This obscures the scene from the sensor to a degree proportional to the rate of snowfall and the snowflake size. In this paper, a post-processing model is developed for simulating the effect of synthesized snowfall on surrounding-vehicle detection accuracy. This additive noise filter synthesizes the effect of falling snow in LiDAR data based on the laser ray path, and is applied to data from the popular clear-weather KITTI road driving dataset. Object detection accuracy was quantified using metrics developed to study this effect: the mean and standard deviation of the detection bounding-box centroid error, the percentage error in detection bounding-box volumes, and the percentage of original scene points hidden in the shadow of synthetic noise points. These are important metrics for an autonomous car that must safely interact with other detected vehicles on the road. Object correlation between normal and noisy frames was used to ensure the accuracy of the metrics, since the noise introduced to the point cloud alters the number of detections. The simulation results show the effect of synthesized noise on the number and location of detections in each frame: in some cases detections are lost; in others, false positive detections are introduced into the scene. Testing at various noise levels also shows increasing detection centroid mean error and bounding-box volume percentage error with increasing noise. As the noise level increases, the point cloud in a frame grows with reflections of the synthesized snowflakes; however, the percentage of original LiDAR points shadowed by snowfall remains almost constant, at a mean value of 0.24%.
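The additive-noise idea, inserting a snowflake return somewhere along the beam in front of the true surface and optionally shadowing the original point, can be sketched on per-beam ranges. This is a deliberate 1-D simplification of the ray-path model described, with hypothetical parameter names:

```python
import random

def add_synthetic_snow(ranges, rate, shadow_prob, rng):
    """ranges: list of true return ranges, one per beam.

    With probability `rate`, a beam hits a synthetic flake at a random
    range short of the true surface; with probability `shadow_prob`,
    the original surface return is lost in that flake's shadow.
    Returns the combined noisy point set (surviving returns + flakes)."""
    kept, flakes = [], []
    for r in ranges:
        if rng.random() < rate:
            flakes.append(rng.uniform(0.5, r))  # flake closer than surface
            if rng.random() < shadow_prob:
                continue  # surface return shadowed by the flake
        kept.append(r)
    return kept + flakes
```

Because the flake always lies in front of the surface, thresholding a detector on the noisy set produces both effects reported above: lost detections (shadowed surfaces) and false positives (clusters of flake returns).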
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150L (2022) https://doi.org/10.1117/12.2617585
Sensors used in autonomous driving are affected variably by adverse weather conditions. The reliability of each sensor changes with lighting, precipitation type and intensity, and visibility. Common sensing modalities in autonomous driving include cameras and LiDARs, which produce high-resolution signals but degrade in poor weather; camera image contrast is also sensitive to changes in lighting. RADARs, on the other hand, have long range but relatively low spatial resolution, yet are resilient to varying lighting and weather fluctuations. A compact weather detection system can be used to dynamically steer other algorithms used in autonomous vehicles (e.g., vehicle control, object detection and tracking) toward the most reliable sensors. By adjusting the weights for different sensors, a fusion scheme can be made to shift focus to the currently best-performing sensor combination. Alternatively, a weather detection system could be used to switch between weather-specific models or ensemble algorithms. The idea of using multi-model algorithms for autonomous driving has been gaining popularity recently: a model trained in sunny conditions will likely underperform in non-ideal weather. This paper presents a compact Convolutional Neural Network (CNN) that detects the current driving weather conditions from a narrow strip of the grayscale forward-facing camera image. Standard multi-class classification metrics are used to assess performance; the weighted average of the F1-scores of all classes was 94%. The model was trained and tested on the RADIATE dataset, which contains multimodal sensor data from driving in different weather conditions, including sunny, snow, overcast, fog, and rain.
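The reported score is the support-weighted average of per-class F1. A minimal reference computation (not the paper's evaluation code; per-class precision/recall are assumed given):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall; 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

def weighted_f1(per_class):
    """per_class: list of (support, precision, recall), one entry per
    weather class. Returns the support-weighted average F1 score."""
    total = sum(s for s, _, _ in per_class)
    return sum(s * f1(p, r) for s, p, r in per_class) / total
```

Weighting by support matters here because weather classes in driving datasets are typically imbalanced (far more sunny frames than fog frames, for example).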
Sensing, Processing, and Safety for Ground Vehicles II: Joint Session with Conferences 12115 and 12124
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150M (2022) https://doi.org/10.1117/12.2618516
Adaptive Cruise Control (ACC) is an important part of automotive autonomy, and a robust ACC system requires a well-defined sensor fusion system. This paper focuses on feature-level sensor fusion, exploring three techniques: independent Kalman filtering, interacting multiple model (IMM) filtering, and a novel hybrid technique combining an IMM filter with a deep neural network (DNN). Kalman filtering is a popular approach for automotive sensor fusion but relies heavily on the given kinematic models for its state estimates. We explore the IMM filter to capture multi-model motion, as well as a DNN approach to build a data-driven model of the system. Our IMM filter is built from several Kalman filters with different kinematic models. The proposed DNN is a Long Short-Term Memory (LSTM) network trained on radar and camera data to forecast a vehicle’s state. To overcome the weaknesses of both IMM filtering and LSTM networks, we propose a hybrid technique consisting of both the IMM filter and the LSTM network. The proposed hybrid system uses a single filter per sensor; the filtered sensor data is synchronized and used as the input to a trained LSTM network. Our LSTM network was trained on over 100 simulated highway driving scenarios that might occur in an ACC application. Simulations and code were developed in MATLAB with the Autonomous Driving Toolbox. Our hybrid IMM-LSTM system outperformed the independent Kalman filtering approach in longitudinal and lateral tracking accuracy (RMSE) by 23 percent, and showed promising results against the IMM filter, decreasing tracking error by nearly 50 percent in certain driving scenarios.
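The IMM filter described above is a bank of Kalman filters, each assuming a different kinematic model, whose estimates are mixed by model probability. As a minimal sketch of one such constituent filter, here is a 1-D constant-velocity Kalman filter in Python (the paper's implementation is in MATLAB, and all tuning values below are assumptions, not the authors'). A full IMM additionally maintains per-model likelihoods and mixes the bank's states each cycle.

```python
import numpy as np

def kalman_cv(zs, dt=0.1, q=1e-2, r=0.5):
    """Minimal 1-D constant-velocity Kalman filter over position
    measurements zs; one candidate kinematic model of the kind an
    IMM bank would combine. Returns filtered position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition
    H = np.array([[1.0, 0.0]])                   # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # process noise (white-accel model)
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                          # measurement noise variance
    x = np.array([[zs[0]], [0.0]])               # start at first fix, zero velocity
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return out
```

Because the model is hard-coded as constant velocity, this single filter lags during maneuvers, which is precisely the weakness the IMM (by switching models) and the LSTM (by learning motion from data) are meant to address.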
Proceedings Volume Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, 121150N (2022) https://doi.org/10.1117/12.2618720
Navigating through an unknown environment is a key capability of an autonomous ground vehicle (AGV). Traversing an on-road environment is comparatively easy, but moving through an off-road environment is challenging because of inadequate driving paths, obstacles, surface roughness, slopes, dense vegetation, poor soil conditions, the lack of road signs, etc. This complexity requires simulating the AGV through many environments before it faces an unknown situation in the real world. This paper proposes a dynamic path planning technique for AGV navigation in an off-road environment. First, we briefly discuss the traversability model and the factors identified in the state of the art, such as vegetation density, soil condition, surface roughness, and slope, individually. Second, we propose modifications to the traversability model by introducing a weight and an exponent for each factor. Then, the A* algorithm is analyzed by penalizing the weight and exponent values to obtain an optimal path. We used the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create an off-road scenario, simulate an AGV, and generate a cost map based on the traversability score; the higher the score, the more traversable the terrain. The optimal path is selected according to the traversability score. The novelty of this work is that we explore the linearity and non-linearity of the traversability model and apply the A* algorithm for path planning.
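The pipeline above, combining terrain factors into a traversability score and then running A* over the resulting cost map, can be sketched as follows. The weighted-power combination is a hypothetical stand-in for the paper's modified model (the exact functional form is not given in the abstract), and the grid costs are illustrative.

```python
import heapq

def traversability(factors, weights, exponents):
    """Combine per-cell terrain factors (each scaled to (0, 1], higher =
    easier going) into one score via a weight and an exponent per factor.
    This form is an assumed stand-in for the paper's modified model."""
    score = 1.0
    for name in factors:
        score *= weights[name] * factors[name] ** exponents[name]
    return score

def astar(cost, start, goal):
    """A* over a grid where cost[r][c] is the cost of entering cell (r, c),
    e.g. the reciprocal of that cell's traversability score."""
    rows, cols = len(cost), len(cost[0])
    min_c = min(min(row) for row in cost)  # scale keeps the heuristic admissible
    h = lambda p: min_c * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    frontier = [(h(start), 0.0, start, None)]   # (f, g, cell, parent)
    came, best_g = {}, {start: 0.0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came:                         # already expanded at lower cost
            continue
        came[cur] = parent
        if cur == goal:                         # walk parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + cost[nxt[0]][nxt[1]]
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, cur))
    return None  # goal unreachable

# Illustrative 3x3 cost map: the center cell is poorly traversable,
# so the optimal path routes around it.
cost = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
path = astar(cost, (0, 0), (2, 2))
```

Penalizing a factor's weight or exponent raises the cost of cells dominated by that factor, which is how the weight/exponent tuning steers A* toward different optimal paths.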