Low-altitude Unmanned Aerial Systems (UASs) provide a highly flexible and capable platform for remote sensing and autonomous control. Many applications would benefit from an additional bird's-eye view, including mapping, environmental reconnaissance, and search and rescue. An autonomous (or partially autonomous) drone could assist in several of these scenarios, freeing the operator to focus on higher-level strategic planning. While numerous commercial drones exist on the market, none truly provides a flexible foundation for vision-guided autonomy research. Herein, we propose the design of a physical UAS platform, called VADER (Visually Aware Drone for Environmental Reconnaissance), and an accompanying simulation environment that address many of these tasks. In particular, we show how Commercial Off-The-Shelf (COTS) hardware and open-source software can now be combined to realize powerful end-to-end UAS research solutions. The benefit of unifying these factors is accelerated prototyping and minimal time to migrate and test in the real world. This article outlines VADER, and case studies are presented to demonstrate its capabilities.
Numerous real-world applications require the intelligent combination of disparate sensor information streams to create a more complete and enhanced observation in support of underlying tasks like classification, regression, or decision making. An often overlooked and underappreciated part of fusion is context. Herein, we focus on two contextual fusion challenges: incomplete (limited knowledge) models and metadata. Examples of metadata available to unmanned aerial systems (UAS) include time of day, platform/sensor position, etc., all of which can have a drastic impact on sensor measurements and subsequently on the decisions derived from them. Additionally, incomplete models, specifically those arising from under-sampled training data, limit machine learning. To address these challenges, we investigate contextually adaptive online Choquet integration. First, we cluster and partition the training metadata. Second, a single machine learning model is trained per partition. Third, a Choquet integral is learned per partition to combine these models. Fourth, at test/run time we compute the degree of typicality of a new sample relative to our known contexts. Fifth, our trained integrals are decomposed into a bag of underlying aggregation operators, and a new contextually relevant operator is imputed using a combination of the metadata clustering and observation statistics of the integral variables. This process enables machine learning model selection, ensemble fusion, and metadata outlier detection, with subsequent mitigation strategy identification or decision suppression. The above ideas are demonstrated on explosive hazard detection using surrogate data simulated by the Unreal Engine. In particular, the Unreal Engine is used because it provides the flexibility to explore the proposed ideas across a range of diverse and controlled experiments.
Our preliminary results show improved fusion performance across different contexts, and a sensitivity analysis is performed with respect to metadata degradation.
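The central aggregation tool in the pipeline above is the discrete Choquet integral, which fuses per-model confidences with respect to a learned fuzzy measure. As a minimal illustration (not the authors' implementation), the sketch below computes the discrete Choquet integral for a small number of inputs, assuming the fuzzy measure is supplied explicitly as a mapping from index subsets to values in [0, 1]; in the paper the measure would instead be learned per metadata partition.

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of `values` (per-model confidences)
    with respect to fuzzy measure `mu`.

    `mu` maps frozensets of input indices to [0, 1], with
    mu[frozenset()] == 0 and mu of the full index set == 1.
    """
    n = len(values)
    # Visit inputs in descending order of value, growing the coalition.
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    total = 0.0
    prev = 0.0          # measure of the previous (smaller) coalition
    coalition = set()
    for i in order:
        coalition.add(i)
        m = mu[frozenset(coalition)]
        total += values[i] * (m - prev)  # value weighted by measure increment
        prev = m
    return total
```

Different measures recover familiar operators: a symmetric additive measure yields the arithmetic mean, while a measure that assigns 1 to every nonempty subset yields the maximum. This flexibility is what lets the decomposed "bag of aggregation operators" in the abstract span everything from optimistic to pessimistic fusion.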