Unmanned aerial systems (UAS) equipped with visual sensors can be quickly deployed to map novel regions, a useful ability in GPS-denied environments, search and rescue operations, disaster response, and defense. Assisted by such a UAS, a ground vehicle could safely navigate a given region, aware of potential hazards seen from airborne sensors. Here, we propose a pipeline for identifying and mapping maneuverable regions and objects pertinent to safe navigation (cars, barriers, etc.) in sequential imagery captured from UAS sensors. First, a semantic-labeling deep neural network identifies roads, an object-detection neural network detects hazards of known classes, and a model based on linear features flags potential road hazards within the labeled road pixels. This visual evidence regarding maneuverability is collected across temporally sequential images and is spatially fused into a single map via visual feature correspondence. Road evidence is fused on a per-pixel basis, while clustering techniques recover objects from a set of co-mapped detections. We demonstrate the use of this pipeline for the quick, automated creation of maps containing information useful for safe navigation of a region captured by UAS sensors. These techniques serve as part of the development of a model for safe, efficient navigation of GPS-denied or rapidly changing regions through UAS-enabled mapping.
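The object-fusion step described above — grouping co-mapped detections into distinct objects — could be realized in several ways; the following is a minimal sketch of one plausible approach, single-linkage clustering with a distance threshold. The detection format `(x, y, label)` in a shared map frame, the `radius` parameter, and the function name are illustrative assumptions, not the paper's actual implementation:

```python
from collections import defaultdict
import math

def cluster_detections(detections, radius=2.0):
    """Merge co-mapped detections into objects (illustrative sketch).

    detections: list of (x, y, label) tuples in a shared map frame
    (hypothetical format). Detections of the same class within
    `radius` map units are linked; each resulting cluster is
    reported as one object at the cluster centroid.
    """
    # Union-find over detection indices.
    parent = list(range(len(detections)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Link same-class detections that fall within the radius.
    for i, (xi, yi, ci) in enumerate(detections):
        for j in range(i + 1, len(detections)):
            xj, yj, cj = detections[j]
            if ci == cj and math.hypot(xi - xj, yi - yj) <= radius:
                union(i, j)

    # Group members by cluster root and report centroids.
    clusters = defaultdict(list)
    for i, det in enumerate(detections):
        clusters[find(i)].append(det)

    objects = []
    for members in clusters.values():
        xs = [m[0] for m in members]
        ys = [m[1] for m in members]
        objects.append((sum(xs) / len(xs), sum(ys) / len(ys), members[0][2]))
    return objects
```

A density-based method such as DBSCAN would serve the same role and handle noise detections more gracefully; the threshold approach above is shown only because it is self-contained.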