Presentation + Paper
Into the wild: a study in rendered synthetic data and domain adaptation methods
4 June 2019
Abstract
Rendering synthetic imagery from gaming-engine environments allows us to create data featuring any number of object orientations, conditions, and lighting variations. This capability is particularly useful in classification tasks, where there is an overwhelming lack of the labeled data needed to train state-of-the-art machine learning algorithms. However, the use of synthetic data is not without limitations: in the case of imagery, training a deep learning model on purely synthetic data typically yields poor results when the model is applied to real-world imagery. Previous work shows that "domain adaptation," mixing real-world and synthetic data, improves performance on a target dataset. In this paper, we train a deep neural network on synthetic imagery, including ordnance and overhead ship imagery, and investigate a variety of methods to adapt our model to a dataset of real images.
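The paper itself provides no code, but the baseline adaptation strategy the abstract describes, pooling synthetic renders with real images into one training set, is straightforward to sketch. The PyTorch example below is illustrative only: the directory paths, the ResNet-18 backbone, and all hyperparameters are assumptions for the sketch, not the authors' actual configuration.

import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

# Shared preprocessing so synthetic renders and real photos
# arrive with the same size and input statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: one subfolder per object class.
synthetic = datasets.ImageFolder("data/synthetic", transform=preprocess)
real = datasets.ImageFolder("data/real_train", transform=preprocess)

# Naive domain adaptation: concatenate the two domains and shuffle,
# so each minibatch mixes synthetic and real examples.
mixed = ConcatDataset([synthetic, real])
loader = DataLoader(mixed, batch_size=32, shuffle=True, num_workers=4)

# Any image classifier works here; ResNet-18 stands in for the
# paper's (unspecified) deep neural network.
model = models.resnet18(weights=None, num_classes=len(synthetic.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Other adaptation methods the paper compares fit the same loop: for example, pretraining on the synthetic set alone and then fine-tuning on the real split simply replaces how the training dataset is constructed.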
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Marissa Dotter, Chelsea Mediavilla, Jonathan Sato, Chris M. Ward, Shibin Parameswaran, and Josh Harguess "Into the wild: a study in rendered synthetic data and domain adaptation methods", Proc. SPIE 10992, Geospatial Informatics IX, 109920D (4 June 2019); https://doi.org/10.1117/12.2518774
KEYWORDS
Image classification
Data modeling
Detection and tracking algorithms
Image quality
Statistical analysis
Visualization
Earth observing sensors
