Poster + Paper
13 June 2023
Extending neural radiance fields (NeRF) for synthetic aperture radar (SAR) novel image generation
William Snyder, Stephen DelMarco, Dylan Snover, Amit Bhatia, Scott Kuzdeba
Conference Poster
Abstract
An important application of deep learning classifiers is to recognize vehicles or ships in satellite images. Neural Radiance Field (NeRF) methods use a limited number of 2D electro-optical (EO) views of an object to learn its 3D shape and view-dependent radiance properties. The resulting latent model generates novel views for training a deep learning classifier. Space-based synthetic aperture radar (SAR) sensors present a new, useful source of wide-area imagery. Because SAR phenomenology and geometry differ from EO, we construct a suitable NeRF-like approach for SAR and demonstrate generation of realistic simulated SAR imagery.

Several commercial and military applications classify vehicles or ships in satellite images. In many cases, it is infeasible to acquire looks at the objects over the wide range of views and conditions needed for machine learning classifier training. Neural Radiance Fields (NeRF) and other related methods use a limited number of 2D views of an object to learn its 3D shape and view-dependent radiance properties. One application of these techniques is to generate additional, novel views of objects for training deep learning classifiers. Current NeRF and NeRF-like methods have been demonstrated with electro-optical (EO) imagery. The emergence of space-based synthetic aperture radar (SAR) imaging sensors presents a new, useful source of wide-area imagery with day/night, all-weather commercial and military applications. Because SAR imaging phenomenology and projection geometry differ from EO, applying NeRF-like methods to generate novel SAR images of objects for classifier training presents new challenges. For example, unlike EO, the mono-static SAR illumination source moves with the sensor view geometry. In addition, the 2D SAR image projection is angle-range, not angle-angle. In this paper, we evaluate the salient differences between EO and SAR, and construct a processing pipeline to generate realistic synthetic SAR imagery. The synthetic SAR imagery provides additional training data, augmenting collected image data, for machine learning-based Automatic Target Recognition (ATR) algorithms. We provide examples of synthetic SAR image creation using this approach.
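The two geometric differences called out above, the mono-static illumination that travels with the sensor and the angle-range (rather than angle-angle) image projection, are what a SAR-adapted NeRF renderer has to account for. The sketch below illustrates that contrast using a toy density/reflectivity field in place of a learned NeRF network; the function names, sampling scheme, and two-way attenuation model are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: contrasts EO NeRF ray rendering (angle-angle)
# with a SAR-style, range-binned rendering (angle-range) where the
# illuminator is co-located with the sensor (mono-static geometry).
# The field below stands in for a learned NeRF MLP; nothing here is the
# authors' implementation.
import numpy as np

def toy_field(points):
    """Stand-in for a learned NeRF MLP: (density, reflectivity) per 3D point."""
    inside = np.linalg.norm(points, axis=-1) < 1.0   # unit sphere at the origin
    sigma = np.where(inside, 5.0, 0.0)               # volume density
    refl = np.where(inside, 0.8, 0.0)                # radiance / RF reflectivity
    return sigma, refl

def render_eo_pixel(origin, direction, n_samples=128, t_far=6.0):
    """EO NeRF-style pixel: alpha-composite radiance along one viewing ray.
    Each (azimuth, elevation) ray yields one pixel -> angle-angle projection."""
    t = np.linspace(0.0, t_far, n_samples)
    pts = origin + t[:, None] * direction
    sigma, rad = toy_field(pts)
    dt = t_far / n_samples
    alpha = 1.0 - np.exp(-sigma * dt)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    return np.sum(trans * alpha * rad)

def render_sar_profile(sensor_pos, look_dir, n_samples=4096, t_far=6.0, n_bins=64):
    """SAR-style range profile: returns accumulated into slant-range bins,
    with two-way attenuation because the illuminator moves with the sensor."""
    t = np.linspace(0.0, t_far, n_samples)
    pts = sensor_pos + t[:, None] * look_dir
    sigma, refl = toy_field(pts)
    dt = t_far / n_samples
    one_way = np.exp(-np.cumsum(sigma) * dt)          # attenuation out to each point
    returns = refl * sigma * one_way**2 * dt          # two-way path, mono-static
    bins = np.clip((t / t_far * n_bins).astype(int), 0, n_bins - 1)
    profile = np.zeros(n_bins)
    np.add.at(profile, bins, returns)                 # scatterers sorted by slant range
    return profile                                    # one column of an angle-range image

if __name__ == "__main__":
    pos = np.array([0.0, 0.0, -3.0])
    look = np.array([0.0, 0.0, 1.0])
    print("EO pixel value:", render_eo_pixel(pos, look))
    print("SAR range profile (first bins):", render_sar_profile(pos, look)[:8])
```

Stacking such range profiles across azimuth positions yields a 2D angle-range image; a realistic pipeline would additionally need SAR speckle and scattering models, which this toy example omits.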
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
William Snyder, Stephen DelMarco, Dylan Snover, Amit Bhatia, and Scott Kuzdeba "Extending neural radiance fields (NeRF) for synthetic aperture radar (SAR) novel image generation", Proc. SPIE 12529, Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications, 1252912 (13 June 2023); https://doi.org/10.1117/12.2666925
KEYWORDS
Synthetic aperture radar
3D modeling
Education and training
3D image processing
Data modeling
3D acquisition
Image processing
