This PDF file contains the front matter associated with SPIE Proceedings Volume 10201, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The SAL simulation tool Laider Tracer models speckle: the random variation in intensity that arises when a coherent light beam illuminates a rough surface. Within Laider Tracer, the speckle field is modeled as a 2-D array of jointly Gaussian random variables projected via ray tracing onto the scene of interest. Originally, all materials in Laider Tracer were treated as ideal diffuse scatterers, for which the far-field return is computed using the Lambertian Bidirectional Reflectance Distribution Function (BRDF). Here, we incorporate material properties into Laider Tracer via the Non-conventional Exploitation Factors Data System, a database of properties for thousands of different materials sampled at various wavelengths and incident angles. We verify the intensity behavior as a function of incident angle after material properties are added to the simulation.
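The speckle statistics described above can be sketched minimally as follows (array size and incidence angles are hypothetical, not taken from the simulation): a 2-D field of jointly Gaussian random variables yields exponentially distributed intensity, and a Lambertian scatterer attenuates the mean return by the cosine of the incidence angle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully developed speckle: the complex field at each pixel is a zero-mean
# circular (jointly) Gaussian random variable, so the intensity |E|^2 is
# exponentially distributed -- the classic speckle statistics.
n = 256
field = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
intensity = np.abs(field) ** 2          # unit mean, exponential distribution

# Lambertian BRDF: an ideal diffuse scatterer returns power proportional to
# the cosine of the incidence angle theta (angles chosen for illustration).
theta = np.deg2rad(np.array([0.0, 30.0, 60.0]))
mean_return = np.cos(theta) * intensity.mean()
```

For an exponential intensity distribution the standard deviation equals the mean, which is why single-look speckle is so visually harsh.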
The recent success of deep learning has led to growing interest in applying these methods to signal processing problems. This paper explores applications of deep learning to synthetic aperture radar (SAR) image formation. We review deep learning from a perspective relevant to SAR image formation. Our objective is to address SAR image formation in the presence of uncertainties in the SAR forward model. We present a recurrent auto-encoder network architecture based on the iterative shrinkage-thresholding algorithm (ISTA) that incorporates SAR modeling. We then present an off-line training method using stochastic gradient descent and discuss the challenges and key steps of learning. Lastly, we show experimentally that our method can be used to form focused images in the presence of phase uncertainties. We demonstrate that the resulting algorithm has faster convergence and lower reconstruction error than ISTA.
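The plain ISTA iteration that such a recurrent auto-encoder unrolls can be sketched as follows; the toy sensing matrix and sparse signal below are hypothetical stand-ins for the SAR forward model, not the paper's setup.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (the 'shrinkage' step of ISTA)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.05, n_iter=500):
    """ISTA for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# Hypothetical toy problem: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, b)
```

The unrolled network in the paper replaces fixed quantities in this loop with learned parameters; the iteration above is only the classical baseline being compared against.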
The combined response of a pair of complementary waveforms has zero range sidelobes and could significantly improve synthetic aperture radar (SAR) image quality by reducing multiplicative noise. However, complementary waveforms may not be practical for SAR imaging for reasons such as Doppler tolerance and unimodular waveform constraints. By using mismatched filters to achieve either a complementary or near-complementary response, two or more practical waveforms could be employed and SAR image quality improved. A closed-form approach was developed that calculates mismatched filters so that the coherent sum of the range responses from each waveform and its corresponding mismatched filter is complementary. A second approach reduced sidelobes while retaining a frequency response close to the waveforms’ frequency responses. Images processed using X-band radar data collected under the Air Force Gotcha program exhibited improvements in image quality over those processed using matched filters. The closed-form approach is presented for both complementary and reduced-sidelobe mismatched filters and image quality is quantified. The approach developed in this work offers improved image quality, is suitable for near real-time operation, and is independent of the waveforms.
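A minimal illustration of the zero-range-sidelobe property, using a standard Golay complementary pair as a stand-in for the waveforms discussed (the paper's mismatched-filter construction is not reproduced here):

```python
import numpy as np

def autocorr(x):
    """Aperiodic autocorrelation via full linear correlation."""
    return np.correlate(x, x, mode="full")

# Build a length-8 Golay complementary pair by the standard doubling
# recursion (a, b) -> (a|b, a|-b), starting from a = [1], b = [1].
a, b = np.array([1.0]), np.array([1.0])
for _ in range(3):
    a, b = np.concatenate([a, b]), np.concatenate([a, -b])

# The coherent sum of the two autocorrelations is 2N at zero lag and
# exactly zero at every other lag -- the complementary property.
combined = autocorr(a) + autocorr(b)
```

Each waveform individually has sidelobes; only the combined response cancels them, which is why Doppler mismatch between the two transmissions (noted in the abstract) threatens the property in practice.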
The extraction of objects from advanced geospatial intelligence (AGI) products based on synthetic aperture radar (SAR) imagery is complicated by a number of factors. For example, accurate detection of temporal changes represented in two-color multiview (2CMV) AGI products can be challenging because of speckle noise susceptibility and false positives that result from small orientation differences between objects imaged at different times. Both genuine motion and such apparent motion can produce a 2CMV detection, yet they differ greatly in significance. In investigating the state of the art in SAR image processing, we have found that differentiating between these two general cases is a problem that has not been well addressed. We propose a framework of methods to address these problems. For the detection of temporal changes while reducing the number of false positives, we propose using adaptive object intensity and area thresholding in conjunction with relaxed-brightness optical flow algorithms that track the motion of objects across time in small regions of interest. The proposed framework for distinguishing between actual motion and misregistration can lead to more accurate and meaningful change detection and improve object extraction from a SAR AGI product. Results demonstrate the ability of our techniques to reduce false positives by up to 60%.
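A toy sketch of adaptive intensity thresholding for 2CMV-style change detection; the image sizes, noise model, and robust threshold scale below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical co-registered magnitude images from two passes: speckle-like
# exponential background, plus one object present only in the second pass
# (which a 2CMV product would render in one of its two colors).
n = 64
before = rng.exponential(1.0, (n, n))
after = rng.exponential(1.0, (n, n))
after[20:24, 30:34] += 10.0              # arrival between the two collects

diff = after - before
# Adaptive intensity threshold: flag pixels several robust scales from the
# background median -- a simple stand-in for adaptive object thresholding.
med = np.median(diff)
mad = np.median(np.abs(diff - med))
arrivals = diff > med + 5.0 * mad
departures = diff < med - 5.0 * mad
```

Note that this per-pixel test cannot distinguish actual motion from misregistration; that is exactly the gap the optical-flow tracking in the proposed framework is meant to fill.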
Synthetic aperture radar (SAR) images are corrupted by speckle, which manifests as multiplicative gamma-distributed noise and reduces the contrast in imagery, making detection and classification using SAR images a difficult task. Many speckle reduction techniques aim to reduce this noise without including available prior knowledge about the speckle and the scene contents. In this investigation, we develop a new technique for speckle reduction which incorporates both the statistical model of speckle and a priori knowledge about the sparsity of edges present in the scene. Using the proposed technique, we despeckle a synthetic image, a SAR image from the MSTAR data set, and a SAR image from the Gotcha data set. Our results show that our method visually improves the quality of SAR images. We show quantitatively that we reduce speckle in homogeneous areas beyond comparable methods, while maintaining edge and target intensity information.
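The multiplicative gamma speckle model referenced above can be sketched as follows; the two-region scene and the coefficient-of-variation check are illustrative assumptions, not the proposed despeckling method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Multiplicative speckle model: observed = clean * noise, where the noise is
# drawn from a unit-mean gamma distribution with shape L (number of looks).
clean = np.ones((128, 128))
clean[:, 64:] = 4.0                      # two homogeneous regions with an edge
L = 1                                    # single look: the harshest speckle
speckle = rng.gamma(shape=L, scale=1.0 / L, size=clean.shape)
observed = clean * speckle

# In a homogeneous region the coefficient of variation std/mean equals
# 1/sqrt(L) under this model -- the quantity a despeckler drives down.
cv = observed[:, :64].std() / observed[:, :64].mean()
```

Because the noise is multiplicative with unit mean, the speckled image preserves the mean reflectivity of each region, which is why despecklers are judged on variance reduction in homogeneous areas while holding edges and target intensities fixed.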
Dynamic metasurface antennas are planar structures that exhibit remarkable capabilities in controlling electromagnetic wavefronts, capabilities that are particularly attractive for microwave imaging. These antennas exhibit strong frequency dispersion and produce diverse radiation patterns. Such behavior presents unique challenges for integration with conventional imaging algorithms. We analyze an adapted version of the range migration algorithm (RMA) for use with dynamic metasurfaces in image reconstruction. We focus on the proposed pre-processing step, which maps the backscattered signal into the spatial frequency domain, from which the fast Fourier transform can efficiently reconstruct the scene. Numerical studies illustrate imaging performance using both conventional methods and the adapted RMA, demonstrating that the RMA can reconstruct images of comparable quality in a fraction of the time. In this paper, we demonstrate the capabilities of the algorithm as a fast reconstruction tool, and we analyze the limitations of the presented technique in terms of image quality.
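A toy illustration of why the pre-processing step pays off: once the data sit on a regular spatial-frequency grid, a single inverse FFT reconstructs the scene in O(n log n). The real RMA must first compensate the metasurface's dispersion and perform Stolt-style resampling, both of which this sketch deliberately omits.

```python
import numpy as np

# Toy Fourier-domain reconstruction: pretend the pre-processing has already
# mapped the backscattered data onto a regular grid of spatial-frequency
# samples G = FFT2(scene); one inverse FFT then recovers the reflectivity.
scene = np.zeros((64, 64))
scene[10, 12] = 1.0                      # two hypothetical point scatterers
scene[40, 50] = 0.7

k_space = np.fft.fft2(scene)             # stand-in for pre-processed data
recon = np.fft.ifft2(k_space).real       # fast scene reconstruction
```

In practice the measured samples never land exactly on this regular grid; the quality cost of interpolating them onto it is the image-quality limitation the paper analyzes.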
This paper examines the implications of attempting to invert synthetic aperture radar (SAR) measurement data to yield unique estimates of the underlying motion of slow targets in the imaged scene. A recent analysis demonstrated that ambiguities exist in estimating the kinematic parameters of surface targets for general bistatic SAR collections. In particular, a procedure has been developed which generates alternate target trajectories that give the same SAR measurements as the true target motion. The current paper extends the earlier analysis by generating specific numeric examples of alternate target trajectories corresponding to the motion of a given slowly moving target. This slow-target case reveals the counter-intuitive result that a single SAR collection data set can be generated by target trajectories with significantly different, and possibly opposing, heading directions. For example, the true target can be moving towards the mean radar position during the SAR collection interval, whereas a valid alternate trajectory can correspond to a target moving away from the radar. The present analysis demonstrates the extent of the challenges associated with attempting to estimate the underlying motion of targets using SAR measurement data.
Algorithms for radar signal processing, such as those used in synthetic aperture radar (SAR), are computationally intensive and require considerable execution time on a general-purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude compared to kernel execution on a general-purpose processor. Specifically, field-programmable gate arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration, using SAR as a case study to illustrate the potential FPGAs offer. We first profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in the FPGA as opposed to using a software implementation running on a typical general-purpose processor.
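As a software reference for the accelerated kernel: a homomorphic filter takes the natural logarithm (the operation moved into FPGA logic in the paper), smooths the now-additive noise, and exponentiates back. The box-filter size and noise model below are illustrative assumptions.

```python
import numpy as np

def homomorphic_despeckle(img, kernel=5):
    """Homomorphic filtering: log turns multiplicative speckle into additive
    noise, a box filter smooths it, and exp maps back to intensity."""
    logged = np.log(img + 1e-12)         # the natural-log kernel (FPGA target)
    pad = kernel // 2
    padded = np.pad(logged, pad, mode="edge")
    smoothed = np.zeros_like(logged)
    for dy in range(kernel):             # separable box filter, written plainly
        for dx in range(kernel):
            smoothed += padded[dy:dy + logged.shape[0], dx:dx + logged.shape[1]]
    smoothed /= kernel ** 2
    return np.exp(smoothed)

rng = np.random.default_rng(5)
clean = np.full((64, 64), 2.0)
noisy = clean * rng.gamma(1.0, 1.0, clean.shape)   # multiplicative speckle
filtered = homomorphic_despeckle(noisy)
```

The log is the natural off-load candidate because it is applied to every pixel independently, so many small processing elements can evaluate it in parallel, which is consistent with the linear speedup reported above.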
With all of the new remote sensing modalities available, and with ever-increasing capabilities and frequency of collection, there is a desire to fundamentally understand and quantify the information content in collected image data relative to various exploitation goals, such as detection and classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have sufficiently flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copulas that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification framework is infeasible due to the computational and numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. The framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies and of utilizing the pseudo-likelihood methods.
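The latent-variable idea behind an LGC model can be sketched as follows: quantized observations are produced by thresholding a correlated latent Gaussian field, so the discrete data inherit its dependence structure. The dimension, correlation, and cut points below are hypothetical illustrations, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(6)

# Latent Gaussian copula sketch: dependence between quantized pixel values
# is induced by a latent multivariate Gaussian thresholded into levels.
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=20000)   # latent field

# Quantize each latent variable into 4 levels (hypothetical cut points).
cuts = [-0.67, 0.0, 0.67]
q = np.digitize(z, cuts)                 # discrete observations, levels 0..3

# The discrete variables inherit (attenuated) dependence from the latent rho.
emp_corr = np.corrcoef(q[:, 0], q[:, 1])[0, 1]
```

Estimation runs this generative story in reverse: from the discrete data one infers the latent correlation structure, which is where the likelihood evaluation becomes expensive in high dimensions and motivates the pseudo-likelihood statistics.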
Integral to locating point scatterers in synthetic aperture radar (SAR) data is the ability to match location estimates in each dimension. This follows, in some sense, from the fact that the fundamental theorem of algebra finds unique locations only in one dimension. In SAR images this involves a search over at least four possible combinations for two scatterers; in a set of multiple-elevation SAR (3-D) images with more than one scatterer, the combinations increase dramatically. This paper examines several suboptimal methods and their efficiency in matching scatterers in one or more dimensions to their unique locations, compared to the (unachievable) exhaustive search. Many heuristic methods exist in two dimensions (location maxima, alternating maximization in each dimension) and some (radar tracking) methods exist in three dimensions (Munkres, probabilistic maximization). Algorithms range from simply selecting maxima (easy in 2-D; complex in multiple images, 3-D) to multidimensional constrained interpolations. In some algorithms the extra degrees of freedom present in two-dimensional localization are exploited to increase accuracy. These methodologies can also be extended to three dimensions. The paper examines proposed combinations of these especially suitable to the 3-D SAR problem. Simulations with results for different algorithms compare promising alternatives to solve this problem.
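A small sketch of the Munkres (Hungarian) assignment mentioned above, pairing per-dimension location estimates. The toy scatterer positions, and the use of ground truth as a stand-in for the image evidence that would drive a real cost matrix, are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Three scatterers with known 2-D locations (hypothetical toy values).
true_xy = np.array([[1.0, 4.0], [2.5, 1.5], [4.0, 3.0]])
x_est = np.array([2.45, 4.10, 0.95])     # per-dimension estimates, unordered
y_est = np.array([3.05, 1.45, 3.95])

# Cost of pairing x-estimate i with y-estimate j: distance of the candidate
# point (x_est[i], y_est[j]) to its nearest true scatterer. In practice this
# cost would come from image intensity at the candidate location.
cost = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        d = np.hypot(true_xy[:, 0] - x_est[i], true_xy[:, 1] - y_est[j])
        cost[i, j] = d.min()

rows, cols = linear_sum_assignment(cost)  # optimal pairing (Munkres)
```

The Munkres solver finds the globally optimal pairing in polynomial time, whereas the exhaustive search over all pairings that the paper calls unachievable grows factorially with the number of scatterers.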
Traditional synthetic aperture radar (SAR) systems tend to discard phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions. Therefore, previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features, which are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.
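A minimal sketch of why phase matters in GTD-type scattering models: over a narrow band, the frequency-dependence exponent that distinguishes canonical targets is nearly invisible in the magnitude but appears as a constant phase offset. The center frequency and band below are hypothetical choices.

```python
import numpy as np

# GTD-type scattering model: a canonical scatterer's frequency response goes
# as s(f) = (j f / fc)^alpha, where alpha encodes the scattering mechanism
# (alpha = 0 for a sphere-like point response, alpha = 1 for a flat plate).
fc = 10e9
f = np.linspace(9.75e9, 10.25e9, 201)    # narrow 500 MHz band (hypothetical)

s_sphere = (1j * f / fc) ** 0
s_plate = (1j * f / fc) ** 1

# Magnitude alone barely separates the two over this narrow band...
mag_ratio = np.abs(s_plate) / np.abs(s_sphere)
# ...but the phase gap is a constant pi/2 per unit of alpha.
phase_gap = np.angle(s_plate) - np.angle(s_sphere)
```

Discarding phase throws away this constant offset, leaving only the weak magnitude slope to estimate alpha, which is consistent with the finding above that phase helps most at low SNR and low bandwidth.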
We consider the task of estimating the scattering coefficients and locations of the scattering centers that exhibit limited azimuthal persistence for a wide-angle synthetic aperture radar (SAR) sensor operating in spotlight mode. We exploit the sparsity of the scattering centers in the spatial domain as well as the slow-varying structure of the scattering coefficients in the azimuth domain to solve the ill-posed linear inverse problem. Furthermore, we utilize this recovered model as a template for the task of target recognition and pose estimation. We also investigate the effects of missing pulses in the initial recovery step of the model on the performance of the proposed method for target recognition. We empirically establish that the recovered model can be used to estimate the target class and pose simultaneously for the case of missing measurements.
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, together with an approach to estimate those bounds efficiently from data. In this paper, we explore the so-called triangle divergence, a long-established quantity recently made more prominent by research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data; to support this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. We present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, oftentimes dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
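The triangle (triangular discrimination) divergence has the standard form T(p, q) = integral of (p - q)^2 / (p + q); a quick numeric-integration sketch for 1-D Gaussians (grid and parameters are illustrative) shows its key properties: it vanishes for identical densities, grows with separation, and is bounded above by 2, with the bound attained only for disjoint supports.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def triangle_divergence(mu1, mu2):
    """T(p, q) = integral (p - q)^2 / (p + q) dx, via a dense Riemann sum."""
    x = np.linspace(-12.0, 17.0, 29001)
    dx = x[1] - x[0]
    p, q = gaussian_pdf(x, mu1), gaussian_pdf(x, mu2)
    return np.sum((p - q) ** 2 / (p + q + 1e-300)) * dx   # guard the 0/0 tails

t_same = triangle_divergence(0.0, 0.0)   # identical densities -> 0
t_near = triangle_divergence(0.0, 1.0)   # modest separation
t_far = triangle_divergence(0.0, 5.0)    # nearly disjoint -> approaches 2
```

Because T is a bounded f-divergence, it also admits the simple Monte Carlo estimates mentioned above, for example by sampling from the mixture (p + q)/2 and averaging the integrand over the samples.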