A deterministic model-based restoration procedure is presented. The algorithm is effective for the restoration of coarsely sampled images degraded by diffraction and noise. The a priori information, an assumed parametric object model, is used to arrive at the solution. The object model is convolved with the optical system and sensing mechanism degradations and then matched against the limited number of available samples. The unknown parameters are then estimated using a numerical least mean square error optimization procedure. The method has been tested with a double delta function model via digital simulations. A zero-mean additive white Gaussian background noise process was assumed. The technique requires a high signal-to-noise ratio (SNR) to resolve the doublet when the separation distance is smaller than the detector width. With increasing separation, good restoration can be achieved at low SNR. The procedure is applicable to the restoration of more general objects.
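The following is a minimal NumPy/SciPy sketch of the idea, not the paper's implementation: a two-impulse object model is blurred by an assumed Gaussian PSF, integrated over coarse detector cells, and fitted to noisy samples by least squares. All names, the PSF shape, the noise level, and the grid sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.0, 1.0, 512)                      # fine object-plane grid

def psf(x, center, sigma=0.02):
    """Assumed Gaussian stand-in for the diffraction blur."""
    g = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return g / g.sum()

def forward(params, n_detectors=16):
    """Blur a two-impulse object (delta * PSF = shifted PSF) and
    integrate over coarse detector cells."""
    a1, p1, a2, p2 = params
    blurred = a1 * psf(x, p1) + a2 * psf(x, p2)
    return blurred.reshape(n_detectors, -1).sum(axis=1)

rng = np.random.default_rng(0)
truth = (1.0, 0.42, 0.8, 0.55)
data = forward(truth) + rng.normal(0.0, 1e-3, 16)   # AWGN background

fit = least_squares(lambda p: forward(p) - data,
                    x0=(0.5, 0.3, 0.5, 0.7),
                    bounds=([0, 0, 0, 0], [2, 1, 2, 1]))
print("estimated (a1, p1, a2, p2):", fit.x)
```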
A comparison of two complementary approaches to real-time digital image formation processing of multiple-look, strip-map synthetic aperture radar data is presented. Emphasis is placed on tradeoffs regarding memory and throughput requirements, as well as the complexity in the control and timing for each processor. It is shown that the well-known range dependence of the synthetic aperture length results in a special set of design considerations which are important if uniform azimuth resolution and ideal look registration are the primary design goals.
Speckle is a granular noise that inherently exists in all types of coherent imaging systems. The presence of speckle in an image reduces the resolution of the image and the detectability of the target. Many speckle reduction algorithms assume speckle noise is multiplicative. We instead model the speckle according to the exact physical process of coherent image formation. Thus, the model includes signal-dependent effects and accurately represents the higher order statistical properties of speckle that are important to the restoration procedure. Various adaptive restoration filters for intensity speckle images are derived based on different speckle model assumptions and a nonstationary image model. These filters respond adaptively to the signal-dependent speckle noise and the nonstationary statistics of the original image.
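As a generic illustration of signal-dependent, nonstationary filtering (not the specific filters derived in the paper), here is a minimal Lee-type adaptive filter for intensity speckle, assuming fully developed speckle so the local noise variance scales with the squared local mean; the window size and number of looks are assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lee_filter(img, win=7, looks=1.0):
    """Adaptive local-statistics filter: the gain shrinks the estimate
    toward the local mean where the window is homogeneous (variance
    explained by speckle alone)."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    w = sliding_window_view(p, (win, win))
    mean = w.mean(axis=(-2, -1))
    var = w.var(axis=(-2, -1))
    noise_var = mean ** 2 / looks            # multiplicative speckle model
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)
```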
We describe here a digital image restoration technique for image data degraded by an a priori known blur function. To be more specific, we are interested in processing digital image data, obtained using coherent illumination (either optical or microwave), that has been degraded by a known blur. The complex-valued object amplitude o is related to the measured blurred data i by i = o * b, where * denotes convolution and b is a known complex-valued blur function. An inverse filtering technique is traditionally used in the Fourier domain. But, for some image analysis applications, a more direct deblurring approach in the image domain may be more desirable. An illustrative scenario is an image interpreter looking at a small portion of some blurred image; in this case a direct deblurring method applied to the selected image area might be more flexible. We present here a direct image-plane deconvolution method using reasonably short convolution kernels. In the more specific case of quadratic phase-type blurs, a direct image-plane Fresnel transform approach is also discussed.
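For contrast with the image-plane approach, a minimal sketch of the traditional Fourier-domain route is given below, with a small regularization constant standing in for a full Wiener spectrum; the function name and epsilon are illustrative, and b is assumed sampled on the same grid as i with its kernel centered.

```python
import numpy as np

def fourier_deblur(i, b, eps=1e-3):
    """Regularized inverse filter: O = I conj(B) / (|B|^2 + eps)."""
    I = np.fft.fft2(i)
    B = np.fft.fft2(np.fft.ifftshift(b))      # move centered kernel to origin
    return np.fft.ifft2(I * np.conj(B) / (np.abs(B) ** 2 + eps))
```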
It is well known that the noise processes corrupting an image are in general signal-dependent. An interesting aspect of signal-dependent noise is that a certain amount of signal information is embedded in the noise. Most image restoration techniques, however, attempt to suppress the noise terms to obtain an estimate of the image and do not exploit the additional signal information contained in the noise. A simple technique designed to demonstrate the potential for signal extraction from signal-dependent noise is presented in this paper.
A new method for removing periodic background patterns from pictures is presented. The basic spatial frequency composition of the pattern is determined from an estimate of the power spectrum of the picture. A digital restoration filter is then created from a modified version of the power spectrum. The method is extremely effective and can be automated.
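A hedged sketch of how such a filter might be automated: locate spectral spikes well above the median background of the picture's power spectrum and notch them out. The spike-detection rule and the strength parameter are assumptions, not the paper's exact recipe.

```python
import numpy as np

def remove_periodic(img, strength=30.0):
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    med = np.median(power)                     # rough smooth-background level
    spikes = power > strength * med            # candidate pattern frequencies
    h, w = img.shape
    spikes[h // 2, w // 2] = False             # never notch the DC term
    filt = np.where(spikes, 0.0, 1.0)          # modified spectrum -> notch filter
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * filt)))
```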
This paper examines some of the reasons that tactical target recognition performance has not yet achieved a level necessary to satisfy the requirements for a totally autonomous target recognition system. In particular, those factors that adversely impact recognizer performance are identified and qualitatively analyzed to show how they impact performance. These factors include: inadequate pixels on target, random thermal signature, training with inappropriate data sets, unknown target aspect angle, and segmentation errors. The paper also identifies several techniques for improving recognition performance.
A statistically based tracking algorithm is described which utilizes a powerful segmentation algorithm. Multiple features such as intensity, edge magnitude, and spatial frequency are combined to form a joint probability distribution to characterize a region containing a target and its immediate surround. These distributions are integrated over time to provide a stable estimate of the target region and background statistics. A Bayesian decision rule is implemented using these distributions to classify individual pixels as target or nontarget. An adaptive gate process is used to estimate desired changes in the tracking window size.
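A minimal sketch of the per-pixel Bayes decision step, assuming two quantized features and externally accumulated joint histograms for target and surround; all names, the prior, and the update rule are illustrative assumptions.

```python
import numpy as np

def classify(features, hist_t, hist_b, prior_t=0.3):
    """features: (H, W, 2) integer bin indices; hist_t, hist_b: normalized
    joint feature distributions for target and background."""
    f0, f1 = features[..., 0], features[..., 1]
    like_t = hist_t[f0, f1]                    # P(features | target)
    like_b = hist_b[f0, f1]                    # P(features | background)
    return prior_t * like_t > (1.0 - prior_t) * like_b   # Bayes rule

# The distributions would be integrated over time between frames, e.g.:
#   hist_t = 0.9 * hist_t + 0.1 * new_target_histogram
```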
A statistical approach for forward looking infrared (FLIR) target classification is presented. The implemented functions include enhancement, segmentation, feature extraction and classification. A 5 x 5 median filter is used for image smoothing. Segmentation involves an adaptive thresholding technique. This technique is capable of automatic selection of local thresholds for individual targets based on the local property of minimal change in target area. Features extracted from segmented target candidates characterize gray shade, texture and geometry properties of these regions. Among several evaluated classifiers, the Bayes decision rule is used for its performance, flexibility and amenability to future modification. The presented approach has been applied to 92 FLIR images from three different data sets. Five types of target candidates examined in this study are tanks, APCs, jeeps, burning hulks, and noise regions. Among 281 targets of interest, 260 belong to these five categories. The Bayes classifier has achieved 87.69% detection and 76.92% classification with a false alarm rate (FAR) of 0.07 per image.
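A minimal sketch of the first two stages, under the assumption that the "minimal change in target area" criterion amounts to picking the threshold where the segmented area is least sensitive to the threshold value; the search grid is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import median_filter

def segment_candidate(window):
    """5 x 5 median smoothing, then an area-stability threshold search."""
    smoothed = median_filter(window, size=5)
    thresholds = np.linspace(smoothed.min(), smoothed.max(), 64)[1:-1]
    areas = np.array([(smoothed > t).sum() for t in thresholds])
    stability = np.abs(np.diff(areas))         # area change between thresholds
    t_star = thresholds[np.argmin(stability)]  # most stable threshold
    return smoothed > t_star
```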
A digital video processor (DVP) was developed for use in a real-time image generation facility. The DVP is capable of transforming a video input image into a scaled, translated, and intensity-modified video output image. The processing is unique for three reasons. First, it allows a high degree of resolution in the scale factor (i.e., noninteger values). Second, it allows the desired scale, translation, and intensity modification factors to be stipulated immediately before the transformation occurs (i.e., nonpreprogrammed operation). Third, it performs the transformation in one video frame time (i.e., in near real time).
The application of image skeletonization to object position measurement is described. A cellular logic Medial Axis Transform (MAT) and an improved Distance Transform (DT) are compared to skeletonize either multi-labeled or binary region images. The region images used for experiments were produced by segmentation of video imagery or were synthetically generated. The MAT produced an eight-connected, pixel-thin skeleton but was slower than the DT, which produced a skeleton with gaps that was not pixel thin. A selective branch erosion (SBERO) technique is described that can remove skeleton branches and identify a unique fiducial point on the skeleton for object position offset measurement in the image. The combination of a skeletonization algorithm with the SBERO can be a powerful method to search for a specified skeletal structure of a branched shape object in a complex image.
A typical application, namely the flexible, automatic assembly of equipment bearing plates, has been taken to show the practical possibilities of a picture sensor system. It consists of three video cameras, two monitors for picture and data processing with dialogue, a keyboard and a processing unit. The sensor system controls transport, handling and grasping functions and recognises incomplete assemblies in the final check.
Image parallelism implies a SIMD architecture in which a two-dimensional array of processors and memory modules is controlled by a central master control unit. It is emphasized that image operators very commonly require effective access to arbitrarily sized neighborhoods of pixels distributed over the image plane. The access and interconnection problem for such an architecture is shown to have several possible solutions with different trade-offs. No existing design presents a good solution to the neighborhood access problem. It is shown that a proper design in this respect results in a 13-fold increase in speed compared to what has recently been reported for the MPP.
The extensive use of graphite structures in aircraft has created a need for new manufacturing and quality assurance techniques. As a part of the Integrated Flexible Automation Center development program, a high speed, automated video inspection system has been developed. This system presently interfaces with four video cameras and a solid state linear array camera. The inspection tasks for this application include detection of flaws in graphite material as it is dispensed, ply identification, and verification of position and orientation on a robot transfer head. Inspection problems and system architecture are described in this paper.
A mathematical model of staring planar array sensor behavior is required for simulation of sensors in imaging system performance evaluation studies, as well as for evaluation of the sensors themselves. Such a model should be accurate, adaptable, and practical to implement. As existing models of scanning sensors were inadequate, a model that is both structurally and statistically consistent with actual sensors has been developed at Texas Instruments. To characterize a class of sensors, data is collected by recording selected imagery using a representative sensor. The recorded data is reduced, converted, and stored in a medium facilitating access by a digital computer. The resulting data is then analyzed to extract critical sensor statistics that characterize the nominal behavior of the sensor and the associated fixed and time varying errors.
We present the design criteria and the basic structure of GYPSY, a portable system for visual inspection and recognition. GYPSY can be employed in different situations, problems and environments. The user can write application programs in a Pascal-like language to give the system the ability to take on different tasks. GYPSY analyzes binary images of objects on the basis of their 2D shape. With respect to similar systems, it is fully portable and easily programmable. It is able to recognize parts in any number and position in the field of view, even parts inside other parts. The software package is operational on a MINC 11 with a DMA interface to a solid-state 128 x 128 camera. However, GYPSY has been designed for a multi-microprocessor architecture based on the Z8000, specially designed in Italy for industrial automation. The package is fully implemented in OMSI Pascal and does not require any special hardware. It will be publicly available in Italy, having been developed with the support of the National Research Council.
We describe a system for localization of photoelectron events utilizing an intensified Plumbicon camera and a Grinnell video digitizer. The Grinnell digitizer, arithmetic unit and memory are used to produce a real-time video difference between current pixel value and previous pixel value thereby suppressing multiple detection of the same event. A master clock provides synchronization with the camera in operation at 60 Hz in 240 lines/field, repeat field mode. Our event-localization scheme provides double-buffered line-address and event-amplitude for up to 32 events along a 512 pixel video line. A software algorithm allows localization of multiple detections of the same event, and provides a unique address interpolated with 1/2 line resolution by the host minicomputer in a 480 x 512 format.
Most optical instruments in use are modeled after the human eye. The design of insect eyes is fundamentally different and is governed by the laws of multiaperture optics. It can be shown that for certain applications, instruments using multiaperture optics are superior to single-aperture instruments. For the study of multiaperture optics and the function of the insect eye, a mechanical model resembling an insect eye was constructed. An individual eyelet of the mechanical model consists of a lens system and a detection system. The diameter of the front lens is 2 mm (7 mm focal length); the aperture of the back lens is 1 mm (2 mm focal length). Each eyelet has seven optical fibers which transport the incident light to individual detectors. The model has a total of 100 eyelets (700 detectors). Each detector is sequentially read by a multiplexer which is interfaced with a microcomputer, which displays the output on a video terminal.
Digital medical imaging modalities construct and display images as cross-sections of anatomy. The technologies of these digital imaging systems are moving towards thinner cross-section thicknesses, higher spatial resolutions, and larger dynamic ranges. These imaging modalities provide cross-sectional displays in which there is no superimposing of organs. This provides relatively precise two-dimensional geometric information. However, general three-dimensional (3-D) spatial relationships among a number of anatomic structures are difficult to grasp from a large number of thin serial sections. Clinicians are required to visualize such information by mentally stacking these serial sections to obtain the complete structure. A number of 3-D surface reconstruction algorithms have been developed for displaying 3-D anatomic structures on raster graphic displays. This paper will present a comparison among these reconstruction algorithms for: (1) computation time; (2) algorithm complexity; (3) computer storage requirements; and (4) clinical efficacy. The authors will report on a three-year study of the clinical utilization of 3-D display algorithms using raster graphic display systems.
A description of the major types of medical imaging procedures is given, and examples of image processing techniques applied to medical images are presented.
The integration of several data types, including remotely-sensed imagery and cartographic data, has been accomplished by reprojection of the data to match a common map base in the Universal Transverse Mercator (UTM) projection. The resultant reprojected aircraft multispectral scanner (MSS) imagery, digital terrain models (DTMs), and land use maps are available at the EROS Data Center, U.S. Geological Survey (USGS) in Sioux Falls, South Dakota as digital data sets that have areal coverage corresponding to the USGS 7 1/2-minute topographic quadrangle maps in the Los Angeles Basin. The three data types are: Thematic Mapper Simulator (NS001 scanner) MSS imagery; Digital Land Mass Simulator (or Arc-Second) DTM; and the Environmental Systems Research Institute land use files for Los Angeles. The techniques for data integration included: 1) control point selection, based on common features located in image and map; 2) map base calculations, based on the coordinate locations of the features on image and map; 3) image reprojection using a bilinear resampling algorithm, based on calculated map projection parameters; 4) mosaicking of reprojected images, based on offsets of the images within the mosaic map base; and 5) quadrangle area extraction, based on the boundaries of USGS 7 1/2-minute quad maps. The common map base concept was used because a common map projection allowed all the data to be spatially registered when the unique projection differences between imaging sensors and cartographic files were minimized by reprojecting the data to approximate the UTM map projection.
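A hedged sketch of steps 1 through 3, with a first-order (affine) map standing in for the full UTM projection calculations: solve the map from control-point pairs by least squares, then resample the image onto the map grid with bilinear interpolation. All names are illustrative.

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares affine map from map coordinates to image pixels."""
    A = np.column_stack([map_xy, np.ones(len(map_xy))])
    coeffs, *_ = np.linalg.lstsq(A, img_xy, rcond=None)   # shape (3, 2)
    return coeffs

def bilinear(img, x, y):
    """Bilinear resampling at fractional pixel coordinates."""
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def reproject(img, coeffs, map_grid_xy):
    """Sample the image at the pixel locations of each map-grid point."""
    pix = np.column_stack([map_grid_xy, np.ones(len(map_grid_xy))]) @ coeffs
    return bilinear(img, pix[:, 0], pix[:, 1])
```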
In this paper a method of detecting point sources using phase-only images is described. The method is adaptive, in the sense that it can be used to reduce the effects of clutter in situations for which the clutter statistics are unknown. Data to which this method may be applied include sampled time signals from a single detector, imaging systems composed of discrete elements, and successive frames of such images. An extension of the basic technique to multiple frames of data is discussed which allows detection of moving point sources. The method has the advantages that multi-dimensional signals can be filtered in several different formats, providing a choice of implementations, and that its noise-limiting behavior allows the use of a predetermined threshold, resulting in a known probability of false alarm.
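A minimal sketch of a phase-only image and a fixed threshold applied to it; the threshold constant is an assumption. Discarding the Fourier magnitude whitens the data, so point-like sources survive while smooth clutter is suppressed.

```python
import numpy as np

def phase_only_detect(frame, threshold=4.0):
    F = np.fft.fft2(frame)
    p = np.real(np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)))
    # with a whitened, noise-limited output, a predetermined threshold in
    # units of the output standard deviation fixes the false-alarm rate
    return np.argwhere(p > threshold * p.std())
```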
The accuracy of image registration methods depends on correlation peak sharpness, which in turn depends on scene content. By characterizing joint scene content in terms of the joint probability density function, a new registration metric, the thresholded difference (TD) method, is defined. It produces a sharper correlation peak than either the direct cross-correlation or mean absolute difference methods. Analytical comparisons and simulations are presented which show the TD method to exhibit less dependence on scene content than other pixel-by-pixel registration metrics. The method is easily normalized by a simple re-scaling of both images to the same range of gray levels. The thresholded difference registration metric is evaluated in terms of registering images which are incompatible due to additive noise, different spectral bands, temporal variations and scale differences. Simulation results show the TD method to be as good as or better than other pixel-by-pixel correlation methods. An optimum threshold of half the image difference standard deviation at registration is indicated by simulation results. The TD method offers promise for enhanced registration accuracy for incompatible images.
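A hedged sketch of one plausible reading of the TD metric: rescale both images to a common gray-level range, then at each candidate shift count the fraction of pixels whose absolute difference exceeds a threshold of half the difference standard deviation (the paper specifies that value at registration; applying it per shift is a simplification here). The search range is an assumption.

```python
import numpy as np

def rescale(img):
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def td_register(ref, src, max_shift=8):
    ref, src = rescale(ref), rescale(src)
    h, w = ref.shape
    scores = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = ref[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = src[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            d = a - b
            t = 0.5 * d.std()                  # threshold rule from the paper
            scores[(dy, dx)] = np.mean(np.abs(d) > t)
    return min(scores, key=scores.get)         # shift with the best match
```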
Two techniques for radar image registration and rectification are presented. In the registration method, a general 2-D polynomial transform is defined to accomplish the geometric mapping from one image into the other. The degree and coefficients of the polynomial are obtained using a tiepoint data set found a priori. In the second part of the paper, a rectification procedure is developed that models the distortion present in the radar image in terms of the radar sensor's platform parameters and the topographic variations of the imaged scene. This model, the ephemeris data and the digital topographic data are then used in rectifying the radar image. The two techniques are then used in registering and rectifying two examples of radar imagery. Each method is discussed as to its benefits, shortcomings and registration accuracy.
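A minimal sketch of the registration step, assuming a second-degree polynomial for illustration: fit the 2-D mapping coefficients to the tiepoint pairs by least squares, then apply the map to arbitrary points.

```python
import numpy as np

def poly_terms(x, y):
    """Second-degree 2-D polynomial basis (degree is an assumption)."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_polynomial_map(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 2) tiepoint coordinates, N >= 6."""
    A = poly_terms(src_pts[:, 0], src_pts[:, 1])
    cx, *_ = np.linalg.lstsq(A, dst_pts[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst_pts[:, 1], rcond=None)
    return cx, cy

def apply_map(cx, cy, pts):
    A = poly_terms(pts[:, 0], pts[:, 1])
    return np.column_stack([A @ cx, A @ cy])
```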
The passive acoustic ambiguity function is a measure of the cross-spectrum in a Doppler-shift and time-delay space that arises when two or more passive receivers are used to monitor a moving acoustic source. Detection of a signal source in the presence of noise has been treated in the past from a communications-theory point of view, with considerable effort devoted to establishing a threshold to which the maximum value of the function is compared. That approach disregards ambiguity function topography information which in practice is manually used to interpret source characteristics and source kinematics. Because of the two-dimensional representation of the ambiguity function, digital image processing techniques can be easily applied for the purposes of topography enhancement and characterization. This work presents an overview of techniques previously reported as well as more current research being conducted to improve detection performance and automate topography characterization.
This paper describes an efficient array-processor implementation of an adaptive histogram equalization algorithm for digital image enhancement. The algorithm is based on a sliding window approach, and computes local histograms and grey level mappings for generating uniform (equalized) histograms for each pixel location. Equivalently, this method can be interpreted as generating local maximum entropy representations of the original image data. For sample digital imagery, it is shown that on the average, a 62% increase in local entropy can be obtained. In addition, the effects of adjusting key parameters (such as local brightness, gain, etc.) upon processed imagery are discussed. The technique has been applied to the analysis of high quality digital imagery and found to be particularly effective for accentuating subtle texture and detail in the data.
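A minimal (unoptimized) sketch of the sliding-window equalization: each output pixel becomes the local cumulative histogram evaluated at its own value, i.e. its rank within the window. The window size is an assumption, and the array-processor blocking is omitted.

```python
import numpy as np

def adaptive_hist_eq(img, win=33):
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = p[y:y + win, x:x + win]
            # local CDF at the center pixel -> equalized value in [0, 1]
            out[y, x] = np.mean(window <= img[y, x])
    return out
```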
This paper presents the investigation and implementation of a zonal filtering algorithm to improve the performance of a blob target detection system. The zonal filtering method enhances cold and hot regions, and those cold and hot regions in a FLIR image are the most likely potential target areas. The filter can also be used to compress the wide global scene dynamic range while preserving local area contrast. Local area contrast is a good feature for discriminating target areas from background areas. Experimental results of the zonal filter are shown.
In this paper, we show that a one-dimensional or multi-dimensional sequence is uniquely specified under mild restrictions by its Fourier transform amplitude (magnitude and one bit of phase information). In addition, we develop a numerical algorithm to reconstruct a one-dimensional or multi-dimensional sequence from its Fourier transform amplitude. Reconstruction examples obtained using this algorithm are also provided.
The Gerchberg algorithm has been successfully applied to signal enhancement, reconstruction and extrapolation problems where only partial information is available in the space (time) and frequency domains. In this paper, the Gerchberg algorithm is applied to the iterative interpolation of two-dimensional (2-D) surfaces from irregularly spaced data points. Specific applications presented are: the generation of hydrographic surfaces from bathymetric data obtained from the hydrographic airborne laser sounder (HALS) and the generalization of wind-flows from cloud imagery obtained from the Geostationary Operational Environmental Satellite (GOES). Experimental results obtained using a VAX 11/780 and FPS 120-B array processor system are presented.
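A minimal sketch of a Gerchberg-style iteration for interpolation from irregular samples on a regular grid: alternately re-impose the known samples in the space domain and a band limit in the frequency domain. The band-limit radius and iteration count are assumptions.

```python
import numpy as np

def gerchberg_interp(shape, sample_idx, sample_val, radius=0.15, iters=200):
    """sample_idx: tuple (rows, cols) of known sample positions."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lowpass = (fx ** 2 + fy ** 2) <= radius ** 2
    surf = np.zeros(shape)
    for _ in range(iters):
        surf[sample_idx] = sample_val            # data-consistency step
        F = np.fft.fft2(surf) * lowpass          # band-limiting step
        surf = np.real(np.fft.ifft2(F))
    surf[sample_idx] = sample_val
    return surf
```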
The problems associated with the interpretation of structure preserving low-dose electron micrographs have been solved for crystalline biological specimens by using specialized image processing routines. Fourier domain techniques are used to extract the periodic signal component. Hardly visible in the noisy unprocessed micrograph, the periodic structure becomes manifest by the discrete reflexions in the power spectrum. The SNR of the images is improved by averaging over a large number of unit cells or identical "motifs". A local average is performed as a first step in analogy to optical filtering. A stricter averaging is achieved by use of crystallographic methods where one phase and amplitude pair is established for each reflexion in the Fourier plane. Particularly reliable amplitudes and phases, however, are obtained by using the additional redundancy due to the intrinsic symmetry properties characteristic for all biological membrane components (mainly P3, P4, P6). The different filtering methods and symmetrisation procedures have been applied to noisy, low contrast electron micrographs of crystalline sheets of various biological specimens. The increase in interpretability and reliability of structure determination is demonstrated at the different stages of processing.
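A hedged sketch of the first processing step, the Fourier-domain local average: mask the Fourier plane around the strongest discrete reflexions and transform back, which averages over unit cells. The peak count and mask radius are illustrative assumptions.

```python
import numpy as np

def lattice_filter(micrograph, n_peaks=40, mask_radius=2):
    F = np.fft.fftshift(np.fft.fft2(micrograph))
    power = np.abs(F) ** 2
    mask = np.zeros(power.shape, dtype=bool)
    strongest = np.argsort(power, axis=None)[-n_peaks:]   # discrete reflexions
    for idx in strongest:
        cy, cx = np.unravel_index(idx, power.shape)
        mask[max(0, cy - mask_radius):cy + mask_radius + 1,
             max(0, cx - mask_radius):cx + mask_radius + 1] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```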
Colour images progressively blurred by defocusing and by chromatism of optical systems are restored by digital techniques. The Fourier transforms for the green, blue and red image components have been degraded by the respective three components of the polychromatic transfer function computed from the wave aberration. The effects on the image for several amounts of aberration as well as the degree of restoration obtained by different Wiener filtering for each colour component are shown.
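A minimal sketch of per-component Wiener restoration, with a constant noise-to-signal ratio standing in for the full power spectra; function names are illustrative.

```python
import numpy as np

def wiener_restore(channels, otfs, nsr=1e-2):
    """channels: list of 2-D arrays (R, G, B); otfs: the matching components
    of the polychromatic transfer function on the same frequency grid."""
    restored = []
    for img, H in zip(channels, otfs):
        G = np.fft.fft2(img)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter per channel
        restored.append(np.real(np.fft.ifft2(W * G)))
    return restored
```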
Extracorporeal circulation allows direct access inside the chest; it may be used to carry out physiological research. The thermo-chemical protection of the myocardium during heart surgery, called cardioplegia, is one of the latest outstanding techniques in patient safety. Thermocardiography monitoring during the infusion of the cardioplegic solution allows continuous assessment of rapid temperature distribution changes and shows exactly the extent of myocardium involved. Using a particular pseudocolor digital image enhancement, it is possible to emphasize coronary flow in the involved areas and to model the thermo-fluid-dynamic behavior of the inspected heart.
This paper describes a technique for matching two images containing natural terrain and tactical objects using syntactic pattern recognition. A preprocessor analyzes each image to identify potential areas of interest. Points of interest in an image are classified and a graph possessing properties of invariance is created based on these points. Classification derived grammar strings are generated for each classified graph structure. A local match analysis is performed and the best global match is constructed. A probability-of-match metric is computed in order to evaluate the global match. Examples demonstrating these steps are provided and actual FLIR image results are shown.
Conventional matched filtering is sensitive to rotation of the object to be detected. If the target object is rotated, the signal to noise ratio of the output correlation is reduced with the result that the object may not be detected. Efforts have been made towards developing matched filters with signal to noise ratios that are space invariant and rotation invariant with respect to the target. Our latest approach has been to extract from the target some circular harmonic component, and to use a filter matched to this component. A complete digital simulation using simple two-dimensional objects has resulted in the successful detection of rotated target objects in a field of different objects. The same technique has also been implemented in an optical processor using computer generated holograms. To discriminate between objects that are almost similar, a pseudo-object is used to generate the matched filter.
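A hedged sketch of extracting a circular harmonic component digitally: resample the target about its proper center onto a polar grid and take the m-th Fourier coefficient along the angular coordinate. The grid sizes and interpolation order are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def circular_harmonic(img, center, m, n_r=64, n_theta=256):
    """Returns the m-th circular harmonic of img as a function of radius."""
    cy, cx = center
    r = np.linspace(0, min(img.shape) / 2 - 1, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    coords = np.stack([cy + R * np.sin(T), cx + R * np.cos(T)])
    polar = map_coordinates(img, coords, order=1)   # (n_r, n_theta) samples
    # m-th angular Fourier coefficient at each radius
    return (polar * np.exp(-1j * m * theta)).mean(axis=1)
```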
The weakest link in the inspection process is the subjective interpretation of data by inspectors. To overcome this troublesome fact, computer-based analysis systems have been developed. In the field of nondestructive evaluation (NDE) there is a large class of inspections that can benefit from computer analysis. X-ray images (both film and fluoroscopic) and acoustic images lend themselves to automatic analysis, as do the one-dimensional signals associated with ultrasonic, eddy current and acoustic emission testing. Computer analysis can enhance and evaluate subtle details. Flaws can be located and measured, and acceptance decisions made by computer in a consistent and objective manner. This paper describes the interactive, computer-based analysis of real-time x-ray images and acoustic images of graphite/epoxy adhesively bonded structures.
In many industrial applications, inspection and quality control are required for two-dimensional patterns on flat surfaces. This paper describes a computer-controlled pattern inspection system capable of resolutions and positional accuracies in the 1 to 2 mil range. For sample sizes up to 48 inches in length and up to 4 inches in width, total inspection is accomplished within 50 seconds. Inspected patterns are compared, in real time, with a computer-stored master pattern. The system is designed to store up to 60 patterns in disk memory and can swap patterns within 10 seconds. During the inspection, the data on the measured pattern is continuously checked against preselected tolerances. Out-of-tolerance conditions are immediately flagged, permitting the inspection to be terminated and the sample to be rejected. Pattern sensing is performed with a 2048-element CCD silicon detector array. The CCD signals are amplified and thresholded in a high-speed AGC amplifier. The analog signals are then digitized in a 6-bit A/D flash converter. Thereafter, signal processing is performed digitally. Illumination is provided by a 300-watt tungsten-halogen linear filament lamp, used in conjunction with a cylindrical lens condenser. Both the illumination and the projected image of the CCD array span a 4-inch field. The illuminator and sensor are mounted on a carriage that rides on a precision air bearing table. A velocity-controlled DC servo, accurate to 0.01%, is used to drive the carriage along the long axis of the pattern to provide up to 48 inches of coverage at speeds of up to 1 inch per second. A linear glass scale optical encoder reads carriage position to an accuracy of better than 1 mil.
We present the IRAC II (Interactive Recognition of Arabic Characters) and IRAC III systems, which recognize isolated Arabic words written from right to left on a graphic tablet connected to a minicomputer (MITRA 15125). In the IRAC II version, words are recognized following their segmentation into characters. The IRAC III version uses global recognition with no segmentation. It calculates a vector defining the main parameters for each stroke making up the word and uses this information to recognize the word by dictionary consultation. It resolves any ambiguities with the help of secondary parameters calculated for each stroke.
Several digital image processing problem types are dealt with, such as sorting, reference comparison, classification, remote sensing, monitoring and quickest adaptive detection of image "disorders". The principal fundamental problems to be solved are: 1) selection of informative features, and 2) extraction of as much useful information as possible from the data, together with its effective use. A solution approach is suggested for these problems, based on the concept of invariant "coupling" of unknown parameters by some data functions, and integration or summation over these invariant "couplings". On the one hand, it appears possible to derive a synthesis procedure for effective statistical decision rules which are not strictly dominated by any other decision rules with respect to specified loss functions; on the other hand, the decision rules per se are readily implementable on digital computers. Some new results have been obtained, showing that the highest effectiveness is achieved when handling small data volumes. Several examples are presented.
A novel method for analysing colour space is described with reference to compression, data ordering and display. This technique uses a precalculated curve, first described by Giuseppe Peano, to traverse the space, allowing sensible compression in the colour domain. Techniques already available for single-channel data processing can then be applied because of the ordering properties peculiar to this family of curves. The immediate use, for which the technique was developed, enables the presentation of high-colour-fidelity images on a frame store able to display only 256 colours simultaneously, provided the frame store has a flexible look-up table (LUT). Other applications for false colour display and compression are outlined. The technique reduces the entropy in a multi-channel picture and is suitable for preprocessing before other compression techniques.
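A hedged sketch of the ordering idea, using a Morton (bit-interleaved) curve as a simpler stand-in for the Peano curve named in the abstract: map each RGB triple to a position along a space-filling traversal of colour space, then quantize those positions into 256 LUT slots.

```python
import numpy as np

def morton_index(r, g, b):
    """Interleave the 8 bits of each channel into one 24-bit curve key."""
    key = np.zeros(r.shape, dtype=np.uint32)
    for bit in range(8):
        key |= ((r >> bit) & 1).astype(np.uint32) << (3 * bit + 2)
        key |= ((g >> bit) & 1).astype(np.uint32) << (3 * bit + 1)
        key |= ((b >> bit) & 1).astype(np.uint32) << (3 * bit)
    return key

def to_256_colours(rgb_img):
    """Quantize an 8-bit RGB image to 256 indices ordered along the curve."""
    r, g, b = (rgb_img[..., i].astype(np.uint32) for i in range(3))
    key = morton_index(r, g, b)
    uniq, inv = np.unique(key.ravel(), return_inverse=True)
    slots = (inv * 256) // len(uniq)           # curve position -> LUT slot
    return slots.reshape(key.shape).astype(np.uint8)
```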
We present some of the highlights of the recent advancements in the uniqueness and estimation of 3-D motion parameters and object surface structure from perspective views, a key issue in the analysis of 3-D time-varying scenes. For estimating planar patch motion from two views, there are two solutions in general, unless the 3 x 3 matrix containing the canonical coordinates of a Lie group has multiple singular values. Closed-form solutions are derived analytically. The solutions would be unique if three views are given. For curved surface motion, two theorems, one lemma and a collection of corollaries on the conditions of uniqueness of solution are given. Closed-form solutions for the nonlinear motion equations are derived. Results of simulation and real experiments are discussed.
In this article, we discuss the performance of sequential quantizers and show that for typical correlated signals a 12-bit uniform quantizer can be replaced by a 6-bit sequential quantizer.
The purpose of this paper is to illustrate the usefulness of two-dimensional (2-D) Gaussian Markov random field models for synthesis and coding of textures. The MRF models used are noncausal; the mean of observation y(s) at position s is written as a linear weighted sum of observations surrounding s in all directions. The method of least squares is used to obtain estimates of the model parameters. The model is then used with appropriate boundary conditions to regenerate the original image. Results obtained indicate that this method could be used to code textures at low bit rates.
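A minimal sketch of the least-squares step for a noncausal model with a first-order symmetric neighbourhood (an assumed neighbour set): regress each interior pixel on the sums of its opposite-neighbour pairs.

```python
import numpy as np

def gmrf_ls_estimate(y):
    """Least-squares weights and residual power for a first-order
    noncausal Gaussian MRF fitted to texture y."""
    c = y[1:-1, 1:-1].ravel()                        # interior pixels
    q = np.column_stack([
        (y[1:-1, :-2] + y[1:-1, 2:]).ravel(),        # west + east neighbours
        (y[:-2, 1:-1] + y[2:, 1:-1]).ravel(),        # north + south neighbours
    ])
    theta, *_ = np.linalg.lstsq(q, c, rcond=None)
    resid = c - q @ theta
    return theta, resid.var()

# Synthesis would then regenerate a texture from theta with matching
# boundary conditions, e.g. by Gibbs sampling or spectral methods.
```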
The paper presents numerical techniques of transform image coding for image bandwidth compression. Unitary transformations known as the Hadamard, Haar and Hadamard-Haar transformations are defined and developed. The paper describes the construction of the transformation matrices and presents algorithms for computing the transformations and their inverses. The considered transformations are applied to image processing, and their utility and effectiveness are compared with other discrete transforms on the basis of some standard performance criteria.
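A minimal sketch of two of the matrix constructions and the separable 2-D transform; the normalization convention is an assumption.

```python
import numpy as np

def hadamard(n):
    """Sylvester recursion; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def haar(n):
    """Standard (unnormalized-row) Haar recursion; n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        k = H.shape[0]
        H = np.vstack([np.kron(H, [1, 1]),
                       np.kron(np.eye(k), [1, -1]) * np.sqrt(k)])
    return H

def transform_2d(img, M):
    """Separable 2-D transform: coefficients = M X M^T / N."""
    return M @ img @ M.T / M.shape[0]
```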
A new approach to composite predictive coding for NTSC color TV signals has been proposed by Yamamoto et al. This technique involves integrating a comb filtering operation into an arbitrary linear predictor function tailored for monochrome signals. The prediction algorithms are valid as long as the sampling rate is a half-integer multiple of the horizontal scanning frequency (15.7 kHz). Four intrafield predictors based on comb filtering integration (CFI) are applied to three test pictures sampled at 3fsc, where fsc = 3.58 MHz is the color subcarrier frequency. With phase alternating line encoding (PALE) the data is aligned vertically. The predictors and the position of pels are shown in Table 1 and Figure 1, respectively. The variances of the prediction error were computed without the quantizer in the DPCM loop. Comparison of these variances shows that the CP1 is the best among the four predictors. Max quantizers with 5 bit, 4 bit and 3/6 bit (13 levels normal mode, 7 levels forced mode) based on the statistics of the prediction error of the CP1 are developed. Based on the occurrence frequency of the long word code (6 bit code) for the three test pictures, the buffer is designed for constant-bit-rate transmission. With this processing, a single color TV channel can be transmitted at 32 Mbps, which includes audio, error correcting code and synchronization bits.
Performance measures for statistical segmentation have been developed for a space-and-time critical Bayesian statistical tracker. They are intended to become an integral part of a knowledge-based tracking algorithm, which has been developed by RCA. The performance measures are serving to quantify the usefulness of the processed input, to assist in the identification of each tracking state and give its reliability, and to predict impending changes of state. They have been tested using stochastically generated target-background frames. Performance measure results have correlated well with the parameters which characterize the difference in the target and background distributions. A host of possible performance measures are discussed in relation to their strengths and weaknesses. Experimental results for the measures currently being employed by RCA are given, and areas for future research are indicated.
This paper describes an image segmentation technique that has detection and classification applications in autonomous image analysis systems and compares it with current edge and region-based segmentation techniques. The technique, referred to as directed edge tracing, uses both edge magnitude and direction information to reduce segmentation problems commonly associated with segmenters based on edge thresholding. The edge tracing is carried out by examining pixels in a neighborhood of high probability boundary pixels, making the method locally adaptive to contrast changes along a boundary. No a priori knowledge of the number of segments expected or of the level of contrast between segments is required.
Image processing algorithms that employ shape statistics (i.e., moment invariants, perimeter²/area ratio, etc.) to characterize object classes can be affected by "segmenter wobble," a phenomenon contributing to the uncertainty in object classification. This paper defines the concept of segmenter wobble and gives examples of this effect in the segmentation of infrared (IR) images. Shape statistics are computed for segmentations by a candidate segmentation algorithm, and their sensitivity to segmenter wobble is shown.
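A minimal sketch of two such shape statistics on a binary segmentation mask, compactness (perimeter²/area) and the first Hu moment invariant; the 4-connected perimeter estimate is a simplifying assumption. Re-running these over perturbed masks exposes segmenter wobble.

```python
import numpy as np

def shape_stats(mask):
    """mask: boolean array. Returns (compactness, first Hu invariant)."""
    area = mask.sum()
    p = np.pad(mask, 1)
    # boundary = on-pixels with at least one off 4-neighbour
    boundary = mask & ~(p[:-2, 1:-1] & p[2:, 1:-1] &
                        p[1:-1, :-2] & p[1:-1, 2:])
    perimeter = boundary.sum()
    ys, xs = np.nonzero(mask)
    yc, xc = ys.mean(), xs.mean()
    mu20 = ((ys - yc) ** 2).sum()
    mu02 = ((xs - xc) ** 2).sum()
    hu1 = (mu20 + mu02) / area ** 2            # normalized first Hu moment
    return perimeter ** 2 / area, hu1
```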
The lower levels of many image processing and pattern recognition systems require a high degree of machine throughput. This is especially true in real-time (or pseudo-real time) systems where all the pixels in an image frame must be processed as quickly as possible. This paper proposes a method directed primarily toward extracting low level features quickly and efficiently. This method is based on projecting the image into characteristic subspaces using appropriate orthogonal basis functions. These basis operators are designed with an emphasis on modularity; thus, they will be useful in hierarchical processing and, significantly, they will fit well into a Very Large Scale Integrated (VLSI) circuit design. Chips consisting of many of these modules, under executive processor control, could now be used in imaging systems that require fast feature detection using compact and efficient hardware.
A system for the automatic evaluation of interference patterns has been developed. After digitizing the interferograms from classical and holographic interferometers with a television digitizer and performing different picture enhancement operations, the fringe loci are extracted by use of a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections, which might appear if there was insufficient contrast in the interferograms. The reconstruction of the object function from the numbered fringe field is achieved by a local polynomial least-squares approximation. Applications are given, demonstrating the evaluation of interferograms of supersonic flow fields and the analysis of holographic interferograms of car tyres.
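A minimal sketch of the final step, using a single global quadratic in place of the paper's local polynomial approximation: fit fringe number as a polynomial in position by least squares, then evaluate the fit as the reconstructed object function.

```python
import numpy as np

def fit_fringe_surface(xs, ys, orders):
    """xs, ys: fringe-pixel coordinates; orders: assigned fringe numbers.
    Returns quadratic coefficients of the object function."""
    A = np.column_stack([np.ones_like(xs), xs, ys,
                         xs * ys, xs ** 2, ys ** 2])
    coeffs, *_ = np.linalg.lstsq(A, orders, rcond=None)
    return coeffs

def evaluate(coeffs, X, Y):
    """Evaluate the fitted object function on a grid."""
    return (coeffs[0] + coeffs[1] * X + coeffs[2] * Y +
            coeffs[3] * X * Y + coeffs[4] * X ** 2 + coeffs[5] * Y ** 2)
```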