Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142188
The objective of this paper is to describe three current trends in the development of image processing technology as applied to problems in the earth sciences. During the 1990s there will be significant growth in both research and applications in the earth sciences, driven by concerns about the global environment, resources, and natural hazards. Motivated by these issues and stimulated by new developments in information technology, many significant advances in image processing will emerge. Three trends are identified here as being particularly important: integration of image processing into 'visual computing', development of volume imaging technology and applications, and creation of highly functional visual information processing systems. These trends are illustrated by numerous applications in atmospheric, oceanographic, land, and solid earth geophysics.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142199
The analysis of multi-format, multi-parameter, and multi-temporal data sets can be very difficult. A system to effectively handle data from different formats and sources is being developed and tested as an extension to the VICAR/IBIS Geographical Information System (GIS) developed at the Jet Propulsion Laboratory. In its original implementation, all data in the VICAR/IBIS GIS are referenced to a single georeference plane. Average or typical values for a parameter defined within a polygonal region are stored in unlabeled columns of a tabular file. We have replaced the tabular file format with an 'info' file format. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to 'actual' values, graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input). We have successfully tested the GIS using multi-temporal, multi-sensor data sets from Flevoland in The Netherlands and Bonanza Creek, Alaska, as part of a research study of applications of the GIS to remote sensing data under NASA's Applied Information System Program.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142205
We describe an integrated system developed to update maps by extracting and identifying roads and other linear features in imagery. Our approach involves registering the map to be updated to the image on a local basis using an affine transformation to eliminate costly preprocessing. Image features are converted to the map coordinate system by inverse transformation. Three strategies for linear structure identification and a method for classifying new roads are discussed.
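The local affine registration step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it fits the six affine parameters from three control-point pairs by Cramer's rule, and inverts the transform so that image features can be carried back into map coordinates.

```python
def det3(m):
    # determinant of a 3x3 matrix given as nested lists
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, r):
    # Cramer's rule for a 3x3 linear system m @ x = r
    d = det3(m)
    out = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = r[i]
        out.append(det3(mj) / d)
    return out

def fit_affine(src, dst):
    # src, dst: three (x, y) control-point pairs (e.g. road intersections)
    m = [[x, y, 1.0] for x, y in src]
    a, b, c = solve3(m, [u for u, _ in dst])
    d, e, f = solve3(m, [v for _, v in dst])
    return a, b, c, d, e, f

def apply_affine(p, x, y):
    a, b, c, d, e, f = p
    return a * x + b * y + c, d * x + e * y + f

def invert_affine(p):
    # inverse transform, used to carry image features back to map space
    a, b, c, d, e, f = p
    det = a * e - b * d
    ai, bi, di, ei = e / det, -b / det, -d / det, a / det
    return ai, bi, -(ai * c + bi * f), di, ei, -(di * c + ei * f)
```

Fitting the transform locally from a handful of matched features is what lets the approach skip a costly global preprocessing step.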
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142206
This paper presents a prototype system which uses constraints (mathematical feature mapping functions) to index and retrieve images. Information is automatically extracted from images such as the shape, texture, and position of objects within the image. Once extracted, this feature information is stored in an associatively accessible database. The database allows users to locate images containing objects of interest, or locate objects of interest within images. The system presented here also provides a method for automatic indexing of the database through the learning and application of object types or classes. Query of the database is accomplished by way of: (1) sketched example, (2) selected prototype object from an image or atlas, (3) graphically specified single or multidimensional feature ranges, or (4) class type. The use of pre-derived features and mapping functions allows this method to be efficiently implemented in real-time systems.
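A feature-range query of the kind described in (3) can be illustrated with a toy associative store. The record fields below (`area`, `elongation`) are hypothetical stand-ins for the shape and texture features the paper extracts:

```python
def query_by_ranges(db, **ranges):
    """Return records whose pre-derived features all fall inside the
    given inclusive (low, high) ranges, e.g. area=(100, 200)."""
    return [rec for rec in db
            if all(lo <= rec[f] <= hi for f, (lo, hi) in ranges.items())]

# hypothetical feature records extracted from two images
db = [
    {"image": "scene_a", "area": 120.0, "elongation": 0.9},
    {"image": "scene_b", "area": 40.0, "elongation": 0.2},
]
```

Because the features are derived once at indexing time, each query reduces to simple range comparisons, which is what makes real-time retrieval feasible.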
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142207
A geographical information system for storing and retrieving imagery and text data is described. The system provides a flexible means for attributing geographical features using full-text indexing and retrieval techniques. It integrates content-based and spatial data access methods into a single environment. The system is easy to use and involves no programming on the part of the user. All data is stored in standard Macintosh PICT and ASCII file formats. The system represents a low-cost means of storing and accessing imagery and text data for remote sensing and GIS applications on a standard color Macintosh with a hard drive.
Todd K. Rodgers, Jeffrey A. Cochand, Joseph A. Sivak
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142670
The Global Reference Analysis and Visualization Environment (GRAVE) is a research prototype multimedia system that manages a diverse variety of data types and presents them to the user in a format that is geographically referenced to the surface of a globe. When the user interacts with the globe, the system automatically manages 'level-of-detail' issues to support these user actions (allowing flexible functionality without sacrificing speed or information content). To manage the complexity of the presentation of the (visual) information to the user, data instantiations may be represented in an iconified format. When the icons are picked, or selected, the data 'reveal' themselves in their 'native' format. Object-oriented programming and data type constructs were employed, allowing a single 'look and feel' to be presented to the user for the different media types. GRAVE currently supports the following data types: imagery (from various sources of differing resolution, coverage, and projection); elevation data (from DMA and USGS); physical simulation results (atmospheric, geological, hydrologic); video acquisitions; vector data (geographical, political boundaries); and textual reports. GRAVE was developed in the Application Visualization System (AVS) Visual Programming Environment (VPE); as such it is easily modifiable and reconfigurable, supporting the integration of new processing techniques and approaches as they become available or are developed.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142208
A variety of remotely sensed digital imagery data sources now exists that enables the computer graphics synthesis of convincing whole-Earth images similar to those recorded by orbiting astronauts using conventional photographic techniques. Within data resolution limitations, such data sets can be rendered (using three-dimensional graphics technologies) to produce views of our planet from any vantage point. By utilizing time series of collected data in conjunction with synthetic Lambertian lighting models, such views can be animated in time to produce dynamic visualizations of the Earth and its weather systems. This paper describes an effort to produce an animation for commercial use in the broadcast industry. Intended for entertainment purposes, the animation was designed to show the dramatic, fluid nature of the Earth as it might appear from space. GOES infrared imagery was collected over the western hemisphere for 15 days at half-hour intervals. This imagery was processed to remove sensor artifacts and drop-outs and to create synthetic imagery which appears to the observer to be natural visible-wavelength imagery. Cloud-free imagery of the entire planet, resampled to 4 km resolution and based on mosaicked AVHRR polar-orbiting imagery, was used as a 'base map' to reflect surface features. Graphics techniques to simulate Lambertian lighting of the Earth's surface were used to impart the effects of changing solar illumination. All of the graphics elements were then, on a frame-by-frame basis, digitally composited together with varying cloud transparency to produce the final rendered imagery, which in turn was recorded onto video tape.
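The synthetic Lambertian lighting step amounts to taking the dot product of the local surface normal with the sun direction and clamping at zero. A minimal sketch for a spherical Earth (the paper's compositing pipeline is not reproduced here):

```python
import math

def unit_vector(lat_deg, lon_deg):
    # unit surface normal on a sphere at the given latitude/longitude
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def lambertian_shade(surface_lat, surface_lon, sun_lat, sun_lon):
    """Lambertian shading factor in [0, 1]: cosine of the angle between
    the surface normal and the subsolar direction, clamped at zero."""
    n = unit_vector(surface_lat, surface_lon)
    s = unit_vector(sun_lat, sun_lon)
    return max(0.0, sum(a * b for a, b in zip(n, s)))
```

Animating the subsolar point over the frame sequence is what imparts the effect of changing solar illumination.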
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142209
To support the development of electro-optical sensor systems under the Smart Weapons Operability Enhancement (SWOE) Program, TASC has developed a four-dimensional (3 spatial and 1 temporal) cloud model for use in radiometric computations and scene simulation. The cloud scene simulation model employs a multi-step process to generate the density fields, beginning with the rescale-and-add fractional Brownian motion algorithm to simulate the horizontal distribution of cloud elements within the user-defined cloud domain. Knowledge of the structures of stratiform and cirriform cloud types is used to specify the vertical extent of individual clouds. Internal variability is then generated within each cloud using a three-dimensional version of the rescale-and-add model. A physics-based scheme that models clouds as the sum of a large number of individual Lagrangian 'parcels' is used to simulate cumulus cloud growth and convection based on environmental conditions. In this paper we present a description of the cloud scene simulation modeling process. In particular, we emphasize the cumulus model, which marries fractal field generation and convection dynamics to yield a computationally efficient method for generating cloud fields that are both physically derived and visually realistic.
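The rescale-and-add algorithm sums band-limited noise octaves of geometrically decreasing amplitude. A one-dimensional sketch (the model itself works in two and three dimensions; the parameter names and defaults here are illustrative, not TASC's):

```python
import random

def fbm_1d(n, octaves=4, h=0.8, lacunarity=2.0, seed=0):
    """Rescale-and-add fractional Brownian motion on a 1-D grid of n
    points: sum linearly interpolated random lattices, doubling the
    lattice frequency and shrinking the amplitude by 2**(-h) per octave."""
    rng = random.Random(seed)
    field = [0.0] * n
    freq, amp = 1, 1.0
    for _ in range(octaves):
        lattice = [rng.uniform(-1.0, 1.0) for _ in range(freq + 1)]
        for i in range(n):
            x = i / (n - 1) * freq          # position in lattice units
            j = min(int(x), freq - 1)
            t = x - j
            field[i] += amp * ((1 - t) * lattice[j] + t * lattice[j + 1])
        freq = int(freq * lacunarity)       # finer detail each octave
        amp *= 2.0 ** (-h)                  # smaller amplitude each octave
    return field
```

The Hurst-like exponent h controls roughness: smaller h keeps more energy in the fine octaves and yields a more ragged field.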
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142189
Near real-time monitoring of the world's oceans is enabled by the NOAA polar orbiting satellites. Because the ocean, like the atmosphere, is dynamic, its analysis benefits from high temporal resolution data. Remotely sensed data, combined with other information, can be evaluated temporally and spatially. To address one related problem, a project was undertaken to develop a system for monitoring the potential distribution of a few pelagic fish species. Techniques used for computing sea surface temperature (SST) from Advanced Very High Resolution Radiometer (AVHRR) digital data, and the logic for predicting the location of the species based on SST, bathymetry, and species-specific environmental information, are presented. Selected functionality described within this paper may be extended to other application areas.
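The prediction logic can be illustrated as a simple rule over co-registered SST and bathymetry grids. The temperature and depth ranges below are hypothetical, not the paper's species parameters:

```python
def habitat_mask(sst, depth, t_range, d_range):
    """1 where both sea-surface temperature (deg C) and depth (m) fall
    inside the species' preferred ranges, 0 elsewhere.  sst and depth
    are co-registered 2-D grids of equal size."""
    (tmin, tmax), (dmin, dmax) = t_range, d_range
    return [[1 if tmin <= s <= tmax and dmin <= d <= dmax else 0
             for s, d in zip(srow, drow)]
            for srow, drow in zip(sst, depth)]
```

In practice further environmental layers (fronts, chlorophyll, season) would be AND-ed into the rule in the same way.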
Tommy L. Coleman, William H. Clerke, Wubishet Tadesse, Reginald S. Fletcher
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142190
Accurate assessment of forested wetlands is essential for forest managers in the development of management plans because these areas are considered unsuitable for timber production and therefore affect the allowable sale quantity of the forest. Three methods of quantifying wetland habitats using Thematic Mapper (TM) imagery were evaluated to determine the most effective method of assessing this forest resource. The methods of evaluating the TM imagery were the Kauth-Thomas transformation, principal component analysis (PCA), and a maximum likelihood supervised classification algorithm using TM bands 2, 3, 4, and 5. Summer and winter TM scenes were used to account for areas that are seasonal and may be dry for periods of the year. The results of this study revealed that the maximum likelihood supervised classification using TM bands 2, 3, 4, and 5 was the most effective method of quantifying wetland habitats. However, this method was the most time consuming and required the user to have good ancillary data and skills in site selection and assessment of the signatures used as input to the algorithm.
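A maximum likelihood classifier of this kind assigns each pixel to the class whose Gaussian model, estimated from training signatures, gives the highest log-likelihood. A sketch using diagonal covariances for brevity (operational classifiers use full covariance matrices over the selected TM bands):

```python
import math

def train(classes):
    """classes: {name: [feature vectors]} from analyst-selected training
    sites.  Returns per-class band means and variances."""
    stats = {}
    for name, samples in classes.items():
        nb = len(samples[0])
        means = [sum(s[j] for s in samples) / len(samples) for j in range(nb)]
        varis = [max(sum((s[j] - means[j]) ** 2 for s in samples) / len(samples),
                     1e-6)                      # guard against zero variance
                 for j in range(nb)]
        stats[name] = (means, varis)
    return stats

def classify(stats, pixel):
    # pick the class with the highest Gaussian log-likelihood
    def loglik(means, varis):
        return -0.5 * sum(math.log(2 * math.pi * v) + (p - m) ** 2 / v
                          for p, m, v in zip(pixel, means, varis))
    return max(stats, key=lambda c: loglik(*stats[c]))
```

The dependence on training statistics is exactly why the paper notes that this method demands good ancillary data and careful site selection.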
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142191
Environmental studies of the creosote sites on the Bow River, Calgary, Alberta, can greatly benefit from appropriate evolutionary spatial modeling of those sites to enable correlations with different surface imagery and borehole information. This evolutionary digital terrain modeling consists of integrating terrain models corresponding to different epochs into a sequence of models with the corresponding aerial photography draped over the reconstructed surface. Correlations with other available photography and discrete measurements are also planned to facilitate the surface environmental studies. The presentation describes the results from the first phase of this research project, which has concentrated on terrain modeling with appropriate visualization of the surface for environmental scientists and engineers.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142192
New techniques are described for detecting environmental anomalies and changes using multispectral imagery. Environmental anomalies are areas that do not exhibit normal signatures due to man-made activities and include phenomena such as effluent discharges, smoke plumes, stressed vegetation, and deforestation. A new region-based processing technique is described for detecting these phenomena using Landsat TM imagery. Another algorithm that can detect the appearance or disappearance of environmental phenomena is also described and an example illustrating its use in detecting urban changes using SPOT imagery is presented.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142193
Spatial information, analysis, and modeling systems that are used to study the environment require a rich set of operators and a well-designed set of interfaces to be effective. Without the correct operators or functionality, such systems fall short of being able to represent the complexity that is present in the world. Despite years of research and development in spatial computing, there are still technical voids related to languages, operators, interfaces, and processing capabilities for handling complex spatial relationships. This is especially true for problems that work with temporal data sets. The position of an object in the environment, the relationship of that object to other objects and the terrain, and its own inherent function at a point in time can be thought of as the spatial context associated with that object. This context is important whether we are interpreting a remotely sensed image, analyzing the data in a Geographic Information System (GIS) to resolve the location of a proposed facility, modeling a physical phenomenon, or attempting to model the behavior of an animal in its habitat. This paper discusses the form of operators that incorporate spatial context, approaches for their implementation, and illustrates how these operators help integrate remote sensing methods with GIS.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142194
A by-product of entity representation in GIS (or any image with spatial meaning) is the level at which we can abstract relationships between those objects. The power of resolution in an object (its semantic meaning) is directly related to its actual representation. This has an immediate impact on the complexity of rules necessary to identify that object and its relationship to others in its domain. This paper discusses the epistemological characteristics of spatial reasoning and its implications for the development of a generic geometry engine for spatial reasoning and understanding.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142195
Digital image classification is a computation-intensive task. In remote sensing image analysis, a large proportion of computing time is spent on image classification. Reducing the time required for classification can greatly improve the efficiency of image analysis. This is especially significant for real-time applications of remote sensing images. Parallel computing provides effective techniques for improving data processing efficiency. In this paper, three parallel classification algorithms for multispectral remote sensing images are described. The strategies for the parallel classification are discussed and experimental results are presented and analyzed.
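One common parallelization strategy for per-pixel classifiers is to split the image into horizontal strips and classify each strip in a separate worker. The abstract does not specify the paper's three strategies, so this is only an illustration, using a simple minimum-distance classifier (note that in CPython a process pool, not threads, would be needed for true CPU parallelism):

```python
from concurrent.futures import ThreadPoolExecutor

def classify_pixel(pixel, means):
    # minimum-distance classifier: nearest class mean in feature space
    return min(range(len(means)),
               key=lambda c: sum((p - m) ** 2 for p, m in zip(pixel, means[c])))

def classify_strip(strip, means):
    return [[classify_pixel(px, means) for px in row] for row in strip]

def parallel_classify(image, means, workers=4):
    """Split the image into horizontal strips, one task per strip, and
    reassemble the labeled strips in order."""
    n = max(1, len(image) // workers)
    strips = [image[i:i + n] for i in range(0, len(image), n)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda s: classify_strip(s, means), strips)
    out = []
    for p in parts:
        out.extend(p)
    return out
```

Strip decomposition works here because per-pixel classification has no inter-pixel dependencies, so the speedup is limited mainly by scheduling overhead.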
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142196
Neural networks can be used as a new type of classifier for multispectral remote sensing data. To achieve efficient and accurate classification, the selection of neural network structures and training parameters is crucial. This research explores suitable neural network models for practical remote sensing image classification. By using a set of techniques, including multispectral image data compression and training parameter selection, the complexity of the network training phase has been reduced by half and a classification accuracy above 90 percent has been obtained. A neural network using a back-propagation model for supervised remote sensing image classification is presented.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142197
A concept is presented for analyzing the texture of changes in multi-temporal imagery. In more traditional change detection approaches, spectral signatures or textures from two or more spatially-coincident image sets are compared. Spatial cooccurrence has been used by various researchers to compute texture measures. These measures, representing the two-dimensional x/y spatial variability in an image, are compared against two-dimensional textures in other images. This paper introduces the concept of computing image texture using spatial cooccurrence matrices by searching not just in x/y space but in the third dimension of time, or t space. An example problem is described in which changes in forest canopies are evaluated. A spectral mixture model for computing forest canopy closure from Landsat TM data is described. The canopy closure feature images from two spatially coincident, but time varying, image sets are evaluated using three-dimensional texture analysis. The technique lends itself to evaluation of systematic or localized forest changes, e.g. uniform thinning vs. localized damage.
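Extending cooccurrence from space into time amounts to counting gray-level pairs between temporally adjacent pixels rather than spatially adjacent ones. A minimal sketch of a t-direction cooccurrence matrix and a contrast measure derived from it (the paper's full 3-D formulation also retains the x/y offsets):

```python
from collections import Counter

def temporal_cooccurrence(stack):
    """Cooccurrence of quantized gray levels between each pixel and the
    same pixel one time step later.  stack: list of equal-sized 2-D
    images.  Returns the pair counts and a contrast measure."""
    cooc = Counter()
    for t in range(len(stack) - 1):
        for row_a, row_b in zip(stack[t], stack[t + 1]):
            for ga, gb in zip(row_a, row_b):
                cooc[(ga, gb)] += 1
    total = sum(cooc.values())
    # contrast: expected squared gray-level difference across time
    contrast = sum((ga - gb) ** 2 * n / total for (ga, gb), n in cooc.items())
    return cooc, contrast
```

An unchanged scene puts all mass on the matrix diagonal (zero contrast), while localized damage spreads mass off-diagonal, which is what makes the measure sensitive to the change patterns discussed above.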
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142198
The well-known dark-object subtraction method is one of the oldest and most widely used procedures for adjusting digital remote sensing data for the effects of atmospheric scattering. The method's limited capabilities, relative to more sophisticated methods, are at least partially offset by its wide applicability, since it requires little information beyond the image itself. This study examines alternative applications of the procedure to evaluate its effectiveness, using a SPOT HRV XS image of irregular terrain in southwestern Virginia and a sequence of Landsat MSS data depicting a region in south central Virginia. Success of the adjustment is assessed by computing chromaticity co-ordinates from corrected values, using the method of Alfoldi and Munday (1978), and comparing corrections to the original data. A successful correction shifts chromaticity co-ordinates away from the equal radiance point towards the purer regions near the edges of the diagram. Further, some categories, when corrected successfully, will occupy known positions within chromaticity space. Assessment of the modification proposed by Chavez (1988) was conducted by examining the effects of choosing alternative starting haze values and alternative atmospheric models. One difficulty in applying the 1988 modification is that accurate assessments of atmospheric conditions appear to be difficult to make.
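The basic dark-object subtraction can be stated in a few lines: estimate per-band haze from the darkest pixel and subtract it, clamping at zero. (Chavez's 1988 modification replaces the raw minimum with a chosen starting haze value propagated across bands by an atmospheric scattering model.)

```python
def dark_object_subtract(band, haze=None):
    """Subtract an additive haze estimate from one band of DNs.
    If haze is not given, use the band minimum as the dark object."""
    if haze is None:
        haze = min(min(row) for row in band)
    corrected = [[max(v - haze, 0) for v in row] for v_row in [None] for row in band]
    return corrected, haze
```

The method's appeal, as noted above, is exactly that this haze estimate comes from the image itself rather than from measured atmospheric conditions.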
Ahmed S. EL-Behery, Samia A. Mashali, Ahmed M. Darwish
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142200
Image data compression is essential for a number of applications that involve transmission and storage. One technique that has recently been investigated extensively is vector quantization (VQ). One class of neural network (NN) structures, namely competitive learning networks, appears to be particularly suited for VQ. One main feature that characterizes NN training algorithms is that the VQ codewords are obtained in an adaptive manner. In this paper, a new competitive learning (CL) algorithm called Threshold Competitive Learning (TCL) is introduced. The algorithm uses a threshold to determine the codewords to be updated after the presentation of each input vector. The threshold can be made variable as the training proceeds, and more than one threshold can be used. The new algorithm can easily be combined with other NN training algorithms such as Frequency-Sensitive Competitive Learning (FSCL) or the Kohonen Self-Organizing Feature Maps (KSFM). The new algorithm is shown to be efficient and yields results comparable to the traditional LBG algorithm.
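The abstract does not give the TCL update rule in full; one plausible reading, sketched here under that assumption, is to update every codeword whose squared distance to the input falls below the threshold, falling back to the nearest codeword when none qualifies:

```python
def tcl_train(data, k, threshold, lr=0.1, epochs=20):
    """Threshold competitive learning sketch (interpretation, not the
    paper's exact algorithm).  data: list of feature tuples."""
    # initialize codewords from the first k training vectors
    code = [list(data[i]) for i in range(k)]
    for _ in range(epochs):
        for x in data:
            dists = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in code]
            # update every codeword within the threshold distance
            winners = [i for i, d in enumerate(dists) if d <= threshold]
            if not winners:                 # fall back to the nearest codeword
                winners = [min(range(k), key=dists.__getitem__)]
            for i in winners:
                code[i] = [ci + lr * (xi - ci) for xi, ci in zip(x, code[i])]
    return code
```

Making the threshold shrink over training, as the paper suggests, would move the rule gradually from broad cooperative updates toward winner-take-all competition.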
Ahmed S. EL-Behery, Samia A. Mashali, Ahmed M. Darwish
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142201
Image data compression is essential for a number of applications that involve transmission and storage. One technique that has recently been investigated extensively is vector quantization (VQ). For image sequence coding, adaptive algorithms are usually needed to exploit the high correlation between successive frames. In this paper, we describe an adaptive technique for image sequence coding based on vector quantization. The algorithm, called Variable Size And Code (VSAC), has a codebook that varies both in size and code as successive frames are encoded, to closely match the local statistics of the current frame. Experimental results are presented on a test sequence and demonstrate that the proposed technique is efficient and that it maintains a nearly constant distortion over the entire sequence.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142202
Multispectral image sequences are one example of a class of image sequences that can be characterized as being spatially invariant. In this class of image sequences, all features are positionally invariant in each image of a given sequence but have varying gray-scale properties. The various features of the scene contribute additively to each image of the sequence but the image formation processes associated with given features have characteristic signatures describing the manner in which they vary over the image sequence. Such sequences can be processed using the simultaneous diagonalization (SD) filter which will generate gray-scale maps of the different image formation processes. The SD filter is based on an explicit mathematical model and can be used to maximize SNR, perform segmentation and provide data compression. A unique property of this approach is that even if several image formation processes occupy a given pixel, they can still be isolated. The gray-scale map associated with each process provides an estimate of the magnitude of a given process at every spatial location in the image sequence. Data compression and noise reduction can be achieved using the same spatially-invariant linearly-additive model and a variation of the simultaneous diagonalization filter.
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142203
Multivalue processing of a gray-level image transforms an image with l gray levels into an image with k gray levels, where k is less than l. In this paper, a novel, flexible multivalue method based on the selection of varied local extreme points ((delta)-extreme) on the gray histogram of the image is proposed and discussed. Several experimental results are given to show that the algorithm is very effective, especially for images with flat histograms.
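The paper's exact (delta)-extreme definition is not given in the abstract; a simplified reading, sketched under that assumption, takes histogram local minima at least delta below both neighbours as the boundaries between output gray bands:

```python
import bisect

def delta_valleys(hist, delta):
    """Indices of histogram bins that are local minima lying at least
    `delta` counts below both neighbours (a simplified delta-extreme)."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i - 1] - hist[i] >= delta and hist[i + 1] - hist[i] >= delta]

def multivalue(img, valleys):
    # map each gray level to the index of the band it falls in,
    # where bands are the intervals between successive valleys
    return [[bisect.bisect_left(valleys, v) for v in row] for row in img]
```

With v valleys the output has k = v + 1 gray levels, so varying delta directly trades the number of output levels against robustness to histogram noise.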
Ke Liu, Ying-Jiang Liu, Yong-Qing Cheng, Jingyu Yang
Proceedings Volume Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (1993) https://doi.org/10.1117/12.142204
A novel algebraic feature extraction method for image recognition is presented. For the training image samples, a set of optimal discriminant projection vectors is calculated according to a generalized Fisher criterion function. The algebraic feature vector of an image is then extracted by projecting the image onto all of the optimal discriminant projection vectors. Experimental results show that the algebraic features extracted by the presented method have good recognition performance.
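For two classes, the generalized Fisher criterion reduces to the classical discriminant direction w = Sw^-1 (m_a - m_b). A 2-D sketch of that special case (the paper computes several such vectors and projects whole images, which is not reproduced here):

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher discriminant direction in 2-D:
    w = Sw^-1 (mean_a - mean_b), with Sw the within-class scatter."""
    def mean(s):
        return [sum(x[j] for x in s) / len(s) for j in range(2)]
    ma, mb = mean(class_a), mean(class_b)
    # accumulate the 2x2 within-class scatter matrix
    sw = [[0.0, 0.0], [0.0, 0.0]]
    for s, m in ((class_a, ma), (class_b, mb)):
        for x in s:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    sw[i][j] += d[i] * d[j]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    # solve Sw w = dm by the explicit 2x2 inverse
    return [( sw[1][1] * dm[0] - sw[0][1] * dm[1]) / det,
            (-sw[1][0] * dm[0] + sw[0][0] * dm[1]) / det]

def project(w, x):
    return w[0] * x[0] + w[1] * x[1]
```

Projecting samples onto w maximizes between-class separation relative to within-class spread, which is what gives the extracted features their discriminative power.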