The airborne enhanced synthetic vision system generates a virtual scene that reflects the real scene by comprehensively using multiple types of data, including sensor imagery, obstacle data, navigation and attitude data, scene-database data, and two- and three-dimensional symbology. Only when these data are effectively integrated and displayed can the system as a whole be effective and reliable. The fusion display processing in this article is built on the OpenGL rendering pipeline. The highlights include the data types and processing ideas involved in the enhanced synthetic vision system, the processing methods for several common symbols, and the processing flow for superimposing sensor images on database-derived scenery.
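As a rough, purely conceptual illustration of the superimposition step (not the article's OpenGL implementation), the sketch below blends a registered sensor image over an already rendered synthetic frame; the function name, arguments, and the use of a single global blending weight are all hypothetical simplifications.

```python
import numpy as np

def overlay_sensor(synthetic_rgb, sensor_gray, alpha=0.6):
    """Conceptual stand-in for the fusion display stage: blend a single-channel
    sensor image over a rendered synthetic-vision frame.

    synthetic_rgb : float array in [0, 1], shape (H, W, 3), rendered terrain and symbology
    sensor_gray   : float array in [0, 1], shape (H, W), registered sensor image
    alpha         : sensor weight; a real pipeline would likely use a per-pixel mask
    """
    sensor_rgb = np.repeat(sensor_gray[..., None], 3, axis=2)
    return (1.0 - alpha) * synthetic_rgb + alpha * sensor_rgb
```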
Pseudo-color processing of infrared (IR) images is useful for object detection and tracking, and fusing IR and visual images is an effective way to achieve it. In some situations, however, only IR images of the scene are available, and pseudo-coloring them is a difficult and interesting problem. In this paper, a novel image fusion method based on wavelets and color transfer is proposed: using wavelets and color transfer, the chromaticity values of a color visual image are assigned to the IR image. With this method, the IR and visual images can be fused even when they do not depict the same scene; the only requirement is that the compositions of the source and target scenes resemble each other. Experimental results show that the algorithm gives IR images a natural color appearance. Such a full-color representation of nighttime scenes may be of great ergonomic value, making interpretation of the displayed scene easier and more intuitive for the observer.
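One common building block of color transfer is per-channel statistics matching between a source and a reference image. The sketch below shows only that matching step (such transfers are typically done in a decorrelated color space such as lαβ, which is omitted here); it is an illustrative simplification, not the paper's wavelet-plus-color-transfer algorithm.

```python
import numpy as np

def transfer_color_statistics(source, reference):
    """Per-channel mean/std matching, the core operation behind many
    color-transfer schemes.

    source    : (H, W, 3) float array in [0, 1], e.g. an IR image replicated into 3 channels
    reference : (H', W', 3) float array in [0, 1], a daytime color image of a similar scene
    """
    out = np.empty_like(source, dtype=np.float64)
    for c in range(3):
        s = source[..., c].astype(np.float64)
        r = reference[..., c].astype(np.float64)
        s_std = s.std() if s.std() > 1e-6 else 1e-6
        # Normalize the source channel, then impose the reference statistics.
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return np.clip(out, 0.0, 1.0)
```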
In this paper, we propose a new algorithm to enhance the global and local contrast of infrared images, intended as a pretreatment step for tracking moving objects in infrared video. Because of the inherent difficulty of the problem, a single technique rarely yields a satisfactory result, so we combine two. First, an enhancement algorithm based on a plateau histogram with a self-adaptive threshold enhances global contrast; this differs from the traditional histogram equalization algorithm. Second, after a nonlinear gain function is derived, the equalized infrared image is transformed by the discrete stationary wavelet transform, and the high-frequency sub-bands are enhanced with the gain function, which performs better than a linear filter and also suppresses the amplification of noise. Experimental results show that the new algorithm enhances the contrast of infrared images effectively and gives better visual results than traditional histogram equalization and unsharp masking.
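A minimal sketch of plateau histogram equalization is given below; the self-adaptive plateau value used here (the median of the non-zero histogram bins) is an assumption for illustration, not the threshold derived in the paper.

```python
import numpy as np

def plateau_equalize(img, plateau=None):
    """Plateau histogram equalization for an 8-bit infrared image (uint8 array).

    Histogram bins are clipped at a plateau value before the cumulative
    mapping is built, which keeps large uniform backgrounds from dominating
    the gray-level allocation.
    """
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    if plateau is None:
        nonzero = hist[hist > 0]
        plateau = max(1, int(np.median(nonzero)))  # illustrative adaptive choice
    clipped = np.minimum(hist, plateau)
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf /= cdf[-1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]
```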
A novel multi-resolution image fusion method based on maximum directional contrast and area-based standard deviation is presented. First, the source images undergo multi-resolution wavelet decomposition, and the directional contrast and area-based standard deviation are defined. The wavelet coefficients of the fused image are then obtained with fusion rules based on maximum directional contrast and area-based standard deviation. Finally, the fused image is reconstructed by the inverse wavelet transform. In this scheme, the contrast and details from each original image are emphasized and enhanced in the fused image. The experimental results show that the presented fusion algorithm is effective.
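The sketch below shows a one-level version of this kind of fusion using PyWavelets: detail coefficients are selected by a simple directional-contrast measure, and the approximation band is blended with weights derived from an area-based standard deviation. It is a simplified stand-in under those assumptions, not the authors' exact rules.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse_wavelet(img_a, img_b, wavelet="db2", eps=1e-6):
    """One-level wavelet fusion sketch with contrast-based detail selection
    and std-weighted approximation blending."""
    ca_a, det_a = pywt.dwt2(img_a.astype(np.float64), wavelet)
    ca_b, det_b = pywt.dwt2(img_b.astype(np.float64), wavelet)

    fused_det = []
    for d_a, d_b in zip(det_a, det_b):
        # Directional contrast: detail magnitude relative to the approximation.
        contrast_a = np.abs(d_a) / (np.abs(ca_a) + eps)
        contrast_b = np.abs(d_b) / (np.abs(ca_b) + eps)
        fused_det.append(np.where(contrast_a >= contrast_b, d_a, d_b))

    # Area-based standard deviation over a small window drives the weights.
    std_a = np.sqrt(np.maximum(uniform_filter(ca_a**2, 3) - uniform_filter(ca_a, 3)**2, 0))
    std_b = np.sqrt(np.maximum(uniform_filter(ca_b**2, 3) - uniform_filter(ca_b, 3)**2, 0))
    w_a = std_a / (std_a + std_b + eps)
    fused_ca = w_a * ca_a + (1.0 - w_a) * ca_b

    return pywt.idwt2((fused_ca, tuple(fused_det)), wavelet)
```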
The embedded multi-resolution coding in JPEG2000 makes region-of-interest (ROI) coding very successful, but that coding operates in the wavelet domain, so adding an ROI function normally requires modifying a mature JPEG2000 baseline encoder and decoder. This paper proposes a low-complexity scaling method in the spatial domain, derived from the relation between the original image and its wavelet transform, that leaves the baseline system unmodified. In this method, the data outside the region of interest are right-shifted, the image is encoded and decoded with a standard JPEG2000 system, and the background data are left-shifted back in the spatial domain after decoding. The method achieves high quality in the ROI at very low bit rates, has low computational complexity, and does not need to derive an ROI mask in the wavelet domain. Like the Maxshift method in the wavelet domain, this spatial-domain Maxshift decodes the compressed image without an ROI mask, so no ROI mask has to be transmitted. The cost of this simplicity is a noticeable artifact along the ROI boundary. The method can add the ROI function to JPEG2000 baseline hardware that lacks it through simple extensions, which is significant in practice.
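Conceptually, the spatial-domain shift brackets an unmodified JPEG2000 codec: the background is scaled down before encoding and scaled back up after decoding. A minimal sketch of the two shift steps (with the codec itself omitted and the shift amount chosen arbitrarily for illustration) might look like this:

```python
import numpy as np

def shift_background_down(img, roi_mask, shift=2):
    """Right-shift (scale down) pixels outside the ROI before standard
    JPEG2000 encoding, so the codec spends most of its bit budget on the
    untouched ROI.  `roi_mask` is a boolean array, True inside the ROI."""
    out = img.astype(np.uint16).copy()
    out[~roi_mask] >>= shift
    return out.astype(np.uint8)

def shift_background_up(decoded, roi_mask, shift=2):
    """Left-shift the background back after standard JPEG2000 decoding."""
    out = decoded.astype(np.uint16).copy()
    out[~roi_mask] = np.minimum(out[~roi_mask] << shift, 255)
    return out.astype(np.uint8)
```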
In this paper, a novel hierarchical image fusion scheme based on wavelet multi-scale decomposition is presented. The basic idea is to first perform a wavelet multi-scale decomposition of each source image, then construct the wavelet coefficients of the fused image using region-based selection and weighted operators according to different fusion rules, and finally obtain the fused image by taking the inverse wavelet transform. This approach has been used successfully in image fusion. In addition, the performance of the fusion scheme is evaluated and analyzed with metrics such as entropy, cross entropy, mutual information, root-mean-square error, and peak signal-to-noise ratio. The experimental results show that the fusion scheme is effective.
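Two of the evaluation metrics mentioned above, image entropy and peak signal-to-noise ratio, can be computed as in the following sketch (assuming 8-bit images; the exact formulations used in the paper may differ in detail):

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def psnr(reference, test):
    """Peak signal-to-noise ratio (dB) between an 8-bit reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```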
In this paper, a pixel-level multi-resolution image fusion scheme based on the wavelet transform is described. In the fusion process, the images are first decomposed with the wavelet transform, the images are then fused at each resolution, and the fused image is finally obtained by taking the inverse wavelet transform of the fused wavelet coefficients. To reduce contrast and structural distortion, when constructing each wavelet coefficient of the fused image we consider not only the corresponding coefficients but also their close neighbors. Both visual quality (contrast and presence of fine details) and the absence of impairments or artifacts are addressed in our method. This approach has been used successfully in image fusion. The experimental results show that the fusion scheme is effective and that the fused images are more suitable for human viewing or machine perception.
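A minimal sketch of the neighborhood-aware selection idea is shown below: two corresponding detail bands are compared through a windowed activity measure, and the resulting decision map is smoothed so that isolated switches between sources are suppressed. This is an illustrative simplification, not the paper's exact rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def select_with_neighbors(coef_a, coef_b, window=3):
    """Choose between two corresponding wavelet detail bands using a
    neighborhood activity measure, then clean up the decision map so that
    close neighbors influence each selected coefficient."""
    act_a = uniform_filter(np.abs(coef_a), size=window)
    act_b = uniform_filter(np.abs(coef_b), size=window)
    decision = (act_a >= act_b).astype(np.float64)
    decision = median_filter(decision, size=window)  # majority vote in the window
    return decision * coef_a + (1.0 - decision) * coef_b
```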
A pixel-level image fusion scheme based on Laplacian pyramid decomposition is presented. The basic idea is to first perform a Laplacian pyramid decomposition of each source image, then construct the Laplacian pyramid of the fused image using region-based weighted operators according to different fusion rules, and finally obtain the fused image by taking the inverse pyramid transform. Both visual quality (contrast and presence of fine details) and the absence of impairments or artifacts are addressed in our method. This approach has been used successfully in image fusion. The experimental results show that the fusion scheme is effective and that the fused images are more suitable for human viewing or machine perception.
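The sketch below, using OpenCV's pyramid routines, builds a Laplacian pyramid for each source, keeps the larger-magnitude coefficient at every detail level, averages the top level, and collapses the result. It uses a simple maximum-selection rule in place of the paper's region-based weighted operators, so it is only an illustration of the overall flow.

```python
import cv2
import numpy as np

def fuse_laplacian(img_a, img_b, levels=4):
    """Laplacian-pyramid fusion sketch for two grayscale images of equal size."""
    def laplacian_pyramid(img, levels):
        pyr, cur = [], img.astype(np.float64)
        for _ in range(levels):
            down = cv2.pyrDown(cur)
            up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
            pyr.append(cur - up)          # band-pass (detail) level
            cur = down
        pyr.append(cur)                   # residual low-pass image
        return pyr

    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))

    # Collapse the fused pyramid from coarse to fine.
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return out
```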