This article illustrates the application of the Kalman filter in real-time human re-identification tasks to improve accuracy and reliability and to reduce computational cost when determining a person's position and orientation. The Kalman filter addresses the noise-filtering and prediction problems that arise in human re-identification.
The study explores a method for identifying corresponding objects across multiple camera views in order to improve the accuracy of object re-identification. We analyzed several techniques, including contour detection, region-of-interest extraction, and keypoint extraction, and examined the challenges of finding object correspondences between camera views. To evaluate the effectiveness of the proposed method, we performed extensive testing on two human attribute datasets, Market-1501 and DukeMTMC-reID.
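As a rough illustration of the filtering and prediction step described above, the sketch below runs a constant-velocity Kalman filter over noisy 2-D position measurements. The state layout, noise covariances, and time step are assumptions for the example, not the article's exact model.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for 2-D position tracking.
# State: [x, y, vx, vy]; measurement: [x, y]. All parameters are illustrative.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # measurement model
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 1.0 * np.eye(2)                         # measurement noise (assumed)

x = np.zeros(4)                             # initial state estimate
P = np.eye(4)                               # initial covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z = [x, y]
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Example: filter a noisy diagonal trajectory.
for t in range(10):
    z = np.array([t + 0.5 * np.random.randn(), t + 0.5 * np.random.randn()])
    x, P = kalman_step(x, P, z)
print("smoothed position:", x[:2])
```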
KEYWORDS: Image filtering, Image processing, Information technology, Image compression, Video, Image quality, Video surveillance, Video compression, Optical filters, RGB color model
This work is devoted to the development of an information technology for filtering and storing images that contain real noise and were obtained under low-light conditions. The technology is designed to reduce the size of an image (video) archive through the use of image processing methods. The structure of the information technology is proposed: it includes input data, color models, image processing methods (filtering, compression, blocking-effect reduction, coding, and transmission), algorithms, and software. Depending on the image type, the task, and the requirements on the result, the technology processes images with the appropriate filtering methods and can encode and transmit the resulting images. It allows reducing the size of the video archive in a video surveillance system using the MPEG-4 (XVID) and H.264 video codecs while maintaining acceptable quality.
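To illustrate the general pipeline idea (denoise first, then hand the frames to a lossy codec so noise does not inflate the archive size), here is a minimal sketch using OpenCV. The denoiser, its parameters, and the XVID codec choice are illustrative and not taken from the article.

```python
import cv2
import numpy as np

# Denoise low-light frames before writing them to a lossy video archive.
fourcc = cv2.VideoWriter_fourcc(*"XVID")
writer = cv2.VideoWriter("archive.avi", fourcc, 25.0, (640, 480))

for _ in range(10):                                    # stand-in for camera frames
    frame = np.clip(40 + 20 * np.random.randn(480, 640, 3), 0, 255).astype(np.uint8)
    # Non-local means denoising; parameters are illustrative, not the article's.
    denoised = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)
    writer.write(denoised)                             # codec now sees far less noise

writer.release()
```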
In this work, a partially homomorphic encryption algorithm based on elliptic curves is implemented. The algorithm supports the operations of encryption, addition, and decryption for various parts of the system. One possible application of the algorithm is the creation of a depersonalization protocol in electronic voting systems of different scales. The mathematical model of the algorithm is given, together with mathematical models of its main analogues, among which the Paillier algorithm, which is also homomorphic with respect to addition, can be singled out. A performance comparison between the elliptic-curve algorithm and the Paillier algorithm is carried out at comparable cryptographically secure key lengths.
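As a rough sketch of how additive homomorphism can be obtained on an elliptic curve, the toy example below encodes a message m as the point m*G, so adding two ciphertexts adds the underlying plaintexts. The textbook curve, key sizes, and brute-force decryption are purely illustrative and offer no real security; the article's construction may differ in its details.

```python
import random

# Toy additively homomorphic EC ElGamal: Enc(m) = (r*G, m*G + r*pk).
p, a, b = 11, 1, 6                      # textbook curve y^2 = x^3 + x + 6 over F_11 (13 points)
G = (2, 7)                              # base point of order 13
ORDER = 13

def inv(x):
    return pow(x, p - 2, p)             # modular inverse (Fermat's little theorem)

def neg(P):
    return None if P is None else (P[0], (-P[1]) % p)

def add(P, Q):                          # point addition; None is the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * inv(2 * y1) if P == Q
           else (y2 - y1) * inv(x2 - x1)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                          # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

sk = random.randrange(1, ORDER)         # private key
pk = mul(sk, G)                         # public key

def encrypt(m):
    r = random.randrange(1, ORDER)
    return (mul(r, G), add(mul(m, G), mul(r, pk)))

def add_ciphertexts(c, d):              # homomorphic addition of encrypted values
    return (add(c[0], d[0]), add(c[1], d[1]))

def decrypt(c):
    M = add(c[1], neg(mul(sk, c[0])))   # recover the point m*G
    for m in range(ORDER):              # tiny brute-force "discrete log"
        if mul(m, G) == M:
            return m

# Two encrypted values (e.g. votes) are summed without decrypting either one.
print(decrypt(add_ciphertexts(encrypt(3), encrypt(4))))   # -> 7
```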
In this article, we discuss an extensive class of channel codes called turbo codes. These error-correction methods achieve very good error rates, which may approach the Shannon channel capacity limit. The article begins with a brief discussion of turbo encoding and then describes the form of the iterative decoder most often used to decode turbo codes. The article proposes a new optimized modification of the log-MAP decoding algorithm. This method (PL-log-MAP) is based on a partial linear approximation of the correction function in the Jacobian logarithm. Using the proposed approximation, the complex ln(·) and exp(·) functions in the log-MAP algorithm can be estimated with high accuracy and lower computational complexity. The effectiveness of the proposed approximation is tested and demonstrated by applying it to a digital communication system for image transmission in MATLAB. The performance of the PL-log-MAP algorithm is the closest to that of the original log-MAP solution.
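The core operation of log-MAP decoding is the Jacobian logarithm max*(a, b) = max(a, b) + ln(1 + exp(-|a - b|)). The sketch below compares the exact correction term with a generic piecewise-linear approximation of the kind the article describes; the slope and break point used here are illustrative, not the exact PL-log-MAP coefficients.

```python
import numpy as np

def max_star_exact(a, b):
    # Exact Jacobian logarithm used by log-MAP decoding.
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def max_star_pl(a, b, slope=0.25, limit=2.5):
    # Piecewise-linear approximation of the correction function ln(1 + exp(-x)):
    # a single linear segment near x = 0, zero beyond the break point.
    d = abs(a - b)
    corr = max(0.0, np.log(2.0) - slope * d) if d < limit else 0.0
    return max(a, b) + corr

# Compare the two over a range of LLR differences.
for d in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"|a-b|={d:3.1f}  exact={max_star_exact(0.0, -d):.4f}"
          f"  piecewise-linear={max_star_pl(0.0, -d):.4f}")
```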
KEYWORDS: Digital filtering, Signal processing, Wavelets, Electronic filtering, Data processing, Filtering (signal processing), Interference (communication), Signal to noise ratio, Error analysis, Computing systems
The article is devoted to increasing the efficiency of digital signal processing under conditions of high interference, where the efficiency and reliability of information transmission take priority over the transmission speed and the amount of computing resources used. The authors improve existing methods in order to increase the performance of information transmission in such difficult conditions. A method for determining the decomposition coefficients, in which the biorthogonal coefficients of the wavelet decomposition are replaced by an approximating sum over a series of quasi-random delta sequences, is used to eliminate the Gibbs effect in signal processing. A method for estimating the signal spectrum for an adaptive threshold method is improved; it uses a multi-window averaged estimate of the logarithmic spectrum of the signal. A fast median filtering method that processes finite data vectors by splitting the original data vector into several parts has been developed. The method of parallel fast wavelet transform is improved; it partitions the data vector into blocks and processes them with a local wavelet transform in a diagonal sequence. Theoretical research and modeling demonstrate the significant efficiency of the newly proposed and improved methods.
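As a baseline illustration of wavelet-domain thresholding of a noisy signal (the starting point that the adaptive multi-window spectral estimate refines), here is a minimal sketch with PyWavelets. The wavelet, decomposition level, and universal threshold are assumptions, not the article's settings.

```python
import numpy as np
import pywt   # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    # Decompose, soft-threshold the detail coefficients, reconstruct.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)

# Example: a noisy sine wave.
t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = wavelet_denoise(noisy)
print("residual RMS:", np.sqrt(np.mean((clean[:t.size] - np.sin(2 * np.pi * 5 * t)) ** 2)))
```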
In this article, we discuss the powerful class of channel codes referred to as turbo codes. We begin with a brief discussion of turbo decoding algorithms. The use of the PL-log-MAP algorithm is proposed. Numerical results and research experiments, such as simulations of bit-error-rate estimation for image transmission, are presented. The performance of the PL-log-MAP algorithm is shown to be the closest to that of the original log-MAP solution.
This article suggests a method of image segmentation using Laws' texture energy measures. The method allows image segments to be identified efficiently for further use in image processing. Laws' measures describe the image more accurately than the other approaches considered, making it easier and more efficient to separate distinct texture classes. To obtain these measures, sixteen masks are calculated, and the resulting energy measures are produced by applying each of the masks to the image. The developed algorithm was tested on a set of test images. Analysis of the results showed that, for visually similar texture images, the transition to energy maps significantly improves the correlation coefficient, thereby emphasizing the textural features of the images and making it possible to identify similarities between textures. To evaluate the efficiency of the developed algorithm properly, its results were compared with a segmentation method based on matrix matching. It was shown that segmentation based on Laws' measures detects various types of texture more precisely and with greater speed.
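A minimal sketch of Laws' texture energy measures is shown below: sixteen 5x5 masks are built as outer products of four 1-D kernels, each mask is convolved with the image, and local energy maps are formed from the absolute responses. The particular kernel set and window size are common choices and may differ from those used in the article.

```python
import numpy as np
from scipy import ndimage

# Four classic 1-D Laws kernels; their 16 outer products give the 5x5 masks.
kernels = {
    "L5": np.array([1, 4, 6, 4, 1], dtype=float),     # level
    "E5": np.array([-1, -2, 0, 2, 1], dtype=float),   # edge
    "S5": np.array([-1, 0, 2, 0, -1], dtype=float),   # spot
    "R5": np.array([1, -4, 6, -4, 1], dtype=float),   # ripple
}

def laws_energy_maps(image, window=15):
    # Remove the local mean so illumination does not dominate the responses.
    image = image - ndimage.uniform_filter(image, size=window)
    maps = {}
    for n1, k1 in kernels.items():
        for n2, k2 in kernels.items():
            mask = np.outer(k1, k2)                               # one of 16 masks
            response = ndimage.convolve(image, mask, mode="reflect")
            # Energy map: local average of the absolute filter response.
            maps[n1 + n2] = ndimage.uniform_filter(np.abs(response), size=window)
    return maps

# Example on a random test image.
img = np.random.rand(128, 128)
maps = laws_energy_maps(img)
print(len(maps), "energy maps, e.g. E5S5 mean:", maps["E5S5"].mean())
```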
Images acquired by computer vision systems under low-light conditions are characterized by the presence of noise, which, as a rule, decreases the object detection rate. To increase the object detection rate, a proper image preprocessing algorithm is needed. The paper presents an image denoising method based on bilateral filtering and wavelet thresholding. A boosting method for object detection is proposed that uses modified Haar-like features combining Haar-like features and symmetrical local binary patterns. The proposed algorithm increases the object detection rate in comparison with the Viola-Jones method for the face detection task. The algorithm was tested on two image sets: Yale B and the proprietary VNTU-458 set.
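The preprocessing stage can be sketched as bilateral filtering followed by 2-D wavelet soft-thresholding, as below. The filter parameters and the db2 wavelet are assumptions rather than the paper's exact settings.

```python
import numpy as np
import cv2
import pywt

def denoise(gray):
    # Edge-preserving smoothing first, then wavelet-domain soft thresholding.
    gray = gray.astype(np.float32)
    smoothed = cv2.bilateralFilter(gray, d=5, sigmaColor=25, sigmaSpace=5)
    coeffs = pywt.wavedec2(smoothed, "db2", level=2)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745            # noise estimate (MAD)
    thr = sigma * np.sqrt(2 * np.log(gray.size))                  # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    out = pywt.waverec2(new_coeffs, "db2")
    return np.clip(out[: gray.shape[0], : gray.shape[1]], 0, 255).astype(np.uint8)

# Example on a synthetic noisy image.
noisy = np.clip(128 + 30 * np.random.randn(256, 256), 0, 255).astype(np.uint8)
print(denoise(noisy).shape)
```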
A new approach for constructing a cloud instant messaging service, presented in this article, allows users to encrypt data locally using the Diffie-Hellman key exchange protocol. The described approach makes it possible to build a cloud service that operates only on users' encrypted messages; encryption and decryption take place locally on the user's side using symmetric AES encryption. A feature of the service is support for conferences without the need to re-encrypt messages for each participant. The article gives an example of implementing the protocol on the basis of the ECC and RSA encryption algorithms, as well as a comparison of these implementations.
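The client-side idea can be sketched as follows: two users derive a shared secret with an elliptic-curve Diffie-Hellman exchange and encrypt messages locally with AES, so the cloud service only ever stores ciphertext. The curve, key derivation, and AES-GCM mode used here are assumptions; the article's protocol (including its conference support) may differ.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each user generates an ECDH key pair locally.
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the other's public key.
alice_secret = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_secret = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_secret == bob_secret

# Derive a symmetric session key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"chat-session").derive(alice_secret)

# Symmetric encryption happens locally; only the ciphertext goes to the server.
aes = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aes.encrypt(nonce, b"hello over an untrusted cloud", None)
print(aes.decrypt(nonce, ciphertext, None))
```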
This article considers a new approach to image correction aimed at improving the perception of graphic information by people with aberrations of the eye's optical system. A model of the higher-order aberrations that may appear in the optical system of the human eye is described. The developed approach is based on pre-processing digital images and applying filtering methods to the adjusted images.
The second fundamental form (SFF) characterizes surface bending in terms of the magnitude and direction of the normal vector to the surface. The value of the SFF can be used for blur elimination by simply subtracting the SFF from the image signal. This operation narrows amplitude fronts while preserving contours as inflection lines. However, it also sharpens all small fluctuations and introduces noise-like image distortion. Therefore, blur recognition and elimination using the SFF has to be accompanied by a procedure that optimizes the image estimate according to a regularization functional, which acts as a nonlinear filter. Two iterative methods for optimizing the original image estimate are suggested. The first method uses dynamic regularization based on the condition of convergence of the iteration process. The second method implements regularization in a curved space with a metric defined on the surface of the image estimate. The given iterative schemes converge faster than known ones.
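A simplified sketch of the subtraction step is given below: a second-fundamental-form-like curvature term along the gradient direction is estimated from the image derivatives and a fraction of it is subtracted from the signal, which narrows blurred fronts (and, as noted above, also amplifies small fluctuations). The step size, iteration count, and derivative estimators are assumptions; the paper's regularized iterative schemes are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def sff_sharpen(img, step=0.5, iters=10):
    img = img.astype(float)
    for _ in range(iters):
        # First and second derivatives of the image surface (Sobel-based estimates).
        ix = ndimage.sobel(img, axis=1) / 8.0
        iy = ndimage.sobel(img, axis=0) / 8.0
        ixx = ndimage.sobel(ix, axis=1) / 8.0
        iyy = ndimage.sobel(iy, axis=0) / 8.0
        ixy = ndimage.sobel(ix, axis=0) / 8.0
        g2 = ix ** 2 + iy ** 2 + 1e-8
        # Normal-curvature-like term of the image surface along the gradient direction.
        sff = (ixx * ix ** 2 + 2 * ixy * ix * iy + iyy * iy ** 2) / (g2 * np.sqrt(1.0 + g2))
        img = img - step * sff               # subtracting it narrows blurred fronts
    return img

# A blurred step edge before and after the subtraction.
blurred = ndimage.gaussian_filter(np.pad(np.ones((32, 32)), 16), sigma=3)
row = blurred.shape[0] // 2
print("max slope before:", np.abs(np.diff(blurred[row])).max(),
      "after:", np.abs(np.diff(sff_sharpen(blurred)[row])).max())
```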
The article proposes the application of the discrete cosine transform, the Haar wavelet transform, and parallel computation in order to reduce the computational complexity of the fractal coding algorithm and to increase the processing speed of image compression using the proposed algorithm.
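One common way such transforms reduce the cost of fractal coding is to describe each block by a few low-frequency DCT coefficients and attempt the expensive fractal match only between blocks with similar signatures. The sketch below illustrates that idea; block size, signature length, and the omission of the Haar features and parallelization are simplifications, not the article's algorithm.

```python
import numpy as np
from scipy.fft import dctn

def dct_signature(block, k=3):
    # Low-frequency DCT coefficients as a compact block descriptor.
    return dctn(block, norm="ortho")[:k, :k].ravel()

def candidate_domains(range_block, domain_blocks, top=5):
    # Only these candidates would be passed to the full fractal matching step.
    sig = dct_signature(range_block)
    dists = [np.linalg.norm(sig - dct_signature(d)) for d in domain_blocks]
    return np.argsort(dists)[:top]

# Example with random 8x8 blocks.
rng = np.random.default_rng(0)
domains = [rng.random((8, 8)) for _ in range(200)]
print(candidate_domains(rng.random((8, 8)), domains))
```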
In computer graphics, anti-aliasing is a software technique for diminishing jaggies, the stair-step-like lines that should be smooth. Jaggies occur because the output device, the monitor or printer, does not have a high enough resolution to represent a smooth line. Standard anti-aliasing reduces the prominence of jaggies by surrounding the stair-steps with intermediate shades of gray; although this reduces the jagged appearance of the lines, it also makes them fuzzier. The suggested anti-aliasing algorithm removes the jaggies using interpolation based on self-similar sets. In contrast to the well-known anti-aliasing algorithms, this approach changes neither the brightness nor the colors of a magnified image. The algorithm was implemented in special software for low-vision people, the L&H Magnifier, being developed by CMS, Ukraine and Lernaut & Hauspie, Belgium. Preliminary tests confirmed that the developed technology improves the quality of zoomed images much better than the standard algorithms, but it requires a large number of computational operations. Therefore, it is reasonable to use the anti-aliasing algorithm based on self-similar sets when the magnification level is 4 or higher.
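For reference, the standard approach mentioned above can be sketched as supersampling: a line is rasterized at four times the target resolution and averaged down, so stair-steps become intermediate gray levels. This illustrates only the baseline, not the article's self-similar interpolation method.

```python
import numpy as np

def rasterize_line(w, h, scale=4):
    # Draw the line y = 0.3x at high resolution, one lit pixel per column.
    hi = np.zeros((h * scale, w * scale))
    for x in range(w * scale):
        y = int(0.3 * x)
        if y < h * scale:
            hi[y, x] = 1.0
    # Average each scale-by-scale block: the anti-aliased (gray-level) image.
    return hi.reshape(h, scale, w, scale).mean(axis=(1, 3))

aa = rasterize_line(64, 32)
print("distinct gray levels:", np.unique(np.round(aa, 2)).size)
```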