Statistical modeling of biometric systems at the score level is extremely important: it underpins the performance assessment of biometric systems, including the determination of confidence intervals and of test sample sizes for simulations, as well as performance prediction for real-world systems. Statistical modeling of multimodal
biometric systems allows the development of a methodology to integrate information from multiple biometric
sources. We present a novel approach for estimating the marginal biometric matching score distributions by
using extreme value theory in conjunction with non-parametric methods. Extreme Value Theory (EVT) models extreme events, represented by data with abnormally low or high values in the tails of a distribution. Our motivation stems from the observation that the tails of biometric score distributions are often difficult to estimate with other methods because of an insufficient number of training samples. However, good estimates of the tails of biometric distributions are essential for defining decision boundaries. We
present novel EVT-based procedures for fitting a score distribution curve. A general non-parametric method is used to fit the body of the distribution curve, and a parametric EVT model, the generalized Pareto distribution, is used to fit the tails. We also demonstrate the advantages of applying EVT experimentally.
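As an illustration of this two-part scheme, the sketch below pairs a kernel density estimate for the body of a score distribution with a generalized Pareto fit to the upper-tail exceedances, using SciPy. The synthetic Beta-distributed scores, the 95th-percentile threshold, and the piecewise stitching are illustrative assumptions, not the procedure from the paper.

```python
import numpy as np
from scipy.stats import genpareto, gaussian_kde

rng = np.random.default_rng(0)
scores = rng.beta(2, 5, size=2000)  # synthetic stand-in for matching scores

# Body of the distribution: non-parametric kernel density estimate.
kde = gaussian_kde(scores)

# Upper tail: fit a generalized Pareto distribution (GPD) to the
# exceedances over a high threshold u. The 95th percentile is an
# illustrative choice, not a prescription from the paper.
u = np.quantile(scores, 0.95)
exceedances = scores[scores > u] - u
shape, _, scale = genpareto.fit(exceedances, floc=0.0)

def density(x):
    """Piecewise density: KDE below u, tail-mass-weighted GPD above u.
    Continuity and exact normalization at the junction are not enforced
    in this sketch."""
    x = np.atleast_1d(x)
    tail_mass = np.mean(scores > u)
    body = kde(x)
    tail = tail_mass * genpareto.pdf(x - u, shape, loc=0.0, scale=scale)
    return np.where(x <= u, body, tail)

print(density(np.array([0.2, u + 0.05])))
```

A full implementation would also enforce continuity at the junction and choose the threshold with a tail-stability diagnostic rather than a fixed quantile.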
This paper presents a new document binarization algorithm for camera images of historical documents, such as those held by the Library of Congress of the United States. The algorithm uses a
background light intensity normalization algorithm to enhance an
image before a local adaptive binarization algorithm is applied. The
image normalization algorithm uses an adaptive linear or non-linear function to approximate the uneven background of the image, which arises from the uneven surface of the document paper, aging discoloration, or the uneven light source of the cameras used for image capture. Our algorithm adaptively
captures the background of a document image with a "best fit"
approximation. The document image is then normalized with respect to
the approximation before a thresholding algorithm is applied. The
technique works for both grayscale and color historical handwritten document images, with significant improvement in readability for both human readers and OCR.
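A minimal sketch of the normalize-then-threshold pipeline is given below, assuming a least-squares quadratic surface as the "best fit" background and a simple local-mean threshold; both choices are stand-ins for the paper's adaptive linear/non-linear fit and its specific binarization algorithm.

```python
import numpy as np
from scipy import ndimage

def normalize_background(img):
    """Approximate the uneven background with a least-squares quadratic
    surface (a stand-in for the paper's adaptive linear/non-linear fit),
    then divide it out."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # Design matrix for a quadratic surface b(x, y).
    A = np.stack([np.ones(h * w), x.ravel(), y.ravel(),
                  (x * x).ravel(), (y * y).ravel(), (x * y).ravel()], axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    background = (A @ coef).reshape(h, w)
    return img / np.maximum(background, 1e-6)

def local_binarize(img, size=31, k=0.9):
    """Generic local adaptive thresholding against the neighborhood mean
    (not the paper's specific binarization algorithm)."""
    local_mean = ndimage.uniform_filter(img, size=size)
    return ((img > k * local_mean) * 255).astype(np.uint8)

gray = np.random.rand(200, 300)  # stand-in for a grayscale image in [0, 1]
binary = local_binarize(normalize_background(gray))
```

Dividing by the fitted surface flattens the illumination field so that a single local thresholding rule can work across the whole page.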
The performance of any fingerprint recognizer depends highly on fingerprint image quality. Different types of noise in fingerprint images pose great difficulty for recognizers. Most Automatic Fingerprint Identification Systems (AFIS) use some form of image enhancement. Although several methods have been described in the literature, there is still scope for improvement. In particular, an effective methodology for cleaning the valleys between ridge contours is lacking. We observe that noisy valley pixels and the pixels in interrupted ridge-flow gaps behave as impulse noise. This paper therefore describes a new approach to fingerprint image enhancement based on the integration of an anisotropic filter and a directional median filter (DMF). Gaussian-distributed noise is reduced effectively by the anisotropic filter, while impulse noise is reduced efficiently by the DMF. The traditional median filter is usually the most effective method for removing salt-and-pepper noise and other small artifacts; the proposed DMF not only performs these tasks but also joins broken fingerprint ridges, fills holes in fingerprint images, smooths irregular ridges, and removes small artifacts between ridges. The enhancement algorithm has been implemented and tested on fingerprint images from FVC2002. Images of varying quality have been used to evaluate the performance of our approach. We have compared our method with other methods described in the literature in terms of matched minutiae, missed minutiae, spurious minutiae, and flipped minutiae (endpoints detected as bifurcation points and vice versa). Experimental results show our method to be superior to those described in the literature.
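The sketch below illustrates the directional-median idea with NumPy/SciPy: each pixel is median-filtered along one of four quantized ridge directions. The footprints, the filter length, and the random orientation field are illustrative assumptions; the paper's DMF follows the estimated ridge orientation more finely and is integrated with an anisotropic filter.

```python
import numpy as np
from scipy import ndimage

# Line-shaped footprints for four quantized ridge directions
# (horizontal, vertical, and the two diagonals).
FOOTPRINTS = {
    0:   np.ones((1, 5), dtype=bool),
    90:  np.ones((5, 1), dtype=bool),
    45:  np.eye(5, dtype=bool)[::-1],
    135: np.eye(5, dtype=bool),
}

def directional_median(img, orientation):
    """Median-filter each pixel along its quantized local ridge direction.
    `orientation` holds one of {0, 45, 90, 135} per pixel."""
    out = img.copy()
    for angle, footprint in FOOTPRINTS.items():
        filtered = ndimage.median_filter(img, footprint=footprint)
        mask = orientation == angle
        out[mask] = filtered[mask]
    return out

img = np.random.rand(64, 64)                             # stand-in patch
orient = np.random.choice([0, 45, 90, 135], img.shape)   # stand-in orientations
smoothed = directional_median(img, orient)
```

Because the median is taken along the ridge direction rather than isotropically, impulse noise in the valleys is suppressed while breaks in the ridge flow are bridged instead of blurred.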
A new multiple-level classification method is introduced. With an available feature set, classification can be performed in several steps. After the first classification step, which uses the full feature set, a high-confidence recognition result ends the recognition process. Otherwise, a secondary classifier, designed using a partial feature set and the information available from the earlier classification step, classifies the input further. In comparison with existing methods, our method aims to increase recognition accuracy and reliability. A feature selection mechanism based on genetic algorithms is employed to select the important features that provide maximum separability between the classes under consideration. These features are then used to obtain a sharper decision among fewer classes in the secondary classification. The full feature set is still used in the earlier classification step to retain complete information; no features are discarded, as they would be in the feature selection methods described in most related publications.
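A minimal sketch of the confidence-gated two-stage scheme, using scikit-learn: stage 1 classifies with all features, and low-confidence inputs fall through to a second classifier on a reduced feature set. The fixed index list standing in for the GA-selected subset, the 0.9 confidence threshold, and the use of logistic regression are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: classifier trained on the full feature set.
stage1 = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

# Hypothetical GA-selected subset: a fixed index list stands in for the
# genetic-algorithm search described in the abstract.
selected = np.arange(0, X.shape[1], 2)
stage2 = LogisticRegression(max_iter=2000).fit(X_tr[:, selected], y_tr)

def classify(x, confidence=0.9):
    """Accept the stage-1 answer when it is confident; otherwise fall
    through to the reduced-feature stage-2 classifier. (The abstract's
    method would also narrow stage 2 to the few classes that stage 1
    found plausible; that step is omitted here for brevity.)"""
    p = stage1.predict_proba(x.reshape(1, -1))[0]
    if p.max() >= confidence:
        return int(np.argmax(p))
    return int(stage2.predict(x[selected].reshape(1, -1))[0])

preds = np.array([classify(x) for x in X_te])
print("accuracy:", (preds == y_te).mean())
```

The gating design means most inputs are decided cheaply in one pass, while only ambiguous cases pay for the second, more specialized classification.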