KEYWORDS: Biometrics, Computer security, Security technologies, Lens design, Symmetric-key encryption, Control systems, Fluctuations and noise, Surveillance, Current controlled current source, Detection and tracking algorithms
This paper presents a novel approach to remotely authenticating a user by applying the Vaulted Fingerprint Verification (VFV) protocol. It proposes an adaptation of the Vaulted Verification (VV) concept with fingerprint minutia triangle representation. Over the past decade, triangle features have been used in multiple fingerprint algorithms. Triangles are constructed from three fingerprint minutiae and result in a feature vector that is translation and rotation invariant. In VFV, the user’s minutia triangles are arranged into blocks; each block of triangles is paired with a chaff block. In turn, each real/chaff block is encrypted with a key that is only known to the users. These encrypted block pairs can be used to generate a “challenge” by swapping blocks according to a random bitstring and requiring the remote user to reproduce that exact string. For identity verification, the user creates a new triangle feature vector from his or her fingerprint. This feature vector is matched against each block, which allows the user to identify the “real” block in each pair and recover the bitstring. In this process, individual triangle matching rates are improved by approximate matching on the feature vectors, grouping several feature vectors together, and correcting errors on the final bitstring. This paper presents data on an optimal threshold for approximate matching, the accuracy of triangle matching, the distinguishability between a user’s triangle and a chaff triangle, and the accuracy of the VFV system.
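The challenge/response flow described above can be sketched in a few lines. Everything here is illustrative: the toy triangle features, the matching threshold, and the omission of the per-block encryption step (the sketch assumes the blocks have already been decrypted with the user's key) are simplifying assumptions, not the paper's actual VFV implementation.

```python
import secrets

def make_challenge(pairs, bits):
    """Swap each (real, chaff) block pair according to a random bit:
    bit 0 keeps the order, bit 1 swaps it (toy stand-in for the VFV challenge)."""
    return [(chaff, real) if b else (real, chaff)
            for (real, chaff), b in zip(pairs, bits)]

def respond(challenge, fresh_features, threshold=2):
    """Recover the bitstring: the block whose triangles approximately match
    the user's freshly extracted features is taken to be the 'real' one."""
    def score(block):
        return sum(1 for t in block
                   if any(sum(abs(a - b) for a, b in zip(t, f)) <= threshold
                          for f in fresh_features))
    return [0 if score(first) >= score(second) else 1
            for first, second in challenge]

# toy data: triangle feature vectors as integer tuples (e.g. side lengths)
real_blocks = [[(10, 12, 15), (7, 9, 11)], [(20, 21, 29), (5, 6, 8)]]
chaff_blocks = [[(40, 41, 42), (33, 35, 37)], [(50, 52, 55), (60, 61, 62)]]
pairs = list(zip(real_blocks, chaff_blocks))

bits = [secrets.randbits(1) for _ in pairs]
challenge = make_challenge(pairs, bits)
# a fresh reading is noisy: each feature component is off by up to 1
fresh = [(11, 12, 14), (21, 21, 28)]
assert respond(challenge, fresh) == bits
```

In the real protocol the blocks remain encrypted, approximate matching operates on full minutia-triangle feature vectors, and error correction is applied to the recovered bitstring.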
As the use of biometrics becomes more widespread, the privacy concerns that stem from it are becoming more apparent. As the usage of mobile devices grows, so does the desire to implement biometric identification on such devices. Most mobile devices in use are mobile phones. While work is being done to implement other types of biometrics on mobile phones, such as photo-based biometrics, voice is a more natural choice. The idea of voice as a biometric identifier has been around for a long time. One of the major concerns with using voice as an identifier is its instability. We have developed a protocol that addresses those instabilities and preserves privacy. This paper describes a novel protocol that allows a user to authenticate using voice on a mobile/remote device without compromising their privacy. We first discuss the Vaulted Verification protocol, which has recently been introduced in the research literature, and then describe its limitations. We then introduce a novel adaptation and extension of the Vaulted Verification protocol to voice, dubbed Vaulted Voice Verification (V3). Following that, we show a performance evaluation and then conclude with a discussion of security and future work.
Describable visual attributes are a powerful way to label aspects of an image, and taken together, build a detailed representation of a scene's appearance. Attributes enable highly accurate approaches to a variety of tasks, including object recognition, face recognition and image retrieval. An important consideration not previously addressed in the literature is the reliability of attribute classifiers as the quality of an image degrades. In this paper, we introduce a general framework for conducting reliability studies that assesses attribute classifier accuracy as a function of image degradation. This framework allows us to bound, in a probabilistic manner, the input imagery that is deemed acceptable for consideration by the attribute system without requiring ground truth attribute labels. We introduce a novel differential probabilistic model for accuracy assessment that leverages a strong normalization procedure based on the statistical extreme value theory. To demonstrate the utility of our framework, we present an extensive case study using 64 unique facial attributes, computed on data derived from the Labeled Faces in the Wild (LFW) data set. We also show that such reliability studies can result in significant compression benefits for mobile applications.
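The tail-normalization idea can be illustrated with a small sketch. For simplicity, the code models the tail of the non-match score distribution with an exponential, a special case of the Weibull family commonly used in extreme-value fits; the synthetic score distribution, the 2% tail threshold, and the name `w_score` are all assumptions for illustration, not the paper's exact model.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
nonmatch = rng.normal(0.0, 1.0, size=5000)      # toy non-match attribute scores

# keep the top 2% of non-match scores and model the exceedances over the
# threshold u; an exponential tail (Weibull with shape 1) is used purely
# for illustration
u = float(np.quantile(nonmatch, 0.98))
exceedances = nonmatch[nonmatch > u] - u
scale = float(exceedances.mean())               # maximum-likelihood scale

def w_score(raw):
    """Normalized score in [0, 1]: the probability, under the fitted tail
    model, that a non-match score stays below `raw`."""
    if raw <= u:
        return 0.0
    return 1.0 - math.exp(-(raw - u) / scale)

assert w_score(u - 1.0) == 0.0                       # inside the non-match bulk
assert w_score(float(nonmatch.max()) + 1.0) > 0.95   # far beyond the tail
```

The appeal of this normalization is that scores become calibrated probabilities, so a single acceptance threshold can be applied across classifiers without ground-truth attribute labels.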
The issues of applying facial recognition at significant distances are non-trivial and often subtle. This paper summarizes 7 years of effort on face at a distance (FAAD), which for us is far more than a fad. Our effort started under the DARPA Human Identification at a Distance (HID) program. Of all the programs under HID, only a few of the efforts demonstrated face recognition at greater than 25 ft, and only one, led by Dr. Boult, studied face recognition at distances greater than 50 meters. Two issues were explicitly studied. The first was atmospherics/weather, which can have a measurable impact at these distances. The second was sensor issues, including resolution, field of view, and dynamic range. This paper starts with a discussion of results on sensor-related issues, including resolution, FOV, dynamic range, and lighting normalization. It then discusses the "Photohead" technique developed to analyze the impact of weather/imaging and atmospherics at medium distances. The paper presents experimental results showing the limitations of existing systems at significant distance and under non-ideal weather conditions, and presents some reasons for the weak performance. It ends with a discussion of our FASST™ (failure prediction from similarity surface theory) and RandomEyes™ approaches, combined into the FIINDER™ system, and how they improved FAAD.
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron-multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full-color mode under sunlit and moonlit conditions, and in monochrome under quarter-moonlight to overcast-starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and to reject clutter-induced non-targets under a broad range of illumination conditions and viewing angles. The object detectors are trained from actual image data; detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection is also achieved. For face and vehicle targets, the detection results are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
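The sub-electron figure follows directly from referring the output amplifier's read noise back through the on-chip gain. A minimal sketch (the 50 e⁻ read-noise value is an assumed example, and the multiplication register's excess-noise factor is ignored here):

```python
def effective_read_noise(read_noise_e, emccd_gain):
    """Input-referred read noise of an EMCCD: the output amplifier's noise
    in electrons divided by the on-chip electron-multiplying gain (the
    excess-noise factor of the gain register is ignored in this sketch)."""
    return read_noise_e / emccd_gain

# e.g. a 50 e- read-noise amplifier behind a 1000:1 gain
assert effective_read_noise(50.0, 1000.0) < 1.0   # sub-electron, as in the text
```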
The goal of this research is to quickly create a compact, accurate representation of the path of a robot. Others have developed techniques for building 3D maps, while this work concentrates on building a descriptive image of where the robot has been. This technique, which we call "tubular mosaics," provides an orthographic-like view of the hallways the robot has traversed. An omnidirectional camera produces images that can contain a hemispherical field of view. Mosaics are built from wedges extracted from each frame. We demonstrate the technique using views perpendicular to the camera motion, resulting in the upper half of the tubular mosaic (i.e., two walls of a corridor, including the ceiling). Each wedge is unwarped into a rectangular strip. Strips from consecutive frames of video are matched and incorporated into the mosaics. Others have created orthographic strip mosaics of areas using a camera looking down at an angle from an airplane; our work is a generalization of this to multiple directions. Three techniques of varying complexity and accuracy are presented. Results will be useful for robot navigation. Other applications include texture mapping and the creation of a compact representation of the video stream captured by the robot.
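The wedge-to-strip unwarping step can be sketched as a polar resampling. This is a toy nearest-neighbor version with assumed parameters (center, radii, angles), not the paper's three techniques:

```python
import numpy as np

def unwarp_wedge(img, center, r_min, r_max, ang0, ang1, out_h, out_w):
    """Resample a wedge of an omnidirectional frame (between two radii and two
    angles about `center`) into a rectangular strip using nearest-neighbor
    lookup; rows sweep radius, columns sweep viewing angle."""
    cy, cx = center
    rr = np.linspace(r_min, r_max, out_h)[:, None]
    aa = np.linspace(ang0, ang1, out_w)[None, :]
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

# synthetic frame whose intensity encodes the viewing angle about the center
h = w = 101
yy, xx = np.mgrid[0:h, 0:w]
img = np.arctan2(yy - 50, xx - 50)

strip = unwarp_wedge(img, (50, 50), 20, 45, 0.2, 1.2, 16, 64)
assert strip.shape == (16, 64)
# each strip column samples a single viewing angle, so columns are
# constant up to pixel-rounding error
assert np.allclose(strip, strip[0:1, :], atol=0.1)
```

A full pipeline would then match consecutive strips and blend them into the growing mosaic.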
Ideally, an algorithm used for either self-localization or pose estimation would be both efficient and robust. Many researchers have based their techniques on the absolute orientation research of B. K. P. Horn. As will be shown in this paper, while Horn's method performs well with additive Gaussian noise of large variance, mismatches and outliers have a more profound effect. In this paper, the authors develop a new closed-form solution to the absolute orientation problem, featuring techniques specifically designed to increase robustness during the critical rotation-determination stage. We also include a comparative analysis of the various strengths and weaknesses of both Horn's and the new techniques.
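For context, the non-robust closed-form baseline such work starts from can be sketched with the SVD formulation of absolute orientation (equivalent in the least-squares sense to Horn's quaternion solution); this is the baseline the paper improves on, not its new robust method:

```python
import numpy as np

def absolute_orientation(P, Q):
    """Closed-form least-squares rotation and translation aligning point set P
    to Q: the SVD solution to the problem Horn solved with quaternions."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = absolute_orientation(P, Q)
assert np.allclose(R, R_true, atol=1e-8)
assert np.allclose(t, [1.0, -2.0, 0.5], atol=1e-8)
```

A single gross mismatch in the correspondences can pull this estimate arbitrarily far off, which is exactly the sensitivity the paper's robust rotation-determination stage targets.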
Image processing often involves operations using 'neighbor' pixels. This paper combines the usual definition of 4- or 8-connected neighbors with image information to produce local neighbor definitions that are signal dependent. These generalized neighbors, G-neighbors, can be used for a variety of image processing tasks. The paper examines their use for detail-preserving smoothing and morphology. The simple, local nature of G-neighbor definitions makes them ideal for implementation on low-level pixel-parallel hardware. A near real-time parallel implementation of the G-neighbor computation, including G-neighbor-based detail-preserving smoothing and G-neighbor morphology, is discussed. The paper also provides a qualitative comparison of G-neighbor-based algorithms to previous work.
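A minimal sketch of the idea, assuming one plausible G-neighbor rule (4-connected adjacency gated by an intensity-difference threshold `tau`; the rule and threshold are our choices for illustration, not necessarily the paper's exact definition):

```python
import numpy as np

def g_neighbor_smooth(img, tau):
    """One pass of detail-preserving smoothing: each pixel is averaged only
    with those 4-connected neighbors whose intensity differs by at most
    `tau` (connectivity gated by the signal, the G-neighbor idea)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            vals = [float(img[y, x])]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and abs(float(img[ny, nx]) - float(img[y, x])) <= tau):
                    vals.append(float(img[ny, nx]))
            out[y, x] = sum(vals) / len(vals)
    return out

# a step edge plus mild noise: smoothing flattens noise, the edge survives
img = np.array([[10, 11, 10, 50, 51],
                [11, 10, 11, 51, 50],
                [10, 11, 10, 50, 51]], dtype=float)
sm = g_neighbor_smooth(img, tau=5)
assert abs(sm[1, 1] - 10.8) < 0.5          # noise averaged within the flat region
assert sm[1, 2] < 20 and sm[1, 3] > 40     # the 10/50 edge is not blurred across
```

Because each pixel's G-neighbor test depends only on its immediate neighborhood, the operation maps directly onto pixel-parallel hardware, as the abstract notes.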
This paper presents a method for the segmentation of multiple motions in a scene using the singular value decomposition of a feature track matrix. It is shown that motions can be separated using the right singular vectors associated with the nonzero singular values. This is based on the relationship between the right singular vectors and the principal components of the covariance matrix of the tracks. Furthermore, under general assumptions, the number of numerically nonzero singular values can be used to determine the number of motions. This can be used to derive a relationship between a good segmentation, the number of nonzero singular values in the input and the sum of the number of nonzero singular values in the segments. The approach is demonstrated on real and synthetic examples and a study of the robustness of the method is given. The paper ends with a critical analysis of the approach.
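The rank-counting and separation claims can be illustrated on synthetic tracks. The sketch assumes each motion's tracks span an independent 3-dimensional column subspace (the exact dimension depends on the camera and motion model) and uses the shape-interaction matrix Q = VᵣVᵣᵀ to show that the right singular vectors separate the motions:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy feature-track matrix: 2F x P, two independent motions, each assumed
# to span a 3-dimensional column subspace for illustration
F, nA, nB = 10, 6, 5
W_A = rng.normal(size=(2 * F, 3)) @ rng.normal(size=(3, nA))
W_B = rng.normal(size=(2 * F, 3)) @ rng.normal(size=(3, nB))
W = np.hstack([W_A, W_B])

s = np.linalg.svd(W, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
assert rank == 6          # numerically nonzero singular values add: 3 + 3

# the right singular vectors separate the motions: with independent column
# subspaces, Q = V_r V_r^T is block diagonal across the two track groups
_, _, Vt = np.linalg.svd(W)
Vr = Vt[:rank].T
Q = Vr @ Vr.T
assert np.abs(Q[:nA, nA:]).max() < 1e-8
```

The rank count here mirrors the abstract's relationship: for a good segmentation, the number of nonzero singular values of the input equals the sum over the segments.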
KEYWORDS: Sensors, Data modeling, Mathematical modeling, Visual process modeling, 3D modeling, Distance measurement, Sensor fusion, Cameras, Error analysis, Systems modeling
The application of numeric methods to the minimization of error has become an emerging paradigm for object recovery. Typically, a parametric representation describing the object is postulated. Its parameters are then adjusted to minimize some measurement of the distance between the representation and the data points (the error-of-fit model). Characteristics of the sensor used to recover the points may be implicit in this formulation or may not be included at all. While sensors may be precise for a specific field of view, no sensor is everywhere exact. A laser range finder, for example, yields very sharp x- and y-coordinate values; however, its z-coordinate is less trustworthy. It becomes important to capture the strengths and weaknesses of a sensor and incorporate them into the recovery process. We seek to make explicit the contribution of a particular sensor by introducing a sensor model. This partitioning facilitates the development of an appropriate description of a sensor's characteristics. It also helps clarify interactions among different aspects of the recovery process (i.e., the error-of-fit model, the sensor model, and the parametric object representation). The sensor model is reflected in the certainty of sensed quantities (position, color, intensity) associated with a data point. We explore whether the introduction of an explicit sensor model yields an improvement in the recovery process. The PROVER (Parametric Representation Of Volumes: Experimental Recovery) system, a testbed used in the development of sensor models, is described.
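One way to see the effect of an explicit sensor model is to weight the error-of-fit by each measurement's reported uncertainty. A toy sketch (the plane model, noise levels, and per-point σ_z are all assumed for illustration, not PROVER's actual formulation):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy "range finder" samples of the plane z = 0.5x - 1.2y + 3: x and y are
# sharp, while the z uncertainty varies per point and is assumed to be
# reported by the sensor model
n = 200
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
sigma_z = np.where(np.arange(n) < 40, 5.0, 0.02)   # a batch of very noisy readings
z = 0.5 * x - 1.2 * y + 3.0 + rng.normal(0.0, sigma_z)

A = np.column_stack([x, y, np.ones(n)])

# error-of-fit model alone: ordinary least squares ignores the sensor model
ols, *_ = np.linalg.lstsq(A, z, rcond=None)

# sensor model included: each residual is weighted by its inverse z-uncertainty
w = 1.0 / sigma_z
wls, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)

true = np.array([0.5, -1.2, 3.0])
assert np.abs(wls - true).max() < np.abs(ols - true).max()
```

The weighted fit recovers the plane parameters far more sharply because the sensor model tells the error-of-fit which residuals to trust.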
This paper describes a new segmentation technique for very sparse surfaces which is based on minimizing the energy of the surfaces in the scene. While it could be used in almost any system as part of surface reconstruction/model recovery, the algorithm is designed to be usable when the depth information is scattered and very sparse, as is generally the case with depth generated by stereo algorithms. We describe a sequential implementation that constructs seed surfaces, automatically sets thresholds, adds points to the seeds, merges surfaces, and corrects for incorrectly added points. We discuss a parallel implementation that runs on the Connection Machine™. We show results from a sequential algorithm that processes synthetic or range finder data.
The idea of segmentation by energy minimization is not new. However, prior techniques have relied on discrete regularization or Markov random fields to model the surfaces, building smooth surfaces and detecting depth edges. Both of the aforementioned techniques are ineffective at energy minimization for very sparse data. In addition, our method does not require edge detection and is thus also applicable when edge information is unreliable or unavailable.
The technique presented herein models the surfaces with reproducing kernel-based splines which can be shown to solve a regularized surface reconstruction problem. From the functional form of these splines we derive computable bounds on the energy of a surface over a given finite region. The computation of the spline, and the corresponding surface representation are quite efficient for very sparse data. An interesting property of the algorithm is that it makes no attempt to determine segmentation boundaries; the algorithm can be viewed as a classification scheme which partitions the data into collections of points which are “from” the same surface. Among the significant advantages of the method is the capacity to process overlapping transparent surfaces, as well as surfaces with large occluded areas.
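A small sketch of the kernel-spline machinery, using the classic 2-D thin-plate spline (one reproducing-kernel spline that solves a regularized surface-reconstruction problem); the specific kernel and the energy expression wᵀKw are standard choices assumed for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def tps_fit(pts, z, reg=0.0):
    """Fit a 2-D thin-plate spline to sparse samples (pts, z) and return the
    interpolant plus a quantity proportional to its bending energy."""
    def phi(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r * r * np.log(r), 0.0)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    K = phi(d) + reg * np.eye(len(pts))
    P = np.column_stack([np.ones(len(pts)), pts])       # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    w, a = coef[: len(pts)], coef[len(pts):]
    energy = float(w @ K @ w)            # proportional to the bending energy
    def f(q):
        r = np.linalg.norm(q[None, :] - pts, axis=1)
        return float(phi(r) @ w + a[0] + a[1] * q[0] + a[2] * q[1])
    return f, energy

# very sparse samples from a plane: the spline interpolates them exactly,
# and a planar surface has (numerically) zero bending energy
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.4, 0.7]])
z = 2.0 * pts[:, 0] - pts[:, 1] + 0.5
f, energy = tps_fit(pts, z)
assert abs(f(np.array([0.3, 0.3])) - (2.0 * 0.3 - 0.3 + 0.5)) < 1e-6
assert abs(energy) < 1e-9
```

A computable energy like this is what lets the segmentation bound the energy of a candidate surface over a finite region and decide whether scattered points belong "from" the same surface.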