The growing usage of smart mobile devices has made authentication with biometric data more convenient. At the same time, videos and photos of users are becoming more widely available online, making it easier for attackers to spoof authentication systems that rely on, for instance, face and eye-region data. One major problem with current Presentation Attack Detection (PAD) systems is their lack of generalization to data captured by different sensors or in different environments. In this paper, we propose the use of unsupervised domain adaptation to address this PAD problem, specifically for iris PAD. Our model is composed of symmetric classifiers and two per-class domain discriminators. The interaction between class probabilities and domain classification is exploited to jointly and adversarially train a mobile-oriented feature extraction network capable of generating domain-invariant features. The approach is evaluated on three benchmark iris PAD datasets. Results show up to a 40% improvement in cross-dataset Average Classification Error Rate (ACER), demonstrating the effectiveness of the approach in increasing the robustness and generalization of biometric PAD systems.
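The core idea of coupling class probabilities with per-class domain discriminators can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the feature extractor, the symmetric classifier, and the two discriminators are stood in for by hypothetical linear maps (`W_cls`, `W_dom`), and the adversarial part (reversing the domain-loss gradient into the feature extractor) is only noted in a comment. The sketch shows how each per-class domain loss can be gated by that class's predicted probability, so domain alignment is applied class-conditionally.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

D = 8                                  # feature dimension (assumption)
W_cls = rng.normal(size=(D, 2))        # stand-in PAD classifier: bona fide vs. attack
W_dom = rng.normal(size=(2, D))        # stand-ins for the two per-class domain discriminators

def class_gated_domain_loss(features, domain_label):
    """Weight each discriminator's binary cross-entropy by the corresponding
    class probability, coupling PAD classification with domain classification.
    In adversarial training, the gradient of this loss would be *reversed*
    before flowing back into the feature extractor (gradient reversal layer).
    """
    p_class = softmax(features @ W_cls)        # (N, 2) class probabilities
    p_dom = sigmoid(features @ W_dom.T)        # (N, 2) per-class domain predictions
    bce = -(domain_label * np.log(p_dom)
            + (1.0 - domain_label) * np.log(1.0 - p_dom))
    # interaction: class probability gates each per-class domain loss term
    return p_class, (p_class * bce).sum(axis=1).mean()

feats = rng.normal(size=(4, D))                # a batch of 4 extracted feature vectors
p_class, loss = class_gated_domain_loss(feats, domain_label=1.0)
```

Because the gate weights sum to one per sample, each sample contributes a convex combination of the two discriminator losses, dominated by the discriminator matching its predicted PAD class.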