This is an accepted version of a paper published in Proceedings of the 1st IEEE International Workshop on Information Forensics and Security (WIFS 2009). If you wish to cite this paper, please use the following reference: T. Murakami, K. Takahashi, "Accuracy Improvement with High Convenience in Biometric Identification Using Multihypothesis Sequential Probability Ratio Test," Proceedings of the 1st IEEE International Workshop on Information Forensics and Security (WIFS 2009), pp.66-70, 2009. http://dx.doi.org/10.1109/WIFS.2009.5386482 © 2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

ACCURACY IMPROVEMENT WITH HIGH CONVENIENCE IN BIOMETRIC IDENTIFICATION USING MULTIHYPOTHESIS SEQUENTIAL PROBABILITY RATIO TEST

Takao Murakami and Kenta Takahashi
Hitachi, Ltd., Systems Development Laboratory, Yokohama 244-0817, Japan

ABSTRACT

Biometric identification has lately attracted attention because of its high convenience; it does not require a user to enter a user ID. The identification accuracy, however, degrades as the number of enrollees increases. Although many multimodal biometric techniques have been proposed to improve the identification accuracy, they require the user to input multiple biometric samples and make the application less convenient. In this paper, we propose a new multimodal biometric technique that significantly reduces the number of inputs by adopting a multihypothesis sequential test that minimizes the average number of observations. The results of an experimental evaluation using the NIST BSSR1 (Biometric Score Set - Release 1) database showed its effectiveness.

Index Terms: biometric identification, multimodal biometrics, sequential decision fusion, MSPRT

1. INTRODUCTION

Biometric identification, which has long been used for criminal investigations, is now also used for various kinds of applications: computer login, physical access control, time and attendance management, etc. In such applications, a user only presents his/her biometric sample, and it is compared to all the biometric templates enrolled in the database (i.e., one-to-many matching). Then, the system identifies the user and provides service appropriate to him/her (e.g., login to his/her account). Since the user is not required to enter an ID number or to present an ID card, biometric identification can provide a highly convenient way to authenticate users.

However, biometric identification has a problem with identification accuracy; it degrades as the number of enrollees increases. This problem has prevented biometric identification from being applied to large-scale applications. To improve the accuracy, some studies applied a multimodal biometric technique, which uses multiple biometric sources of information (such as face, fingerprint, and so on), to biometric identification [1, 2, 3]. These techniques, however, make the system inconvenient because the user has to present multiple biometric samples. Jain et al. [4] anticipate that multimodal biometric techniques will be increasingly applied to large-scale or high-security applications, and they also point out that such a solution may cause inconvenience.

In this paper, we propose a new multimodal biometric identification technique that can minimize the number of biometric inputs. This technique is based on the MSPRT (Multihypothesis Sequential Probability Ratio Test) [5], which was proposed as a multihypothesis sequential test that minimizes the average number of observations. We evaluate the performance of the proposed method using the NIST BSSR1 (Biometric Score Set - Release 1) database [6], and show its effectiveness.

2. BIOMETRIC IDENTIFICATION

2.1. Classification of biometric authentication

Biometric authentication can be divided into two modes: verification and identification [7]. In the verification mode, a person claims an identity and presents a biometric sample. Then, the system compares the presented sample to the biometric template in the database which corresponds to the claimed identity (i.e., one-to-one matching). After one-to-one matching, the system compares the score to the threshold and decides whether the person is genuine or not.
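As a minimal illustration of this one-to-one decision rule (not taken from the paper; the function name and threshold value are assumptions for the sketch), a verification decision reduces to a single score comparison:

```python
def verify(score: float, threshold: float) -> bool:
    """One-to-one verification: accept the claimed identity iff the
    similarity score between the presented sample and the claimed
    identity's template reaches the threshold."""
    return score >= threshold

# Example: a similarity score of 0.82 against a threshold of 0.75 is accepted.
print(verify(0.82, 0.75))  # True
```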
In the identification mode, a person only presents a biometric sample, and the system matches the presented sample against all the biometric templates in the database. Then, the system compares each score to the threshold and outputs a candidate list of people whose templates are similar to the input biometric sample. Biometric identification can be further classified into two categories: negative identification and positive identification [4, 8]. Negative identification systems determine whether a person is not in the database as he/she (explicitly or implicitly) claims. Criminal investigation systems, which output a candidate list of criminals whose fingerprints resemble the input fingerprint, are examples of negative identification systems. On the other hand, positive identification systems determine whether a person is in the database as he/she claims. Commercial applications using biometric identification, such as computer login, physical access control, and time and attendance management, are examples of positive identification systems. In this paper, we focus on positive identification. For negative identification it is desired that the number of candidates be small enough to be examined by human operators, while positive identification is required to output at most one candidate (or a number of candidates small enough that some other matching mechanism can quickly narrow it down to one) [8].

2.2. New performance measures for biometric identification systems

In the verification mode, two kinds of error rates are defined: FRR (False Reject Rate) and FAR (False Accept Rate). FRR is the probability that the system rejects a genuine individual as an impostor. FAR is the probability that the system accepts an impostor as a genuine individual. High FRR causes inconvenience, and high FAR causes security problems. In the identification mode, FNIR (False Negative Identification Rate) and FPIR (False Positive Identification Rate) are used as performance measures [7]. FNIR is the probability that the enrollee who inputs a biometric sample is not in the candidate list. FPIR is the probability that the system outputs one or more candidates when a non-enrollee (a person not enrolled) inputs a biometric sample. In criminal investigations, for example, it is essential to decrease both FNIR and FPIR.

Positive identification, however, outputs at most one candidate as mentioned before, and is used for applications which provide service appropriate to the identified user. In such cases, the error when an enrollee inputs a biometric sample can be divided into two types: accepting the enrollee as another enrollee, and rejecting the enrollee as a non-enrollee. Although both errors cause inconvenience, since the enrollee cannot use the appropriate service, the former error further causes security problems because the system incorrectly provides service appropriate to others. Taking these matters into account, we newly define three kinds of error rates as performance measures for positive identification systems.

EFRR (Enrollee False Reject Rate): The probability that the system incorrectly rejects an enrollee as a non-enrollee.

EFAR (Enrollee False Accept Rate): The probability that the system incorrectly accepts an enrollee as another enrollee.

NFAR (Non-Enrollee False Accept Rate): The probability that the system incorrectly accepts a non-enrollee as an enrollee.

Taking computer login systems as an example, EFRR is the probability that an enrollee fails to log in, EFAR is the probability that an enrollee logs in to another account, and NFAR is the probability that a non-enrollee (i.e., an attacker) logs in to someone's account. High EFRR causes inconvenience, high EFAR causes both inconvenience and security problems, and high NFAR causes security problems. Figure 1 shows the performance measures for verification systems (FRR/FAR) and for positive identification systems (EFRR/EFAR/NFAR).

Fig. 1. New performance measures for positive identification systems (EFRR/EFAR/NFAR).

2.3. A problem with accuracy in biometric identification

The accuracy of positive identification degrades as the number of enrollees N increases, because the number of people who can be falsely identified increases. Let us consider the following decision rule: if one or more scores (defined as similarities) exceed the threshold, the system identifies the user as the enrollee whose score is the highest; otherwise the system rejects the user. The probability that a score between different individuals exceeds the threshold is FAR. Thus, assuming that all the scores are independent, the following approximation can be obtained if $N \cdot \mathrm{FAR} \ll 1$:

$$\mathrm{NFAR} = 1 - (1 - \mathrm{FAR})^N \approx N \cdot \mathrm{FAR}. \qquad (1)$$

NFAR increases almost in proportion to the number of enrollees N.
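As a quick numerical check of approximation (1) (a sketch, not from the paper; the FAR value of 0.1% is an assumed example), the exact and approximate NFAR stay close as long as N·FAR is small:

```python
# Sketch: exact NFAR = 1 - (1 - FAR)^N versus the approximation N * FAR.
far = 0.001  # assumed per-comparison False Accept Rate

for n in (10, 100, 1000):
    exact = 1.0 - (1.0 - far) ** n
    approx = n * far
    print(f"N={n:5d}  exact NFAR={exact:.4f}  N*FAR={approx:.4f}")

# The approximation is accurate while N * FAR << 1 (N = 10, 100) and
# overestimates once N * FAR approaches 1 (N = 1000); either way, NFAR
# grows roughly linearly with the size of the enrollment database.
```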
Therefore, it is quite challenging to apply positive identification to large-scale applications.

2.4. Multimodal biometric identification

To solve the problem with accuracy, some papers studied multimodal biometric techniques for identification. BioID [1] extracts facial features, lip movement, and voice from a user, and uses all of them for identification. Hong and Jain [2] integrate faces and fingerprints for identification. Nandakumar et al. [3] proposed to compare the posterior probability that a user who inputs biometric samples is each enrollee to the threshold in multimodal identification. All of the techniques above, however, make a decision only after the user inputs all the biometric samples required, causing inconvenience.

We are here concerned with a multimodal technique that makes a decision each time a user inputs a biometric sample, which is called sequential decision fusion [9]. Takahashi et al. [9] proposed a sequential decision fusion in multimodal biometric verification based on the SPRT (Sequential Probability Ratio Test). The SPRT is a statistical hypothesis test which minimizes the average number of observations if the samples are i.i.d. (independent and identically distributed) and the error probabilities are sufficiently small [10]. Applying the SPRT to biometric verification, a sequential decision fusion that minimizes the average number of inputs can be realized (a minimal sketch of the classical SPRT is given below).
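The following is a minimal sketch of Wald's binary SPRT as it could be applied to sequential verification (this is an illustration, not the implementation from [9]; the density functions, error targets, and variable names are assumptions):

```python
import math

def sprt_verify(scores, f, g, alpha=0.01, beta=0.01, max_inputs=3):
    """Wald's SPRT for verification: accumulate the log-likelihood ratio
    log f(s)/g(s) over successive inputs (f = genuine score density,
    g = impostor score density) and stop as soon as it crosses one of
    the two Wald thresholds. Returns ('genuine'|'impostor'|'undecided', t)."""
    upper = math.log((1.0 - beta) / alpha)   # accept "genuine"
    lower = math.log(beta / (1.0 - alpha))   # accept "impostor"
    llr = 0.0
    for t, s in enumerate(scores[:max_inputs], start=1):
        llr += math.log(f(s)) - math.log(g(s))
        if llr >= upper:
            return "genuine", t
        if llr <= lower:
            return "impostor", t
    return "undecided", max_inputs
```

The same stopping idea underlies the identification scheme described next, except that more than two hypotheses must be compared, which is exactly what the MSPRT provides.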

The SPRT is, however, a binary test which accepts either a null hypothesis or an alternative hypothesis (i.e., binary classification). In contrast, positive identification accepts a user as one of the enrollees in the database or rejects him/her as a non-enrollee (i.e., multi-class classification). Noda and Kawaguchi [11] extended the SPRT to multi-class classification, and proposed to reduce the biometric inputs (the utterance length) in speaker identification using the extended SPRT. However, this technique does not minimize the average number of inputs (as shown in Section 4), since the SPRT is optimal only in the case of binary classification. In this paper, we propose a sequential decision fusion for identification which minimizes the average number of inputs by applying the MSPRT, which is optimal in the case of multi-class classification.

3. SEQUENTIAL DECISION FUSION IN BIOMETRIC IDENTIFICATION USING MSPRT

3.1. MSPRT

Dragalin et al. [5] found sequential tests of multiple hypotheses $H_0, H_1, \ldots, H_N$ which minimize the average number of observations, calling such tests MSPRTs. The probability $\alpha_i$ that the hypothesis $H_i$ ($0 \le i \le N$) is incorrectly accepted is given by

$$\alpha_i = \sum_{j \neq i} P(H_j)\,\alpha_{ji}, \qquad (2)$$

where $P(H_j)$ is the prior probability that $H_j$ is true, and $\alpha_{ji}$ is the probability of accepting $H_i$ when $H_j$ is true. Consider all the tests in which each $\alpha_i$ does not exceed a predetermined value $\bar{\alpha}_i$ ($0 \le i \le N$). Dragalin et al. proved that the following test (MSPRT) minimizes the average number of observations among all such tests [5].

MSPRT: Each time data $s_t$ is observed, the posterior probability that the hypothesis $H_i$ ($0 \le i \le N$) is true is calculated as follows:

$$P(H_i \mid S_t) = \frac{P(H_i)\,Z_{ti}}{\sum_{n=0}^{N} P(H_n)\,Z_{tn}}, \qquad (3)$$

where $S_t$ is the set of observed data $S_t = \{s_\tau \mid 1 \le \tau \le t\}$, and $Z_{ti}$ is a likelihood ratio given by

$$Z_{ti} = \frac{P(S_t \mid H_i)}{P(S_t \mid H_0)}. \qquad (4)$$

If one or more posterior probabilities exceed a threshold, then the hypothesis whose posterior probability is the highest is accepted. Otherwise, observation is continued.

Both the SPRT and the technique proposed in [11] compare the likelihood ratios to the threshold, while the MSPRT normalizes the likelihood ratios to posterior probabilities and compares them to the threshold.

3.2. Sequential decision fusion using MSPRT

We propose a sequential decision fusion in biometric identification based on the MSPRT using scores as observed data. Let $s_{ti}$ be the score against the $i$-th enrollee when the user inputs the $t$-th biometric sample, and $\mathbf{s}_t$ the score vector whose elements are the scores obtained from the $t$-th input, which is given by

$$\mathbf{s}_t = (s_{t1}, s_{t2}, \ldots, s_{tN}). \qquad (5)$$

We use the score vector $\mathbf{s}_t$ as observed data. Then, the set of observed data up to the $t$-th input can be written as

$$S_t = \{\mathbf{s}_\tau \mid 1 \le \tau \le t\}. \qquad (6)$$

Using the following hypotheses:

$H_i$: The user is the $i$-th enrollee ($1 \le i \le N$).
$H_0$: The user is a non-enrollee.

and using this $S_t$ as the observed data in equation (3), the MSPRT can be applied to biometric identification. The calculation of the likelihood ratio is described in the next subsection.

The proposed method identifies a user as follows. Each time the user inputs a biometric sample, the system calculates the likelihood ratio $Z_{ti}$ using the score vectors $\mathbf{s}_\tau$ ($1 \le \tau \le t$).
Then, the system normalizes the likelihood ratio $Z_{ti}$ to the posterior probability $P(H_i \mid S_t)$ that the user is the $i$-th enrollee (or a non-enrollee if $i = 0$). If one or more posterior probabilities exceed a threshold $A$, then the system accepts the hypothesis $H_i$ whose posterior probability is the highest, identifying the user as the $i$-th enrollee (or a non-enrollee if $i = 0$). Otherwise, the system requires the user to input the next biometric sample. When the number of inputs reaches the upper limit $T$, the system compares the posterior probabilities to another threshold $\lambda$, identifying the user as a non-enrollee if no posterior probability exceeds $\lambda$. By changing $\lambda$, the result of the $T$-th decision (accept as someone or reject) can be controlled, while the number of inputs is not changed. Thus, after setting the average number of inputs to the desired value by changing $A$, the trade-off between EFRR and EFAR/NFAR can be controlled by changing $\lambda$. Figure 2 shows the algorithm of the proposed method (T = 2). The prior probability $P(H_i)$ that the hypothesis $H_i$ is true is set in advance.

Fig. 2. The sequential decision fusion algorithm based on MSPRT using scores as observed data (T = 2).
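The following is a minimal sketch of this sequential decision loop (an illustration, not the authors' code; the Gaussian score densities, the threshold values, and all variable names are assumptions, and the decomposition of equation (7) from Section 3.3 is used for $Z_{ti}$):

```python
import math

def gaussian_pdf(x, mean, std):
    """Simple Gaussian density, used here as an assumed model for the
    genuine (f) and impostor (g) score distributions."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def msprt_identify(score_vectors, f, g, A=0.99, lam=0.9):
    """Sequential decision fusion based on the MSPRT.
    score_vectors: one score vector s_t per biometric input, each of
    length N (one score per enrollee).
    Returns (decision, number_of_inputs); decision 0 means non-enrollee,
    i >= 1 means the i-th enrollee."""
    T = len(score_vectors)
    N = len(score_vectors[0])
    prior = 1.0 / (N + 1)                   # uniform prior P(H_i)
    log_Z = [0.0] * (N + 1)                 # log Z_ti; Z_t0 = 1 by eq. (7)

    for t, s_t in enumerate(score_vectors, start=1):
        # Equation (7): Z_ti = prod over tau of f(s_tau_i)/g(s_tau_i), i != 0.
        for i in range(1, N + 1):
            log_Z[i] += math.log(f(s_t[i - 1])) - math.log(g(s_t[i - 1]))

        # Equation (3): normalize to posterior probabilities.
        total = sum(prior * math.exp(lz) for lz in log_Z)
        posterior = [prior * math.exp(lz) / total for lz in log_Z]

        best = max(range(N + 1), key=lambda i: posterior[i])
        if posterior[best] >= A:            # intermediate decisions use A
            return best, t
        if t == T:                          # final decision uses lambda
            return (best if posterior[best] >= lam else 0), t

    return 0, T

# Usage sketch with assumed score models: genuine scores ~ N(0.8, 0.1),
# impostor scores ~ N(0.3, 0.1), three enrollees, up to T = 2 inputs.
f = lambda s: gaussian_pdf(s, 0.8, 0.1)
g = lambda s: gaussian_pdf(s, 0.3, 0.1)
print(msprt_identify([[0.75, 0.35, 0.28], [0.82, 0.31, 0.40]], f, g))
```

Here A controls when an early decision is made, and hence the average number of inputs, while lam plays the role of λ at the final, T-th decision.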

3.3. Calculation of the likelihood ratio

Assume that all the scores are independent and that the scores between the same individual and between different individuals are generated from the genuine distribution $f(\cdot)$ and the impostor distribution $g(\cdot)$, respectively. This assumption allows the likelihood ratio $Z_{ti}$ in equation (4) to be decomposed as follows:

$$Z_{ti} = \frac{P(S_t \mid H_i)}{P(S_t \mid H_0)} = \frac{\prod_{\tau=1}^{t} P(s_{\tau i} \mid H_i) \prod_{n \neq i} P(s_{\tau n} \mid H_i)}{\prod_{\tau=1}^{t} P(s_{\tau i} \mid H_0) \prod_{n \neq i} P(s_{\tau n} \mid H_0)} = \begin{cases} \prod_{\tau=1}^{t} f(s_{\tau i})/g(s_{\tau i}) & (\text{if } i \neq 0) \\ 1 & (\text{if } i = 0). \end{cases} \qquad (7)$$

This can be calculated using $f(\cdot)$ and $g(\cdot)$. These distributions are trained with genuine and impostor scores, respectively (see Figure 3), both of which are obtained from pre-collected biometric data. For example, these distributions can be trained by assuming a Gaussian distribution model and using maximum likelihood estimation. Alternatively, the ratio between the two distributions, $f(\cdot)/g(\cdot)$, from which the likelihood ratio $Z_{ti}$ is also derived (see equation (7)), can be trained by assuming a logistic regression model [12]. The effectiveness of using logistic regression is discussed in [12].

Fig. 3. Training of genuine and impostor score distributions.

4. EXPERIMENTAL EVALUATION

4.1. Experimental set-up

We designed an experiment to demonstrate the effectiveness of the proposed method using the NIST BSSR1 (Biometric Score Set - Release 1) database [6]. This database contains face scores from two algorithms ("C" and "G") and fingerprint scores from the left and right index fingers, obtained from 517 individuals (4 × 517 × 517 scores in total). We used the face scores from algorithm "C" and the fingerprint scores from both fingers, excluding one person whose scores are inappropriate (3 × 516 × 516 scores in total). The scores from 116 individuals were used for training the ratio of the genuine and impostor distributions $f(\cdot)/g(\cdot)$ using logistic regression (3 × 116 genuine scores and 3 × 116 × 115 impostor scores). The scores from the other 400 individuals were used for evaluation. We regarded 200 individuals as enrollees and the other 200 individuals as non-enrollees. Then, we carried out an experiment in which each of the enrollees and the non-enrollees inputs his/her biometric samples sequentially, trying all 6 (= 3!) orderings of the inputs. We randomly chose 100 ways to divide the 516 individuals into those used for training, enrollees, and non-enrollees, and carried out the experiment above in each case. In this experiment, both the number of accesses by enrollees and the number of attacks by non-enrollees were 120,000 (= 200 × 6 × 100). The prior probability $P(H_i)$ was set to be uniform: $P(H_i) = 1/(N + 1)$ ($0 \le i \le N$), and the two thresholds $A$ and $\lambda$ were controlled so that EFRR is 2%. Then, we evaluated the relationship between EFAR/NFAR and the average number of inputs. For comparison, we also evaluated the performance of the OR rule [8] and the decision rule which sequentially compares the likelihood ratio to the threshold in the same way as [11] (hereinafter referred to as the likelihood ratio rule).
Although the OR rule was originally designed for the verification mode, it can easily be extended to the identification mode as follows: if one or more scores (normalized to FAR values) exceed the threshold, the system identifies the user as the enrollee whose score is the highest; otherwise the system requires the user to input the next biometric sample. As for the likelihood ratio rule, the system calculates the likelihood ratios using the scores (equation (7)) and compares them to the threshold instead of the posterior probabilities used in the proposed method.

4.2. Experimental results

Figure 4 shows the experimental results. "OR", "Likelihood", and "Proposed" represent the OR rule, the likelihood ratio rule, and the proposed method, respectively. When the average number of inputs was more than 1.40, no enrollees were accepted as another enrollee in the proposed method (EFAR = 0%). It was found from the results that the proposed method outperformed the OR rule and the likelihood ratio rule with regard to both EFAR/NFAR and the average number of inputs. One explanation for this is as follows. Those who have high scores against multiple enrollees, such as Wolves [13], can cause false identifications not only in the OR rule but also in the likelihood ratio rule, while they do not cause such errors in the proposed method because they have low posterior probabilities against these enrollees. For example, when high scores are obtained equally against M (2 ≤ M ≤ N) enrollees, the posterior probabilities against these enrollees are calculated to be about 1/M. Thus, the errors above almost never happen

if the threshold A is set to more than 0.5. It was also found that NFAR increased with the average number of inputs after reaching a minimum in all the methods (as for the OR rule, EFAR increased as well). This is because we controlled A and λ so that EFRR remains constant. The average number of inputs increased as A was set higher. At that point, however, λ had to be set lower to keep EFRR fixed, causing false accepts at the 3rd (T = 3) decision. We also evaluated the performance of a system which identifies users using only the first biometric sample. The result was poor: [EFAR, NFAR] = [7.3%, 86%]. The performance was improved by applying sequential decision fusion. When the thresholds were controlled so that the average number of inputs was less than 1.35, for example, EFAR/NFAR in the OR rule, the likelihood ratio rule, and the proposed method were [0.70%, 32%], [0.042%, 1.7%], and [0.0033%, 0.39%], respectively. Furthermore, we evaluated the performance of a system which compares the posterior probabilities to the threshold after the user inputs all the samples, as described in [3]. The result was [EFAR, NFAR] = [0%, 0.42%]. The proposed method reduced the average number of inputs from 3 to 1.4 without increasing the error rates above. That is, more than half of the samples were not required in the proposed method.

Fig. 4. Relationship between EFAR/NFAR and the average number of inputs (EFRR = 2%).

5. CONCLUSION

In this paper, we proposed a sequential decision fusion based on the MSPRT, which minimizes the average number of observations. The experimental results showed that the proposed method is effective with regard to both identification accuracy and the average number of inputs. Future work will focus on controlling the trade-off between EFAR and NFAR. Biometric identification, which does not require users to present a card or to enter an ID/password, can provide the most convenient way of authentication among all authentication methods. To apply this authentication method to large-scale applications, it has to be considered how to improve the accuracy while keeping high convenience. We believe this paper provides an optimal solution to such a problem.

6. REFERENCES

[1] R.W. Frischholz and U. Dieckmann, "BioID: A Multimodal Biometric Identification System," IEEE Computer, Vol.33, No.2, pp.64-68 (2000)
[2] L. Hong and A. Jain, "Integrating Faces and Fingerprints for Personal Identification," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.20, No.12, pp.1295-1307 (1998)
[3] K. Nandakumar et al., "Fusion in Multibiometric Identification Systems: What about the Missing Data?," Proc. ICB, pp.743-752 (2009)
[4] A.K. Jain et al., "An Introduction to Biometric Recognition," IEEE Trans. on Circuits and Systems for Video Technology, Vol.14, No.1, pp.4-20 (2004)
[5] V.P. Dragalin et al., "Multihypothesis Sequential Probability Ratio Tests, Part I: Asymptotic Optimality," IEEE Trans. on Information Theory, Vol.45, Issue 7, pp.2448-2461 (1999)
[6] National Institute of Standards and Technology: NIST Biometric Scores Set (online), available from http://www.itl.nist.gov/iad/894.03/biometricscores/index.html
[7] ISO/IEC 19795-1, Information technology - Biometric performance testing and reporting - Part 1: Principles and framework (2006)
[8] R.M. Bolle et al.: Guide to Biometrics, Springer (2003)
[9] K. Takahashi et al., "A Secure and User-Friendly Multi-Modal Biometric System," Proc. SPIE, pp.2-9 (2004)
[10] A. Wald: Sequential Analysis, John Wiley & Sons, New York (1947)
[11] H. Noda and E. Kawaguchi, "Adaptive Speaker Identification Using Sequential Probability Ratio Test," Proc. ICPR, pp.262-265 (2000)
[12] P. Verlinde and M. Acheroy, "A Contribution to Multi-Modal Identity Verification Using Decision Fusion," Proc. PROMOPTICA (2000)
[13] G. Doddington et al., "SHEEP, GOATS, LAMBS and WOLVES: A Statistical Analysis of Speaker Performance in the NIST 1998 Speaker Recognition Evaluation," Proc. ICSLP '98, pp.1351-1354 (1998)