GUC100 Multisensor Fingerprint Database for In-House (Semipublic) Performance Test
© Davrondzhon Gafurov et al. 2010
Received: 5 February 2010
Accepted: 30 August 2010
Published: 5 September 2010
For the evaluation of the performance of biometric components and systems, the availability of independent databases and, desirably, independent evaluators is important. Databases of significant size and independent testing institutions together provide the precondition for fair and unbiased benchmarking. To demonstrate the generalization capability of a system under test, it is essential that algorithm developers do not have access to the testing database, so that the risk of tuned algorithms is minimized. In this paper, we describe the GUC100 multiscanner fingerprint database, which has been created for independent and in-house (semipublic) performance and interoperability testing of third-party algorithms. GUC100 was collected using six different fingerprint scanners (TST, L-1, Cross Match, Precise Biometrics, Lumidigm, and Sagem). Over several months, fingerprint images of all 10 fingers from 100 subjects were acquired on all 6 scanners. In total, GUC100 contains almost 72,000 fingerprint images. The GUC100 database enables us to evaluate various performance and interoperability settings while taking into account different influencing factors such as fingerprint scanner and image quality. The GUC100 data set is freely available to other researchers and practitioners provided that they conduct their testing on the premises of Gjøvik University College in Norway, or alternatively submit their algorithms (in compiled form) to be run on GUC100 by researchers in Gjøvik. We applied one public and one commercial fingerprint verification algorithm to GUC100, and the reported results indicate that GUC100 is a challenging database.
The interest in biometric systems is rapidly increasing due to the demand for high-security applications. Although various types of human characteristics are used in biometric authentication, the most popular biometric systems are based on fingerprints [1, 2]. Two important aspects in the performance evaluation of fingerprint recognition algorithms (and other biometrics in general) are the availability of independent databases and, desirably, of independent testing bodies. The advantages of such databases and third-party testing bodies are, firstly, that they allow more direct and unbiased benchmarking of different algorithms and, secondly, that they increase the trustworthiness of the performance report, since developers do not have direct access to the database for tuning algorithm parameters to adapt to it. However, creating and distributing large-scale databases publicly is not an easy task because of the costs and time involved as well as jurisdictional limits. Due to the nature of the collected data (i.e., human physiology), the creation and distribution of large-scale biometric databases raises privacy concerns and may not be permitted by data protection authorities in some countries (especially in Europe). Even if data collection is permitted, it is usually requested that the collected data be destroyed after completion of the project, for example, as in .
Table 1: Summary of some fingerprint image databases. [Table flattened in extraction; surviving cells note availability ("not available any more", "available for purchase", "available publicly from mid-2006"), scanner counts of up to 4 or up to 6, FVC-onGoing as a web-based automated evaluation system for fingerprint recognition algorithms with a sequestered database, and the last row, GUC100 (this paper), as available for in-house (semi-public) testing.]
This paper describes a multi-scanner fingerprint database that has been created for independent and in-house performance and interoperability testing. In the rest of the paper, we will refer to this database as GUC100 (GUC stands for Gjøvik University College).
The rest of the paper is organized as follows. Section 2 describes the objectives, targeted application scenarios, and availability of the GUC100 database. Section 3 gives details of the data collection process, subject demographics, fingerprint scanners, and so on. Section 4 presents an overview of interoperability testing on the GUC100 database as well as some factors that can be considered when conducting a test on it. Section 5 reports the performance of one public and one commercial fingerprint verification software package on the GUC100 database. Section 6 points out some possible biases in the database that need to be taken into account when interpreting results of evaluation on GUC100. Section 7 summarizes the paper.
2. Objectives, Scenarios, and Availability of the Database
The primary objective of the GUC100 database is to enable performance evaluation of fingerprint algorithms in cross-scanner (interoperability) scenarios, where the enrolment and verification scanners are different. The targeted performance accuracy with this database is an FRR of 1% (or lower) at an FAR of 0.1%.
Although evaluation of products from a single biometric supplier is essential from the supplier's perspective, testing of scenarios where products (e.g., sensor, minutiae extractor, minutiae comparator) are provided by different suppliers is very important for both integrators and operators, in order to prove interoperability prior to component integration and/or system roll-out. This refers to settings where, for example, the enrolment and verification fingerprint images are acquired by different capture devices. For instance, in the biometric passport case, a document issued by a country where the enrolment image is captured by one scanner must be verifiable by another country, where the probe image is very likely to be acquired by a different scanner. The GUC100 fingerprint database provides 15 and 30 cross-scanner combinations for symmetric and asymmetric comparators, respectively.
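The pair counts above follow directly from combinatorics over the six scanners; a minimal sketch:

```python
from itertools import combinations, permutations

scanners = ["TST", "L-1", "Cross Match", "Precise", "Lumidigm", "Sagem"]

# Symmetric comparator: enrolment/verification order does not matter,
# so cross-scanner pairs are unordered.
symmetric_pairs = list(combinations(scanners, 2))

# Asymmetric comparator: enrolment and verification roles differ,
# so cross-scanner pairs are ordered.
asymmetric_pairs = list(permutations(scanners, 2))

print(len(symmetric_pairs), len(asymmetric_pairs))  # 15 30
```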
The GUC100 database is intended for technology testing, which is an offline evaluation of biometric components using a pre-existing corpus . In creating GUC100, we aimed at increasing several dimensions of the database, as the numbers in Table 1 (last row) indicate. The database aims to simulate an indoor, overt (i.e., supervised), verification (i.e., one-to-one) application environment. It is useful for performance evaluation not only at the traditional minutiae level but also at the pseudonymous identifier level, which is more privacy-protective than conventional minutiae templates [17, 18].
In the exploitation of this database we follow, due to privacy regulations in Norway, the principle of "If the data cannot travel to the algorithm, then the algorithm shall travel to the data". This means that copies of the GUC100 database cannot be distributed to parties outside the GUC campus. However, algorithm developers are free to visit GUC and test their algorithms on its premises, or to submit their fingerprint recognition algorithms (as binary code) to the GUC team for testing. Interested parties can contact the authors of this paper or visit the GUC100 webpage for any updates on the database at http://www.nislab.no/guc100. The minimum specification is that a fingerprint encoder should be able to produce a template from a fingerprint image in PNG format, and a fingerprint comparator should be able to compare two templates and produce a comparison score. Any additional specific requirements will be posted on the aforementioned GUC100 webpage. It is also possible to send requests or inquiries about the database to the e-mail address firstname.lastname@example.org. It is worth mentioning that the database is available for algorithm evaluation until 2021; after that, the database will be destroyed due to the agreement with the Norwegian Data Privacy Authority (NSD) .
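As a toy illustration of this minimal encoder/comparator contract: the "template" below is just a byte histogram, not a real fingerprint algorithm, and the function names are our own assumptions rather than any GUC100-mandated interface.

```python
from collections import Counter

def encode(png_bytes: bytes) -> Counter:
    """Toy encoder: summarise raw image bytes as a histogram 'template'."""
    return Counter(png_bytes)

def compare(t1: Counter, t2: Counter) -> float:
    """Toy comparator: histogram intersection in [0, 1] (1 = identical)."""
    common = sum((t1 & t2).values())
    total = max(sum(t1.values()), sum(t2.values()), 1)
    return common / total

# In practice the input would be the bytes of a PNG fingerprint image.
t = encode(b"\x89PNG...example...")
assert compare(t, t) == 1.0  # a template always matches itself
```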
3. GUC100 Fingerprint Database
The GUC100 database was collected at GUC (Gjøvik University College) in Norway during February 2008–January 2009. Before starting the data collection, we obtained permission from the Norwegian Data Privacy Authority (NSD) . In addition, all volunteers signed a consent form. Although, due to Norwegian regulations, the database cannot be sent to other parties, it is freely accessible and available for testing by external parties within GUC's campus.
The number of subjects who participated in the data collection was 100: 80 males and 20 females. The average ages of the male and female groups were about 30.5 (±12.3) and 28.3 (±8) years, respectively. Participants were mostly students and staff at GUC.
3.2. Fingerprint Scanners
Table 2: Some characteristics of the fingerprint scanners. [Table flattened in extraction; surviving cells include a temperature-range column [Celsius], sensing areas of 19 × 16, 32 × 28, 31 × 31, 18 × 12.8, 27.94 × 17.78, and 21 × 21 mm, the sensing technologies TIR (Total Internal Reflection) and MSI (MultiSpectral Images), and the scanner model Cross Match LSCAN100.]
The lack of a swipe sensor in the GUC100 database can be justified by the fact that the database is intended to simulate and predict performance for public, commercial, and governmental applications, but not for access to personal devices, where swipe sensors are in common use. Furthermore, the main purpose of the database is not to compare the performance of various scanner technologies but rather to benchmark different algorithms and investigate cross-scanner interoperability.
3.3. Data Collection
The data collection was conducted in an indoor environment. Each subject attended 12 sessions over a period of several months. The average time interval between sessions was about one week, and participants were not allowed to attend more than one session per day. We believe that introducing such long delays (i.e., days and weeks) between acquisition sessions allows natural variations of the fingerprint skin to occur and thus covers more realistic scenarios. All sessions were carried out under the supervision of a human operator, so that no extreme rotations of the fingerprints are included in the database. During the capture process no objective quality measurements were taken; the quality of the images was judged visually (i.e., subjectively) by the human operator.
In each session (both controlled and uncontrolled ones), subjects provided all 10 fingers on each of the 6 scanners. Participants presented their fingers in the following order: left small finger, …, left thumb, right thumb, …, right ring finger, and right small finger. The order of scanners was as follows: subjects first presented all 10 fingers (in the above-mentioned order) on the TST scanner, then on the L-1, Cross Match, Precise, and Lumidigm scanners, and finally on the Sagem scanner. In every session, 60 fingerprint images per person were obtained. In total, the GUC100 database contains 71934 (= 100 × 10 × 6 × 12 − 66) fingerprint images; a few images were discarded due to duplication or mislabeling.
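The per-session and database totals above can be sanity-checked in a few lines:

```python
# Sanity check of the database size: 100 subjects x 10 fingers x
# 6 scanners x 12 sessions, minus the 66 images discarded for
# duplication or mislabeling.
subjects, fingers, scanners, sessions, discarded = 100, 10, 6, 12, 66

per_session_per_subject = fingers * scanners  # images per person per session
total = subjects * fingers * scanners * sessions - discarded

print(per_session_per_subject, total)  # 60 71934
```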
In addition, two separate smaller fingerprint databases are also available that can be used for algorithm development . They consist of fingerprint images from 45 and 40 subjects (these are different subjects from GUC100 database), respectively.
3.4. Fingerprint Image Quality
4. Interoperability and Parameters
4.1. Interoperability Performance and Matrices
From a customer perspective, performance interoperability of biometric components is very important. Performance interoperability is an essential measure to ensure that biometric subsystems from different suppliers are capable of generating and comparing samples while at the same time meeting an absolute level of performance within some margin . Interoperability performance results for biometric components/systems support a better-informed choice when selecting products and thus reduce dependency on a single supplier. The GUC100 fingerprint database enables performance evaluation not only of components from a single supplier but also of components from different suppliers, in intra- and intersensor settings. Such interoperability can be viewed at two different processing levels: the image level and the minutiae template level.
Figure 7 depicts the interoperability picture at the minutiae level according to the ISO interoperability schema . In this figure, the blue octagons, blue circles, yellow rounded rectangles, yellow circles, and green rounded rectangles represent fingerprint scanners (S), fingerprint images (FP), Minutiae Template Encoders (MTE), minutiae templates (T), and Minutiae Template Comparators (MTC), respectively. In addition, the superscripts (s) and (p) denote whether an MTE/MTC produces/processes standard (e.g., the ISO standard on finger minutiae ) or proprietary data formats, respectively. In Figure 7, the dimension of interoperability is 5: in the first dimension there are 6 scanners (enrolment mode), in the second 4 MTEs (enrolment mode), in the third 4 MTCs, in the fourth 4 MTEs (verification mode), and in the fifth 6 scanners (verification mode).
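A sketch of how the five dimensions multiply out. Note the product is only an upper bound on meaningful combinations, since proprietary-format templates cannot be exchanged across arbitrary MTE/MTC pairs:

```python
from math import prod

# The five interoperability dimensions described above.
dims = {
    "enrolment scanners": 6,
    "enrolment MTEs": 4,
    "MTCs": 4,
    "verification MTEs": 4,
    "verification scanners": 6,
}

paths = prod(dims.values())
print(paths)  # 2304 end-to-end combinations (upper bound)
```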
4.2. Evaluation Parameters
Fingerprint Scanner. Scanner interoperability is an important issue, although not widely investigated. It has been shown that when enrolment and verification images are acquired by different scanners, performance deteriorates significantly . Recently, some methods have been proposed to address this problem [25, 26]. At the same time, interestingly, experimental evaluation indicates that fusing scores from different scanners yields better performance than fusing different instances of the same sensor [27, 28]. In addition to disparate fingerprint scanners, if the MTE and MTC are also provided by different suppliers, the interoperability schema becomes more complex, as the red path in Figure 7 highlights. The GUC100 database provides 6 intra-scanner and 15 inter-scanner combinations for a specified pair of MTE and MTC.
Image Quality. Image quality is a very important factor that influences performance . As mentioned earlier, each fingerprint image is associated with an NFIQ score that indicates its quality. Depending on the application, one can test performance in various configurations with respect to image quality: for example, use only good-quality images for enrolment and medium- to low-quality images for verification; or use only good-quality images for both enrolment and verification; and so forth.
Session Type. Usually, in a biometric system, the enrolment phase is conducted in a controlled way, where image quality, finger positioning, and so forth are controlled or instructed to some extent. On the other hand, the verification phase can be performed in a more relaxed environment where no feedback to the user is expected. Therefore, one may use images from controlled sessions only for enrolment and images from uncontrolled sessions only for verification. It is worth noting that the image quality and session type parameters may be somewhat correlated, because in controlled sessions image quality was usually (though not necessarily always) better than in uncontrolled sessions.
In addition to aforementioned factors, the GUC100 database may enable performance evaluations in the context of some other parameters such as temperature, humidity, and finger type (e.g., thumb finger, small finger).
5. Experimental Results
We applied one public and one commercial fingerprint verification software package to validate the value of the database. The publicly available software was NIST's MINDTCT and BOZORTH3 . The second was Neurotechnology's VeriFinger, which is commercially available . We used images from all ten fingers for genuine comparisons but, due to the large number of comparisons (and consequently long running time), images from only one finger (left index) for impostor comparisons; impostor scores were furthermore computed only between samples from the same session. Thus, denoting the number of subjects by N = 100, the number of fingers per subject by F = 10, and the number of images per finger by M = 12, and assuming an asymmetric template comparator, we obtain about N × F × M × (M − 1) = 132,000 genuine comparisons (scores) and M × N × (N − 1) = 118,800 impostor comparisons (scores) per scanner. Performance curves in terms of FAR/FRR plots for each scanner are presented in Figure 8. Plots are given both for the case where the enrolment and verification scanners are the same and for the case where they differ. The EERs of the curves are shown in the legends of the plots.
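The comparison counts, and an EER estimate of the kind shown in the plot legends, can be sketched as follows. The score lists here are synthetic stand-ins for real comparator output, not GUC100 results:

```python
import random

N, F, M = 100, 10, 12  # subjects, fingers per subject, images per finger

# Asymmetric comparator: ordered image pairs within each finger.
genuine_per_scanner = N * F * M * (M - 1)   # 132000
# One finger (left index), same-session impostor pairs only.
impostor_per_scanner = M * N * (N - 1)      # 118800

# Toy EER estimate: sweep a threshold and find where FAR ~= FRR.
rng = random.Random(0)
genuine = [rng.gauss(0.7, 0.1) for _ in range(2000)]   # synthetic scores
impostor = [rng.gauss(0.3, 0.1) for _ in range(2000)]  # synthetic scores

def eer(genuine, impostor, steps=1000):
    """Return (FAR + FRR) / 2 at the threshold where |FAR - FRR| is smallest."""
    best = None
    for i in range(steps + 1):
        t = i / steps
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

print(genuine_per_scanner, impostor_per_scanner, round(eer(genuine, impostor), 3))
```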
Neurotechnology: median, mean, and degradation in cross-scanner comparison (interop) in terms of EER.
NIST: median, mean, and degradation in cross-scanner comparison (interop) in terms of EER.
[Both tables were flattened in extraction; per scanner, the columns are: EER (no interop.) %, median EER (interop.) %, mean EER (interop.) %, median degradation %, and mean degradation %.]
6. Limitations of the Database
There are a few factors that may introduce bias, and one needs to take them into account when interpreting performance reports produced using the GUC100 database. Since it is not always easy to recruit a representative set of persons for experiments, the demographics of the subjects in the GUC100 database in terms of gender (mostly men) and age (mostly adults) are not ideally balanced. Therefore, caution must be taken when analysing results in the context of gender or when generalizing results to other populations of users, such as children or elderly people.
The order of finger presentation and the order of scanner selection were fixed, not randomized. Although not yet investigated or proven, this may introduce bias when comparing the performance of scanners (e.g., due to habituation). Thus, the main purposes of the GUC100 database are interoperability testing and benchmarking of different algorithms, not comparing the performance of different scanner technologies. In addition, interoperability results relate primarily to the scanner set used in GUC100; for other types of fingerprint scanners, the performance results might not generalize adequately.
In this paper, we presented the GUC100 fingerprint database, which was created for in-house performance and interoperability evaluation of fingerprint recognition algorithms in technology testing. The GUC100 database consists of 71934 fingerprint images of all 10 fingers from 100 subjects, acquired using 6 different scanners. The data collection was carried out during February 2008–January 2009 at the campus of Gjøvik University College (GUC) in Norway. The GUC100 database is referred to as "in-house" (semi-public), which means that it is freely available to researchers and practitioners provided that all testing is conducted on the premises of GUC. Thus, interested parties (i.e., industry, research institutions, independent developers, etc.) can visit GUC and perform training and testing themselves, or alternatively submit their (binary) algorithms to be tested by researchers at GUC.
This work is supported by funding under the Seventh Research Framework Programme of the European Union, Project TURBINE (ICT-2007-216339). This document has been created in the context of the TURBINE project. All information is provided as is, and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. The European Commission has no liability in respect of this document, which merely represents the authors' view.
- International Biometric Group: Biometrics market and industry report 2009–2014. 2008, http://www.biometricgroup.com/reports/public/marketreport.php
- Maltoni D, Maio D, Jain AK, Prabhakar S: Handbook of Fingerprint Recognition. Springer, New York, NY, USA; 2003.
- Arnold M, Busch C, Ihmor H: Investigating performance and impacts on fingerprint recognition systems. Proceedings of the 6th Annual IEEE Systems, Man and Cybernetics Information Assurance Workshop (SMC '05), June 2005, West Point, NY, USA, 1-7.
- NIST special database 29. 2008, http://www.nist.gov/srd/nistsd29.cfm
- NIST special database 4. 2008, http://www.nist.gov/srd/nistsd4.cfm
- NIST special database 14. 2008, http://www.nist.gov/srd/nistsd14.cfm
- Maio D, Maltoni D, Cappelli R, Wayman JL, Jain AK: FVC2000: fingerprint verification competition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002, 24(3):402-412.
- Maio D, Maltoni D, Cappelli R, Wayman JL, Jain AK: FVC2002: second fingerprint verification competition. Proceedings of the 16th International Conference on Pattern Recognition, 2002, 811-814.
- Cappelli R, Maio D, Maltoni D, Wayman JL, Jain AK: Performance evaluation of fingerprint verification systems. IEEE Transactions on Pattern Analysis and Machine Intelligence 2006, 28(1):3-17.
- FVC2006: fingerprint verification competition. 2006.
- FVC-onGoing: web-based automated evaluation system for fingerprint recognition algorithms. https://biolab.csr.unibo.it/fvcongoing/UI/Form/Home.aspx
- Fierrez J, Ortega-Garcia J, Torre Toledano D, Gonzalez-Rodriguez J: BioSec baseline corpus: a multimodal biometric database. Pattern Recognition 2007, 40(4):1389-1392.
- Ortega-Garcia J, Fierrez-Aguilar J, Simon D, Gonzalez J, Faundez-Zanuy M, Espinosa V, Satue A, Hernaez I, Igarza J-J, Vivaracho C, Escudero D, Moro Q-I: MCYT baseline corpus: a bimodal biometric database. IEE Proceedings: Vision, Image and Signal Processing 2003, 150(6):395-401.
- Grother P, Salamon W, Watson C, Indovina M, Flanagan P: Performance of fingerprint match-on-card algorithms, phase II report. 2008, http://fingerprint.nist.gov/minexII/
- Garcia-Salicetti S, Beumier C, Chollet G, Dorizzi B, Jardins JLL, Lunter J, Ni Y, Petrovska-Delacrétaz D: BIOMET: a multimodal person authentication database including face, voice, fingerprint, hand and signature modalities. Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication, 2003, Lecture Notes in Computer Science 2688:845-853.
- ISO/IEC 19795-2:2007: Information technology—biometric performance testing and reporting—part 2: testing methodologies for technology and scenario evaluation. 2007.
- Jain AK, Nandakumar K, Nagar A: Biometric template security. EURASIP Journal on Advances in Signal Processing 2008 (special issue on Advanced Signal Processing and Pattern Recognition Methods for Biometrics), 17 pages.
- Gafurov D, Yang B, Bours P, Busch C: Independent performance evaluation of fingerprint verification at the minutiae and pseudonymous identifier levels. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 2010.
- Norwegian Data Privacy Authority. http://www.datatilsynet.no/
- GUC100 multi-scanner fingerprint database for in-house (semi-public) performance and interoperability evaluation. http://www.nislab.no/guc100
- Garris MD, Tabassi E, Wilson CL: NIST fingerprint evaluations and developments. Proceedings of the IEEE 2006, 94(11):1915-1925.
- ISO/IEC 19795-4: Information technology—biometric performance testing and reporting—part 4: interoperability performance testing. 2007.
- ISO/IEC 19794-2:2005: Information technology—biometric data interchange formats—part 2: finger minutiae data. 2005.
- Ross A, Jain A: Biometric sensor interoperability: a case study in fingerprints. Proceedings of the Biometric Authentication Workshop at the 8th European Conference on Computer Vision (ECCV '04), 2004, Lecture Notes in Computer Science 3087:134-145.
- Han Y, Nam J, Park N, Kim H: Resolution and distortion compensation based on sensor evaluation for interoperable fingerprint recognition. Proceedings of the International Joint Conference on Neural Networks (IJCNN '06), July 2006, Vancouver, Canada, 692-698.
- Ross A, Nadgir R: A thin-plate spline calibration model for fingerprint sensor interoperability. IEEE Transactions on Knowledge and Data Engineering 2008, 20(8):1097-1110.
- Alonso-Fernandez F, Veldhuis RNJ, Bazen AM, Fierrez-Aguilar J, Ortega-Garcia J: Sensor interoperability and fusion in fingerprint verification: a case study using minutiae- and ridge-based matchers. Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision (ICARCV '06), December 2006, Singapore.
- Marcialis GL, Roli F: Fingerprint verification by fusion of optical and capacitive sensors. Pattern Recognition Letters 2004, 25(11):1315-1322.
- Alonso-Fernandez F, Fierrez J, Ortega-Garcia J, Gonzalez-Rodriguez J, Fronthaler H, Kollreider K, Bigun J: A comparative study of fingerprint image-quality estimation methods. IEEE Transactions on Information Forensics and Security 2007, 2(4):734-743.
- NIST's fingerprint verification software. 2009, http://fingerprint.nist.gov/NBIS/nbis_non_export_control.pdf
- Neurotechnology's VeriFinger 6.0. 2009, http://www.neurotechnology.com/
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.