GUC100 Multisensor Fingerprint Database for In-House (Semipublic) Performance Test

Abstract

For the evaluation of the biometric performance of biometric components and systems, the availability of independent databases and, desirably, independent evaluators is important. Both databases of significant size and independent testing institutions are preconditions for fair and unbiased benchmarking. In order to show the generalization capabilities of the system under test, it is essential that algorithm developers do not have access to the testing database, so that the risk of tuned algorithms is minimized. In this paper, we describe the GUC100 multiscanner fingerprint database that has been created for independent and in-house (semipublic) performance and interoperability testing of third-party algorithms. GUC100 was collected by using six different fingerprint scanners (TST, L-1, Cross Match, Precise Biometrics, Lumidigm, and Sagem). Over several months, fingerprint images of all 10 fingers from 100 subjects on all 6 scanners were acquired. In total, GUC100 contains almost 72,000 fingerprint images. The GUC100 database enables us to evaluate various performance and interoperability settings by taking into account different influencing factors such as fingerprint scanner and image quality. The GUC100 data set is freely available to other researchers and practitioners provided that they conduct their testing on the premises of Gjøvik University College in Norway, or alternatively submit their algorithms (in compiled form) to be run on GUC100 by researchers in Gjøvik. We applied one public and one commercial fingerprint verification algorithm to GUC100, and the reported results indicate that GUC100 is a challenging database.

1. Introduction

The interest in biometric systems is rapidly increasing due to the demands of high-security applications. Although various types of human characteristics are used in biometric authentication, the most popular biometric systems are based on fingerprints [1, 2]. The two important aspects in the performance evaluation of fingerprint recognition algorithms (and other biometrics in general) are the availability of independent databases and, desirably, independent testing bodies. The advantages of such databases and third-party testing bodies are, firstly, that they allow more direct and unbiased benchmarking of different algorithms and, secondly, that they increase the trustworthiness of the performance report, since developers do not have direct access to the database for tuning an algorithm's parameters to adapt to it. However, creating and distributing large-scale databases publicly is not an easy task because of the costs and time involved as well as jurisdictional limits. Due to the nature of the collected data (i.e., human physiology), the creation and distribution of large-scale biometric databases raises privacy concerns and may not be permitted by data protection authorities in some countries (especially in Europe). Even if data collection is permitted, it is usually required that the collected data be destroyed after the completion of the project, for example, as in [3].

Nevertheless, in the biometric community, several fingerprint databases have been established for research purposes [4–10]. A short summary of some reported fingerprint databases is given in Table 1. In this table, the columns #SC, #SB, #FS, #UF, and #NF represent the number of fingerprint scanners, number of volunteers contributing to the data collection, number of fingers per subject, total number of unique fingers, and number of images per finger, respectively. Early public databases were provided by NIST (National Institute of Standards and Technology) and consist of thousands of fingerprint images, for example, SD29 [4], SD4 [5], and SD14 [6]. However, these images are rolled ones, that is, scanned from inked tenprint paper cards. Such images are quite different from electronically captured ones and are not well suited for evaluating algorithms that should operate in an "on-line" application. In spite of this, the NIST fingerprint databases are still being used in the research community and are available for purchase [4–6]. In addition, in the context of the MINEX project, NIST composed a large-scale fingerprint data set for in-house testing of algorithms [14]. The database series FVC200x [7–10] were designed for the Fingerprint Verification Competition (FVC), where several competing algorithms were tested on them. Every FVC200x database consists of 4 disjoint data sets. Of the four, three data sets contain images captured electronically by commercially available fingerprint scanners. The fourth data set consists of synthetically generated fingerprint images (and is therefore not listed in Table 1). In the FVC2000, 2002, and 2004 databases, #UF and #NF were 120 and 12, respectively, but only 110 and 8 were used. There are also multimodal databases, where the fingerprint is collected as one of the modalities [12, 13, 15].

Table 1 Summary of some fingerprint image databases.

This paper describes a multiscanner fingerprint database which has been created for independent and in-house performance and interoperability testing. In the rest of the paper, we refer to this database as GUC100 (GUC stands for Gjøvik University College).

The rest of the paper is organized as follows. Section 2 describes the objectives, targeted application scenarios, and availability of the GUC100 database. Section 3 gives details on the data collection process, subject demographics, fingerprint scanners, and so on. Section 4 presents an overview of interoperability testing on the GUC100 database as well as some factors that can be considered while conducting a test on it. Section 5 reports the performance of one public and one commercial fingerprint verification algorithm on the GUC100 database. Section 6 points out some possible biases in the database which need to be taken into account when interpreting the results of an evaluation on GUC100. Section 7 summarizes the paper.

2. Objectives, Scenarios, and Availability of the Database

The primary objective of the GUC100 database is to enable performance evaluation of fingerprint algorithms in cross-scanner (interoperability) scenarios where the enrolment and verification scanners are different. The targeted performance accuracy for this database is an FRR of 1% (or lower) at an FAR of 0.1%.

Although the evaluation of products from a single biometric supplier is essential from the supplier's perspective, testing scenarios where products (e.g., sensor, minutia extractor, minutia comparator) are provided by different suppliers is very important for both integrators and operators to prove interoperability prior to component integration and/or system roll-out. This refers to settings where, for example, the enrolment and verification fingerprint images are acquired by different capture devices. For instance, in the biometric passport case, a document issued by a country where the enrolment image was captured by one scanner must be verifiable by another country where the probe image is very likely to be acquired by a different scanner. The GUC100 fingerprint database provides 15 and 30 cross-scanner combinations for symmetric and asymmetric comparators, respectively.
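These combination counts follow directly from the six scanners: a symmetric comparator ignores the order of the enrolment and verification scanners, whereas an asymmetric comparator distinguishes them. As a worked check:

\[ \binom{6}{2} = \frac{6 \cdot 5}{2} = 15 \ \text{(symmetric: unordered scanner pairs)}, \qquad 6 \cdot 5 = 30 \ \text{(asymmetric: ordered scanner pairs)}. \]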

The GUC100 database is intended for technology testing, which is an offline evaluation of biometric components using a pre-existing corpus [16]. In creating GUC100, we aimed at increasing several dimensions of the database, as the numbers in Table 1 (last row) indicate. The database aims to simulate an indoor, overt (i.e., supervised), verification (i.e., one-to-one) application environment. It is useful for performance evaluation not only at the traditional minutiae level but also at the pseudonymous identifier level, which is more privacy protective compared to conventional minutiae templates [17, 18].

In the exploitation of this database we follow, due to privacy regulations in Norway, the principle of "If the data cannot travel to the algorithm, then the algorithm shall travel to the data". This means that copies of the GUC100 database cannot be distributed to parties outside the GUC campus. However, algorithm developers are free to visit GUC and test their algorithms on its premises, or to submit their fingerprint recognition algorithms (as binary code) to the GUC team for testing. Interested parties can contact the authors of this paper or visit the GUC100 webpage for any updates on the database at http://www.nislab.no/guc100. The minimum specification for a fingerprint encoder is that it should be able to produce a template from a fingerprint image in PNG format, and a fingerprint comparator should be able to compare two templates and produce a comparison score. Any additional specific requirements will be posted on the aforementioned GUC100 webpage. It is also possible to send requests or inquiries about the database to the E-mail address turbines@hig.no. It is worth mentioning that the database is available for algorithm evaluation until 2021. After that, the database will be destroyed due to the agreement with the Norwegian Data Privacy Authority (NSD) [19].
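To make the expected submission shape concrete, here is a minimal sketch of how such an encoder and comparator could look. The function names and the score convention are our illustrative assumptions; the paper only requires that the encoder accept a PNG image and that the comparator return a comparison score.

    # Hypothetical shape of a GUC100 submission; only the PNG-in/template-out and
    # template-pair-in/score-out contracts come from the paper.
    from pathlib import Path

    def encode(png_image: Path) -> bytes:
        """Produce a template (proprietary or standard format) from a PNG fingerprint image."""
        raise NotImplementedError  # supplied by the algorithm developer

    def compare(template_a: bytes, template_b: bytes) -> float:
        """Compare two templates and return a comparison score."""
        raise NotImplementedError  # supplied by the algorithm developer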

3. GUC100 Fingerprint Database

The GUC100 database was collected at GUC (Gjøvik University College) in Norway during February 2008–January 2009. Before starting the data collection, we obtained permission from the Norwegian Data Privacy Authority (NSD) [19]. In addition, all volunteers signed a consent form. Although, due to Norwegian regulations, the database cannot be sent to other parties, it is freely accessible and available for testing by external parties within GUC's campus.

3.1. Population

The number of subjects who participated in the data collection was 100: 80 males and 20 females. The average ages of the male and female groups were about 30.5 (standard deviation 12.3) and 28.3 (standard deviation 8.0) years, respectively. Participants were mostly students and staff at GUC.

3.2. Fingerprint Scanners

The GUC100 database was collected by using six fingerprint scanners from different suppliers: TST BiRD3, L-1 DFR2100, Cross Match LSCAN100, Precise 250MC, Lumidigm V100, and Sagem MorphoSmart. All of these scanners, except the TST BiRD3, were based on touch interaction; the TST scanner was touchless. The resolution of all scanners was 500 dpi. Photos of the fingerprint scanners are given in Figure 2, and some of their properties are presented in Table 2 (in this table, the order of scanners does not indicate any preference; it merely follows the order in Figure 2). In Table 2, the columns Area, Temperature range, and Technology give the acquisition area, operating temperature range, and sensor technology of the scanners, respectively.

Table 2 Some characteristics of fingerprint scanners.

The absence of a swipe sensor in the GUC100 database can be justified by the fact that the database is intended to simulate and predict performance for public, commercial, and governmental applications, not for access to personal devices, where swipe sensors are in common use. Furthermore, the main purpose of the database is not to compare the performance of various scanner technologies but rather to benchmark different algorithms and investigate cross-scanner interoperability.

Example images of the same finger in one session for each scanner are shown in Figure 1. As can be seen from the figure, due to the scanner principle, the nature of the fingerprint images from the TST scanner is rather different from that of the images from the other scanners.

Figure 1: Images of the same finger in one session on all scanners.

Figure 2: Fingerprint scanners (from left to right): TST, L-1, Cross Match, Precise, Lumidigm, and Sagem.

3.3. Data Collection

The data collection was conducted in an indoor environment. Each subject attended 12 sessions over a period of several months. The average time interval between sessions was about one week, and participants were not allowed to attend more than one session per day. We believe that introducing such long time gaps (i.e., days and weeks) between acquisition sessions allows natural variations of the fingerprint skin to occur and thus covers more realistic scenarios. All sessions were carried out under the supervision of a human operator, so that no extreme rotations of the fingerprints were included in the database. During the capture process, no objective quality measurements were taken; the quality of the images was determined visually (i.e., subjectively) by the human operator.

For each person, the first 3 sessions were uncontrolled and the remaining 9 sessions were controlled (the term controlled refers to signal quality control by means of adjustment of environmental factors, conducted by an operator). The reason for introducing the controlled sessions was that when the data collection started in February 2008, it was not straightforward to capture fingerprint images of good (visually judged) quality without some extra action. Thus, in the controlled sessions, some actions were undertaken to improve image quality: for example, subjects could wet/clean their fingers (by touching a wet sponge) before touching the scanners' platens, or they were sometimes instructed to apply more finger pressure on the scanners. This was mostly required on cold days with outside temperatures below zero, and mainly for the L-1 and Cross Match scanners. In the uncontrolled sessions, no extra actions were undertaken to get better images. Figure 3 presents an example of images of a single finger over all 12 sessions. In addition, over most sessions of the data collection, the environmental conditions were also recorded online (i.e., during data capture), including the inside and outside temperatures as well as the humidity of the room.

Figure 3: Images of the same finger in 12 sessions over several months on one scanner (left to right, top to bottom).

In each session (both controlled and uncontrolled), subjects provided all 10 fingers on each of the 6 scanners. Participants presented their fingers in the following order: left small finger, …, left thumb, right thumb, …, right ring finger, and right small finger. The order of visiting the scanners was as follows: subjects first presented all 10 fingers (in the above-mentioned order) on the TST scanner, then on the L-1, Cross Match, Precise, and Lumidigm scanners, and finally on the Sagem scanner. In every session, 60 fingerprint images per person were obtained. In total, the GUC100 database contains 71,934 (= 100 × 10 × 6 × 12 − 66) fingerprint images; a few images were discarded due to duplication or mislabeling.

In order to speed up the data collection and reduce human errors, we developed a graphical user interface integrating all scanners. The program combines the image capturing functions of all scanners into a common interface and automatically saves the captured fingerprints according to the filename convention. The visual interface of this software is shown in Figure 4. This program made the data collection process easier and faster, and every session took about 5–7 minutes per subject.
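To illustrate the kind of automated bookkeeping such a tool performs, the sketch below stores each capture under a systematic name. The actual GUC100 filename convention is not given in this paper, so the naming scheme and field layout here are purely hypothetical.

    # Hypothetical auto-saving logic for the capture tool (the real convention is unpublished).
    from pathlib import Path

    SCANNERS = ["TST", "IDT", "CMT", "PBA", "LUM", "SAG"]  # order visited within a session
    # L5 = left small ... L1 = left thumb, then R1 = right thumb ... R5 = right small
    FINGERS = [f"L{i}" for i in range(5, 0, -1)] + [f"R{i}" for i in range(1, 6)]

    def save_capture(root: Path, subject: int, session: int,
                     scanner: str, finger: str, image: bytes) -> Path:
        """Store one captured image under a systematic, hypothetical filename."""
        out = root / f"s{subject:03d}_ses{session:02d}_{scanner}_{finger}.png"
        out.write_bytes(image)
        return out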

Figure 4: Visual interface of the software tool.

In addition, two separate smaller fingerprint databases are also available that can be used for algorithm development [20]. They consist of fingerprint images from 45 and 40 subjects, respectively (these subjects are different from those in the GUC100 database).

3.4. Fingerprint Image Quality

We applied the NIST Fingerprint Image Quality (NFIQ) algorithm [21] to get an overview of the image qualities in the GUC100 database. For each fingerprint image, the NFIQ algorithm returns the number 1 (best quality), 2, 3, 4, or 5 (worst quality). The aim of this work is not to compare the performance of individual scanners; therefore, the NFIQ scores are not provided separately for each scanner. Figure 5 shows the distribution of NFIQ scores over all scanners except TST; the NFIQ scores for TST images are not included due to the nature of those images. The ordinate of the figure is given in percentage (%). It is worth noting that the quality score may be affected by the order in which the subject uses the scanners, and other image quality algorithms can also be applied to the database.
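A minimal sketch of how such a distribution can be tallied is shown below, assuming per-image NFIQ scores have already been computed (for example with the NBIS nfiq tool) and written to a CSV file of filename,score rows; that file layout is our assumption, not part of the database.

    # Tally an NFIQ score distribution (in %) from a hypothetical "filename,score" CSV.
    import csv
    from collections import Counter

    def nfiq_distribution(csv_path: str) -> dict[int, float]:
        """Return the percentage of images at each NFIQ level 1 (best) .. 5 (worst)."""
        counts = Counter()
        with open(csv_path, newline="") as f:
            for _filename, score in csv.reader(f):
                counts[int(score)] += 1
        total = sum(counts.values())
        return {level: 100.0 * counts[level] / total for level in range(1, 6)}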

Figure 5: NFIQ distribution.

4. Interoperability and Parameters

4.1. Interoperability Performance and Matrices

From a customer perspective, the performance interoperability of biometric components is very important. Performance interoperability is an essential measure to ensure that biometric subsystems from different suppliers are capable of generating and comparing samples while meeting an absolute level of performance within some margin [22]. Interoperability performance results for biometric components and systems enable a better-informed choice when selecting products and thus reduce dependency on a single supplier. The GUC100 fingerprint database enables performance evaluation not only of components from a single supplier but also of components from different suppliers, in intra- and intersensor settings. Such interoperability can be viewed at two different processing levels: the image level and the minutiae template level.

Figure 6 depicts the interoperability perspective at the image level according to the ISO interoperability schema [22]. In this figure, the blue octagons, blue ellipses, and green round rectangles represent fingerprint scanners (S), fingerprint images (FP), and IMage-based Comparators (IMC), respectively. In this figure (and also in Figure 7), the subscripts denote the product supplier's id. In Figures 6, 7, and 8, TST, IDT, CMT, PBA, LUM, and SAG stand for TST, L-1 (Identix), Cross Match, Precise, Lumidigm, and Sagem, respectively, while A and B indicate arbitrary suppliers. In addition, in this figure (and also in Figure 7), the left part of the comparators (i.e., IMC and MTC) represents enrolment and the right part represents verification. In Figure 6, the dimension of interoperability is 3: in the first dimension there are 6 scanners (enrolment mode), in the second there are 2 IMCs, and in the third there are 6 scanners (verification mode).

Figure 6: Interoperability picture at the image level.

Figure 7: Interoperability picture at the minutia level.

Figure 8: Performance of the public and commercial algorithms on GUC100. Panels: Neurotechnology, enrolment and verification scanners the same; NIST, enrolment and verification scanners the same; Neurotechnology, enrolment and verification scanners different; NIST, enrolment and verification scanners different.

Figure 7 depicts the interoperability picture at the minutia level according to the ISO interoperability schema [22]. In this figure, the blue octagons, blue circles, yellow round rectangles, yellow circles, and green round rectangles represent fingerprint scanners (S), fingerprint images (FP), Minutia Template Encoders (MTE), minutiae templates (T), and Minutia Template Comparators (MTC), respectively. In addition, the superscripts (s) and (p) denote whether an MTE/MTC produces/processes standard (e.g., the ISO standard on finger minutiae [23]) or proprietary data formats, respectively. In Figure 7, the dimension of interoperability is 5: in the first dimension there are 6 scanners (enrolment mode), in the second there are 4 MTEs (enrolment mode), in the third there are 4 MTCs, in the fourth there are 4 MTEs (verification mode), and in the fifth there are 6 scanners (verification mode).
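The sizes of these interoperability spaces can be enumerated directly. The sketch below counts the image-level (6 × 2 × 6) and minutia-level (6 × 4 × 4 × 4 × 6) configurations implied by Figures 6 and 7; the supplier labels are illustrative placeholders.

    # Enumerate the interoperability configurations of Figures 6 and 7.
    from itertools import product

    SCANNERS = ["TST", "IDT", "CMT", "PBA", "LUM", "SAG"]
    IMC = ["IMC_A", "IMC_B"]                                # image-based comparators
    MTE = ["MTE_A(p)", "MTE_A(s)", "MTE_B(p)", "MTE_B(s)"]  # encoders: proprietary/standard
    MTC = ["MTC_A(p)", "MTC_A(s)", "MTC_B(p)", "MTC_B(s)"]  # comparators: proprietary/standard

    image_level = list(product(SCANNERS, IMC, SCANNERS))      # enrol scanner, IMC, verify scanner
    minutia_level = list(product(SCANNERS, MTE, MTC, MTE, SCANNERS))
    print(len(image_level), len(minutia_level))               # 72 and 2304 configurations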

4.2. Evaluation Parameters

The GUC100 database enables evaluation of various configurations of native and interoperability performance by focusing on certain influencing factors, such as the following.

  (i) Fingerprint Scanner. Scanner interoperability is an important issue, although not thoroughly investigated. It has been shown that when enrolment and verification images are acquired by different scanners, the performance deteriorates significantly [24]. Recently, some methods have been proposed to address this problem [25, 26]. At the same time, interestingly, experimental evaluation indicates that fusing scores from different scanners results in better performance than fusing different instances of the same sensor [27, 28]. In addition, if the MTE and MTC are also provided by different suppliers, the interoperability schema becomes more complex, as the red path in Figure 7 highlights. The GUC100 database provides 6 intrascanner and 15 interscanner combinations for a given pair of MTE and MTC.

  (ii) Image Quality. Image quality is a very important factor that influences performance [29]. As mentioned earlier, each fingerprint image is associated with an NFIQ score that indicates its quality. Depending on the application, one can test performance in various configurations with respect to image quality: for example, use only good-quality images for enrolment and medium-to-low-quality images for verification, or use only good-quality images for both enrolment and verification, and so forth (a configuration sketch is given at the end of this subsection).

  (iii) Session Type. Usually, in a biometric system the enrolment phase is conducted in a controlled way, where the image quality, finger positioning, and so forth are controlled or instructed to some extent. The verification phase, on the other hand, can be performed in a more relaxed environment where no feedback to the user is expected. Therefore, one may use images from controlled sessions only for enrolment and images from uncontrolled sessions only for verification. It is worth noting that the image quality and session type parameters might be somewhat correlated, because in the controlled sessions image quality was usually (though not always) better than in the uncontrolled sessions.

In addition to the aforementioned factors, the GUC100 database may enable performance evaluations in the context of other parameters such as temperature, humidity, and finger type (e.g., thumb, small finger).
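As an example of configuring such a protocol (the sketch referenced in the image-quality item above), the following code selects enrolment samples from good-quality controlled captures and verification samples from uncontrolled ones. The metadata record is a hypothetical placeholder; the paper does not prescribe a metadata format.

    # Hypothetical protocol split: enrol on good-quality controlled captures,
    # verify on uncontrolled ones.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        subject: int
        finger: str
        scanner: str
        session: int   # sessions 1-3 were uncontrolled, 4-12 controlled (per the GUC100 protocol)
        nfiq: int      # 1 (best) .. 5 (worst)
        path: str

    def split_protocol(samples: list[Sample]) -> tuple[list[Sample], list[Sample]]:
        enrol = [s for s in samples if s.session > 3 and s.nfiq <= 2]
        verify = [s for s in samples if s.session <= 3]
        return enrol, verify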

5. Experimental Results

We applied one public and one commercial fingerprint verification package to validate the value of the database. The publicly available software was NIST's MINDTCT and BOZORTH3 [30]; the second was Neurotechnology's VeriFinger, which is commercially available [31]. We used images from all ten fingers for genuine comparisons but, due to the large number of comparisons (and the consequently long running time), images from only one finger (the left index) for impostor comparisons, and for impostor comparisons we compared only samples from the same session. Thus, denoting by N = 100 the number of subjects, by F = 10 the number of fingers per subject, and by M = 12 the number of images per finger, and assuming an asymmetric template comparator, we obtain about N × F × M × (M − 1) = 132,000 genuine comparisons (scores) and N × (N − 1) × M = 118,800 impostor comparisons (scores) per scanner. Performance curves in terms of FAR/FRR plots for each scanner are presented in Figure 8. Plots are given both for the case where the enrolment and verification scanners are the same and for the case where they are different. The EERs of the curves are shown in the legends of the plots.
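These counts can be checked directly under the same assumptions (asymmetric comparator, all ordered genuine pairs per finger, same-session ordered impostor pairs on one finger):

    # Verify the per-scanner genuine/impostor comparison counts.
    N, F, M = 100, 10, 12            # subjects, fingers per subject, images per finger

    genuine = N * F * M * (M - 1)    # all ordered pairs among a finger's 12 images: 132000
    impostor = N * (N - 1) * M       # left-index, same-session ordered pairs: 118800
    print(genuine, impostor)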

As can be observed from the figures, in general the EERs are higher when the enrolment and verification scanners are different than when they are the same. Tables 3 and 4 provide summary statistics (median and mean) of the cross-scanner comparisons (interoperability) in terms of EER for the Neurotechnology and NIST algorithms, respectively. In these tables, the last two columns indicate the average performance degradation with respect to the same-scanner comparison, computed according to the following:

\[ \text{degradation} = \frac{\overline{\mathrm{EER}}_{\text{cross}} - \mathrm{EER}_{\text{same}}}{\mathrm{EER}_{\text{same}}} \times 100\% \qquad (1) \]

where \(\mathrm{EER}_{\text{same}}\) is the EER of the same-scanner comparison (i.e., no interoperability; second column) and \(\overline{\mathrm{EER}}_{\text{cross}}\) is the average (median or mean) EER of the cross-scanner comparison (i.e., interoperability; columns three and four in the tables).
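A direct implementation of (1), under the reconstruction above:

    def degradation(eer_same: float, eer_cross_avg: float) -> float:
        """Relative EER degradation (%) of cross-scanner vs. same-scanner comparison, per (1)."""
        return 100.0 * (eer_cross_avg - eer_same) / eer_same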

Table 3 Neurotechnology: median, mean, and degradation in cross-scanner comparison (interop) in terms of EER.
Table 4 NIST: median, mean, and degradation in cross-scanner comparison (interop) in terms of EER.

6. Limitations of the Database

There are a few factors that may introduce bias, and one needs to take them into account when interpreting performance reports produced using the GUC100 database. Since it is not always easy to recruit representative persons for experiments, the demographics of the subjects in the GUC100 database are not ideally balanced in terms of gender (mostly men) or age (mostly adults). Therefore, caution must be taken when analysing results in the context of gender or when generalizing results to other populations of users, such as children or the elderly.

The order of finger presentation and the order of scanner selection were fixed, not randomized. Although not yet investigated or proven, this may introduce bias when comparing the performance of scanners (e.g., due to habituation). Thus, the main purposes of the GUC100 database are interoperability testing and benchmarking of different algorithms, not comparing the performance of different scanner technologies. In addition, the interoperability results relate primarily to the scanner set used in GUC100; for other types of fingerprint scanners, the performance results might not generalize adequately.

7. Summary

In this paper, we presented the GUC100 fingerprint database, which was created for in-house performance and interoperability evaluation of fingerprint recognition algorithms in technology testing. The GUC100 database consists of 71,934 fingerprint images of all 10 fingers from 100 subjects, acquired by using 6 different scanners. The data collection was carried out during February 2008–January 2009 at the campus of Gjøvik University College (GUC) in Norway. The GUC100 database is referred to as "in-house" (semipublic), which means that the database is freely available to researchers and practitioners provided that all testing is conducted on the premises of GUC. Thus, interested parties (i.e., industry, research institutions, independent developers, etc.) can visit the GUC premises and perform training and testing themselves, or alternatively submit their (binary) algorithms to be tested by researchers at GUC.

References

  1. International Biometric Group: Biometrics market and industry report 2009–2014. 2008, http://www.biometricgroup.com/reports/public/marketreport.php

  2. Maltoni D, Maio D, Jain AK, Prabhakar S: Handbook of Fingerprint Recognition. Springer, New York, NY, USA; 2003.

  3. Arnold M, Busch C, Ihmor H: Investigating performance and impacts on fingerprint recognition systems. Proceedings of the 6th Annual IEEE System, Man and Cybernetics Information Assurance Workshop (SMC '05), June 2005, West Point, NY, USA 1-7.

  4. NIST special database 29 2008, http://www.nist.gov/srd/nistsd29.cfm

  5. NIST special database 4 2008, http://www.nist.gov/srd/nistsd4.cfm

  6. NIST special database 14 2008, http://www.nist.gov/srd/nistsd14.cfm

  7. Maio D, Maltoni D, Cappelli R, Wayman JL, Jain AK: FVC2000: fingerprint verification competition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002, 24(3):402-412. 10.1109/34.990140

  8. Maio D, Maltoni D, Cappelli R, Wayman JL, Jain AK: FVC2002: second fingerprint verification competition. Proceedings of the 16th International Conference on Pattern Recognition, 2002 811-814.

  9. Cappelli R, Maio D, Maltoni D, Wayman JL, Jain AK: Performance evaluation of fingerprint verification systems. IEEE Transactions on Pattern Analysis and Machine Intelligence 2006, 28(1):3-17.

  10. FVC2006: fingerprint verification competition 2006.

  11. FVC-onGoing: web-based automated evaluation system for fingerprint recognition algorithms https://biolab.csr.unibo.it/fvcongoing/UI/Form/Home.aspx

  12. Fierrez J, Ortega-Garcia J, Torre Toledano D, Gonzalez-Rodriguez J: Biosec baseline corpus: a multimodal biometric database. Pattern Recognition 2007, 40(4):1389-1392. 10.1016/j.patcog.2006.10.014

  13. Ortega-Garcia J, Fierrez-Aguilar J, Simon D, Gonzalez J, Faundez-Zanuy M, Espinosa V, Satue A, Hernaez I, Igarza J-J, Vivaracho C, Escudero D, Moro Q-I: MCYT baseline corpus: a bimodal biometric database. IEE Proceedings: Vision, Image and Signal Processing 2003, 150(6):395-401. 10.1049/ip-vis:20031078

  14. Grother P, Salamon W, Watson C, Indovina M, Flanagan P: Performance of fingerprint match-on-card algorithms, phase ii report. 2008, http://fingerprint.nist.gov/minexII/

  15. Garcia-Salicetti S, Beumier C, Chollet G, Dorizzi B, Jardins JLL, Lunter J, Ni Y, Petrovska-Delacrétaz D: BIOMET: a multimodal person authentication database including face, voice, fingerprint, hand and signature modalities. Proceedings of International Conference on Audio- and Video-Based Biometric Person Authentication, 2003, Lecture Notes in Computer Science 2688: 845-853.

  16. ISO/IEC 19795-2:2007: Information technology—biometric performance testing and reporting—part 2: testing methodologies for technology and scenario evaluation. 2007.

  17. Jain AK, Nandakumar K, Nagar A: Biometric template security. EURASIP Journal on Advances in Signal Processing 2008 (special issue on Advanced Signal Processing and Pattern Recognition Methods for Biometrics), 17 pages.

  18. Gafurov D, Yang B, Bours P, Busch C: Independent performance evaluation of fingerprint verification at the minutiae and pseudonymous identifier levels. Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, 2010

  19. Norwegian data privacy authority, http://www.datatilsynet.no/

  20. GUC100 multi-scanner fingerprint database for in-house (semi-public) performance and interoperability evaluation http://www.nislab.no/guc100

  21. Garris MD, Tabassi E, Wilson CL: NIST fingerprint evaluations and developments. Proceedings of the IEEE 2006, 94(11):1915-1925.

  22. ISO/IEC 19795-4: Information technology—biometric performance testing and reporting—part 4: interoperability performance testing. 2007.

  23. ISO/IEC 19794-2:2005: Information technology—biometric data interchange formats—part 2: finger minutiae data. 2005.

  24. Ross A, Jain A: Biometric sensor interoperability: a case study in fingerprints. Proceedings of the International Workshop on Biometric Authentication (in conjunction with the 8th European Conference on Computer Vision, ECCV '04), 2004, Lecture Notes in Computer Science 3087: 134-145.

  25. Han Y, Nam J, Park N, Kim H: Resolution and distortion compensation based on sensor evaluation for interoperable fingerprint recognition. Proceedings of International Joint Conference on Neural Networks (IJCNN '06), July 2006, Vancouver, Canada 692-698.

  26. Ross A, Nadgir R: A thin-plate spline calibration model for fingerprint sensor interoperability. IEEE Transactions on Knowledge and Data Engineering 2008, 20(8):1097-1110.

  27. Alonso-Fernandez F, Veldhuis RNJ, Bazen AM, Fierrez-Aguilar J, Ortega-Garcia J: Sensor interoperability and fusion in fingerprint verification: a case study using minutiae- and ridge-based matchers. Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision (ICARCV '06), December 2006, Singapore

  28. Marcialis GL, Roli F: Fingerprint verification by fusion of optical and capacitive sensors. Pattern Recognition Letters 2004, 25(11):1315-1322. 10.1016/j.patrec.2004.05.011

  29. Alonso-Fernandez F, Fierrez J, Ortega-Garcia J, Gonzalez-Rodriguez J, Fronthaler H, Kollreider K, Bigun J: A comparative study of fingerprint image-quality estimation methods. IEEE Transactions on Information Forensics and Security 2007, 2(4):734-743.

  30. NIST's Fingerprint verification software 2009, http://fingerprint.nist.gov/NBIS/nbis_non_export_control.pdf

  31. Neurotechnology's VeriFinger 6.0 2009, http://www.neurotechnology.com/

Acknowledgment

This work is supported by funding under the Seventh Research Framework Programme of the European Union, Project TURBINE (ICT-2007-216339). This document has been created in the context of the TURBINE project. All information is provided as is and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. The European Commission has no liability in respect of this document, which is merely representing the authors' view.

Author information

Correspondence to Davrondzhon Gafurov.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Gafurov, D., Bours, P., Yang, B. et al. GUC100 Multisensor Fingerprint Database for In-House (Semipublic) Performance Test. EURASIP J. on Info. Security 2010, 391761 (2010). https://doi.org/10.1155/2010/391761
