 Research
 Open Access
Quality-based iris segmentation-level fusion
Peter Wild^{1},
 Heinz Hofbauer^{2},
 James Ferryman^{3} and
 Andreas Uhl^{2}
https://doi.org/10.1186/s13635-016-0048-x
© The Author(s) 2016
 Received: 14 April 2016
 Accepted: 12 October 2016
 Published: 26 October 2016
Abstract
Iris localisation and segmentation are challenging and critical tasks in iris biometric recognition. Especially in non-cooperative and less ideal environments, their impact on overall system performance has been identified as a major issue. In order to avoid a propagation of system errors along the processing chain, this paper investigates iris fusion at segmentation level prior to feature extraction and presents a framework for this task. A novel intelligent reference method for iris segmentation-level fusion is presented, which uses a learning-based approach predicting ground truth segmentation performance from quality indicators and model-based fusion to create combined boundaries. The new technique is analysed with regard to its capability to combine segmentation results (pupillary and limbic boundaries) of multiple segmentation algorithms. Results are validated on pairwise combinations of four open-source iris segmentation algorithms on the public CASIA and IITD iris databases, illustrating the high versatility of the proposed method.
Keywords
 Iris biometrics
 Segmentation
 Fusion
 Quality
1 Introduction
Personal recognition from human iris (eye) images comprises several steps: image capture, eye detection, iris localisation, boundary detection, eyelid and noise masking, normalisation, feature extraction, and feature comparison [1]. Among these tasks, it is especially iris localisation and pupillary/limbic boundary detection which challenge existing implementations [2], at least for images captured under less ideal conditions. Examples of undesirable conditions are visible-light imaging with weak pupillary boundaries, on-the-move near-infrared acquisition with typical motion blur, out-of-focus images, or images with weak limbic contrast.
As an alternative to the development of better individual segmentation algorithms, iris segmentation fusion as a novel fusion scenario [3] was proposed in [4]. For vendor-neutral comparison, this form of fusion has certain advantages over the more common multi-algorithm fusion, where each algorithm uses its own segmentation routine: it facilitates data exchange by offering access to the normalised texture, increases the usability of existing segmentation routines, and allows faster execution, requiring only a single module rather than entire processing chains. In [5], which is extended by this work, a fusion framework for the automated combination of segmentation algorithms is presented, but without taking segmentation quality into account. The reference method in [5] was shown to improve results in many cases, but no systematic improvement could be achieved. A more efficient combination technique can be obtained when inaccurate information can be discarded from the fusion stage, which is the scope of work in this paper. The proposed fusion algorithm assesses the usefulness of individual segmentation input to avoid a deterioration of results even if one of two segmentation results to be combined is inaccurate.
The remainder of the paper is organised as follows. Section 2 presents the methodology and gives an overview of related work in iris fusion, focusing on multi-segmentation, data interoperability, and segmentation quality in iris recognition. The suggested framework and reference method for iris segmentation fusion are presented in detail in Section 3. Section 4 introduces the databases and algorithms under test and gives a detailed presentation and analysis of experiments. Finally, Section 5 concludes this work and gives an outlook on future topics in segmentation-level iris fusion.
2 Methodology and related work
3 Proposed multi-segmentation fusion method
3.1 Step 1: tracing
 1.
Mesh grid phase: A total of n equidistant scan lines (n=100 yields reasonable results for the employed datasets) are intersected with the binary noise mask N, locating 0-1 and 1-0 crossings. Based on the count and first occurrence of crossings, an estimate of limbic or pupillary boundary membership is made. Topological inconsistencies (holes) in N should be closed morphologically prior to scanning.
 2.
Pruning phase: Outlier candidate points with high radial deviation from the centre of gravity C _{ r } (z-score ≥ 2.5) are removed. This avoids inconsistencies in case the outer mask of an iris is not convex, tolerates noise masks in which eyelids are considered, and suppresses classification errors.
There are some caveats: first, it is not necessarily possible to differentiate between iris and eyelid purely based on the mask; pruning and subsequent model fitting help to reduce such effects. Second, some algorithms employ different boundary models for rubber-sheet mapping and noise masks (see [22]). Even in the recent version 4.1 of OSIRIS, noise masks extend beyond the actual boundaries used for unrolling the iris image [5]; this has been corrected in the experiments by limiting the mask to the employed rubber-sheet limbic boundary. Ideally, the noise mask used for scanning ignores eyelids and other occlusions, and a separate noise mask for occlusions is considered at a later stage (e.g. via majority voting after normalisation). While masks may not necessarily be convex and may contain holes, such inconsistencies are repaired by a heuristic algorithm employing simple morphological closing and simplifying local inconsistencies where necessary. Further discussion of this problem and of the method employed can be found in [5].
After scanning and pruning, a set of limbic L _{ i } and pupillary P _{ i } boundary points is available for each (ith) segmentation candidate.
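The tracing and pruning steps described above can be sketched as follows. This is a minimal illustration assuming a NumPy binary mask and horizontal scan lines only; all function and parameter names are illustrative, not taken from the authors' implementation:

```python
import numpy as np

def trace_boundary_points(mask, n_lines=100, z_max=2.5):
    """Tracing: scan n equidistant horizontal lines across a binary mask
    and collect 0-1 and 1-0 crossings as candidate boundary points.
    Pruning: drop points whose radius from the centre of gravity of all
    candidates has an absolute z-score of z_max (2.5) or more."""
    h, w = mask.shape
    points = []
    for y in np.linspace(0, h - 1, n_lines).astype(int):
        row = mask[y].astype(int)
        # indices where consecutive pixels differ: 0-1 and 1-0 crossings
        for x in np.flatnonzero(np.diff(row) != 0):
            points.append((float(x), float(y)))
    points = np.asarray(points)
    if len(points) < 3:
        return points
    centre = points.mean(axis=0)                      # centre of gravity
    radii = np.linalg.norm(points - centre, axis=1)
    z = (radii - radii.mean()) / (radii.std() + 1e-12)
    return points[np.abs(z) < z_max]                  # pruned candidates
```

On a clean circular mask, the surviving points lie on the circle boundary and can be handed directly to the model-fitting stage.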
3.2 Step 2: model fusion
Having obtained pupillary and limbic boundary points for each segmentation algorithm, the scope of the model fusion step is to combine a set of segmentation boundaries into a new candidate boundary. This is useful for averaging out the segmentation errors of individual algorithms (and is considered as “the” fusion method in [5]).
This fusion strategy combines candidate sets B _{1},…,B _{ k } into a joint set, applying a single parameterisation model ModelFit (e.g. least-squares circular fitting) that minimises the model error. This is in contrast to the traditional sum rule, where continuous parameterisations are built separately for each curve to be combined. The method is employed separately for inner and outer iris boundaries, and the implementation uses Fitzgibbon’s ellipse fitting [23] for the combination. In this paper, we employ pairwise combinations (k=2); however, the method can easily be extended to test all possible combinations.
A weighted combination would also be a valid choice, but we decided against this option because setting weights requires further tuning with regard to the employed dataset, which we tried to avoid at this stage. The final approach, including the following stages, instead uses neural networks, which are much more flexible in exploiting image features to weight the fusion on a per-image basis.
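A minimal sketch of this joint-model fusion idea follows, using an algebraic least-squares circle fit on the pooled point set in place of Fitzgibbon's direct ellipse fit (the circle model keeps the example short; all names are illustrative):

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method): solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) in the least-squares
    sense, then recover centre and radius."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

def model_fusion(boundary_point_sets):
    """Model fusion: pool the boundary points of all k segmentation
    candidates and fit a single model to the joint set, instead of
    averaging separately fitted curves (sum rule)."""
    joint = np.vstack(boundary_point_sets)
    return fit_circle(joint)
```

Fitting one model to the pooled points lets dense, consistent candidates dominate the fit, which is exactly the contrast to the per-curve sum rule described above.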
3.3 Step 3: prediction
Quality parameters used for predicting segmentation accuracy
No  Par.  Description  Indicative property 

1  p _{ x }  Pupil centre x-coordinate  Close to image centre 
2  p _{ y }  Pupil centre y-coordinate  Close to centre 
3  p _{ r }  Pupil radius  Sensor-specific distribution (illumination) 
4  l _{ x }  Iris centre x-coordinate  Close to centre 
5  l _{ y }  Iris centre y-coordinate  Close to centre 
6  l _{ r }  Iris radius  System-specific distribution (focus distance) 
7  a _{ I }  Iris area  System-/sensor-specific distribution 
8  c _{ P }  Pupillary contrast  Usually higher for accurate segmentation 
9  c _{ L }  Limbic contrast  Usually higher for accurate segmentation 
10  μ  Mean iris intensity  Difficulty of segmentation (eye colour) 
11  σ  Iris intensity standard deviation  In-focus assessment (texture) 
Location parameters of circular-fitted pupil and limbic boundary centres (p _{ x },p _{ y } and l _{ x },l _{ y }) provide a useful check of whether an iris is found close to the image centre (assumed to be more likely for eye patches extracted by preceding eye detectors). The distance between the two centres can also reveal segmentation errors. Pupillary and limbic radius values p _{ r },l _{ r } are included for database-specific predictions. Some segmentation algorithms allow explicit fine-tuning of these segmentation parameters by specifying a range of tested values. Successful segmentations are assumed to exhibit sensor-specific (illumination intensity impacting pupil dilation) or database-specific (focus distance and average size of the human iris) distributions of these parameters. The total available iris texture area with regard to a noise mask (a _{ I }) is an important indicator for noise and the difficulty of the underlying image. Pupillary and limbic contrast (c _{ P },c _{ L }) were introduced to judge the accuracy of fitted boundaries, especially over- and under-segmentation. Boundary contrast is calculated as the absolute difference in average intensity between circular windows (5 pixels in height) outside and inside the boundary. Mean iris intensity and standard deviation are included as indicators for potential iris-sclera contrast and focus. All parameters refer to a particular segmentation result (characterised via its noise mask N); more precisely, we use the trace (P,L fitted with an elliptical model) after the scanning and pruning stage to compute parameter values.
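For illustration, the boundary-contrast parameters c _{ P } and c _{ L } described above could be computed as in the following sketch, assuming a grayscale NumPy image and a circular boundary; the 5-pixel band width follows the text, the function name is illustrative:

```python
import numpy as np

def boundary_contrast(image, cx, cy, r, band=5):
    """Boundary contrast: absolute difference in average intensity
    between a 5-pixel-high circular window just outside the boundary
    (radius r, centre (cx, cy)) and one just inside it."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    d = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)   # distance to centre
    inner = image[(d >= r - band) & (d < r)]       # annulus inside
    outer = image[(d >= r) & (d < r + band)]       # annulus outside
    return abs(float(outer.mean()) - float(inner.mean()))
```

An accurately placed pupillary boundary separates dark pupil pixels from brighter iris texture, so a high contrast value supports the segmentation hypothesis, as the table indicates.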

The input to the network consists of the quality parameter values x _{0},…,x _{ n }∈[0,1] obtained from a segmentation result after (min-max) normalisation.

The output of the network is a hypothesis value \(h_{W}(x) = f({W_{1}^{T}} f({W_{2}^{T}} x))\) estimating the segmentation error. It is calculated using (trained) matrices W _{1},W _{2}, i.e. a simple neural network with one hidden fully-connected n×n layer and a logistic-regression output layer. We use the sigmoid \(f(z) = \frac {1}{1+\exp (-z)} \in (0,1)\) as the activation function.
We used 50 % of each iris database for training and the remaining 50 % for testing. The computed hypothesis value is the returned quality score q(P,L):=h _{ W }(x(P,L)) of a segmentation result.
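The forward pass of this predictor can be sketched as follows; the weights W _{1}, W _{2} would come from training on the ground-truth segmentation errors, and the code below only illustrates the hypothesis computation (names are illustrative):

```python
import numpy as np

def sigmoid(z):
    """Activation f(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_error(x, W1, W2):
    """Hypothesis h_W(x) = f(W1^T f(W2^T x)): one fully-connected n-by-n
    hidden layer followed by a logistic-regression output unit, applied
    to the min-max normalised quality feature vector x in [0, 1]^n."""
    hidden = sigmoid(W2.T @ x)            # hidden-layer activations
    return float(sigmoid(W1.T @ hidden))  # predicted segmentation error
```

The returned value in (0, 1) is used directly as the quality score q(P,L) of a segmentation result.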
3.4 Step 4: selection
Outliers are implicitly removed by considering the combination maximising quality (i.e. with minimum predicted segmentation error q). Finally, the selected boundary P _{ s },L _{ s } is used for the rubber-sheet transform. Local noise masks could further be combined using, e.g., majority voting (not executed here). The segmentation tool from [10] is used for unrolling the iris image. It should also be noted that the mask-level fusion generates a mask which is used for unrolling the iris only. No noise or occlusion mask is generated, and consequently all tests on the fusion are performed purely on the unrolled iris image without masking.
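The selection step can be sketched as follows, enumerating the individual results and their pairwise fusions and keeping the candidate with the lowest predicted error; `fuse` and `quality` stand in for the model-fusion step (Section 3.2) and the trained predictor (Section 3.3), and all names are illustrative:

```python
from itertools import combinations

def select_best(segmentations, fuse, quality):
    """Step 4 (selection): evaluate each individual segmentation result
    and each pairwise fusion, and return the candidate minimising the
    predicted segmentation error q."""
    candidates = list(segmentations)
    candidates += [fuse(a, b) for a, b in combinations(segmentations, 2)]
    return min(candidates, key=quality)
```

Because inaccurate inputs receive high predicted errors, they are implicitly discarded whenever an individual result or another combination scores better.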
4 Experimental study
Segmentation algorithms used in experiments
Algorithm  Implementation  Model  Methods 

CAHT [1]  USIT v1.0 [1]  Circular  Circular Hough transform, contrast enhancement 
IFPP [22]  USIT v1.0 [1]  Elliptic  Iterative Fourier approximation; pulling and pushing 
OSIRIS [30]  IrisOsiris v4.1 [30]  Free  Circular HT; active contours 
WAHET [7]  USIT v1.0 [1]  Elliptic  Adaptive multi-scale HT; ellipsopolar transform 
To facilitate reproducible research, the trained neural networks will be made available at http://www.wavelab.at/sources/Wild16a.
4.1 Predictability of segmentation accuracy
We train iris segmentation accuracy prediction separately for each training database, but jointly for all available segmentation algorithms and combinations thereof. Using the true E _{1} ground truth segmentation error, we find the minimum of the cost function J(W) introduced in Section 3 (0.189 for CASIA, 0.222 for IITD), stopping after 1000 iterations. This yields an average delta between predicted and true E _{2} error of Δ E _{2}= 0.017 for the CASIA and Δ E _{2}= 0.015 for the IITD test sets. This corresponds to 94.03 % accuracy for CASIA and 96.77 % for IITD, respectively, in predicting segmentation errors (considering images with an E _{2} error >0.1, i.e. 10 %, as failed segmentations).
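The reported accuracy figures treat prediction as a binary failed/succeeded decision; a sketch of that evaluation follows (the 0.1 threshold is from the text, the function name is illustrative):

```python
def failure_prediction_accuracy(predicted_e2, true_e2, threshold=0.1):
    """Fraction of images on which the predicted and the ground-truth
    E2 error agree about failure (E2 > threshold counts as a failed
    segmentation)."""
    agree = sum((p > threshold) == (t > threshold)
                for p, t in zip(predicted_e2, true_e2))
    return agree / len(true_e2)
```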
4.2 Ground truth segmentation accuracy
Test set segmentation errors (comparison with ground truth) for individual algorithms (diagonal) versus pairwise quality-based segmentation fusion
(a) CASIA v4 interval database  (b) IIT Delhi database  

Average E _{1} [%]  CAHT  IFPP  OSIRIS  WAHET  CAHT  IFPP  OSIRIS  WAHET 
CAHT  2.47  2.56  2.46  2.33  2.95  2.88  2.73  2.80 
IFPP  5.75  3.67  3.08  4.98  3.62  3.41  
OSIRIS  5.27  3.01  5.69  3.10  
WAHET  3.45  5.95  
Average E _{2} [%]  
CAHT  3.75  3.87  3.75  3.55  4.28  3.82  3.61  3.74 
IFPP  10.09  5.58  4.65  6.37  4.77  4.49  
OSIRIS  8.20  4.59  7.48  4.11  
WAHET  5.20  7.85  
Outlier count (E _{1}>10%)  
CAHT  10  10  9  6  10  6  4  9 
IFPP  131  30  11  57  21  24  
OSIRIS  78  19  108  19  
WAHET  38  105 
4.3 Impact on recognition accuracy
Test set EER performance for individual algorithms (diagonal) versus pairwise quality-based segmentation fusion
(a) CASIA v4 interval database  (b) IIT Delhi database  

Equal-error rate [%] for LG  CAHT  IFPP  OSIRIS  WAHET  CAHT  IFPP  OSIRIS  WAHET 
CAHT  1.10  1.04  1.03  0.90  0.94  0.45  0.25  0.76 
IFPP  5.97  1.24  1.06  2.51  0.58  1.17  
OSIRIS  2.10  0.96  3.74  0.70  
WAHET  1.71  6.21  
Equal-error rate [%] for QSW  
CAHT  0.81  0.74  0.79  0.43  0.94  0.57  0.71  0.89 
IFPP  6.48  1.38  0.82  2.66  0.96  1.74  
OSIRIS  2.94  0.95  5.41  1.47  
WAHET  1.38  5.86 
5 Conclusions
In this paper, we presented a novel quality-based fusion method for the combination of segmentation algorithms. The positive result for the quality prediction submodule certifies its ability to obtain meaningful estimates of the segmentation errors of individual algorithms in as much as 94.03 % (CASIA) to 96.77 % (IITD) of segmentation failures; in addition, tests on ground truth segmentation conformity and recognition accuracy confirmed the high versatility of the suggested technique. Analysing pairwise combinations of the CAHT, WAHET, IFPP, and OSIRIS iris segmentation algorithms, recognition performance could be improved in all tested cases. The best obtained result for IITD was 0.25 % EER for CAHT+OSIRIS versus 0.94 % EER for CAHT alone (using LG), and for CASIA we obtained 0.43 % EER combining CAHT+WAHET versus 0.81 % for CAHT alone (using QSW) as the best single algorithm. Multi-segmentation fusion has been shown to be a very successful technique for obtaining higher accuracy at little additional cost, proving particularly useful where better normalised source images are needed. In the future, we will look at further improved quality prediction and fusion techniques combining multiple segmentation algorithms at once, as well as new sequential approaches saving computational effort. Furthermore, an investigation of extending the suggested methods to NIR and VIS images is ongoing work and has shown first promising results. As regards the training of weights, VIS images might be sensitive to different quality metrics, so the predictor should be carefully retrained for them.
Declarations
Acknowledgements
This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 312583 and from the Austrian Science Fund, project no P26630.
Authors’ contributions
PW developed the concept, carried out the quality prediction and ground truth/recognitionbased evaluation, and drafted the manuscript. HH provided the mask data, developed the scanning and pruning method, and participated in drafting the manuscript. JF supported in the experimental analysis and revising the manuscript. AU supported in the development of the concept, organised the data, and participated in revising the manuscript. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
C Rathgeb, A Uhl, P Wild, Iris Recognition: From Segmentation to Template Security. Advances in Information Security, vol. 59 (Springer, New York, 2012).
R Jillela, A Ross, PJ Flynn, in Proc. Winter Conf. on Appl. Computer Vision (WACV). Information fusion in low-resolution iris videos using principal components transform, (2011), pp. 262–269. doi:http://dx.doi.org/10.1109/WACV.2011.5711512.
AA Ross, K Nandakumar, AK Jain, Handbook of Multibiometrics (Springer, New York, 2006).
A Uhl, P Wild, in Proc. 18th Ib. Congr. on Pattern Recog. (CIARP). Fusion of iris segmentation results, (2013), pp. 310–317. doi:http://dx.doi.org/10.1007/978-3-642-41827-3_39.
P Wild, H Hofbauer, J Ferryman, A Uhl, in Proc. 14th International Conference of the Biometrics Special Interest Group (BIOSIG’15). Segmentation-level fusion for iris recognition, (2015), pp. 61–72. doi:http://dx.doi.org/10.1109/BIOSIG.2015.7314620.
 F AlonsoFernandez, J Bigun, in Proc. Int’l Conf. on Biometrics (ICB). Quality factors affecting iris segmentation and matching, (2013). doi:http://dx.doi.org/10.1109/ICB.2013.6613016.
A Uhl, P Wild, in Proc. Int’l Conf. on Biometrics (ICB). Weighted adaptive Hough and ellipsopolar transforms for real-time iris segmentation, (2012). doi:http://dx.doi.org/10.1109/ICB.2012.6199821.
 J Daugman, How iris recognition works. IEEE Trans. Circuits Syst. Video Technol. 14(1), 21–30 (2004). doi:http://dx.doi.org/10.1109/TCSVT.2003.818350.
RP Wildes, in Proc. of the IEEE, vol. 85. Iris recognition: an emerging biometric technology, (1997).
 H Hofbauer, F AlonsoFernandez, P Wild, J Bigun, A Uhl, in Proc. 22nd Int’l Conf. Pattern Rec. (ICPR). A ground truth for iris segmentation, (2014). doi:http://dx.doi.org/10.1109/ICPR.2014.101.
 A Abhyankar, S Schuckers, in Proc. of SPIE. Active shape models for effective iris segmentation, (2006). doi:http://dx.doi.org/10.1117/12.666435.
T Tan, Z He, Z Sun, Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition. Image Vis. Comput. 28(2), 223–230 (2010).
 G Sutra, S GarciaSalicetti, B Dorizzi, in Proc. Int’l Conf. Biom. (ICB). The Viterbi algorithm at different resolutions for enhanced iris segmentation, (2012). doi:http://dx.doi.org/10.1109/ICB.2012.6199825.
 H Proença, L Alexandre, Toward covert iris biometric recognition: experimental results from the NICE contests. IEEE Trans. Inf. For. Sec. 7(2), 798–808 (2012). doi:http://dx.doi.org/10.1109/TIFS.2011.2177659.
Z Wei, T Tan, Z Sun, J Cui, in Proc. Int’l Conf. on Biometrics (ICB), ed. by D Zhang, AK Jain. Robust and fast assessment of iris image quality, (2005), pp. 464–471. doi:http://dx.doi.org/10.1007/11608288_62.
E Tabassi, in Proc. Int’l Conf. Biom. Special Int. Group (BIOSIG). Large scale iris image quality evaluation, (2011), pp. 173–184.
P Wild, J Ferryman, A Uhl, Impact of (segmentation) quality on long- vs. short-timespan assessments in iris recognition performance. IET Biometrics. 4(4), 227–235 (2015). doi:http://dx.doi.org/10.1049/iet-bmt.2014.0073.
E Llano, J Vargas, M GarcíaVázquez, L Fuentes, A RamírezAcosta, in Proc. Int’l Conf. on Biometrics (ICB). Cross-sensor iris verification applying robust fused segmentation algorithms, (2015), pp. 17–22. doi:http://dx.doi.org/10.1109/ICB.2015.7139042.
Y SanchezGonzalez, Y Cabrera, E Llano, in Proc. Ib. Congr. Patt. Rec. (CIARP). A comparison of fused segmentation algorithms for iris verification, (2014), pp. 112–119. doi:http://dx.doi.org/10.1007/978-3-319-12568-8_14.
 J Huang, L Ma, T Tan, Y Wang, in Proc. BMVC. Learning based resolution enhancement of iris images, (2003), pp. 153–162. doi:http://dx.doi.org/10.5244/C.17.16.
 K Hollingsworth, T Peters, KW Bowyer, PJ Flynn, Iris recognition using signallevel fusion of frames from video. IEEE Trans. Inf. For. Sec. 4(4), 837–848 (2009). doi:http://dx.doi.org/10.1109/TIFS.2009.2033759.
A Uhl, P Wild, in Proc. Int’l Conf. Image An. Rec. (ICIAR), ed. by A Campilho, M Kamel. Multi-stage visible wavelength and near infrared iris segmentation framework (2012), pp. 1–10. doi:http://dx.doi.org/10.1007/978-3-642-31298-4_1.
 A Fitzgibbon, M Pilu, RB Fisher, Direct least square fitting of ellipses. IEEE Trans. Pat. An. Ma. Int. 21(5), 476–480 (1999). doi:http://dx.doi.org/10.1109/34.765658.
E Tabassi, PJ Grother, WJ Salamon, Iris quality calibration and evaluation (IQCE): evaluation report, NIST Interagency/Internal Report (NISTIR), 7820, (2011).
E Tabassi, in Proc. of the Int’l Biometrics Performance Conference, IBPC’14. ISO/IEC 29794-6 quantitative standardization of iris image quality, (2014).
C Zhu, RH Byrd, P Lu, J Nocedal, Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Softw. 23(4), 550–560 (1997). doi:http://dx.doi.org/10.1145/279232.279236.
 CASIAIrisV4 Interval database. http://biometrics.idealtest.org/dbDetailForUser.do?id=4.
 IIT Delhi iris database. http://www4.comp.polyu.edu.hk/%7Ecsajaykr/IITD/Database_Iris.htm.
 A Kumar, A Passi, Pattern Recognition. 43(3), 1016–1026 (2010). doi:http://dx.doi.org/10.1016/j.patcog.2009.08.016.
 D Petrovska, A Mayoue, Description and documentation of the biosecure software library. Technical report, Project No IST2002507634  BioSecure (2007) Available online: http://biosecure.itsudparis.eu/AB/media/files/BioSecure_Deliverable_D0222_b4.pdf.pdf.
 Iris segmentation ground truth database—elliptical/polynomial boundaries (IRISSEGEP). http://www.wavelab.at/sources/Hofbauer14b.
 L Ma, T Tan, Y Wang, D Zhang, Efficient iris recognition by characterizing key local variations. IEEE Trans. Image Proc. 13(6), 739–750 (2004). doi:http://dx.doi.org/10.1109/TIP.2004.827237.
L Masek, Recognition of human iris patterns for biometric identification, MSc thesis, Univ. Western Australia, 2003. Available online: http://www.peterkovesi.com/studentprojects/libor/LiborMasekThesis.pdf.