Texture based features for robust palmprint recognition: a comparative study
EURASIP Journal on Information Security volume 2015, Article number: 5 (2015)
Abstract
Palmprint is a widely used biometric trait deployed in various access-control applications due to its convenience of use, reliability, and low cost. In this paper, we propose a novel scheme for palmprint recognition using a sparse representation of features obtained from a Bank of Binarized Statistical Image Features (B-BSIF). The palmprint image is characterized by a rich set of features including principal lines, ridges, and wrinkles. Thus, the use of an appropriate texture descriptor scheme is expected to capture this information accurately. To this end, we explore the idea of B-BSIF, which comprises 56 different BSIF filters whose responses on the given palmprint image are processed independently and classified using a sparse representation classifier (SRC). Extensive experiments are carried out on three different large-scale publicly available palmprint databases. We then present an extensive analysis comparing the proposed scheme with seven different contemporary state-of-the-art schemes, which reveals the efficacy of the proposed scheme for robust palmprint recognition.
1 Introduction
Biometric systems are widely used in access control and security-based applications. The goal of a biometric system is to utilize physical and/or behavioral characteristics to identify or verify the subject of interest. Various kinds of biometric systems exist that are based on physical and/or behavioral cues such as the face, iris, speech, keystroke dynamics, palmprint, and retina. Among these, the palmprint-based biometric system, which has been investigated for over 15 years, has demonstrated its applicability as a successful biometric modality. Palmprints exhibit unique characteristics that can be described using texture features arising from the presence of palm creases, wrinkles, and ridges. Furthermore, palmprints can be captured using low-cost sensors with very low-resolution imaging of 75 dots per inch (dpi) [1, 2]. In addition, recent work [3] has demonstrated the anti-spoofing nature of palmprints, which establishes the palmprint as a highly reliable biometric characteristic.
The increasing popularity of palmprint biometrics has resulted in various feature extraction techniques that have contributed to boosting the accuracy of palmprint verification. The available techniques can be broadly classified into the following five types: (1) local feature-based approaches, (2) statistical approaches, (3) appearance-based approaches, (4) texture-based approaches, and (5) hybrid approaches. Local feature extraction techniques are based on extracting features such as ridges (including delta points and minutiae) and palm creases (or principal lines). Local features can be extracted from the palmprint using various techniques, including the line segment approach [4], the morphological median wavelet [5], the Sobel operator [6], the Canny operator [6], the Plessey operator [7], and the wide-line detection operator [8]. Even though local features have been shown to achieve accurate performance, these methods demand very high-resolution palmprint images and thereby increase the cost of the sensor. Statistical approaches extract features such as mean, variance, moments, and energy. Various techniques exist to capture the statistics of the palmprint, including the wavelet transform [9], the Fourier transform [10], cepstrum energy [11], sub-block energy based on the Gabor transform [12, 13], micro-scale invariant Gabor features [14], and Zernike moments [15]. However, statistics-based approaches are not robust against sensor noise. Appearance-based approaches map the data from a high-dimensional to a low-dimensional space to achieve both high accuracy and fast comparison. The most popular appearance-based techniques include Principal Component Analysis (PCA) [16], 2DPCA [17], bidirectional PCA [18], (2D)2PCA [19], independent component analysis (ICA) [20], linear discriminant analysis (LDA) [21], kernel-based approaches such as kernel discriminant analysis (KDA) [13] and kernel PCA (KPCA) [22], and generative model-based approaches, namely the PCA mixture model (PCAMM) and the ICA mixture model (ICAMM) [23]. Even though appearance models can perform on par with statistical approaches, they still lack robustness against noise as well as against variation of palmprint templates over time. Texture-based schemes normally extract the global patterns of lines, ridges, and wrinkles that are the basis of robust palmprint recognition. Among the available texture extraction schemes, local binary patterns (LBP) [24], the Gabor transform [13], palmcode [25], ordinal code [26], fusion code [27], competitive code [28], and contour code [29] have been shown to perform accurately even on low-resolution palmprint images. Hybrid schemes [30, 31] combine more than one of the above-mentioned schemes so as to address the shortcomings of the individual schemes. Compared with the other four types, hybrid schemes appear to be the most robust and accurate for palmprint recognition. Table 1 summarizes the characteristics of the existing palmprint feature extraction schemes in terms of computational complexity and accuracy. Detailed surveys on palmprint recognition can be found in [32, 33].
In this work, we propose a simple and novel approach for palmprint verification based on the sparse representation of features derived from a Bank of Binarized Statistical Image Features (B-BSIF) [34]. BSIF [34] is a texture descriptor similar to LBP, but it differs in the way the filters are obtained: BSIF filters are learned from natural images, whereas LBP filters are manually predefined. To the best of our knowledge, no work has been reported in the literature that uses Binarized Statistical Image Features (BSIF) for palmprint verification. With this backdrop, in our previous work [35], we made an initial attempt towards the sparse representation of BSIF features. In this paper, that work is extended in several directions. By exploiting the idea of B-BSIF, a bank comprising 56 different filters, we were able to reduce the equal error rate (EER) significantly. Overall, the main contributions of this work are:
- A new method based on the Bank of BSIF (B-BSIF) and a sparse representation classifier (SRC) for palmprint recognition.
- Extensive experiments carried out on the following three different palmprint databases: the PolyU contact palmprint database [36] with 356 subjects, the IIT Delhi contactless palmprint database [37] with 236 subjects, and the Multispectral PolyU palmprint database [3] with 500 subjects.
- A comprehensive analysis comparing the proposed scheme with seven different contemporary state-of-the-art schemes based on LBP [24], palmcode [25], ordinal code [26], fusion code [27], the Gabor transform with KDA [13], the Gabor transform with sparse representation [38], and also with our previously proposed technique based on the sparse representation of BSIF [35].
All in all, the proposed framework, being simple, novel, and the first of its kind in the palmprint verification/recognition literature, is expected to open up a new dimension for further research in the field of palmprint biometrics.
The rest of the paper is structured as follows: Section 2 presents the proposed scheme for robust palmprint recognition, Section 3 discusses the experimental setup, protocols, and results, and Section 4 draws the conclusion.
2 Proposed method
Figure 1 shows the block diagram of the proposed Bank of BSIF and sparse representation classifier (SRC) based scheme for palmprint recognition. The proposed scheme can be structured in two main steps.
2.1 Region of interest extraction
The main idea of region of interest (RoI) extraction is to extract the significant region of the palmprint that carries a rich set of features such as principal lines, ridges, and wrinkles, while compensating for rotation and translation. Accurate extraction of the RoI plays a crucial role in improving the performance of the overall palmprint recognition. In this work, we have employed the algorithm proposed in [23], which aligns the palmprint by computing its center of mass and locating the valley regions. We carried out this RoI extraction only on the PolyU palmprint database, as the other two databases (MSPolyU and IITD) already provide RoI images.
2.2 Bank of BSIF features and sparse representation classifier
The idea behind the proposed B-BSIF is to construct a bank of filters trained on a set of natural images. Traditionally, such filters can be trained in an unsupervised manner using popular techniques such as Restricted Boltzmann Machines (RBMs) [39, 40], auto-encoders [41], sparse coding [42], and independent component analysis (ICA) [43, 44]. Among these schemes, ICA is the most appealing choice as it avoids the tuning of large sets of hyper-parameters and provides a statistically independent basis that can in turn be used as filters to extract features from a given image. Thus, given the natural images, we first normalize them to zero mean and unit variance [34]. Then, we sample N_{Im} patches to learn the BSIF filters using ICA. The size of the image patch sampled from the natural images fixes the size of the BSIF filter to be learned, and the number of retained ICA basis vectors determines the length of the BSIF filter. For instance, a BSIF filter of size 5×5 and length 8 corresponds to the top 8 basis vectors of the ICA algorithm learned using image patches of size 5×5 sampled from the natural images. Thus, by varying both size and length, one can learn a family of BSIF filters from natural images. In this work, we consider 56 different pre-learned filters with varying size and length that constitute the Bank of BSIF (B-BSIF) filters.
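To make the filter-learning procedure concrete, the following is a minimal sketch (in Python, using scikit-learn) of how BSIF-style filters could be learned from natural-image patches via mean subtraction, PCA, and ICA. The function name, patch count, and filter geometry are illustrative assumptions and do not reproduce the exact open-source filters of [34].

```python
# Sketch: learning BSIF-style filters from natural-image patches (assumed setup).
import numpy as np
from sklearn.decomposition import PCA, FastICA

def learn_bsif_filters(natural_images, size=11, length=8, n_patches=50000, seed=0):
    """Learn `length` filters of shape (size, size) from a list of grayscale images."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = natural_images[rng.integers(len(natural_images))]
        y = rng.integers(img.shape[0] - size + 1)
        x = rng.integers(img.shape[1] - size + 1)
        patches.append(img[y:y + size, x:x + size].ravel())
    X = np.asarray(patches, dtype=np.float64)
    X -= X.mean(axis=1, keepdims=True)            # step 1: subtract the mean of each patch
    pca = PCA(n_components=length).fit(X)         # step 2: keep the top `length` components
    ica = FastICA(n_components=length, random_state=seed)
    ica.fit(pca.transform(X))                     # step 3: statistically independent basis
    W = ica.components_ @ pca.components_         # map the ICA basis back to pixel space
    return W.reshape(length, size, size)
```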
In this work, we have employed the open-source filters [34] that were learned using 50,000 image patches randomly sampled from 13 different natural images [45]. The learning process to construct these statistically independent filters has three main steps: (1) mean subtraction of each patch, (2) dimensionality reduction using principal component analysis (PCA), and (3) estimation of statistically independent filters (or basis) using independent component analysis (ICA). Thus, given the palmprint image I_{P}(m,n) and a BSIF filter \(W_{i}^{k \times k}\), the filter response is obtained as follows [34]:
$$ R_{i}(m,n) = I_{P}(m,n) \times W_{i}^{k \times k} \qquad (1) $$
where × denotes the convolution operation, m and n denote the size of the palmprint image patch, \(W_{i}^{k \times k}\), ∀ i={1,2,…,L}, denotes the i-th BSIF filter, L denotes the length of the BSIF filter, and k×k indicates its size. The filter responses are computed together and binarized to obtain a binary string as follows [34]:
$$ b_{i}(m,n) = \begin{cases} 1, & \text{if } R_{i}(m,n) > 0 \\ 0, & \text{otherwise} \end{cases} \qquad (2) $$
Finally, the BSIF features are extracted by considering each pixel (m,n) as a set of binary values obtained from the L linear filters. Mathematically, for a given pixel (m,n) and its corresponding binary representation b_{i}(m,n), the BSIF encoded features are obtained as follows:
$$ \mathrm{BSIF}(m,n) = \sum_{i=1}^{L} b_{i}(m,n)\, 2^{\,i-1} \qquad (3) $$
The whole procedure of BSIF feature extraction is illustrated in Fig. 2 using the BSIF filter \( W_{8}^{17 \times 17}\), which has length 8 and size 17×17. Figure 2a shows the input RoI of the palmprint image. Figure 2b shows the learned BSIF filter of size 17×17 and length 8. Figure 2c shows the results of the individual convolutions of the palmprint image with the BSIF filter as mentioned in Eq. 1. Figure 2d shows the final BSIF feature image encoded using Eq. 3 on the palmprint RoI shown in Fig. 2a.
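For illustration, the sketch below applies Eqs. (1)–(3) for a single filter stack: it convolves the palmprint RoI with each of the L filters, binarizes the responses, and packs the resulting bits into one BSIF code image. The array shapes and function names are assumptions, not the authors' implementation.

```python
# Sketch: BSIF encoding of one palmprint RoI with a filter stack W of shape (length, k, k).
import numpy as np
from scipy.signal import convolve2d

def bsif_encode(roi, W):
    length = W.shape[0]
    code = np.zeros(roi.shape, dtype=np.uint32)
    for i in range(length):
        response = convolve2d(roi, W[i], mode="same", boundary="symm")  # Eq. (1): linear filtering
        bit = (response > 0).astype(np.uint32)                          # Eq. (2): binarize the response
        code += bit << i                                                 # Eq. (3): weight the i-th bit by 2^i
    return code
```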
To achieve good performance in palmprint recognition using BSIF, two important factors need to be considered: the filter size and the filter length. However, a single filter with a fixed size and length may not be capable of capturing sufficient information to achieve accurate palmprint recognition. Thus, in this work, we propose to use a bank of filters with varying filter size and length. The filter size is varied from 5×5 to 17×17 in steps of two, giving filters of seven different sizes. In a similar manner, we vary the length of the filter (i.e., the number of independent components) from 5 to 12 in steps of 1, giving 8 different lengths. Thus, our ensemble has 7×8=56 filters, and the response of the palmprint image to each filter is obtained independently. Given a palmprint sample P(m,n), we obtain 56 independent BSIF coded images R_{P}={R_{P1},R_{P2},…,R_{P56}}.
Figure 3 illustrates the various BSIF filters included in the proposed B-BSIF scheme. Figure 4 shows qualitative results on an example palmprint with varying filter size and a fixed bit length of 8. It is interesting to observe that, as the filter size increases, the distinctive information about the coarse palm lines also increases. Thus, using filters of various lengths will also capture various kinds of information from the palmprint. Furthermore, the variation of wrinkles and ridges among different palmprints can be characterized more accurately using a bank of BSIF filters than using a single BSIF filter.
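A small sketch of how the 56 (size, length) combinations of the proposed bank could be enumerated is given below; the loader for the pre-learned filters of [34] is hypothetical, and `bsif_encode` is the sketch shown earlier.

```python
# Sketch: enumerating the B-BSIF bank of 7 sizes x 8 lengths = 56 filters.
sizes = range(5, 18, 2)      # 5, 7, 9, 11, 13, 15, 17
lengths = range(5, 13)       # 5, 6, ..., 12
bank = [(k, l) for k in sizes for l in lengths]
assert len(bank) == 56

def bbsif_responses(roi):
    # One BSIF code image per filter in the bank: R_P = {R_P1, ..., R_P56}.
    return [bsif_encode(roi, load_pretrained_bsif_filter(size=k, length=l))  # hypothetical loader
            for k, l in bank]
```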
Given the palmprint sample I_{P}(m,n), we obtain its response to all BSIF filters in the B-BSIF; we then perform the sparse representation of the obtained responses individually for each filter. Thus, the sparse representation of the features obtained from each filter in the B-BSIF is carried out as follows:
1. Given the reference palmprint samples, we first extract the BSIF features (corresponding to one filter) and construct a training matrix T_{r} for all C classes (or subjects) as follows:
$$ T_{r} = \left[T_{r1}, T_{r2}, \ldots, T_{rC}\right] \in \Re^{N \times (n_{u} \cdot C)} \qquad (4) $$
where n_{u} denotes the number of reference samples for each class and N indicates the dimension of the BSIF features obtained on the n_{u} reference samples from the C classes (or subjects).
2. Given a test (or probe) sample, we obtain its BSIF features T_{e} (corresponding to the same filter as above), which can be considered as a linear combination of the training vectors:
$$ T_{e} = T_{r}\alpha \qquad (5) $$
where
$$ \alpha = \left[\alpha_{11}, \ldots, \alpha_{1n_{u}} \mid \alpha_{21}, \ldots, \alpha_{2n_{u}} \mid \ldots \mid \alpha_{C1}, \ldots, \alpha_{Cn_{u}}\right] \qquad (6) $$
3. Solve the l_{1} minimization problem [46]:
$$ \hat{\alpha} = \arg \min_{\alpha' \in \Re^{(n_{u} \cdot C)}} \|\alpha'\|_{1} \quad \text{subject to} \quad T_{e} = T_{r}\alpha' \qquad (7) $$
4. Calculate the residual for each class c as follows:
$$ r_{c}(T_{e}) = \left\|T_{e} - \Pi_{c}(\hat{\alpha})\right\|_{2} \qquad (8) $$
where \Pi_{c}(\hat{\alpha}) denotes the reconstruction of the probe sample using only those coefficients of \hat{\alpha} associated with class c.
5. Finally, use the residual errors as comparison scores to compute the performance of the overall system.
Finally, we repeat the above-mentioned steps 1–5 for all 56 BSIF filters in the bank and obtain the final comparison score as the minimum of the residual errors obtained over all 56 filters.
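As an illustration of steps 1–5 and of the final score fusion, the sketch below uses an l1-regularized least-squares (Lasso) solver as a practical substitute for the equality-constrained l1 minimization of Eq. (7); matrix layouts, function names, and the regularization strength are assumptions rather than the authors' implementation (which was written in Matlab).

```python
# Sketch: SRC residuals per filter and minimum-residual fusion over the 56 filters.
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(T_r, labels, T_e, reg=1e-3):
    """Per-class residuals for one BSIF filter.

    T_r: (N, n_u*C) reference matrix, one BSIF feature vector per column.
    labels: (n_u*C,) class label of each column of T_r.
    T_e: (N,) BSIF feature vector of the probe sample.
    """
    lasso = Lasso(alpha=reg, fit_intercept=False, max_iter=10000)
    lasso.fit(T_r, T_e)                                        # approximate solution of Eq. (7)
    alpha_hat = lasso.coef_
    residuals = {}
    for c in np.unique(labels):
        alpha_c = np.where(labels == c, alpha_hat, 0.0)        # keep only class-c coefficients
        residuals[c] = np.linalg.norm(T_e - T_r @ alpha_c)     # Eq. (8)
    return residuals

def bbsif_comparison_score(refs_per_filter, labels, probe_per_filter, claimed_class):
    """Repeat steps 1-5 over the 56 filters and take the minimum residual as the score."""
    scores = [src_residuals(T_r, labels, T_e)[claimed_class]
              for T_r, T_e in zip(refs_per_filter, probe_per_filter)]
    return min(scores)
```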
3 Experimental results and discussion
This section presents the experimental results obtained with the proposed scheme for palmprint recognition. Extensive experiments are carried out on three different large-scale publicly available palmprint databases: (1) the PolyU palmprint database [36], (2) the IIT Delhi palmprint database [37], and (3) the Multispectral PolyU palmprint database [3]. All experimental results are presented in terms of the equal error rate (EER), and we also present a statistical validation of the results with a 90 % confidence interval [13]. In the following section, we present the experimental protocol adopted in this work.
3.1 Assessment protocol
This section describes the evaluation protocol adopted in this work on the three different palmprint databases, which is the same as in our previous paper [35].
PolyU palmprint database This database comprises 352 subjects, each with ten samples collected in two different sessions. For our experiments, we consider all ten samples from the first session as references and all samples from the second session as probes. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm.
IIT Delhi palmprint database This database consists of 235 subjects with both left and right palmprint samples. Each subject has five samples captured independently from both the left and the right palm. To evaluate this database, we consider four samples as references and the remaining one sample as the probe. We repeat this selection of reference and probe samples using leave-one-out cross-validation with k = 10 and finally report the result by averaging the performance over all ten runs. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm.
Multi-Spectral PolyU palmprint database This database consists of 500 subjects whose palmprint samples were captured in two different sessions in four different spectral bands: blue, red, green, and near infrared (NIR). Each session has six samples per subject. We select the samples from the first session as reference samples and the samples from the second session as probes. We repeat this procedure for all four spectral bands, and the results are presented independently for each band. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm.
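For reference, the EER values reported in the following tables can be computed from genuine and impostor comparison scores in the standard way; the sketch below is not taken from the paper and assumes dissimilarity scores (lower means a better match, as with the SRC residuals above).

```python
# Sketch: equal error rate (EER) from dissimilarity scores.
import numpy as np

def equal_error_rate(genuine, impostor):
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine > t)      # genuine attempts rejected at threshold t
        far = np.mean(impostor <= t)    # impostor attempts accepted at threshold t
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```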
3.2 Results and discussion
Figure 5 shows qualitative results of the proposed scheme along with the five different state-of-the-art schemes employed in this work. It can be observed that the BSIF features appear to capture the palmprint features, characterized in terms of ridges and wrinkles, more accurately. This qualitative result indicates the strength of the proposed BSIF features for palmprint recognition.
Table 2 shows the performance of the proposed scheme based on B-BSIF and SRC on the PolyU palmprint database. It can be observed that the proposed scheme shows the best performance with an EER of 4.06 %, an improvement of over 2 % compared to our previous scheme [35] based on a single BSIF filter. This further justifies the applicability of the proposed scheme for palmprint recognition.
Table 3 tabulates the quantitative performance of the proposed scheme on the IITD contactless palmprint database. Here, we present the results individually for the left and right palmprint samples. As noticed from Table 3, the proposed scheme shows outstanding performance with an EER of 0.12 % on the left palmprint samples and 0.72 % on the right palmprint samples. This indicates a performance improvement of over 1 % compared to our previous scheme [35] based on a single BSIF filter. This further confirms the applicability of the proposed scheme on yet another kind of database, in which the palmprint samples are captured in a contactless fashion.
Table 4 shows the performance of the proposed scheme on the Multispectral PolyU palmprint database. Here also, it can be noted that the proposed scheme achieves outstanding performance with an EER of 0 %. These results further justify the applicability of the proposed scheme to palmprint samples captured in different spectral bands.
Thus, from the above experiments, it can be observed that the proposed scheme shows the best performance when compared with the well-established state-of-the-art schemes considered in this work. Further, the performance achieved on three different databases justifies its robustness and applicability for palmprint recognition.
Table 5 shows the computation time of the various algorithms used in this work. All algorithms were implemented in Matlab running on a PC with an Intel i7 processor, 8 GB of RAM, and Windows 7. Note that the implementations are not optimized for speed; hence, the computation times provided in Table 5 are indicative only.
4 Conclusions
Accurate representation of features plays a vital role in improving the accuracy and reliability of palmprint recognition. In this paper, we have introduced a novel approach for palmprint recognition based on B-BSIF and SRC. The main idea of the proposed method is to use multiple BSIF filters with various sizes and lengths to constitute an ensemble (or bank) of BSIF filters. Since each of these BSIF filters is learned on natural images using independent component analysis (ICA), they exhibit the property of statistical independence. We proposed to build the B-BSIF with 56 different BSIF filters. Each of these filters is then associated with the SRC, which performs the sparse representation of the corresponding BSIF filter response. Thus, given a palmprint sample, we obtain its response to each of the BSIF filters and then obtain the corresponding comparison score using the SRC. Finally, we select the best comparison score, which corresponds to the minimum value of the residual error. The proposed method is validated by conducting extensive experiments on three different large-scale publicly available databases, which indicate its outstanding performance. The performance of the proposed scheme is compared with seven well-established state-of-the-art schemes. The obtained results show that the proposed scheme is an efficient and robust tool for accurate palmprint recognition.
References
D Zhang, Palmprint authentication (Springer-verlag, Guangzhou, China, 2004).
A Genovese, V Piuri, F Scotti, Advances in Information Security, vol. 60 (Springer, 2014). ISBN 978-3-319-10365-5.
D Zhang, Z Guo, G Lu, YLL Zhang, W Zuo, Online joint palmprint and palmvein verification. Expert Syst. Appl. 38(3), 2621–2631 (2011).
W Shu, D Zhang, in Pattern Recognition, 1998. Proceedings. Fourteenth International Conference On, 1. Palmprint verification: an implementation of biometric technology, pp. 219–221. Brisbane, Qld, 1998.
D Qingyun, Y Yinglin, Z Dapeng, A line feature extraction method based on morphological median pyramid. J. South China Univ. Technol. 29(5), 14–18 (2001).
C-C Han, H-L Cheng, C-L Lin, K-C Fan, Personal authentication using palm-print features. Pattern Recognit. 36(2), 371–381 (2003).
JA Noble, Finding corners. Image Vis. Comput. 6(2), 121–128 (1988).
L Liu, D Zhang, in IEEE International Conference on Image Processing (ICIP), 3. Palm-line detection, (2005), pp. 269–272. doi:http://dx.doi.org/10.1109/ICIP.2005.1530380
J-Y Gan, D-P Zhou, in Signal Processing, 2006 8th International Conference On, 3. A novel method for palmprint recognition based on wavelet transform, pp. 1–7. Beijing, 2006.
X Yuli, Palmprint feature extraction based on low frequency distribution. Microcomput. Appl. 20(1), 40–43 (2011).
MMM Fahmy, Palmprint recognition based on mel frequency cepstral coefficients feature extraction. Ain Shams Eng. J. 1(1), 39–47 (2010).
Y Zhang, D Zhao, G Sun, Q Guo, B Fu, in Artificial Intelligence and Computational Intelligence (AICI), 2010 International Conference On, 1. Palm print recognition based on sub-block energy feature extracted by real 2d-gabor transform, pp. 124–128. Sanya, 2010.
R Raghavendra, B Dorizzi, A Rao, GH Kumar, Designing efficient fusion schemes for multimodal biometric systems using face and palmprint. Pattern Recognit. 44(5), 1076–1088 (2011).
P Xin, R Qiuqi, W Yanxia, Palmprint recognition using Gabor local relative features. Comput. Eng. Appl. 48(15), 34–38 (2012).
GS Badrinath, N Kachhi, P Gupta, Verification system robust to occlusion using low-order zernike moments of palmprint sub-images. Telecommun. Syst. 47(3–4), 275–290 (2011).
G Lu, D Zhang, K Wang, Palmprint recognition using eigenpalms features. Pattern Recognit. Lett. 24(9–10), 1463–1467 (2003).
H Sang, W Yuan, Z Zhang, in Advances in Neural Networks – ISNN 2009. Lecture Notes in Computer Science, 5552, ed. by W Yu, H He, and N Zhang. Research of palmprint recognition based on 2DPCA (Wuhan, China, 2009), pp. 831–838.
W Zuo, K Wang, D Zhang, in Image Processing, 2005. ICIP 2005. IEEE International Conference On, 2. Bi-directional pca with assembled matrix distance metric, (2005), pp. 958–61. doi:http://dx.doi.org/10.1109/ICIP.2005.1530216
X Pan, Q-Q Ruan, Palmprint recognition using Gabor feature-based (2d)2pca. Neurocomputing. 71(13–15), 3032–6 (2008).
G-M Lu, K-Q Wang, D Zhang, in Machine Learning and Cybernetics, 2004. Proceedings of 2004 International Conference On, 6. Wavelet based independent component analysis for palmprint identification, (2004), pp. 3547–35506. doi:http://dx.doi.org/10.1109/ICMLC.2004.1380404
X Wu, D Zhang, K Wang, Fisherpalms based palmprint recognition. Pattern Recognit. Lett. 24(15), 2829–38 (2003).
M Ekinci, M Aykut, Gabor-based kernel pca for palmprint recognition. Electron. Lett. 43(20), 1077–9 (2007).
R Raghavendra, A Rao, GK Hemantha, in International Conference on Advances in Computing, Control, Telecommunication Technologies. A novel three stage process for palmprint verification (Trivandrum, Kerala, 2009), pp. 88–92.
X Wang, H Gong, H Zhang, B Li, Z Zhuang, in Pattern Recognition, 2006. ICPR 2006. 18th International Conference On, 3. Palmprint identification using boosting local binary pattern (IEEE, Hong Kong, 2006), pp. 503–6.
A Kumar, H Shen, in 3rd Int. Conference on Image and Graphics (ICIG). Palmprint identification using palm codes (Hong Kong), pp. 258–261. http://dx.doi.org/10.1007/978-3-642-04070-2_42
Z Sun, T Tan, Y Wang, SZ Li, in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference On. Ordinal palmprint representation for personal identification, (2005), pp. 279–84. http://dx.doi.org/10.1109/CVPR.2005.267
A Kong, D Zhang, M Kamel, Palmprint identification using feature-level fusion. Pattern Recognit. 39(3), 478–487 (2006).
J Wei, W Jia, H Wang, D-F Zhu, in Emerging Intelligent Computing Technology and Applications. Improved competitive code for palmprint recognition using simplified gabor filter, (2009), pp. 371–7. ISBN:3-642-04069-1 978-3-642-04069-6.
Z Khan, A Mian, Y Hu, in IEEE International Conference on Computer Vision. Contour code: Robust and efficient multispectral palmprint encoding for human recognition (Barcelona, 2011), pp. 1935–1942.
A Kumar, D Zhang, Personal authentication using multiple palmprint representation. Pattern Recognit. 38(10), 1695–1704 (2005).
W Li, J You, D Zhang, Texture-based palmprint retrieval using a layered search scheme for personal identification. Multimedia, IEEE Trans. 7(5), 891–8 (2005).
A Kong, D Zhang, M Kamel, A survey of palmprint recognition. Pattern Recognit. 42(7), 1408–18 (2009).
D Zhang, W Zuo, F Yue, A comparative study of palmprint recognition algorithms. ACM Comput. Surv. 44(1), 2:1–2:37 (2012).
J Kannala, E Rahtu, in Pattern Recognition (ICPR), 2012 21st International Conference On. Bsif: Binarized statistical image features (IEEE,Tsukuba, 2012), pp. 1363–6.
R Raghavendra, C Busch, in Proceedings of the 2Nd ACM Workshop on Information Hiding and Multimedia Security, IH&MMSec ’14. Robust palmprint verification using sparse representation of binarized statistical features: a comprehensive study (ACM,New York, NY, USA, 2014), pp. 181–5.
D Zhang, W-K Kong, J You, M Wong, Online palmprint identification. Pattern Anal. Mach. Intell. IEEE Trans. 25(9), 1041–50 (2003).
A Kumar, in Sixth Indian Conference on Computer Vision, Graphics Image Processing (ICVGIP). Incorporating cohort information for reliable palmprint authentication (Bhubaneswar, 2008), pp. 583–90.
R Raghavendra, C Busch, Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition. Pattern Recognit. 47(6), 2205–2221 (2014).
H Lee, C Ekanadham, AY Ng, in Advances in Neural Information Processing Systems. Sparse deep belief net model for visual area V2, (2008), pp. 873–80.
G Hinton, S Osindero, Y-W Teh, A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–54 (2006).
P Vincent, H Larochelle, Y Bengio, P-A Manzagol, in Proceedings of the 25th International Conference on Machine Learning. Extracting and composing robust features with denoising autoencoders (ACM,New York, NY, USA, 2008), pp. 1096–1103.
BA Olshausen, DJ Field, Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Res. 37(23), 3311–25 (1997).
JH van Hateren, A van der Schaaf, Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. London. Series B: Biol. Sci. 265(1394), 359–66 (1998).
AJ Bell, TJ Sejnowski, The independent components of natural scenes are edge filters. Vis. Res. 37(23), 3327–38 (1997).
A Hyvarinen, J Hurri, PO Hoyer, Natural Image Statistics (Springer, Berlin, 2009).
J Mairal, F Bach, J Ponce, G Sapiro, Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 11, 19–60 (2010).
Acknowledgements
The authors extend their thanks to the reviewers for the comments and suggestions which helped in improving the overall quality of this article. This work was funded by the EU 7th Framework Program (FP7) under grant agreement no. 284862 for the large-scale integrated project FIDELITY.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
RR conceived, designed, and performed experiments. RR and CB contributed to the writing of the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Raghavendra, R., Busch, C. Texture based features for robust palmprint recognition: a comparative study. EURASIP J. on Info. Security 2015, 5 (2015). https://doi.org/10.1186/s13635-015-0022-z