Open Access

Texture based features for robust palmprint recognition: a comparative study

EURASIP Journal on Information Security 2015, 2015:5

https://doi.org/10.1186/s13635-015-0022-z

Received: 2 February 2015

Accepted: 30 July 2015

Published: 19 August 2015

Abstract

Palmprint is a widely used biometric trait deployed in various access-control applications due to its convenience of use, reliability, and low cost. In this paper, we propose a novel scheme for palmprint recognition using a sparse representation of features obtained from a Bank of Binarized Statistical Image Features (B-BSIF). The palmprint image is characterized by a rich set of features including principal lines, ridges, and wrinkles. Thus, an appropriate texture descriptor scheme is expected to capture this information accurately. To this end, we explore the idea of B-BSIF, which comprises 56 different BSIF filters whose responses on a given palmprint image are processed independently and classified using a sparse representation classifier (SRC). Extensive experiments are carried out on three different large-scale publicly available palmprint databases. We then present an extensive analysis comparing the proposed scheme with seven different contemporary state-of-the-art schemes, which reveals the efficacy of the proposed scheme for robust palmprint recognition.

Keywords

Biometrics; Palmprint; Comparative study; Texture features

1 Introduction

Biometric systems are widely used in access control and security-based applications. The goal of a biometric system is to utilize physical and/or behavioral characteristics to identify/verify the subject of interest. There exist various kinds of biometric systems based on physical and/or behavioral cues such as the face, iris, speech, keystroke dynamics, palmprint, retina, and so on. Among these, the palmprint-based biometric system, which has been investigated for over 15 years, has demonstrated its applicability as a successful biometric modality. Palmprints exhibit unique characteristics that can be described using texture features arising from the presence of palm creases, wrinkles, and ridges. Furthermore, palmprints can be captured using low-cost sensors with very low-resolution imaging of 75 dots-per-inch (dpi) [1, 2]. Further, recent work [3] has demonstrated the anti-spoofing nature of palmprints, which establishes the palmprint as a highly reliable biometric characteristic.

The increasing popularity of palmprint biometrics has resulted in various feature extraction techniques that have contributed to boosting the accuracy of palmprint verification. The available techniques can be broadly classified into five types, namely: (1) local feature-based approaches, (2) statistical approaches, (3) appearance-based approaches, (4) texture-based approaches, and (5) hybrid approaches. The local feature extraction techniques extract features such as ridges, delta points, minutiae, and palm creases (or principal lines). Local features can be extracted from the palmprint using various techniques, including the line segment approach [4], the morphological median wavelet [5], the Sobel operator [6], the Canny operator [6], the Plessey operator [7], and the wide-line detection operator [8]. Even though local features are proven to achieve accurate performance, these methods demand very high-resolution palmprint images, thereby increasing the cost of the sensor. The statistical approaches extract features corresponding to mean, variance, moments, and energy. Various techniques exist to capture the statistics of the palmprint, including the wavelet transform [9], the Fourier transform [10], cepstrum energy [11], sub-block energy based on the Gabor transform [12, 13], micro-scale invariant Gabor features [14], and Zernike moments [15]. However, the statistics-based approaches are not robust against sensor noise. The appearance-based approaches map the data from a high dimension to a low dimension to achieve both high accuracy and speed in comparison.
The most popular appearance-based techniques include principal component analysis (PCA) [16], 2DPCA [17], bidirectional PCA [18], (2D)2PCA [19], independent component analysis (ICA) [20], linear discriminant analysis (LDA) [21], kernel-based approaches like kernel discriminant analysis (KDA) [13] and kernel PCA (KPCA) [22], and generative model-based approaches, namely the PCA mixture model (PCAMM) and the ICA mixture model (ICAMM) [23]. Even though the appearance-based models can perform as well as the statistical approaches, they still lack robustness against noise as well as against variation of palmprint templates over time. The texture-based schemes extract the global patterns of lines, ridges, and wrinkles that make for robust palmprint recognition. Among the available texture extraction schemes, local binary patterns (LBP) [24], the Gabor transform [13], palmcode [25], ordinal code [26], fusion code [27], competitive code [28], and contour code [29] have been shown to perform accurately even on low-resolution palmprint images. The hybrid schemes [30, 31] combine more than one of the above-mentioned schemes to address the shortcomings of the individual schemes. Compared with the other four types, the hybrid schemes appear to be the most robust and accurate for palmprint recognition. Table 1 summarizes the characteristics of the existing palmprint feature extraction schemes in terms of computational complexity and accuracy. Detailed surveys on palmprint recognition can be found in [32, 33].
Table 1

Characteristics of palmprint recognition approaches

Feature type                               Complexity   Accuracy

Local feature approach                     High         High
Statistical approach                       Low          Low
Appearance-based approach                  Low          Medium
Texture-based method (or texture coding)   Medium       High
Hybrid method                              High         High

In this work, we propose a simple and novel approach for palmprint verification based on the sparse representation of features derived from a Bank of Binarized Statistical Image Features (B-BSIF) [34]. BSIF [34] is a texture descriptor similar to LBP, but it differs in the way the filters are obtained: BSIF filters are learned from natural images, while LBP filters are manually predefined. To the best of our knowledge, no work reported in the literature uses Binarized Statistical Image Features (BSIF) for palmprint verification. With this backdrop, in our previous work [35], we made an initial attempt towards the sparse representation of BSIF. In this paper, that work is extended in several directions. By exploiting the idea of B-BSIF, a bank of 56 different filters, we are able to significantly reduce the equal error rate (EER). Overall, the main contributions of this work are:
  • A new method based on the Bank of BSIF (B-BSIF) and sparse representation classifier (SRC) for palmprint recognition.

  • Extensive experiments are carried out on the following three different palmprint databases, namely: PolyU contact palmprint database [36] with 356 subjects, IIT Delhi contactless palmprint database [37] with 236 subjects, and Multispectral palmprint PolyU database [3] with 500 subjects.

  • Comprehensive analysis by comparing the proposed scheme with seven different state-of-the-art contemporary schemes based on LBP [24], palmcode [25], ordinal code [26], fusion code [27], Gabor transform with KDA [13], Gabor transform with sparse representation [38], and also with our previously proposed technique based on the sparse representation of BSIF [35].

All in all, the proposed framework is simple, novel, and, to the best of our knowledge, the first of its kind in the palmprint verification/recognition literature; it is expected to open a new dimension for further research in palmprint biometrics.

The rest of the paper is structured as follows: Section 2 presents the proposed scheme for robust palmprint recognition, Section 3 discusses the experimental setup, protocols, and results, and Section 4 draws the conclusion.

2 Proposed method

Figure 1 shows the block diagram of the proposed Bank of BSIF and sparse representation classifier (SRC) based scheme for palmprint recognition. The proposed scheme can be structured in two main steps.
Fig. 1

Block diagram of the proposed method

2.1 Region of interest extraction

The main idea of region of interest (RoI) extraction is to extract from the palmprint the significant region that contains a rich set of features such as principal lines, ridges, and wrinkles, while compensating for rotation and translation. Accurate extraction of the RoI plays a crucial role in improving the performance of overall palmprint recognition. In this work, we employed the algorithm proposed in [23], which aligns the palmprint by computing the center of mass and locating the valley regions. We carried out this RoI extraction only on the PolyU palmprint database, as the other two databases (MSPolyU and IITD) already provide RoI images.
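For illustration only, a much-simplified centre-of-mass RoI crop can be sketched as follows. The actual algorithm in [23] additionally locates the finger valleys to correct rotation, which is omitted here; the mean-based threshold and the crop size are assumptions, not the authors' parameters.

```python
import numpy as np

def center_of_mass_roi(palm, size=128):
    """Illustrative sketch only: crop a square RoI centred on the hand
    region's centre of mass after a crude mean-based binarization."""
    mask = palm > palm.mean()                 # crude hand/background split (assumption)
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())   # centre of mass of the hand region
    half = size // 2
    # clamp the crop window so it stays inside the image
    cy = min(max(cy, half), palm.shape[0] - half)
    cx = min(max(cx, half), palm.shape[1] - half)
    return palm[cy - half:cy + half, cx - half:cx + half]
```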

2.2 Bank of BSIF features and sparse representation classifier

The idea behind the proposed B-BSIF is to construct a bank of filters trained on a set of natural images. One can train BSIF filters in an unsupervised manner using popular techniques such as restricted Boltzmann machines (RBMs) [39, 40], auto-encoders [41], sparse coding [42], and independent component analysis (ICA) [43, 44]. Among these, ICA is the most appealing choice, as it avoids tuning a large set of hyper-parameters and provides a statistically independent basis that can in turn be used as filters to extract features from a given image. Thus, given the natural images, we first normalize them to have zero mean and unit variance [34]. Then, we sample N Im patches to learn the BSIF filters using ICA. The size of the image patch sampled from the natural images fixes the size of the BSIF filter to be learned, and the number of top ICA basis vectors selected determines the length of the BSIF filter. For instance, a BSIF filter of size 5×5 and length 8 corresponds to the top 8 basis vectors of the ICA algorithm learned using 5×5 image patches sampled from natural images. Thus, by varying both size and length, one can learn a family of BSIF filters from the natural images. In this work, we consider 56 different pre-learned filters with varying size and length that constitute the Bank of BSIF (B-BSIF) filters.
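A minimal sketch of this unsupervised filter learning is given below, with scikit-learn's PCA and FastICA standing in for the ICA implementation of [34]; the function name, patch counts, and random-sampling details are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def learn_bsif_filters(images, k=5, length=8, n_patches=50000, seed=0):
    """Sketch of BSIF filter learning: sample k-by-k patches from natural
    images, mean-subtract, whiten with PCA, and estimate `length`
    statistically independent basis vectors with ICA."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - k)
        x = rng.integers(img.shape[1] - k)
        patches.append(img[y:y + k, x:x + k].ravel())
    X = np.asarray(patches, dtype=np.float64)
    X -= X.mean(axis=1, keepdims=True)            # step 1: mean subtraction per patch
    pca = PCA(n_components=length, whiten=True)   # step 2: dimensionality reduction
    Xw = pca.fit_transform(X)
    ica = FastICA(n_components=length, random_state=seed)  # step 3: ICA basis
    ica.fit(Xw)
    # back-project the ICA unmixing matrix to pixel space -> `length` k-by-k filters
    W = ica.components_ @ pca.components_
    return W.reshape(length, k, k)
```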

In this work, we have employed the open-source filters [34] that are learned using 50,000 image patches randomly sampled from 13 different natural images [45]. The learning process to construct these statistically independent filters has three main steps: (1) mean subtraction of each patch, (2) dimensionality reduction using principal component analysis (PCA), and (3) estimation of statistically independent filters (or basis vectors) using independent component analysis (ICA). Thus, given the palmprint image I P (m,n) and a BSIF filter \(W_{i}^{k \times k}\), the filter response is obtained as follows [34]:
$$ r_{i} =\sum\limits_{m,n} I_{P}(m, n) \ast W_{i}^{k \times k}(m,n) $$
(1)
where ∗ denotes the convolution operation, (m,n) indexes the pixels of the palmprint image, and \(W_{i}^{k \times k}\), i={1,2,…,L}, denotes the i-th filter in a stack of length L, with k×k indicating the size of the BSIF filter. Each response is binarized to obtain a binary string as follows [34]:
$$ b_{i} = \begin{cases} 1, & \text{if}\ r_{i} > 0 \\ 0, & \text{otherwise} \end{cases} $$
(2)
Finally, the BSIF features are extracted by considering each pixel (m,n) as a set of binary values obtained from the L linear filters. Mathematically, for a given pixel (m,n) and its corresponding binary representation b i (m,n), the BSIF encoded features are obtained as follows:
$$ {BSIF}^{k \times k} (m,n) = \sum_{i = 1}^{L} b_{i} (m,n) \cdot 2^{i-1} $$
(3)
The whole procedure of BSIF extraction is illustrated in Fig. 2 using a BSIF filter \( W_{8}^{17 \times 17}\) of length 8 and size 17×17. Figure 2a shows the input RoI of the palmprint image. Figure 2b shows the learned BSIF filter of size 17×17 and length 8. Figure 2c shows the results of the individual convolutions of the palmprint image with the BSIF filter as described in Eq. 1. Figure 2d shows the final BSIF feature encoding obtained with Eq. 3 on the palmprint RoI shown in Fig. 2a.
Fig. 2

Illustration of BSIF extraction. a RoI of the palmprint image, b BSIF filter of size 17×17 and length 8, c BSIF filter responses, d final encoded BSIF feature
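The response-binarize-encode pipeline of Eqs. 1-3 can be sketched as follows; `bsif_encode`, its argument layout, and the use of `scipy.signal.convolve2d` are illustrative choices under our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def bsif_encode(palm, filters):
    """Encode a palmprint RoI with one stack of L BSIF filters:
    convolve with each filter (Eq. 1), binarize the response at zero
    (Eq. 2), and pack the L bits per pixel into an integer code in
    [0, 2^L - 1] (Eq. 3)."""
    code = np.zeros(palm.shape, dtype=np.int64)
    for i, w in enumerate(filters):              # i = 0 .. L-1
        r = convolve2d(palm, w, mode='same')     # Eq. 1: filter response
        b = (r > 0).astype(np.int64)             # Eq. 2: binarize
        code += b * (1 << i)                     # Eq. 3: weight bit i by 2^i
    return code
```

The per-pixel code image produced here is what feeds the sparse representation stage; a histogram of code values is another common BSIF descriptor.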

To achieve good performance in palmprint recognition using BSIF, two important factors need to be considered, namely the filter size and the filter length. The use of a single filter with a fixed length may not capture sufficient information for accurate palmprint recognition. Thus, in this work, we propose to use a bank of filters with varying filter size and length. The filter size is varied from 5×5 to 17×17 in steps of two, giving filters of seven different sizes. Similarly, we vary the length of the filter (i.e., the number of independent components) from 5 to 12 in steps of 1 to obtain eight different lengths. Thus, our ensemble has 7×8=56 filters, and the response of the palmprint image to each filter is obtained independently. Given a palmprint sample P(m,n), we get 56 independent BSIF-coded images R P ={R P1,R P2,…,R P56}.
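The size/length grid described above can be enumerated directly (a sketch; variable names are illustrative):

```python
# Seven filter sizes paired with eight bit lengths -> 56 bank configurations.
sizes = range(5, 18, 2)       # 5x5, 7x7, ..., 17x17
lengths = range(5, 13)        # 5 .. 12 bits
bank = [(k, l) for k in sizes for l in lengths]
assert len(bank) == 7 * 8 == 56
```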

Figure 3 illustrates various BSIF filters included in the proposed B-BSIF scheme. Figure 4 shows qualitative results on an example palmprint with varying filter size and a fixed bit length of 8. It is interesting to observe that as the filter size increases, the distinctive information about the coarse palm lines also increases. Thus, the use of filters of various lengths will likewise capture varied information from the palmprint. Furthermore, the variation in wrinkles and ridges among different palmprints can be characterized more accurately using a bank of BSIF filters than using a single BSIF filter.
Fig. 3

Illustration of BSIF filters with various lengths and sizes, learned from natural images. a 5 × 5, 8 bit, b 7 × 7, 10 bit, c 17 × 17, 12 bit

Fig. 4

Qualitative results of BSIF with different filter sizes and a fixed bit length of 8 bits. a Input palmprint image, b 3×3, c 5×5, d 7×7, e 9×9, f 11×11, g 13×13, h 15×15, i 17×17

Given a palmprint sample I P (m,n), we obtain its response to all BSIF filters in the B-BSIF; we then perform the sparse representation of these responses individually for each filter. Thus, the sparse representation of the features obtained from each filter in the B-BSIF is carried out as follows:
  1. Given the reference palmprint samples, we first extract the BSIF features (corresponding to one filter) and construct a training matrix T r for all C classes (or subjects) as follows:
    $$ T_{r} = \left[T_{r1}, T_{r2}, \ldots,T_{rC}\right] \in \Re^{N \times \left(n_{u}\cdot C\right)} $$
    (4)
    where n u denotes the number of reference samples for each class and N denotes the dimension of the BSIF features obtained on the n u reference samples from the C classes (or subjects).
  2. Given a test (or probe) sample, extract its BSIF features T e (using the same filter as above), which can be represented as a linear combination of the training vectors:
    $$ T_{e} = T_{r}\alpha $$
    (5)
    where
    $$ \alpha = \left[ \alpha_{11}, \ldots, \alpha_{1n_{u}} |\, \alpha_{21}, \ldots, \alpha_{2n_{u}} |\, \ldots |\, \alpha_{C1}, \ldots,\alpha_{{Cn}_{u}}\right] $$
    (6)
  3. Solve the l 1 minimization problem [46]:
    $$ \hat\alpha = \arg \min_{\alpha^{'} \in \Re^{n_{u}\cdot C}} \| \alpha^{'}\|_{1} \quad \text{subject to} \quad T_{e} = T_{r}\alpha^{'} $$
    (7)
  4. Calculate the residual for each class c as follows:
    $$ r_{c}(T_{e}) = \|T_{e} - T_{r}\,\Pi_{c}(\hat\alpha)\|_{2} $$
    (8)
    where Π c (⋅) retains only the coefficients of \(\hat\alpha\) associated with class c.
  5. Finally, use the residual errors as comparison scores to compute the performance of the overall system.

Finally, we repeat the above-mentioned steps 1-5 for all 56 BSIF filters in the bank and obtain the final comparison score as the minimum of the residual errors obtained over all 56 filters.
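The steps above can be sketched as follows. We use scikit-learn's Lasso as an approximate stand-in for the exact l1 minimization of Eq. 7 (an assumption, not the solver used in the paper), and all function and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(Tr, labels, Te, alpha=0.01):
    """Sketch of SRC steps 1-5 for one BSIF filter.
    Tr: (N, nu*C) column-stacked reference feature vectors,
    labels: class label of each column, Te: (N,) probe feature vector.
    Returns the per-class residuals of Eq. 8."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(Tr, Te)                          # Lasso relaxation of Eq. 7
    a = model.coef_                            # sparse coefficient vector
    residuals = {}
    for c in np.unique(labels):
        a_c = np.where(labels == c, a, 0.0)    # keep class-c coefficients only
        residuals[c] = np.linalg.norm(Te - Tr @ a_c)   # Eq. 8
    return residuals

def bank_score(residuals_per_filter):
    """Final comparison score: minimum residual over all filters in the bank."""
    return min(min(r.values()) for r in residuals_per_filter)
```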

3 Experimental results and discussion

This section presents the experimental results obtained with the proposed scheme for palmprint recognition. Extensive experiments are carried out on three different large-scale publicly available palmprint databases: (1) the PolyU palmprint database [36], (2) the IIT Delhi palmprint database [37], and (3) the multispectral PolyU palmprint database [3]. All experimental results are presented in terms of equal error rate (EER), and we also provide a statistical validation of the results with a 90 % confidence interval [13]. The following section presents the experimental protocol adopted in this work.
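Since all results are reported as EER, a minimal sketch of computing an EER from genuine and impostor score sets is given below (treating lower scores as better matches, as with SRC residuals). The function is illustrative, not the evaluation code used in the paper.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: operating point where the false-reject rate on genuine scores
    equals the false-accept rate on impostor scores, swept over all
    candidate thresholds (lower score = better match)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (2.0, 0.0)                  # (|FAR - FRR|, EER estimate)
    for t in thresholds:
        frr = np.mean(genuine > t)     # genuine pairs rejected
        far = np.mean(impostor <= t)   # impostor pairs accepted
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]
```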

3.1 Assessment protocol

This section describes the evaluation protocol adopted in this work for the three palmprint databases, which is the same as in our previous paper [35].

PolyU palmprint database This database comprises 352 subjects, each with ten samples collected in two different sessions. For our experiments, we consider all ten samples from the first session as references and all samples from the second session as probes. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm.

IIT Delhi palmprint database This database consists of 235 subjects with both left and right palmprint samples; each subject has five samples captured independently from the left and right palms. To evaluate this database, we consider four samples as references and the remaining sample as the probe. We repeat this selection of reference and probe samples using leave-one-out cross-validation with k = 10 and present the result by averaging the performance over all ten runs. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm.

Multispectral PolyU palmprint database This database consists of 500 subjects whose palmprint samples are captured in two different sessions in four different spectra: blue, red, green, and near infrared (NIR). Each session has six samples per subject. We select the samples from the first session as references and the second-session samples as probes. We repeat this procedure for all four spectral bands, and the results are presented independently. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm.

3.2 Results and discussion

Figure 5 shows qualitative results of the proposed scheme along with five different state-of-the-art schemes employed in this work. It can be observed that the BSIF features appear to capture the palmprint features characterized by ridges and wrinkles more accurately. This qualitative result indicates the suitability of the proposed BSIF features for palmprint recognition.
Fig. 5

Illustration of a palmprint sample, b BSIF (17 × 17, 8 bit), c LBP, d palmcode, e fusion code, f Log-Gabor (LG) transform

Table 2 shows the performance of the proposed scheme based on B-BSIF and SRC on the PolyU palmprint database. The proposed scheme shows the best performance, with an EER of 4.06 %, an improvement of over 2 % compared to our previous scheme [35] based on single BSIF features. This further justifies the applicability of the proposed scheme for palmprint recognition.
Table 2

Performance of the proposed method on PolyU palmprint database

Methods              EER (%) [90 % confidence interval]

Proposed scheme      4.06 [3.56; 4.56]
BSIF-SRC [35]        6.19 [5.09; 7.29]
LG-SRC [38]          7.67 [6.47; 8.87]
LG-KDA [13]          7.96 [6.56; 9.36]
Palm code [25]       14.66 [13.26; 16.06]
Ordinal code [26]    7.66 [6.46; 8.86]
Fusion code [27]     14.51 [13.21; 15.81]
LBP-SRC [24]         46.22 [43.84; 48.62]

Table 3 tabulates the quantitative performance of the proposed scheme on the IITD contactless palmprint database. Here, we present the results individually for both left and right palmprint samples. As seen in Table 3, the proposed scheme shows outstanding performance, with an EER of 0.12 % on the left palmprint samples and 0.72 % on the right palmprint samples, an improvement over our previous scheme [35] based on single BSIF features. This further justifies the applicability of the proposed scheme on yet another kind of database, in which palmprint samples are captured in a contactless fashion.
Table 3

Performance of the proposed method on IIT Delhi palmprint database

IIT Delhi DB   Methods            EER (%) [90 % confidence interval]

Left hand      Proposed scheme    0.12 [0.05; 0.19]
               BSIF-SRC [35]      0.42 [0.32; 0.52]
               Gabor-SRC [38]     1.23 [0.93; 1.53]
               Ordinal code [26]  0.20 [0.15; 0.25]
               Gabor-KDA [13]     2.34 [1.54; 3.14]
               Palmcode [25]      2.67 [1.97; 9.37]
               Fusion code [27]   2.34 [1.54; 3.14]
               LBP-SRC [24]       10.41 [8.81; 12.10]

Right hand     Proposed scheme    0.72 [0.32; 1.12]
               BSIF-SRC [35]      1.31 [0.91; 1.71]
               Gabor-SRC [38]     1.42 [1.02; 1.82]
               Ordinal code [26]  1.89 [1.59; 2.19]
               Gabor-KDA [13]     7.82 [6.42; 9.22]
               Palmcode [25]      3.41 [2.91; 3.91]
               Fusion code [27]   3.39 [2.89; 3.89]
               LBP-SRC [24]       13.38 [11.49; 15.28]

Table 4 shows the performance of the proposed scheme on the multispectral PolyU palmprint database. Here also the proposed scheme achieves outstanding performance, with an EER of 0 % on all four spectral bands. These results further justify the applicability of the proposed scheme to palmprint samples captured in different spectral bands.
Table 4

Performance of the proposed method on MS PolyU palmprint database

Spectrum   Methods            EER (%) [90 % confidence interval]

Blue       Proposed scheme    0
           BSIF-SRC [35]      0
           Gabor-SRC [38]     0
           Ordinal code [26]  0
           Gabor-KDA [13]     0.7 [0.55; 1.25]
           Palmcode [25]      0.2 [0.1; 0.3]
           Fusion code [27]   0.4 [0.28; 0.68]
           LBP-SRC [24]       9.76 [8.36; 11.16]

Green      Proposed scheme    0
           BSIF-SRC [35]      0
           Gabor-SRC [38]     0
           Ordinal code [26]  0
           Gabor-KDA [13]     0.40 [0.3; 0.5]
           Palmcode [25]      0.41 [0.21; 0.61]
           Fusion code [27]   0.67 [0.38; 0.96]
           LBP-SRC [24]       17.75 [15.58; 19.92]

Red        Proposed scheme    0
           BSIF-SRC [35]      0
           Gabor-SRC [38]     0
           Ordinal code [26]  0
           Gabor-KDA [13]     0.40 [0.3; 0.5]
           Palmcode [25]      0
           Fusion code [27]   0.21 [0.03; 0.38]
           LBP-SRC [24]       14.78 [13.58; 15.98]

NIR        Proposed scheme    0
           BSIF-SRC [35]      0
           Gabor-SRC [38]     0
           Ordinal code [26]  0
           Gabor-KDA [13]     0.70 [0.4; 1.0]
           Palmcode [25]      0
           Fusion code [27]   0.2 [0.02; 0.38]
           LBP-SRC [24]       14.78 [13.38; 16.18]

Thus, from the above experiments, it can be observed that the proposed scheme shows the best performance when compared with seven well-established state-of-the-art schemes for palmprint recognition. Further, the performance achieved on three different databases justifies the robustness and applicability of the proposed scheme for palmprint recognition.

Table 5 shows the computation times of the various algorithms used in this work. All algorithms are implemented in Matlab running on a PC with an Intel i7 processor, 8 GB RAM, and Windows 7. Note that the implementations are not optimized for speed; hence, the computation times in Table 5 are indicative only.
Table 5

Computation time of different algorithm used in this work

Algorithms        Computation time (s)

Proposed scheme   30
BSIF-SRC          12
Gabor-SRC         25
Ordinal code      15
Gabor-KDA         12
Palmcode          5
Fusion code       12
LBP-SRC           10

4 Conclusions

Accurate representation of features plays a vital role in improving the accuracy and reliability of palmprint recognition. In this paper, we have introduced a novel approach for palmprint recognition based on B-BSIF and SRC. The main idea of the proposed method is to use multiple BSIF filters of various sizes and lengths to constitute an ensemble (or bank) of BSIF filters. Since each of these BSIF filters is learned on natural images using independent component analysis (ICA), they exhibit the property of statistical independence. We proposed to build the B-BSIF with 56 different BSIF filters. Each of these filters is then associated with the SRC, which performs the sparse representation of the features of each BSIF filter. Thus, given a palmprint sample, we obtain its response to each BSIF filter and then compute the corresponding comparison score using the SRC. Finally, we select the best comparison score, which corresponds to the minimum residual error. The proposed method is validated through extensive experiments on three different large-scale publicly available databases, on which it shows outstanding performance. The performance of the proposed scheme is compared with seven well-established state-of-the-art schemes. The obtained results show that the proposed scheme is an efficient and robust tool for accurate palmprint recognition.

Declarations

Acknowledgements

The authors extend their thanks to the reviewers for the comments and suggestions which helped improve the overall quality of this article. This work was funded by the EU 7th Framework Programme (FP7) under grant agreement no. 284862 for the large-scale integrated project FIDELITY.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Norwegian Biometric Laboratory, Gjøvik University College

References

  1. D Zhang, Palmprint Authentication (Springer-Verlag, Guangzhou, China, 2004).
  2. A Genovese, V Piuri, F Scotti, Advances in Information Security, vol. 60 (Springer, 2014). ISBN 978-3-319-10365-5
  3. D Zhang, Z Guo, G Lu, YLL Zhang, W Zuo, Online joint palmprint and palmvein verification. Expert Syst. Appl. 38(3), 2621–2631 (2011).
  4. W Shu, D Zhang, in Proceedings of the Fourteenth International Conference on Pattern Recognition, vol. 1. Palmprint verification: an implementation of biometric technology (Brisbane, Qld, 1998), pp. 219–221.
  5. D Qingyun, Y Yinglin, Z Dapeng, A line feature extraction method based on morphological median pyramid. J. South China Univ. Technol. 29(5), 14–18 (2001).
  6. C-C Han, H-L Cheng, C-L Lin, K-C Fan, Personal authentication using palm-print features. Pattern Recognit. 36(2), 371–381 (2003).
  7. JA Noble, Finding corners. Image Vis. Comput. 6(2), 121–128 (1988).
  8. L Liu, D Zhang, in IEEE International Conference on Image Processing (ICIP), vol. 3. Palm-line detection (2005), pp. 269–272. doi:10.1109/ICIP.2005.1530380
  9. J-Y Gan, D-P Zhou, in 8th International Conference on Signal Processing, vol. 3. A novel method for palmprint recognition based on wavelet transform (Beijing, 2006), pp. 1–7.
  10. X Yuli, Palmprint feature extraction based on low frequency distribution. Microcomput. Appl. 20(1), 40–43 (2011).
  11. MMM Fahmy, Palmprint recognition based on mel frequency cepstral coefficients feature extraction. Ain Shams Eng. J. 1(1), 39–47 (2010).
  12. Y Zhang, D Zhao, G Sun, Q Guo, B Fu, in 2010 International Conference on Artificial Intelligence and Computational Intelligence (AICI), vol. 1. Palm print recognition based on sub-block energy feature extracted by real 2D-Gabor transform (Sanya, 2010), pp. 124–128.
  13. R Raghavendra, B Dorizzi, A Rao, GH Kumar, Designing efficient fusion schemes for multimodal biometric systems using face and palmprint. Pattern Recognit. 44(5), 1076–1088 (2011).
  14. P Xin, R Qiuqi, W Yanxia, Palmprint recognition using Gabor local relative features. Comput. Eng. Appl. 48(15), 34–38 (2012).
  15. GS Badrinath, N Kachhi, P Gupta, Verification system robust to occlusion using low-order Zernike moments of palmprint sub-images. Telecommun. Syst. 47(3–4), 275–290 (2011).
  16. G Lu, D Zhang, K Wang, Palmprint recognition using eigenpalms features. Pattern Recognit. Lett. 24(9–10), 1463–1467 (2003).
  17. H Sang, W Yuan, Z Zhang, in Advances in Neural Networks – ISNN 2009, Lecture Notes in Computer Science, vol. 5552, ed. by W Yu, H He, N Zhang. Research of palmprint recognition based on 2DPCA (Wuhan, China, 2009), pp. 831–838.
  18. W Zuo, K Wang, D Zhang, in IEEE International Conference on Image Processing (ICIP 2005), vol. 2. Bi-directional PCA with assembled matrix distance metric (2005), pp. 958–961. doi:10.1109/ICIP.2005.1530216
  19. X Pan, Q-Q Ruan, Palmprint recognition using Gabor feature-based (2D)2PCA. Neurocomputing 71(13–15), 3032–3036 (2008).
  20. G-M Lu, K-Q Wang, D Zhang, in Proceedings of the 2004 International Conference on Machine Learning and Cybernetics, vol. 6. Wavelet based independent component analysis for palmprint identification (2004), pp. 3547–3550. doi:10.1109/ICMLC.2004.1380404
  21. X Wu, D Zhang, K Wang, Fisherpalms based palmprint recognition. Pattern Recognit. Lett. 24(15), 2829–2838 (2003).
  22. M Ekinci, M Aykut, Gabor-based kernel PCA for palmprint recognition. Electron. Lett. 43(20), 1077–1079 (2007).
  23. R Raghavendra, A Rao, GK Hemantha, in International Conference on Advances in Computing, Control, and Telecommunication Technologies. A novel three stage process for palmprint verification (Trivandrum, Kerala, 2009), pp. 88–92.
  24. X Wang, H Gong, H Zhang, B Li, Z Zhuang, in 18th International Conference on Pattern Recognition (ICPR 2006), vol. 3. Palmprint identification using boosting local binary pattern (IEEE, Hong Kong, 2006), pp. 503–506.
  25. A Kumar, H Shen, in 3rd International Conference on Image and Graphics (ICIG). Palmprint identification using palm codes (Hong Kong), pp. 258–261. doi:10.1007/978-3-642-04070-2_42
  26. Z Sun, T Tan, Y Wang, SZ Li, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005). Ordinal palmprint representation for personal identification (2005), pp. 279–284. doi:10.1109/CVPR.2005.267
  27. A Kong, D Zhang, M Kamel, Palmprint identification using feature-level fusion. Pattern Recognit. 39(3), 478–487 (2006).
  28. J Wei, W Jia, H Wang, D-F Zhu, in Emerging Intelligent Computing Technology and Applications. Improved competitive code for palmprint recognition using simplified Gabor filter (2009), pp. 371–377. ISBN 978-3-642-04069-6
  29. Z Khan, A Mian, Y Hu, in IEEE International Conference on Computer Vision. Contour code: robust and efficient multispectral palmprint encoding for human recognition (Barcelona, 2011), pp. 1935–1942.
  30. A Kumar, D Zhang, Personal authentication using multiple palmprint representation. Pattern Recognit. 38(10), 1695–1704 (2005).
  31. W Li, J You, D Zhang, Texture-based palmprint retrieval using a layered search scheme for personal identification. IEEE Trans. Multimedia 7(5), 891–898 (2005).
  32. A Kong, D Zhang, M Kamel, A survey of palmprint recognition. Pattern Recognit. 42(7), 1408–1418 (2009).
  33. D Zhang, W Zuo, F Yue, A comparative study of palmprint recognition algorithms. ACM Comput. Surv. 44(1), 2:1–2:37 (2012).
  34. J Kannala, E Rahtu, in 21st International Conference on Pattern Recognition (ICPR 2012). BSIF: binarized statistical image features (IEEE, Tsukuba, 2012), pp. 1363–1366.
  35. R Raghavendra, C Busch, in Proceedings of the 2nd ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec '14). Robust palmprint verification using sparse representation of binarized statistical features: a comprehensive study (ACM, New York, NY, USA, 2014), pp. 181–185.
  36. D Zhang, W-K Kong, J You, M Wong, Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1041–1050 (2003).
  37. A Kumar, in Sixth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP). Incorporating cohort information for reliable palmprint authentication (Bhubaneswar, 2008), pp. 583–590.
  38. R Raghavendra, C Busch, Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition. Pattern Recognit. 44(6), 2505–221 (2014).
  39. H Lee, C Ekanadham, AY Ng, in Advances in Neural Information Processing Systems. Sparse deep belief net model for visual area V2 (2008), pp. 873–880.
  40. G Hinton, S Osindero, Y-W Teh, A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006).
  41. P Vincent, H Larochelle, Y Bengio, P-A Manzagol, in Proceedings of the 25th International Conference on Machine Learning. Extracting and composing robust features with denoising autoencoders (ACM, New York, NY, USA, 2008), pp. 1096–1103.
  42. BA Olshausen, DJ Field, Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Res. 37(23), 3311–3325 (1997).
  43. JH van Hateren, A van der Schaaf, Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. London Ser. B: Biol. Sci. 265(1394), 359–366 (1998).
  44. AJ Bell, TJ Sejnowski, The independent components of natural scenes are edge filters. Vision Res. 37(23), 3327–3338 (1997).
  45. A Hyvärinen, J Hurri, PO Hoyer, Natural Image Statistics (Springer, Berlin, 2009).
  46. J Mairal, F Bach, J Ponce, G Sapiro, Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 11, 19–60 (2010).

Copyright

© Raghavendra and Busch. 2015