
Joint feature fusion and optimization via deep discriminative model for mobile palmprint verification / Izadpanahkakhk, Mahdieh; Mohammad Razavi, Seyyed; Taghipour-Gorjikolaie, Mehran; Hamid Zahiri, Seyyed; Uncini, Aurelio. - In: JOURNAL OF ELECTRONIC IMAGING. - ISSN 1017-9909. - 28:4(2019). [10.1117/1.JEI.28.4.043026]

Joint feature fusion and optimization via deep discriminative model for mobile palmprint verification

Mahdieh Izadpanahkakhk; Aurelio Uncini
2019

Abstract

With recent advances in pattern recognition and computer vision, mobile palmprint authentication has emerged as a way to provide convenient, ubiquitous biometric security for scientific and commercial communities. To address this problem effectively, researchers focus on improving authentication performance by designing deep convolutional neural networks. Despite the high potential of state-of-the-art methods, the challenges of preprocessing computation cost, the lack of training samples for big-data applications, and discriminative feature optimization remain to be carefully addressed. We propose a deep mobile palmprint verification framework focused on discriminative feature representation. To this end, an automatic feature mapping is learned from two well-known deep architectures via an effective weighted loss function. A convolution-based feature fusion block is then followed by a surrogate model in the feature-matching phase for palmprint verification. From a practical point of view, our framework is cost-effective and represents discriminative features with high performance. We demonstrate the effectiveness of our framework and mobile database for the palmprint verification task, beating the state of the art on standard benchmarks. Moreover, experimental results show that our model outperforms previous ones, especially in the few-shot learning setting, achieving equal error rates of 0.0281% and 0.0197% on the IIT Delhi Touchless Palmprint and Hong Kong PolyU Palmprint databases, respectively. All code is open-source and available online.
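The paper's exact fusion block is not reproduced here, but the general idea of a convolution-based fusion of feature maps from two backbone networks can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and names (`fuse_feature_maps` and its parameters are hypothetical, not the authors' API): the two maps are concatenated along the channel axis and mixed by a learned 1x1 convolution, which is equivalent to a per-pixel linear map over channels.

```python
import numpy as np

def fuse_feature_maps(feat_a, feat_b, weights, bias):
    """Fuse two backbone feature maps with a 1x1 convolution (illustrative sketch).

    feat_a: (C1, H, W) feature map from the first backbone.
    feat_b: (C2, H, W) feature map from the second backbone.
    weights: (C_out, C1 + C2) kernel of the 1x1 fusion convolution.
    bias: (C_out,) bias vector.
    Returns a (C_out, H, W) fused feature map.
    """
    stacked = np.concatenate([feat_a, feat_b], axis=0)   # (C1 + C2, H, W)
    c, h, w = stacked.shape
    flat = stacked.reshape(c, h * w)                     # each column is one spatial position
    fused = weights @ flat + bias[:, None]               # 1x1 conv == per-pixel linear map
    return np.maximum(fused, 0.0).reshape(-1, h, w)      # ReLU nonlinearity

# Toy example: fuse a 3-channel and a 5-channel map into a 4-channel map.
rng = np.random.default_rng(0)
a = rng.standard_normal((3, 8, 8))
b = rng.standard_normal((5, 8, 8))
w = rng.standard_normal((4, 8)) * 0.1
out = fuse_feature_maps(a, b, w, np.zeros(4))
print(out.shape)  # (4, 8, 8)
```

In a real system the `weights` and `bias` would be trained jointly with the backbones (here, via the weighted loss the abstract mentions); the sketch only shows the channel-concatenation-plus-1x1-convolution mechanics.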
convolutional neural network; discriminative palmprint; feature fusion; few-shot learning; mobile palmprint verification; surrogate optimization model
01 Journal publication::01a Journal article
Files attached to this record
Izadpanahkakhk_Joint-feature_2019.pdf
  Access: archive administrators only
  Type: Publisher's version (published with the publisher's layout)
  License: All rights reserved
  Size: 9.8 MB
  Format: Adobe PDF
  Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1340738
Citations
  • PMC: not available
  • Scopus: 6
  • Web of Science (ISI): 5