Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models.
Boutros, Fadi; Damer, Naser; Raja, Kiran; Kirchbuchner, Florian; Kuijper, Arjan.
  • Boutros F; Fraunhofer Institute for Computer Graphics Research IGD, 64283 Darmstadt, Germany.
  • Damer N; Mathematical and Applied Visual Computing, Technical University of Darmstadt (TU Darmstadt), 64289 Darmstadt, Germany.
  • Raja K; Fraunhofer Institute for Computer Graphics Research IGD, 64283 Darmstadt, Germany.
  • Kirchbuchner F; Mathematical and Applied Visual Computing, Technical University of Darmstadt (TU Darmstadt), 64289 Darmstadt, Germany.
  • Kuijper A; The Norwegian Colour and Visual Computing Laboratory, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway.
Sensors (Basel) ; 22(5)2022 Mar 01.
Article in English | MEDLINE | ID: covidwho-1742607
ABSTRACT
This work addresses the challenge of building an accurate and generalizable periocular recognition model with a small number of learnable parameters. Deeper (larger) models are typically more capable of learning complex information. For this reason, knowledge distillation (KD) was previously proposed to transfer this knowledge from a large model (teacher) into a small model (student). Conventional KD optimizes the student output to be similar to the teacher output (commonly the classification output). In biometrics, however, comparison (verification) and storage operations are conducted on biometric templates extracted from pre-classification layers. In this work, we propose a novel template-driven KD approach that optimizes the distillation process so that the student model learns to produce templates similar to those produced by the teacher model. We demonstrate our approach on intra- and cross-device periocular verification. Our results demonstrate the superiority of our proposed approach over a network trained without KD and networks trained with conventional (vanilla) KD. For example, the targeted small model achieved an equal error rate (EER) of 22.2% on cross-device verification without KD. The same model achieved an EER of 21.9% with conventional KD, and only 14.7% EER when using our proposed template-driven KD.
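The core idea described above — distilling at the template (embedding) level rather than the classification output — can be sketched as a combined training loss. The sketch below is illustrative only: the abstract does not give the exact formulation, so the MSE distillation term, the `alpha` weighting, and all function names are assumptions, not the authors' method.

```python
import numpy as np

def template_kd_loss(student_template, teacher_template,
                     student_logits, labels, alpha=0.5):
    """Hypothetical template-driven KD objective (illustrative sketch).

    Combines a standard classification loss on the student's logits with
    a term pulling the student's biometric templates (pre-classification
    embeddings) toward the teacher's templates. The MSE choice and the
    alpha weighting are assumptions, not the paper's exact loss.
    """
    # Template distillation term: mean squared error between the
    # student's and the (fixed) teacher's embedding vectors.
    mse = np.mean((student_template - teacher_template) ** 2)

    # Standard softmax cross-entropy on the student's classification
    # output, with max-subtraction for numerical stability.
    shifted = student_logits - student_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

    # Weighted sum: alpha trades off distillation vs. classification.
    return (1 - alpha) * ce + alpha * mse
```

In conventional (vanilla) KD, the distillation term would instead compare the student's and teacher's softened class probabilities; moving it to the template layer aligns the loss with how verification is actually performed, i.e., by comparing stored templates.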
Full text: Available Collection: International databases Database: MEDLINE Main subject: Deep Learning Type of study: Randomized controlled trials Limits: Humans Language: English Year: 2022 Document Type: Article Affiliation country: S22051921