ABSTRACT
The use of face masks during the COVID-19 pandemic has increased the popularity of periocular biometrics in surveillance applications. Despite rapid advancements in this area, matching images across spectra is still a challenging problem. The reasons are two-fold: 1) variations in image illumination, and 2) the small size of available datasets and/or the class imbalance problem. This paper proposes Siamese-architecture-based convolutional neural networks that work on the concept of one-shot classification. In one-shot classification, the network requires only a single training example from each class to train the complete model, which reduces the need for a large dataset and makes the method insensitive to class imbalance. The proposed architectures comprise identical subnetworks with shared weights, and their performance is assessed on three publicly available databases, namely IMP, UTIRIS and PolyU, with four different loss functions: binary cross-entropy loss, hinge loss, contrastive loss and triplet loss. To mitigate the inherent illumination variations of cross-spectrum images, CLAHE was used to preprocess the images. Extensive experiments show that the proposed Siamese CNN model with the triplet loss function outperforms state-of-the-art periocular verification methods for cross-, mono- and multi-spectral periocular image matching. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
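To make the described pipeline concrete, here is a minimal sketch assuming OpenCV for CLAHE preprocessing and PyTorch for the shared-weight embedding network trained with triplet loss; the backbone, margin, image size and placeholder batches are illustrative assumptions, not the authors' exact configuration.

# Hedged sketch: CLAHE preprocessing plus a Siamese embedding network trained
# with triplet loss, in the spirit of the abstract. All hyperparameters are
# illustrative assumptions, not the paper's settings.
import cv2
import torch
import torch.nn as nn

def preprocess_clahe(path, size=(128, 128)):
    """Equalize local contrast to reduce cross-spectral illumination variation."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(cv2.resize(gray, size))
    return torch.from_numpy(eq).float().unsqueeze(0) / 255.0  # 1xHxW tensor

class EmbeddingNet(nn.Module):
    """Shared-weight subnetwork; the same instance embeds anchor, positive and negative."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)  # unit-length embeddings

net = EmbeddingNet()
triplet = nn.TripletMarginLoss(margin=0.5)
# anchor/positive: same identity (e.g. VIS and NIR crops); negative: different identity.
anchor = torch.rand(8, 1, 128, 128)    # placeholder batches; in practice use
positive = torch.rand(8, 1, 128, 128)  # CLAHE-preprocessed periocular crops
negative = torch.rand(8, 1, 128, 128)
loss = triplet(net(anchor), net(positive), net(negative))
loss.backward()

At verification time, two periocular images would be accepted as a match when the distance between their embeddings falls below a threshold chosen on a validation set.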
ABSTRACT
The digitalization of human work has been an ever-evolving process. Student and employee attendance systems have been automated using fingerprint biometrics. The COVID situation in particular has created the need for touchless attendance systems. Many institutions have already implemented a face-detection-based attendance system. However, the major problem in designing face-recognition biometric applications is scalability and timely accuracy in differentiating between multiple faces in a single clip/image. This paper used the OpenFace model for face recognition and developed a multi-face recognition model. The Torch- and Python-based deployment module for deep neural network face recognition was used, and it produced accurate predictions in time. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
ABSTRACT
Biometrics are among the most popular authentication methods due to their advantages over traditional methods, such as higher security, better accuracy and greater convenience. The recent COVID-19 pandemic has led to the wide use of face masks, which greatly affects traditional face recognition technology. The pandemic has also increased the focus on hygienic and contactless identity verification methods. The forearm is a new biometric that contains discriminative information. In this paper, we propose a multimodal recognition method that combines the veins and geometry of a forearm. Five features are extracted from a forearm Near-Infrared (NIR) image: SURF, local line structures, global graph representations, a forearm width feature and a forearm boundary feature. These features are matched individually and then fused at the score level based on the Improved Analytic Hierarchy Process-entropy weight combination. Comprehensive experiments were carried out to evaluate the proposed recognition method and the fusion rule. The matching results showed that the proposed method can achieve satisfactory performance. © 2022 The Authors. IET Biometrics published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
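As a rough illustration of score-level fusion with combined subjective and objective weights, the sketch below blends entropy-derived weights with an AHP-style priority vector; the number of matchers, the AHP priorities, the blending factor and the fusion rule are assumptions for illustration, not the paper's actual values.

# Hedged sketch of score-level fusion with entropy-derived weights, loosely
# following the abstract's AHP-entropy combination.
import numpy as np

def entropy_weights(score_matrix):
    """score_matrix: rows = comparisons, cols = matchers; higher score = better match."""
    p = score_matrix / score_matrix.sum(axis=0, keepdims=True)
    p = np.clip(p, 1e-12, None)
    n = score_matrix.shape[0]
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)
    diversity = 1.0 - entropy                  # more informative matchers get more weight
    return diversity / diversity.sum()

def fuse(scores, w_entropy, w_ahp, alpha=0.5):
    """Blend subjective (AHP) and objective (entropy) weights, then fuse the scores."""
    w = alpha * np.asarray(w_ahp) + (1 - alpha) * np.asarray(w_entropy)
    return scores @ (w / w.sum())

# toy example: 4 probe comparisons x 5 matchers (SURF, lines, graph, width, boundary)
scores = np.random.rand(4, 5)
w_ahp = [0.3, 0.25, 0.2, 0.15, 0.1]            # hypothetical AHP priorities
fused = fuse(scores, entropy_weights(scores), w_ahp)

The fused score per comparison would then be thresholded to produce the final accept/reject decision.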
ABSTRACT
COVID-19 has ushered in a new era in which face masks dominate daily life. Furthermore, facial recognition is ineffective for medical personnel who wear surgical masks. Periocular biometrics is the automatic recognition and classification of a person based on features gathered from the area of the face that surrounds the eye. This comprehensive survey analyses and identifies various aspects of current work on periocular biometrics, such as datasets available for periocular regions, available biometric systems, periocular area detection and segmentation, local and global descriptors, and so on. The survey cites current and recent literature to demonstrate the shortcomings of periocular biometrics. This paper also outlines directions for future periocular recognition studies. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
ABSTRACT
Gender classification is an important biometric task that has been widely studied in the literature. The face modality is the most studied for human gender classification. Moreover, the task has also been investigated in terms of different face components such as irises, ears, and the periocular region. In this paper, we investigate gender classification based on the oral region. In the proposed approach, we adopt a convolutional neural network. For experimentation, we extracted the region of interest from the FFHQ face dataset using the RetinaFace algorithm. We achieved acceptable results, surpassing those of approaches that use the mouth as a modality or facial sub-region in geometric approaches. The obtained results also underline the importance of the oral region as a facial part that is lost in the COVID-19 context when people wear face masks. We believe that adapting existing facial data analysis solutions designed for the whole face is indispensable to maintain their robustness.
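For orientation, a minimal sketch of a small CNN classifying gender from a cropped oral-region patch follows; cropping around the mouth landmarks (e.g. as returned by a face detector such as RetinaFace) is assumed to have been done upstream, and the architecture, input size and batch are illustrative assumptions rather than the paper's configuration.

# Hedged sketch: binary gender classification from oral-region crops.
import torch
import torch.nn as nn

class OralRegionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 2)   # logits for the two gender classes

    def forward(self, x):               # x: Bx3x64x64 mouth crops
        return self.head(self.backbone(x).flatten(1))

model = OralRegionNet()
criterion = nn.CrossEntropyLoss()
crops = torch.rand(16, 3, 64, 64)       # placeholder batch of oral-region crops
labels = torch.randint(0, 2, (16,))
loss = criterion(model(crops), labels)
loss.backward()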
ABSTRACT
The demand for contactless biometric authentication has increased significantly during the COVID-19 pandemic and beyond to prevent the spread of coronavirus. The global pandemic unexpectedly affords a greater opportunity for contactless authentication, but iris and facial recognition biometrics have many usability, security, and privacy challenges, including mask-wearing and presentation attacks (PAs). In particular, liveness detection against spoofing is a notably challenging task, as various biometric authentication methods cannot efficiently assess the real user's physical presence in unsupervised environments. Although several face anti-spoofing methods have been proposed using add-on sensors, dynamic facial texture features, and 3-D mapping, most of them require expensive sensors and substantial computational resources, or fail to detect sophisticated 3-D face spoofing. This article presents a software-based facial liveness detection method named Apple in My Eyes (AIME). AIME is intended to detect liveness against spoofing for mobile device security using challenge-response testing. AIME generates various screen patterns as authentication challenges, then passively detects corneal-specular reflection responses from human eyes using a frontal camera and analyzes the detected reflections using lightweight machine learning techniques. AIME's system components include challenge and pattern detection, feature extraction and classification, and data augmentation and training. We have implemented AIME as a cross-platform application compatible with Android, iOS, and the Web. Our comprehensive experimental results reveal that AIME detects liveness with high accuracy in around 200 ms against different types of sophisticated PAs. AIME can also efficiently detect liveness in multiple contactless biometric authentications without any costly extra sensors or users' active responses.
ABSTRACT
This paper provides a follow-up audit of security checkpoints (or simply checkpoints) for mass transportation hubs such as airports and seaports, aiming at post-pandemic R&D adjustments. The goal of our study is to determine the biometric-enabled resources of checkpoints for a counter-epidemic response. To achieve the follow-up audit goals, we embedded the checkpoint into the Emergency Management Cycle (EMC), the core of any doctrine that challenges disaster. This embedding helps to identify the technology-societal gaps between contemporary and post-pandemic checkpoints. Our study advocates a conceptual exploration of the problem using EMC profiling and formulates new tasks for checkpoints based on the lessons learned from the COVID-19 pandemic. To increase practical value, we chose a case study of face biometrics for an experimental post-pandemic follow-up audit.
ABSTRACT
This paper gives an overview of a system designed with social distancing guidelines in mind. Our system detects in real time whether the person in the captured live video is wearing a mask properly, using a mask detection algorithm developed with deep learning and neural networks that achieves an accuracy of 96.05%. If and only if the person is wearing a mask, they are allowed to scan their iris and hence record their attendance, which can be stored in Excel or CSV format. The location of the iris is translated to a real-world position in 3D space with a resolution of 0.1 mm. To scan the located biometric, the system comprises a robotic arm whose end effector traverses to the translated position of the person's eye to scan the iris with an iris scanner. The system employs a 'four degrees of motion' robotic arm that can autonomously align itself to the iris with an accuracy of 96.86%. It is battery operated and has a cylindrical workspace with a maximum range of 300 mm, so it is easily deployable in institutions requiring secure authorization while monitoring compliance with COVID-19 safety norms. © 2022 IEEE.
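One way to picture the pixel-to-3D translation step is standard pinhole back-projection, sketched below under the assumption of a calibrated camera and a depth estimate; the intrinsics and depth value are illustrative assumptions, not the paper's calibration.

# Hedged sketch: back-projecting the detected iris centre from image pixels to a
# camera-frame 3D position that a robotic arm controller could target.
import numpy as np

def pixel_to_camera_xyz(u, v, depth_mm, fx, fy, cx, cy):
    """Pinhole back-projection: image pixel plus depth -> camera-frame XYZ (mm)."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return np.array([x, y, depth_mm])

# toy numbers: iris centre at pixel (812, 455), estimated 450 mm from the camera
target = pixel_to_camera_xyz(812, 455, 450.0, fx=1400.0, fy=1400.0, cx=960.0, cy=540.0)
# 'target' would then be transformed into the arm's base frame and sent as the
# end-effector goal for the iris scanner.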
ABSTRACT
Face recognition is one of the most widely used biometric identification systems due to its practicality and ease of use. The COVID-19 outbreak has recently expanded rapidly across the world, posing a major threat to people's health and economic well-being. Using masks in public places is an effective approach to preventing the spread of infection. However, due to the absence of detailed facial features, masked face recognition is a difficult task. In this project, we offer a technique to identify masked faces. Masked face recognition is a subset of occluded face recognition that requires prior knowledge of the obscured portion of the targeted face. Occluded face recognition is a current research topic that has captured the interest of the computer vision community. Occluded face recognition systems have previously focused on detecting and recognizing an individual's face in the wild when the occluded part of the face is of random form and position. Meanwhile, the nose, mouth, and cheeks of a masked face are frequently hidden; the eyes, brows, and forehead may be the only remaining clear areas. As a result, a masked face recognition system could effectively concentrate on analyzing traits that can be derived from the subject's uncovered portions, such as the eyes, brows, and forehead. © 2022 IEEE.
ABSTRACT
The consequences of long COVID have changed perceptions of disease management, which is moving towards personal healthcare monitoring. In this regard, wearable devices have revolutionized the personal healthcare sector by continuously tracking and monitoring physiological parameters of the human body. This would be largely beneficial for early detection (asymptomatic and pre-symptomatic COVID-19 cases), live monitoring of patient conditions, and long COVID monitoring (recovered patients and healthy individuals) for better COVID-19 management. A multitude of wearable devices can observe various human body parameters for remote patient monitoring and a self-monitoring mode for individuals. Smart watches, smart tattoos, rings, smart facemasks, nano-patches, etc., have emerged as monitoring devices for key physiological parameters such as body temperature, respiration rate, heart rate, oxygen level, etc. This review covers the long COVID challenges of frequent biometric monitoring and their possible solutions with wearable device technologies for disease diagnosis and post-therapy care.