1.
Sensors (Basel); 23(17), 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37688059

ABSTRACT

Many "Industry 4.0" applications rely on data-driven methodologies such as Machine Learning and Deep Learning to enable automatic tasks and implement smart factories. Among these applications, the automatic quality control of manufacturing materials is of utmost importance to achieve precision and standardization in production. In this regard, most of the related literature focused on combining Deep Learning with Nondestructive Testing techniques, such as Infrared Thermography, requiring dedicated settings to detect and classify defects in composite materials. Instead, the research described in this paper aims at understanding whether deep neural networks and transfer learning can be applied to plain images to classify surface defects in carbon look components made with Carbon Fiber Reinforced Polymers used in the automotive sector. To this end, we collected a database of images from a real case study, with 400 images to test binary classification (defect vs. no defect) and 1500 for the multiclass classification (components with no defect vs. recoverable vs. non-recoverable). We developed and tested ten deep neural networks as classifiers, comparing ten different pre-trained CNNs as feature extractors. Specifically, we evaluated VGG16, VGG19, ResNet50 version 2, ResNet101 version 2, ResNet152 version 2, Inception version 3, MobileNet version 2, NASNetMobile, DenseNet121, and Xception, all pre-trainined with ImageNet, combined with fully connected layers to act as classifiers. The best classifier, i.e., the network based on DenseNet121, achieved a 97% accuracy in classifying components with no defects, recoverable components, and non-recoverable components, demonstrating the viability of the proposed methodology to classify surface defects from images taken with a smartphone in varying conditions, without the need for dedicated settings. The collected images and the source code of the experiments are available in two public, open-access repositories, making the presented research fully reproducible.

2.
Sensors (Basel); 23(4), 2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36850537

ABSTRACT

Although face recognition technology is currently integrated into industrial applications, it still presents open challenges, such as verification and identification from arbitrary poses. Specifically, there is a lack of research on face recognition in surveillance videos that uses, as reference images, mugshots taken from multiple Points of View (POVs) in addition to the frontal picture and the right profile traditionally collected by national police forces. To start filling this gap and to tackle the scarcity of databases devoted to the study of this problem, we present the Face Recognition from Mugshots Database (FRMDB). It includes 28 mugshots and 5 surveillance videos taken from different angles for 39 distinct subjects. The FRMDB is intended to support the analysis of the impact of using mugshots taken from multiple points of view on face recognition in the frames of surveillance videos. To validate the FRMDB and provide a first benchmark on it, we ran accuracy tests using two CNNs, namely VGG16 and ResNet50, pre-trained on the VGGFace and VGGFace2 datasets for the extraction of face image features. We compared the results to those obtained on a dataset from the related literature, the Surveillance Cameras Face Database (SCFace). In addition to illustrating the features of the proposed database, the results highlight that the subset of mugshots composed of the frontal picture and the right profile yields the lowest accuracy among those tested. Therefore, additional research is suggested to determine the ideal number of mugshots for face recognition on frames from surveillance videos.
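
The matching step described here can be sketched as follows: embed each mugshot and each surveillance frame with a pre-trained CNN, then rank mugshots by cosine similarity. Since the VGGFace/VGGFace2 weights used in the paper are not bundled with Keras, ImageNet weights stand in as a placeholder in this assumption-laden sketch.

import numpy as np
import tensorflow as tf

# Pre-trained ResNet50 as a feature extractor (placeholder weights).
extractor = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

def embed(image_batch):
    """Map a batch of RGB face crops (N, 224, 224, 3) to L2-normalised embeddings."""
    x = tf.keras.applications.resnet50.preprocess_input(image_batch)
    feats = extractor.predict(x, verbose=0)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def best_match(frame_embedding, mugshot_embeddings):
    """Return (index, score) of the mugshot most similar to a video frame.

    With normalised embeddings, the dot product equals cosine similarity.
    """
    scores = mugshot_embeddings @ frame_embedding
    return int(np.argmax(scores)), float(np.max(scores))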


Subject(s)
Facial Recognition; Humans; Benchmarking; Databases, Factual; Videotape Recording
3.
Data Brief; 33: 106587, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33318975

ABSTRACT

The automatic detection of violence and crimes in videos is gaining attention, specifically as a tool to relieve security officers and authorities of the need to watch hours of footage to identify events lasting a few seconds. So far, most of the available datasets have been composed of a few clips, in low resolution, and often built around overly specific cases (e.g., hockey fights). While high-resolution datasets are emerging, there is still a need for datasets that test the robustness of violence detection techniques to false positives caused by behaviours that might resemble violent actions. To this end, we propose a dataset composed of 350 clips (MP4 video files, 1920 × 1080 pixels, 30 fps), labelled as non-violent (120 clips) when representing non-violent behaviours and violent (230 clips) when representing violent behaviours. In particular, the non-violent clips include behaviours (hugs, claps, exulting, etc.) that can cause false positives in the violence detection task, due to fast movements and their similarity to violent behaviours. The clips were performed by non-professional actors, varying in number from 2 to 4 per clip.
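
A hypothetical loader for a clip collection organised like this one might look as follows; the folder layout ("violent"/"non-violent" subdirectories) and the one-frame-per-second sampling are assumptions for illustration, not the dataset's documented structure.

import os
import cv2  # OpenCV

def sample_frames(video_path, every_n=30):
    """Yield roughly one frame per second from a 30 fps MP4 clip."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            yield frame  # BGR image, 1920x1080 as stated above
        idx += 1
    cap.release()

def load_dataset(root):
    """Return (path, label) pairs: label 1 for violent clips, 0 otherwise."""
    pairs = []
    for label_name, label in (("non-violent", 0), ("violent", 1)):
        folder = os.path.join(root, label_name)
        for name in sorted(os.listdir(folder)):
            if name.endswith(".mp4"):
                pairs.append((os.path.join(folder, name), label))
    return pairs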
