Results 1 - 3 of 3
1.
Forensic Sci Int Synerg ; 8: 100458, 2024.
Article in English | MEDLINE | ID: mdl-38487302

ABSTRACT

In forensic and security scenarios, accurate facial recognition in surveillance videos, often challenged by variations in pose, illumination, and expression, is essential. Traditional manual comparison methods lack standardization, revealing a critical gap in evidence reliability. We propose an enhanced images-to-video recognition approach, pairing facial images with attributes such as pose and quality. Using datasets such as ENFSI 2015, SCFace, XQLFW, ChokePoint, and ForenFace, we assess evidence strength using calibration methods for likelihood ratio estimation. Three models (ArcFace, FaceNet, and QMagFace) undergo validation, with the log-likelihood-ratio cost (Cllr) as the key metric. Results indicate that prioritizing high-quality frames and aligning attributes with the reference images optimizes recognition, yielding Cllr values similar to those of the top 25% best-frames approach. A combined embedding weighted by frame quality emerges as the second-best method. Preprocessing facial images with the super-resolution method CodeFormer unexpectedly increased Cllr, undermining evidence reliability and advising against its use in such forensic applications.
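
As a rough illustration of the metric and of the quality-weighted combination mentioned above, the following Python sketch computes Cllr from same-source and different-source likelihood ratios and builds a quality-weighted track embedding. The function names and the weighting scheme are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def cllr(lr_same: np.ndarray, lr_diff: np.ndarray) -> float:
        """Log-likelihood-ratio cost, a standard validation metric for forensic LR systems.
        lr_same: likelihood ratios for same-source (mated) comparisons.
        lr_diff: likelihood ratios for different-source (non-mated) comparisons."""
        penalty_same = np.mean(np.log2(1.0 + 1.0 / lr_same))
        penalty_diff = np.mean(np.log2(1.0 + lr_diff))
        return 0.5 * (penalty_same + penalty_diff)

    def quality_weighted_embedding(frame_embeddings: np.ndarray,
                                   frame_qualities: np.ndarray) -> np.ndarray:
        """Combine per-frame face embeddings into a single track embedding,
        weighting each frame by its estimated quality score (assumed scheme)."""
        weights = frame_qualities / frame_qualities.sum()
        combined = (weights[:, None] * frame_embeddings).sum(axis=0)
        return combined / np.linalg.norm(combined)  # re-normalise to unit length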

2.
J Forensic Sci ; 65(4): 1169-1183, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32396227

ABSTRACT

In this study, we aim to compare the performance of automated systems and forensic facial comparison experts in terms of likelihood ratio computation, to assess the potential of the machine to support the human expert in the courtroom. In forensics, transparency in the methods is essential; consequently, state-of-the-art free software was preferred over commercial software. Three open-source automated systems were chosen for their availability and clarity: OpenFace, SeetaFace, and FaceNet, all based on convolutional neural networks that return a distance (OpenFace, FaceNet) or a similarity (SeetaFace). The returned distance or similarity is converted to a likelihood ratio using three different calibration methods: a parametric Weibull distribution fit, nonparametric kernel density estimation, and isotonic regression with the pool adjacent violators algorithm. The results show that with low-quality frontal images, automated systems detect nonmatches better than investigators (100% precision and specificity in the confusion matrix versus 89% and 86% obtained by investigators), but with good-quality images forensic experts have better results. The rank correlation between investigators and software is around 80%. We conclude that the software can assist reporting officers, as it performs faster and more reliable comparisons with full-frontal images, which can help the forensic expert in casework.
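
A minimal sketch of the third calibration option mentioned above, isotonic regression with the pool adjacent violators algorithm, assuming scikit-learn is available and that the calibration set contains roughly equal numbers of mated and non-mated pairs. The function names and the equal-prior shortcut are assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    def fit_pav_calibrator(scores: np.ndarray, same_source: np.ndarray) -> IsotonicRegression:
        """Fit an isotonic (PAV) mapping from comparison scores to calibrated
        posterior probabilities of 'same source'. increasing='auto' handles both
        similarities (higher = more alike) and distances (lower = more alike)."""
        iso = IsotonicRegression(y_min=1e-6, y_max=1 - 1e-6,
                                 increasing="auto", out_of_bounds="clip")
        iso.fit(scores, same_source.astype(float))
        return iso

    def score_to_lr(calibrator: IsotonicRegression, scores: np.ndarray) -> np.ndarray:
        """Convert scores to likelihood ratios. With equal class proportions in the
        calibration set, LR = p / (1 - p); otherwise divide by the prior odds."""
        p = calibrator.predict(scores)
        return p / (1.0 - p)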


Subject(s)
Automated Facial Recognition/methods; Likelihood Functions; Neural Networks, Computer; Forensic Sciences/methods; Humans; Models, Statistical; Sensitivity and Specificity; Software
3.
Forensic Sci Res ; 3(3): 240-255, 2018.
Article in English | MEDLINE | ID: mdl-30483674

ABSTRACT

Google Location Timeline, once activated, allows devices to be tracked and their locations saved. This feature could provide evidence in future investigations, in which case the court would be interested in the reliability of the data. The position is presented as a pair of coordinates and a radius, so the estimated area for the tracked device is enclosed by a circle. This research assesses the accuracy of the locations given by Google Location History Timeline, identifies which variables affect this accuracy, and takes the initial steps towards a multivariate linear model that can potentially predict the actual error with respect to the true location, considering environmental variables. The potentially influential variables (mobile device connectivity configuration, speed of movement, and environment) were determined through a series of experiments in which the true position of the device was recorded with a reference Global Positioning System (GPS) device of a superior order of accuracy. Accuracy was assessed by measuring the distance between the Google-provided position and the true one, later referred to as the Google error. If this Google error distance is less than the radius provided, we define it as a hit. The configuration with the largest hit rate is when the mobile device has GPS available, with 52% success, followed by 3G and 2G connections with 38% and 33% respectively. The Wi-Fi connection has a hit rate of only 7%. Regarding the means of transport, when the connection is 2G or 3G, the worst results are obtained when still (9% hit rate) and the best in a car (57%). Regarding the prediction model, the distances and angles from the position of the device to the three nearest cell towers, and the categorical (non-numerical) variables of environment and means of transport, were taken as input variables in this initial study. To evaluate the usability of a model, a Model hit is defined when the actual observation lies within the 95% confidence interval provided by the model. Of the models developed, the best results were obtained by the one predicting the accuracy when the network used is 2G, with 76% Model hits. The second-best model achieved only 23% success (with the mobile network set to 3G).
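
A minimal sketch of the hit criterion described above (Google error smaller than the reported radius), using a standard haversine great-circle distance; the function names are assumptions for illustration, not taken from the paper.

    import math

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

    def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
        """Great-circle distance in metres between two latitude/longitude points."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def is_hit(google_lat: float, google_lon: float, google_radius_m: float,
               ref_lat: float, ref_lon: float) -> bool:
        """A 'hit': the Google error (distance from the reported position to the
        reference GPS position) is no larger than the reported radius."""
        google_error = haversine_m(google_lat, google_lon, ref_lat, ref_lon)
        return google_error <= google_radius_m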
