1.
Forensic Sci Int Synerg ; 8: 100458, 2024.
Article in English | MEDLINE | ID: mdl-38487302

ABSTRACT

In forensic and security scenarios, accurate facial recognition in surveillance videos, often challenged by variations in pose, illumination, and expression, is essential. Traditional manual comparison methods lack standardization, revealing a critical gap in evidence reliability. We propose an enhanced images-to-video recognition approach, pairing facial images with attributes like pose and quality. Utilizing datasets such as ENFSI 2015, SCFace, XQLFW, ChokePoint, and ForenFace, we assess evidence strength using calibration methods for likelihood ratio estimation. Three models (ArcFace, FaceNet, and QMagFace) undergo validation, with the log-likelihood-ratio cost (Cllr) as the key metric. Results indicate that prioritizing high-quality frames and aligning attributes with reference images optimizes recognition, yielding Cllr values similar to those of the top-25%-best-frames approach. A combined embedding weighted by frame quality emerges as the second-best method. Preprocessing facial images with the super-resolution method CodeFormer unexpectedly increased Cllr, undermining evidence reliability, which advises against its use in such forensic applications.
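The log-likelihood-ratio cost (Cllr) used as the key metric above can be computed directly from validation likelihood ratios. A minimal sketch (the LR values below are hypothetical, not taken from the study):

```python
import math

def cllr(same_source_lrs, diff_source_lrs):
    """Log-likelihood-ratio cost: lower is better; a system that always
    outputs LR = 1 (uninformative) scores exactly 1.0."""
    penalty_ss = sum(math.log2(1 + 1 / lr) for lr in same_source_lrs)
    penalty_ds = sum(math.log2(1 + lr) for lr in diff_source_lrs)
    return 0.5 * (penalty_ss / len(same_source_lrs)
                  + penalty_ds / len(diff_source_lrs))

# A well-calibrated system gives large LRs to same-source pairs and
# small LRs to different-source pairs, so Cllr stays well below 1.
print(cllr([10.0, 100.0], [0.1, 0.01]))
```

Same-source pairs are penalized for small LRs and different-source pairs for large ones, so both discrimination and calibration errors raise the cost.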

3.
Forensic Sci Int Synerg ; 4: 100230, 2022.
Article in English | MEDLINE | ID: mdl-35647509

ABSTRACT

We agree wholeheartedly with Biedermann (2022), FSI Synergy article 100222, in its criticism of research publications that treat forensic inference in source attribution as an "identification" or "individualization" task. We disagree, however, with its criticism of the use of machine learning for forensic inference, which is a strawman argument. There is a growing body of literature on the calculation of well-calibrated likelihood ratios using machine-learning methods and relevant data, and on the validation of such machine-learning-based systems under casework conditions.

4.
Forensic Sci Int ; 334: 111239, 2022 May.
Article in English | MEDLINE | ID: mdl-35364422

ABSTRACT

Forensic facial image comparison lacks methodological standardization and empirical validation. We aim to address this problem by assessing the potential of machine learning to support the human expert in the courtroom. To yield valid evidence in court, decision-making systems for facial image comparison should not only be accurate but also provide a calibrated confidence measure. This confidence is best conveyed using a score-based likelihood ratio. In this study we compare the performance of different calibrations for such scores. The score, either a distance or a similarity, is converted to a likelihood ratio using three types of calibration, following techniques applied in forensic fields such as speaker comparison and DNA matching but not yet tested in facial image comparison. The calibration types tested are: naive, quality score based on typicality, and feature-based. As transparency is essential in forensics, we focus on state-of-the-art open software and study its power compared to a state-of-the-art commercial system. With the European Network of Forensic Science Institutes (ENFSI) proficiency tests as benchmark, calibration results on three public databases (Labeled Faces in the Wild, SCFace and ForenFace) show that both quality-score and feature-based calibration outperform naive calibration. Overall, the commercial system outperforms open software when evaluating these likelihood ratios. In general, we conclude that calibration implemented before likelihood ratio estimation is recommended, and that in terms of performance the commercial system is preferred over open software. As open software is more transparent, however, further research on open software is urged.
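Score-to-likelihood-ratio calibration of the kind described can be sketched with kernel density estimates of the same-source and different-source score distributions; the score distributions below are simulated for illustration only:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy similarity scores: same-source pairs score higher on average.
same_scores = rng.normal(0.7, 0.1, 500)
diff_scores = rng.normal(0.3, 0.1, 500)

f_same = gaussian_kde(same_scores)  # score density under the same-source hypothesis
f_diff = gaussian_kde(diff_scores)  # score density under the different-source hypothesis

def score_to_lr(score):
    """Naive calibration: ratio of the two fitted score densities."""
    return f_same(score)[0] / f_diff(score)[0]

print(score_to_lr(0.7))  # > 1: supports the same-source hypothesis
print(score_to_lr(0.3))  # < 1: supports the different-source hypothesis
```

Quality-score and feature-based calibration refine this idea by conditioning the densities on image quality or facial attributes, but the density-ratio core stays the same.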


Subject(s)
Forensic Sciences , Software , Calibration , Forensic Medicine , Forensic Sciences/methods , Humans , Machine Learning
6.
J Imaging ; 7(1)2021 Jan 13.
Article in English | MEDLINE | ID: mdl-34460579

ABSTRACT

The Photo Response Non-Uniformity pattern (PRNU pattern) can be used to identify the source of images or to indicate whether images have been made with the same camera. This pattern is also known as the "fingerprint" of a camera, since it is a highly characteristic feature. However, this pattern, like a real fingerprint, is sensitive to many different influences, e.g., the influence of camera settings. In this study, several previously investigated factors were noted, after which three were selected for further investigation. The computation and comparison methods are evaluated under variation of the following factors: resolution, length of the video and compression. For all three studies, images were taken with a single iPhone 6. It was found that a higher resolution ensures a more reliable comparison, and that the length of a (reference) video should always be as long as possible to obtain a better PRNU pattern. It also became clear that compression (in this study, the compression that Snapchat uses) has a negative effect on the correlation value. Overall, many different factors play a part when comparing videos. Given the large number of controllable and non-controllable factors that influence the PRNU pattern, further research is needed to clarify the influence each factor exerts individually.
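A minimal sketch of PRNU-style source comparison, under simplifying assumptions: an additive noise model (real PRNU is multiplicative) and Gaussian-blur denoising standing in for the dedicated filters used in practice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Residual after denoising; retains high-frequency sensor noise."""
    return img - gaussian_filter(img, sigma=2)

def camera_fingerprint(images):
    """Estimate the camera fingerprint by averaging many residuals,
    so scene content averages out while the fixed pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy simulation: two cameras with distinct fixed patterns.
rng = np.random.default_rng(1)
prnu_a = rng.normal(0, 1, (64, 64))
prnu_b = rng.normal(0, 1, (64, 64))
shoot = lambda prnu: rng.normal(0, 5, (64, 64)) + prnu  # scene noise + pattern
fp_a = camera_fingerprint([shoot(prnu_a) for _ in range(50)])
corr_same = ncc(fp_a, noise_residual(shoot(prnu_a)))
corr_other = ncc(fp_a, noise_residual(shoot(prnu_b)))
print(corr_same > corr_other)
```

The correlation value that compression and resolution degrade in the study is the `ncc` output here: the same-camera comparison should correlate markedly higher than the cross-camera one.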

7.
J Forensic Sci ; 65(4): 1169-1183, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32396227

ABSTRACT

In this study, we aim to compare the performance of systems and forensic facial comparison experts in terms of likelihood ratio computation, to assess the potential of the machine to support the human expert in the courtroom. In forensics, transparency in the methods is essential; consequently, state-of-the-art free software was preferred over commercial software. Three open-source automated systems, chosen for their availability and clarity, were used: OpenFace, SeetaFace, and FaceNet, all three based on convolutional neural networks that return a distance (OpenFace, FaceNet) or a similarity (SeetaFace). The returned distance or similarity is converted to a likelihood ratio using three different distribution fits: a parametric Weibull fit, nonparametric kernel density estimation, and isotonic regression with the pool adjacent violators algorithm. The results show that with low-quality frontal images, automated systems detect nonmatches better than investigators: 100% precision and specificity in the confusion matrix, against 89% and 86% obtained by investigators; with good-quality images, however, forensic experts achieve better results. The rank correlation between investigators and software is around 80%. We conclude that the software can assist reporting officers, as it performs faster and more reliable comparisons with full-frontal images, which can help the forensic expert in casework.
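One of the three fits mentioned, isotonic regression with the pool adjacent violators algorithm, can be sketched with scikit-learn's `IsotonicRegression`; the scores below are simulated, and with equal class sizes the prior odds are 1, so posterior odds equal the likelihood ratio:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
same = rng.normal(0.8, 0.1, 300)  # similarity scores, same-source pairs
diff = rng.normal(0.4, 0.1, 300)  # similarity scores, different-source pairs

scores = np.concatenate([same, diff])
labels = np.concatenate([np.ones_like(same), np.zeros_like(diff)])

# Pool adjacent violators: a monotone map from score to P(same | score).
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(scores, labels)

def score_to_lr(score, prior_odds=1.0):
    """Posterior odds divided by prior odds gives the likelihood ratio."""
    p = float(np.clip(iso.predict([score])[0], 1e-6, 1 - 1e-6))
    return (p / (1 - p)) / prior_odds

print(score_to_lr(0.8) > 1, score_to_lr(0.4) < 1)
```

The clipping avoids infinite LRs at the extremes of the calibration set, a standard safeguard when PAV outputs exactly 0 or 1.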


Subject(s)
Automated Facial Recognition/methods , Likelihood Functions , Neural Networks, Computer , Forensic Sciences/methods , Humans , Models, Statistical , Sensitivity and Specificity , Software
8.
Forensic Sci Int Synerg ; 2: 540-562, 2020.
Article in English | MEDLINE | ID: mdl-33385146

ABSTRACT

This review paper covers the forensic-relevant literature in imaging and video analysis from 2016 to 2019 as a part of the 19th Interpol International Forensic Science Managers Symposium. The review papers are also available at the Interpol website at: https://www.interpol.int/content/download/14458/file/Interpol%20Review%20Papers%202019.pdf.

9.
J Forensic Sci ; 65(1): 6-7, 2020 01.
Article in English | MEDLINE | ID: mdl-31743448
10.
Forensic Sci Res ; 3(3): 179-182, 2018.
Article in English | MEDLINE | ID: mdl-30483667
11.
Forensic Sci Res ; 3(3): 202-209, 2018.
Article in English | MEDLINE | ID: mdl-30483670

ABSTRACT

This article studies the application of OpenFace models (an open-source deep learning algorithm) to forensics using multiple datasets. The discussion focuses on the ability of the software to identify similarities and differences between faces in forensic images. Experiments using OpenFace on the Labeled Faces in the Wild (LFW) raw dataset, the LFW deep-funneled dataset, the Surveillance Cameras Face Database (SCface) and the ForenFace dataset showed that as the resolution of the input images worsened, the effectiveness of the models degraded. In general, the effect of the quality of the query images on the efficiency of OpenFace was apparent. Therefore, OpenFace in its current form is inadequate for application to forensics, but it can be improved to offer promising uses in the field.

12.
Forensic Sci Res ; 3(3): 183-193, 2018.
Article in English | MEDLINE | ID: mdl-30483668

ABSTRACT

This review summarizes the scientific basis of forensic gait analysis and evaluates its use in the Netherlands, United Kingdom and Denmark, following recent critique on the admission of gait evidence in Canada. A useful forensic feature is (1) measurable, (2) consistent within and (3) different between individuals. Reviewing the academic literature, this article found that (1) forensic gait features can be quantified or observed from surveillance video, but research into accuracy, validity and reliability of these methods is needed; (2) gait is variable within individuals under differing and constant circumstances, with speed having major influence; (3) the discriminative strength of gait features needs more research, although clearly variation exists between individuals. Nevertheless, forensic gait analysis has contributed to several criminal trials in Europe in the past 15 years. The admission of gait evidence differs between courts. The methods are mainly observer-based: multiple gait analysts (independently) assess gait features on video footage of a perpetrator and suspect. Using gait feature databases, likelihood ratios of the hypotheses that the observed individuals have the same or another identity can be calculated. Automated gait recognition algorithms calculate a difference measure between video clips, which is compared with a threshold value derived from a video gait recognition database to indicate likelihood. However, only partly automated algorithms have been used in practice. We argue that the scientific basis of forensic gait analysis is limited. However, gait feature databases enable its use in court for supportive evidence with relatively low evidential value. 
The recommendations made in this review are: (1) to expand knowledge on inter- and intra-subject gait variability, the discriminative strength and interdependency of gait features, method accuracies, gait feature databases and likelihood ratio estimation; (2) to compare automated and observer-based gait recognition methods; (3) to design an international standard method with known validity and reliability, including proficiency tests for analysts; (4) to design an international standard gait feature data collection method resulting in database(s); (5) to develop (inter)national guidelines for the admission of gait evidence in court; and (6) to decrease the risk of cognitive and contextual bias in forensic gait analysis. This is expected to improve the admission of gait evidence in court and the judgment of its evidential value. Several ongoing research projects focus on parts of these recommendations.
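Likelihood ratios from per-feature gait databases are commonly combined by multiplication, which assumes the features are independent, exactly the interdependency question the review flags as needing research. A toy illustration with hypothetical feature LRs:

```python
import math

# Hypothetical per-feature likelihood ratios from a gait feature
# database (feature names and values invented for illustration).
feature_lrs = {"step_width": 2.5, "out_toeing": 1.8, "arm_swing": 0.9}

# Summing log-LRs and exponentiating is numerically safer than a
# direct product when many features are combined.
log_lr = sum(math.log10(lr) for lr in feature_lrs.values())
combined_lr = 10 ** log_lr
print(round(combined_lr, 2))
```

An LR above 1 here would offer (weak) support for the same-identity hypothesis; correlated features would make this product overstate the evidence.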

13.
Forensic Sci Res ; 3(3): 210-218, 2018.
Article in English | MEDLINE | ID: mdl-30483671

ABSTRACT

In this paper, camera recognition using a deep learning technique is introduced. To identify the various cameras, their characteristic photo-response non-uniformity (PRNU) noise pattern was extracted. In forensic science it is important, especially in child pornography cases, to link a photo or a set of photos to a specific camera. Deep learning is a sub-field of machine learning that trains a computer, analogously to a human brain, to recognize similarities and differences in order to identify objects. The innovation of this research is the combined use of PRNU noise patterns and a deep learning technique to achieve camera identification. In this paper, AlexNet was modified, producing an improved training procedure with a high maximum accuracy of 80%-90%. DIGITS correctly identified six out of 10 cameras in the database with a success rate higher than 75%. However, many of the cameras were falsely identified, indicating a fault occurring during the procedure. A possible explanation is that the PRNU signal is based on the quality of the sensor and the artefacts introduced during the production process of the camera; some manufacturers may use the same or similar imaging sensors, which could result in similar PRNU noise patterns. In an attempt to form a database in which different cameras of the same model were treated as different categories, the accuracy rate was low. This provided further proof of the limitations of the technique, since PRNU is stochastic in nature and should therefore be able to distinguish between different cameras of the same brand. Therefore, this study showed that current convolutional neural networks (CNNs) cannot achieve individualization with PRNU patterns. Nevertheless, the paper provides material for further research.

14.
Forensic Sci Res ; 3(3): 219-229, 2018.
Article in English | MEDLINE | ID: mdl-30483672

ABSTRACT

Attribute-based identification systems are essential for forensic investigations because they help in identifying individuals. An item such as clothing is a visual attribute because it can usually be used to describe people. The method proposed in this article aims to identify people based on the visual information derived from their attire. Deep learning is used to train the computer to classify images based on clothing content. We first demonstrate clothing classification using a large scale dataset, where the proposed model performs relatively poorly. Then, we use clothing classification on a dataset containing popular logos and famous brand images. The results show that the model correctly classifies most of the test images with a success rate that is higher than 70%. Finally, we evaluate clothing classification using footage from surveillance cameras. The system performs well on this dataset, labelling 70% of the test images correctly.

15.
Forensic Sci Res ; 3(3): 240-255, 2018.
Article in English | MEDLINE | ID: mdl-30483674

ABSTRACT

Google Location Timeline, once activated, allows devices to be tracked and their locations saved. This feature might provide useful evidence in investigations, in which case the court would be interested in the reliability of the data. A position is presented as a pair of coordinates and a radius, so the estimated area for the tracked device is enclosed by a circle. This research focuses on assessing the accuracy of the locations given by the Google Location History Timeline, on which variables affect this accuracy, and on the initial steps towards a linear multivariate model that could potentially predict the actual error with respect to the true location from environmental variables. The potentially influential variables (configuration of mobile device connectivity, speed of movement and environment) were determined through a series of experiments in which the true position of the device was recorded with a reference Global Positioning System (GPS) device with a superior order of accuracy. The accuracy was assessed by measuring the distance between the position provided by Google and the de facto one, later referred to as the Google error. If this Google error distance is less than the radius provided, we define it as a hit. The configuration with the largest hit rate is when the mobile device has GPS available, with 52% success. The 3G and 2G connections follow with 38% and 33%, respectively; a Wi-Fi connection has a hit rate of only 7%. Regarding the means of transport, when the connection is 2G or 3G the worst results occur when still, with a hit rate of 9%, and the best in a car, with 57%. For the prediction model, the distances and angles from the position of the device to the three nearest cell towers, together with the categorical (non-numerical) variables of environment and means of transport, were taken as input variables in this initial study.

To evaluate the usability of a model, a model hit is defined as the actual observation falling within the 95% confidence interval provided by the model. Of the models developed, the one showing the best results predicted the accuracy when the network used is 2G, with 76% model hits. The second-best model achieved only 23% success (with the mobile network set to 3G).
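The hit criterion described above (Google error distance smaller than the reported radius) can be sketched with a haversine great-circle distance; the coordinates below are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_hit(google_fix, true_fix):
    """A 'hit': the reference GPS position lies within the reported radius."""
    lat_g, lon_g, radius_m = google_fix
    error_m = haversine_m(lat_g, lon_g, *true_fix)
    return error_m <= radius_m

# Hypothetical fixes: Google reports a 50 m radius around a point roughly
# 23 m from the reference GPS position, so this counts as a hit.
print(is_hit((52.0000, 4.0000, 50.0), (52.0002, 4.0001)))
```

Aggregating `is_hit` over many fixes per connectivity configuration yields the hit rates reported in the abstract.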

16.
Philos Trans R Soc Lond B Biol Sci ; 370(1674)2015 Aug 05.
Article in English | MEDLINE | ID: mdl-26101289

ABSTRACT

In this paper, the importance of modern technology in forensic investigations is discussed. Recent technological developments are creating new possibilities to perform robust scientific measurements and studies outside the controlled laboratory environment. The benefits of real-time, on-site forensic investigations are manifold and such technology has the potential to strongly increase the speed and efficacy of the criminal justice system. However, such benefits are only realized when quality can be guaranteed at all times and findings can be used as forensic evidence in court. At the Netherlands Forensic Institute, innovation efforts are currently undertaken to develop integrated forensic platform solutions that allow for the forensic investigation of human biological traces, the chemical identification of illicit drugs and the study of large amounts of digital evidence. These platforms enable field investigations, yield robust and validated evidence and allow for forensic intelligence and targeted use of expert capacity at the forensic institutes. This technological revolution in forensic science could ultimately lead to a paradigm shift in which a new role of the forensic expert emerges as developer and custodian of integrated forensic platforms.


Subject(s)
Forensic Sciences/standards , Forensic Sciences/trends , Jurisprudence , Technology/standards , Technology/trends
17.
Forensic Sci Int ; 244: 222-30, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25279802

ABSTRACT

Photo-response non-uniformity noise patterns are a robust way to identify the source of an image. However, identifying a common source of images in a large database may be impractical due to long computation times. In this paper a solution for large-volume digital camera identification is proposed, which combines, and sometimes slightly modifies, existing methods for a 500-fold improvement in the speed of common source identification. Single-image comparisons are often plagued by considerable noise contamination from scene content and random noise, which makes reliable common source identification harder to accomplish. Therefore a new method is introduced that can increase true positive rates by more than 45% at very low computational cost. Analysis of real data from a fraud case shows the effectiveness of the proposed method. As a whole, the proposed solution makes it possible to analyze a large database in forensically relevant time, without resorting to large and expensive computer clusters.

18.
J Forensic Sci ; 59(6): 1559-67, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25069532

ABSTRACT

On recordings of certain crimes, the face is not always shown. In such cases, hands can offer a solution, if they are completely visible. An important aspect of this study was to develop a method for hand comparison. The research method was based on the morphology, anthropometry, and biometry of hands. A new aspect of this study was that a manual and automated test were applied, which, respectively, assess many features and provide identification rates quickly. An important observation was that good quality images can provide sufficient hand details. The most distinctive features were the length/width ratio, the palm line pattern and the quantity of highly distinctive features present, and how they are distributed. The results indicate that experience did not improve the identification rates, while the manual test did. Intra-observer variability did not influence the results, whereas hands of relatives were frequently misjudged. Both tests provided high identification rates.


Subject(s)
Anthropometry , Forensic Sciences , Hand/anatomy & histology , Algorithms , Checklist , Databases, Factual , Female , Humans , Image Processing, Computer-Assisted , Male , Professional Competence , Skin Pigmentation , White People
19.
J Forensic Sci ; 57(2): 521-7, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22329355

ABSTRACT

Each digital camera has an intrinsic fingerprint that is unique to that camera. This device fingerprint can be extracted from an image and compared with a reference device fingerprint to determine the device origin. The complexity of the filters proposed to accomplish this is increasing. In this note, we use a relatively simple algorithm to extract the sensor noise from images. It has the advantages of being easy to implement and parallelize, and it works faster than the wavelet filter that is common for this application. In addition, we compare the performance with a simple median filter and assess whether a previously proposed fingerprint enhancement technique improves results. Experiments are performed on approximately 7500 images originating from 69 cameras, and the results are compared with the often-used wavelet filter. Despite the simplicity of the proposed method, its performance exceeds that of the common wavelet filter while reducing the time needed for extraction.

20.
J Forensic Sci ; 54(3): 628-38, 2009 May.
Article in English | MEDLINE | ID: mdl-19432739

ABSTRACT

In this research, we examined whether fixed pattern noise, or more specifically Photo Response Non-Uniformity (PRNU), can be used to identify the source camera of heavily JPEG-compressed digital photographs of resolution 640 x 480 pixels. We extracted PRNU patterns from both reference and questioned images using a two-dimensional Gaussian filter and compared these patterns by calculating the correlation coefficient between them. Both the closed- and open-set problems were addressed. In the closed set, high accuracies were reached: 83% for single images and 100% for around 20 simultaneously identified questioned images. The correct source camera was chosen from a set of 38 cameras of four different types. For the open-set problem, decision levels were obtained for several numbers of simultaneously identified questioned images. The corresponding false rejection rates were unsatisfactory for single images but improved for simultaneous identification of multiple images.
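The closed-set versus open-set distinction above can be sketched as follows: the closed set picks the best-correlating reference camera, while the open set additionally applies a decision level and may reject all candidates (the correlation values and threshold below are hypothetical):

```python
import numpy as np

def closed_set_id(query_corrs):
    """Closed set: the true camera is known to be among the candidates,
    so simply pick the highest-correlating reference pattern."""
    return int(np.argmax(query_corrs))

def open_set_id(query_corrs, threshold):
    """Open set: the source camera may be absent, so reject when even
    the best correlation stays below the decision level."""
    best = int(np.argmax(query_corrs))
    return best if query_corrs[best] >= threshold else None

# Correlations of a questioned image against three reference fingerprints.
corrs = np.array([0.02, 0.31, 0.05])
print(closed_set_id(corrs))     # index of the best-matching camera
print(open_set_id(corrs, 0.5))  # rejected: best correlation below the level
```

Raising the decision level lowers false acceptances but raises the false rejection rate, the trade-off the study reports for single versus multiple questioned images.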
