1.
Forensic Sci Int; 350: 111790, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37567041

ABSTRACT

Automatic speaker recognition (ASR) is a method used in forensic speaker comparison (FSC) casework. It requires collections of audio data that are representative of the case audio in order to perform reference normalization and to train a score-to-LR function. Audio from a certain minimum number of speakers is needed for each of those purposes to obtain relatively stable ASR performance. Although it is not possible to set a hard cut-off, for the purpose of this work this number was chosen to be 30 for each purpose, and 60 for both. A lack of representative data from that many speakers, and uncertainty about what exactly constitutes representative data, are major reasons for not employing ASR in FSC. An experiment was carried out that simulated a situation in which a practitioner has only 30 speakers available. Several data strategies for handling the lack of data were tried out: leaving out reference normalization, splitting the 30 speakers into two groups of 15 (ignoring the minimum of 30), and a leave-one-or-two-out strategy in which all 30 speakers are used for both reference normalization and calibration. These were compared with the baseline situation in which the practitioner does have the required 60 speakers. The leave-one-or-two-out strategy with 30 speakers performs on par with the baseline, and extending that strategy to the full 60 speakers even outperforms the baseline. This shows that a strategy that halves the data requirement is viable, lessening the data requirements for ASR in FSC and making the use of ASR possible in more cases.


Subject(s)
Forensic Medicine, Recognition, Psychology
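The leave-one-or-two-out strategy described in the abstract amounts to cross-validated calibration: each score is converted to a log-likelihood ratio by a score-to-LR model trained with the tested item held out, so the same small speaker pool serves as both calibration data and test data. The following is a minimal sketch, not the authors' code: it simplifies by holding out single scores rather than all scores involving the tested speaker(s), and uses a hand-rolled logistic regression as the score-to-LR map.

```python
import numpy as np

def fit_logreg(scores, labels, lr=0.1, n_iter=2000):
    """Fit a 1-D logistic regression by gradient descent.
    Returns slope a and intercept b so that log-LR ~= a * score + b.
    labels: 1 for same-speaker scores, 0 for different-speaker scores."""
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))  # posterior estimate
        a -= lr * np.mean((p - labels) * scores)      # gradient step on slope
        b -= lr * np.mean(p - labels)                 # gradient step on intercept
    return a, b

def leave_one_out_llrs(scores, labels):
    """Leave-one-out calibration: each score is mapped to a log-LR by a
    model trained on all *other* scores, so no score is calibrated by a
    model that saw it during training."""
    llrs = np.empty_like(scores, dtype=float)
    for i in range(len(scores)):
        mask = np.ones(len(scores), dtype=bool)
        mask[i] = False
        a, b = fit_logreg(scores[mask], labels[mask])
        llrs[i] = a * scores[i] + b
    return llrs
```

In the paper's setting the held-out unit would be one speaker (for same-speaker scores) or two speakers (for different-speaker scores) rather than one score, but the cross-validation logic is the same.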
2.
Forensic Sci Int Synerg; 4: 100230, 2022.
Article in English | MEDLINE | ID: mdl-35647509

ABSTRACT

We agree wholeheartedly with Biedermann (2022) FSI Synergy article 100222 in its criticism of research publications that treat forensic inference in source attribution as an "identification" or "individualization" task. We disagree, however, with its criticism of the use of machine learning for forensic inference: the argument it makes is a straw man. There is a growing body of literature on the calculation of well-calibrated likelihood ratios using machine-learning methods and relevant data, and on the validation of such machine-learning-based systems under casework conditions.

3.
Sci Justice; 61(3): 299-309, 2021 May.
Article in English | MEDLINE | ID: mdl-33985678

ABSTRACT

Since the 1960s, there have been calls for forensic voice comparison to be empirically validated under casework conditions. Since around 2000, an increasing number of researchers and practitioners have conducted forensic-voice-comparison research and casework within the likelihood-ratio framework. In recent years, this community of researchers and practitioners has made substantial progress toward validation under casework conditions becoming a standard part of practice: procedures for conducting validation have been developed, along with graphics and metrics for representing the results, and an increasing number of papers are being published that include empirical validation of forensic-voice-comparison systems under conditions reflecting casework conditions. An outstanding question, however, is: in the context of a case, given the results of an empirical validation of a forensic-voice-comparison system, how can one decide whether the system is good enough for its output to be used in court? This paper provides a statement of consensus developed in response to this question. Contributors included individuals who had knowledge and experience of validating forensic-voice-comparison systems in research and/or casework contexts, and individuals who had actually presented validation results to courts. They also included individuals who could bring a legal perspective on these matters, and individuals with knowledge and experience of validation in forensic science more broadly. We provide recommendations on what practitioners should do when conducting evaluations and validations, and what they should present to the court. Although our focus is explicitly on forensic voice comparison, we hope that this contribution will be of interest to an audience concerned with validation in forensic science more broadly. Although not written specifically for a legal audience, we hope that this contribution will still be of interest to lawyers.


Subject(s)
Voice, Consensus, Forensic Medicine, Forensic Sciences/methods, Humans, Likelihood Functions
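The consensus above refers to metrics for representing validation results. A metric widely used in this literature (though not named in the abstract) is the log-likelihood-ratio cost, Cllr, which penalizes both poor discrimination and poor calibration. A minimal sketch, assuming natural-log LRs as input:

```python
import numpy as np

def cllr(llrs_same, llrs_diff):
    """Log-likelihood-ratio cost (Cllr) for a set of natural-log LRs.
    llrs_same: log-LRs from same-speaker trials.
    llrs_diff: log-LRs from different-speaker trials.
    An uninformative system (all LRs = 1) scores 1; a well-calibrated,
    highly discriminating system approaches 0."""
    lrs_same = np.exp(np.asarray(llrs_same, dtype=float))
    lrs_diff = np.exp(np.asarray(llrs_diff, dtype=float))
    # Penalty grows when same-speaker LRs are small...
    penalty_same = np.mean(np.log2(1.0 + 1.0 / lrs_same))
    # ...and when different-speaker LRs are large.
    penalty_diff = np.mean(np.log2(1.0 + lrs_diff))
    return 0.5 * (penalty_same + penalty_diff)
```

For example, a system that outputs LR = 1 for every trial (log-LR = 0) yields Cllr = 1, the reference value for a system that provides no information.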