1.
J Forensic Sci ; 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39185731

ABSTRACT

This study examined how variations in signature complexity affected the ability of forensic document examiners (FDEs) and laypeople to determine whether signatures are authentic or simulated (forged), as well as whether they are disguised. Forty-five FDEs from nine countries evaluated nine different signature comparisons in this online study. Receiver Operating Characteristic (ROC) analyses revealed that FDEs performed in excess of chance levels, but performance varied as a function of signature complexity: Sensitivity (the true-positive rate) did not differ much between complexity levels (65% vs. 79% vs. 79% for low, medium, and high complexity, respectively), but specificity (the true-negative rate) was highest (95%) for medium-complexity signatures and lowest (73%) for low-complexity signatures. The specificity for high-complexity signatures (83%) fell between these values. The sensitivity for disguised comparisons was only 11% and did not vary across complexity levels. One hundred and one novices also completed the study. A comparison of the areas under the ROC curves (AUCs) revealed that FDEs outperformed novices for medium- and high-complexity signatures but not for low-complexity signatures. Novices also struggled to detect disguised signatures. While these findings elucidate the role of signature complexity in lay and expert evaluations, the error rates observed here may differ from those in forensic practice due to differences in the experimental stimuli and the circumstances under which they were evaluated. This investigation of the role of signature complexity in the evaluation process was not intended to estimate error rates in forensic practice.
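
To make the reported quantities concrete, here is a minimal sketch, using hypothetical decision data rather than the study's, of how sensitivity, specificity, and ROC AUC are computed; it assumes scikit-learn is available for the AUC calculation.

```python
# Minimal sketch: sensitivity, specificity, and AUC for hypothetical
# examiner decisions. The numbers are illustrative, not the study's data.
from sklearn.metrics import roc_auc_score

truth = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = simulated (forged), 0 = authentic
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1, 0.6, 0.3]  # confidence "simulated"

threshold = 0.5
calls = [1 if s >= threshold else 0 for s in scores]

tp = sum(t == 1 and c == 1 for t, c in zip(truth, calls))
fn = sum(t == 1 and c == 0 for t, c in zip(truth, calls))
tn = sum(t == 0 and c == 0 for t, c in zip(truth, calls))
fp = sum(t == 0 and c == 1 for t, c in zip(truth, calls))

sensitivity = tp / (tp + fn)        # true-positive rate
specificity = tn / (tn + fp)        # true-negative rate
auc = roc_auc_score(truth, scores)  # threshold-free discrimination

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```

Because the AUC summarizes performance across all possible decision thresholds, it allows the FDE and novice groups to be compared without assuming both apply the same threshold.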

2.
J Forensic Sci ; 69(4): 1519-1522, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38664902
3.
J Forensic Sci ; 69(1): 378-381, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37877317
4.
Proc Natl Acad Sci U S A ; 120(41): e2301844120, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37782790

ABSTRACT

Forensic pattern analysis requires examiners to compare the patterns of items such as fingerprints or tool marks to assess whether they have a common source. This article uses signal detection theory to model examiners' reported conclusions (e.g., identification, inconclusive, or exclusion), focusing on the connection between the examiner's decision threshold and the probative value of the forensic evidence. It uses a Bayesian network model to explore how shifts in decision thresholds may affect rates and ratios of true and false convictions in a hypothetical legal system. It demonstrates that small shifts in decision thresholds, which may arise from contextual bias, can dramatically affect the value of forensic pattern-matching evidence and its utility in the legal system.
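
A minimal sketch of the signal detection intuition, assuming equal-variance Gaussian evidence distributions and illustrative parameter values rather than the paper's actual Bayesian network:

```python
# Illustrative signal detection model: evidence strength for same-source
# and different-source comparisons is drawn from two Gaussians, and the
# examiner reports an identification when it exceeds a decision threshold.
from statistics import NormalDist

same_source = NormalDist(mu=2.0, sigma=1.0)       # assumed d' = 2
different_source = NormalDist(mu=0.0, sigma=1.0)

def rates(threshold):
    tpr = 1 - same_source.cdf(threshold)          # hit rate
    fpr = 1 - different_source.cdf(threshold)     # false-alarm rate
    return tpr, fpr

# A modest threshold shift (as contextual bias might produce) raises the
# false-alarm rate proportionally far more than it raises the hit rate.
for thr in (1.5, 1.0):
    tpr, fpr = rates(thr)
    print(f"threshold={thr}: hits={tpr:.3f}, false alarms={fpr:.3f}, "
          f"hit/false-alarm ratio={tpr / fpr:.1f}")
```

Here the shift from 1.5 to 1.0 roughly halves the hit-to-false-alarm ratio, which is the sense in which small threshold shifts can dramatically change the probative value of a reported identification.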


Subject(s)
Dermatoglyphics, Forensic Medicine, Bayes Theorem, Bias
5.
J Forensic Sci ; 68(3): 1049-1063, 2023 May.
Article in English | MEDLINE | ID: mdl-36847295

ABSTRACT

Two probabilistic genotyping (PG) programs, STRMix™ and TrueAllele™, were used to assess the strength of the same item of DNA evidence in a federal criminal case, with strikingly different results. For STRMix, the reported likelihood ratio in favor of the non-contributor hypothesis was 24; for TrueAllele it ranged from 1.2 million to 16.7 million, depending on the reference population. This case report seeks to explain why the two programs produced different results and to consider what the difference tells us about the reliability and trustworthiness of these programs. It uses a locus-by-locus breakdown to trace the differing results to subtle differences in modeling parameters and methods, analytic thresholds, and mixture ratios, as well as TrueAllele's use of an ad hoc procedure for assigning LRs at some loci. These findings illustrate the extent to which PG analysis rests on a lattice of contestable assumptions, highlighting the importance of rigorous validation of PG programs using known-source test samples that closely replicate the characteristics of evidentiary samples. The article also points out misleading aspects of the way STRMix and TrueAllele results are routinely presented in reports and testimony and calls for clarification of forensic reporting standards to address those problems.
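
The headline gap (24 versus millions) is easier to grasp once one recalls that a profile-wide likelihood ratio is the product of per-locus LRs, so modest systematic differences at each locus compound multiplicatively. A minimal sketch with hypothetical numbers, not values from the case report:

```python
# Why per-locus differences compound: the overall LR is the product of
# per-locus LRs, so a 2x systematic difference at each of 20 loci yields
# a ~2**20 (about a millionfold) disagreement overall. Numbers are
# illustrative, not taken from the case report.
from math import prod

n_loci = 20
program_a = [1.2] * n_loci   # assumed per-locus LRs, program A
program_b = [2.4] * n_loci   # program B: 2x higher at every locus

lr_a = prod(program_a)       # ~38
lr_b = prod(program_b)       # ~4e7

print(f"program A overall LR: {lr_a:.3g}")
print(f"program B overall LR: {lr_b:.3g}")
print(f"disagreement factor: {lr_b / lr_a:.3g}")  # 2**20 ~ 1.05e6
```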


Subject(s)
DNA Fingerprinting, Software, Genotype, DNA Fingerprinting/methods, Likelihood Functions, Uncertainty, Reproducibility of Results, Microsatellite Repeats, DNA/genetics
6.
Sci Justice ; 61(3): 299-309, 2021 May.
Article in English | MEDLINE | ID: mdl-33985678

ABSTRACT

Since the 1960s, there have been calls for forensic voice comparison to be empirically validated under casework conditions. Since around 2000, there have been an increasing number of researchers and practitioners who conduct forensic-voice-comparison research and casework within the likelihood-ratio framework. In recent years, this community of researchers and practitioners has made substantial progress toward validation under casework conditions becoming a standard part of practice: Procedures for conducting validation have been developed, along with graphics and metrics for representing the results, and an increasing number of papers are being published that include empirical validation of forensic-voice-comparison systems under conditions reflecting casework conditions. An outstanding question, however, is: In the context of a case, given the results of an empirical validation of a forensic-voice-comparison system, how can one decide whether the system is good enough for its output to be used in court? This paper provides a statement of consensus developed in response to this question. Contributors included individuals who had knowledge and experience of validating forensic-voice-comparison systems in research and/or casework contexts, and individuals who had actually presented validation results to courts. They also included individuals who could bring a legal perspective on these matters, and individuals with knowledge and experience of validation in forensic science more broadly. We provide recommendations on what practitioners should do when conducting evaluations and validations, and what they should present to the court. Although our focus is explicitly on forensic voice comparison, we hope that this contribution will be of interest to an audience concerned with validation in forensic science more broadly. Although not written specifically for a legal audience, we hope that this contribution will still be of interest to lawyers.
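
One metric widely used for such validation within the likelihood-ratio framework is the log-likelihood-ratio cost (Cllr): lower is better, and a value of 1 or more indicates a system no more informative than always reporting LR = 1. A minimal sketch with hypothetical LR values:

```python
# Log-likelihood-ratio cost (Cllr) for a set of validation trials with
# known ground truth. The LR values below are hypothetical.
from math import log2

def cllr(same_source_lrs, different_source_lrs):
    penalty_ss = sum(log2(1 + 1 / lr) for lr in same_source_lrs)
    penalty_ds = sum(log2(1 + lr) for lr in different_source_lrs)
    return 0.5 * (penalty_ss / len(same_source_lrs)
                  + penalty_ds / len(different_source_lrs))

same = [12.0, 55.0, 3.0, 140.0]   # LRs output for same-speaker pairs
diff = [0.08, 0.5, 0.01, 0.2]     # LRs output for different-speaker pairs
print(f"Cllr = {cllr(same, diff):.3f}")  # ~0.19 for these values
```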


Subject(s)
Voice, Consensus, Forensic Medicine, Forensic Sciences/methods, Humans, Likelihood Functions
7.
Forensic Sci Int Synerg ; 3: 100149, 2021.
Article in English | MEDLINE | ID: mdl-35112074

ABSTRACT

This Letter to the Editor is a reply to Mohammed et al. (2021) https://doi.org/10.1016/j.fsisyn.2021.100145, which in turn is a response to Morrison et al. (2020) "Vacuous standards - subversion of the OSAC standards-development process" https://doi.org/10.1016/j.fsisyn.2020.06.005.

9.
J Forensic Sci ; 64(5): 1379-1388, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30791101

ABSTRACT

Contextual bias has been widely discussed as a possible problem in forensic science. The trial simulation experiment reported here examined reactions of jurors at a county courthouse to cross-examination and arguments about contextual bias in a hypothetical case. We varied whether the key prosecution witness (a forensic odontologist) was cross-examined about the subjectivity of his interpretations and about his exposure to potentially biasing task-irrelevant information. Jurors found the expert less credible and were less likely to convict when the expert admitted that his interpretation rested on subjective judgment, and when he admitted having been exposed to potentially biasing task-irrelevant contextual information (relative to when these issues were not raised by the lawyers). The findings suggest, however, that forensic scientists can immunize themselves against such challenges and maximize the weight jurors give their evidence by adopting context management procedures that blind them to task-irrelevant information.


Subject(s)
Bias, Decision Making, Expert Testimony, Forensic Sciences/legislation & jurisprudence, Judgment, Adult, Bites, Human, Criminal Law, Female, Humans, Male
10.
Forensic Sci Int ; 291: e18-e19, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30224092

ABSTRACT

Negative forensic evidence can be defined as the failure to find a trace after looking for it. Such evidence is often dismissed by referring to the aphorism "absence of evidence is not evidence of absence." However, this reasoning can be misleading in the context of forensic science. This commentary is designed to help forensic scientists understand the probative value of negative forensic evidence.
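
The commentary's point can be made precise with a likelihood ratio: the probative value of a negative finding is P(no trace | Hp) / P(no trace | Hd), which differs from 1 whenever a search is more likely to turn up a trace under one hypothesis than under the other. A worked sketch with hypothetical probabilities:

```python
# The probative value of failing to find a trace, as a likelihood ratio.
# Both detection probabilities are hypothetical, for illustration only.
p_detect_given_hp = 0.70  # assumed chance of finding a trace if Hp is true
p_detect_given_hd = 0.05  # assumed chance of finding a trace if Hd is true

lr = (1 - p_detect_given_hp) / (1 - p_detect_given_hd)
print(f"LR for the negative finding = {lr:.2f}")
# ~0.32: finding nothing favors Hd by a factor of about 3, so the
# "absence of evidence" here carries real, quantifiable weight.
```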

11.
J Law Biosci ; 3(3): 538-575, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28852538

ABSTRACT

Several forensic sciences, especially of the pattern-matching kind, are increasingly seen to lack the scientific foundation needed to justify continuing admission as trial evidence. Indeed, several have been abolished in the recent past. A likely next candidate for elimination is bitemark identification. A number of DNA exonerations have occurred in recent years for individuals convicted based on erroneous bitemark identifications. Intense scientific and legal scrutiny has resulted. An important National Academies review found little scientific support for the field. The Texas Forensic Science Commission recently recommended a moratorium on the admission of bitemark expert testimony. The California Supreme Court has a case before it that could start a national dismantling of forensic odontology. This article describes the (legal) basis for the rise of bitemark identification and the (scientific) basis for its impending fall. The article explains the general logic of forensic identification and the claims of bitemark identification, and reviews relevant empirical research on bitemark identification, highlighting both the lack of research and the lack of support provided by what research does exist. The rise and possible fall of bitemark identification evidence has broader implications, highlighting the weak scientific culture of forensic science and the law's difficulty in evaluating and responding to unreliable and unscientific evidence.

13.
Law Hum Behav ; 39(4): 332-49, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25984887

ABSTRACT

Forensic scientists have come under increasing pressure to quantify the strength of their evidence, but it is not clear which of several possible formats for presenting quantitative conclusions will be easiest for lay people, such as jurors, to understand. This experiment examined how people recruited from Amazon's Mechanical Turk (n = 541) responded to two types of forensic evidence (a DNA comparison and a shoeprint comparison) when an expert explained the strength of this evidence in three different ways: using random match probabilities (RMPs), likelihood ratios (LRs), or verbal equivalents of likelihood ratios (VEs). We found that verdicts were sensitive to the strength of DNA evidence regardless of how the expert explained it, but verdicts were sensitive to the strength of shoeprint evidence only when the expert used RMPs. The weight given to DNA evidence was consistent with the predictions of a Bayesian network model that incorporated the perceived risk of a false match from three causes (coincidence, a laboratory error, and a frame-up), but shoeprint evidence was undervalued relative to the same Bayesian model. Fallacious interpretations of the expert's testimony (consistent with the source probability error and the defense attorney's fallacy) were common and were associated with the weight given to the evidence and with verdicts. The findings indicate that perceptions of forensic science evidence are shaped by prior beliefs and expectations as well as by expert testimony, and consequently that the best way to characterize and explain forensic evidence may vary across forensic disciplines.
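
The Bayesian-model intuition, that the perceived chance of a misleading match is driven by the largest of the error pathways rather than by the random match probability alone, can be sketched with hypothetical numbers (an illustration, not the paper's full network):

```python
# A reported match can be misleading via coincidence, laboratory error,
# or a frame-up. Assuming (for simplicity) independent pathways, the
# combined risk is dominated by the largest term, so a vanishingly small
# random match probability adds little once any appreciable lab-error
# risk is perceived. All probabilities are hypothetical.
rmp = 1e-9           # random match probability (coincidence)
p_lab_error = 1e-3   # assumed perceived risk of a lab-error false match
p_frame_up = 1e-4    # assumed perceived risk of a planted match

p_misleading = 1 - (1 - rmp) * (1 - p_lab_error) * (1 - p_frame_up)
print(f"combined risk of a misleading match: {p_misleading:.2e}")  # ~1.1e-3
```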


Subject(s)
Forensic Sciences/legislation & jurisprudence, Likelihood Functions, Statistics as Topic, Decision Making, Expert Testimony, Humans