Results 1 - 6 of 6
1.
Front Genet; 9: 126, 2018.
Article in English | MEDLINE | ID: mdl-29713334

ABSTRACT

Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century, through the work at Bletchley Park in the Second World War, to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence, with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well developed and have become so widespread that it is timely to try to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists worldwide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

2.
Forensic Sci Int; 272: e7-e9, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27817943

ABSTRACT

This letter comments on the report "Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods" recently released by the President's Council of Advisors on Science and Technology (PCAST). The report advocates a two-stage procedure for the evaluation of forensic evidence in which the first stage is a "match"/"non-match" decision and the second stage is an empirical assessment of sensitivity (correct acceptance) and false alarm (false acceptance) rates. Almost always, quantitative data from feature-comparison methods are continuously valued and have within-source variability. We explain why a two-stage procedure is not appropriate for this type of data, and recommend the use of statistical procedures that are appropriate.
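The contrast drawn in this letter can be illustrated with a minimal sketch. The code below is not taken from the letter; the score, threshold, and distribution parameters are purely illustrative assumptions. It shows a thresholded "match"/"non-match" decision with quoted error rates on one hand, and a likelihood ratio computed directly from a continuously-valued comparison score on the other.

```python
# Minimal sketch (illustrative assumptions, not the letter's analysis):
# a two-stage "match"/"non-match" decision versus a likelihood-ratio
# evaluation of a continuously-valued comparison score.
from scipy.stats import norm

score = 0.8  # hypothetical observed difference between two measurements

# Two-stage approach: threshold the score, then quote empirical error rates.
threshold = 1.0
match = abs(score) < threshold     # stage 1: binary decision
sensitivity = 0.95                 # stage 2: assumed empirical rates
false_alarm_rate = 0.02

# Likelihood-ratio approach: keep the score continuous and compare its
# density under the same-source and different-source propositions
# (assumed here to be normal with different spreads).
lr = norm.pdf(score, loc=0.0, scale=0.5) / norm.pdf(score, loc=0.0, scale=3.0)

print(f"two-stage decision: {'match' if match else 'non-match'}")
print(f"stage-2 error rates: sensitivity={sensitivity}, false alarm={false_alarm_rate}")
print(f"likelihood ratio:   {lr:.2f}")
```

The point of the contrast is that the two-stage route discards the magnitude of the score once the threshold is applied, whereas the likelihood ratio retains it.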

3.
J Forensic Sci; 54(1): 135-51, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19040667

ABSTRACT

Procedures are reviewed and recommendations made for the choice of the size of a sample to estimate the characteristics (sometimes known as parameters) of a population consisting of discrete items which may belong to one and only one of a number of categories, with examples drawn from forensic science. Four sampling procedures are described for binary responses, where the number of possible categories is only two, e.g., licit or illicit pills. One is based on priors informed by historical data. The other three are sequential. The first of these is a sequential probability ratio test with a stopping rule derived by controlling the probabilities of type 1 and type 2 errors. The second is a sequential variation of a procedure based on the predictive distribution of the data yet to be inspected and the distribution of the data that have been inspected, with a stopping rule determined by a prespecified threshold on the probability of a wrong decision. The third is a two-sided sequential criterion which stops sampling when one of two competing hypotheses has a probability of being accepted which is larger than another prespecified threshold. A fifth procedure extends the ideas developed for binary responses to multinomial responses, where the number of possible categories (e.g., types of drug or types of glass) may be more than two. The procedure is sequential and recommends stopping when the joint probability interval or ellipsoid for the estimates of the proportions is less than a given threshold in size. For trinomial data this last procedure is illustrated with a ternary diagram with an ellipse formed around the sample proportions. There is a straightforward generalization of this approach to multinomial populations with more than three categories. A conclusion provides recommendations for sampling procedures in various contexts.


Subjects
Bayes Theorem, Likelihood Functions, Sample Size, Forensic Medicine, Illicit Drugs, Tablets
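The first of the sequential procedures listed in item 3 is a sequential probability ratio test with boundaries set by the type 1 and type 2 error probabilities. The sketch below is a generic Wald SPRT for binary responses under assumed proportions p0 and p1; it is not the paper's exact procedure, and the hypothesized proportions and error rates are illustrative.

```python
# Minimal sketch (assumed values, not the paper's exact procedure):
# a Wald sequential probability ratio test for binary responses,
# e.g. testing H0: proportion of illicit pills = p0 against
# H1: proportion = p1, with stopping boundaries derived from the
# type 1 and type 2 error probabilities alpha and beta.
import math

def sprt(observations, p0=0.5, p1=0.9, alpha=0.05, beta=0.05):
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this boundary
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this boundary
    log_lr = 0.0
    for n, x in enumerate(observations, start=1):  # x is 1 (illicit) or 0 (licit)
        log_lr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if log_lr <= lower:
            return "accept H0", n
        if log_lr >= upper:
            return "accept H1", n
    return "continue sampling", len(observations)

# Example: the first eight inspected pills all test positive;
# sampling stops as soon as a boundary is crossed.
print(sprt([1, 1, 1, 1, 1, 1, 1, 1]))
```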
4.
J Forensic Sci; 52(2): 412-9, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17316242

ABSTRACT

A random effects model using two levels of hierarchical nesting has been applied to the calculation of a likelihood ratio as a solution to the problem of comparison between two sets of replicated multivariate continuous observations where it is unknown whether the sets of measurements shared a common origin. Replicate measurements from a population of such measurements allow the calculation of both within-group and between-group variances and covariances. The within-group distribution has been modelled assuming a Normal distribution, and the between-group distribution has been modelled using a kernel density estimation procedure. A graphical method of estimating the dependency structure among the variables has been used to reduce this highly multivariate problem to several problems of lower dimension. The approach was tested using a database comprising measurements of eight major elements from each of four fragments from each of 200 glass objects and was found to perform well compared with previous approaches, achieving a 15.2% false-positive rate and a 5.5% false-negative rate. The modelling was then applied to two examples of casework in which glass found at the scene of the criminal activity was compared with glass found in association with a suspect.
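A univariate sketch of the two-level idea is given below. The paper's model is multivariate with a graphical decomposition of the dependency structure; this simplified version, with invented population values and measurement spreads, only illustrates the structure of the calculation: a Normal within-group density and a kernel density estimate for the between-group distribution, integrated numerically to give the likelihood ratio for common versus different origin.

```python
# Minimal univariate sketch (assumed numbers, not the paper's multivariate
# model): two-level likelihood ratio with a Normal within-group
# distribution and a kernel density estimate between groups.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)

# Background population: one summary value per "glass object" (invented).
population_means = rng.normal(1.52, 0.004, size=200)
within_sd = 0.0004                      # assumed within-object standard deviation

kde = gaussian_kde(population_means)    # between-group density estimate

def likelihood_ratio(mean_control, mean_recovered, n_reps=4):
    """LR for common origin vs different origins of two group means."""
    sd_of_mean = within_sd / np.sqrt(n_reps)
    theta = np.linspace(population_means.min() - 0.01,
                        population_means.max() + 0.01, 2000)
    dx = theta[1] - theta[0]
    g = kde(theta)                                    # between-group density
    f1 = norm.pdf(mean_control, theta, sd_of_mean)    # within-group densities
    f2 = norm.pdf(mean_recovered, theta, sd_of_mean)
    numerator = np.sum(f1 * f2 * g) * dx              # common-origin integral
    denominator = (np.sum(f1 * g) * dx) * (np.sum(f2 * g) * dx)
    return numerator / denominator

print(f"LR for close measurements:   {likelihood_ratio(1.5201, 1.5202):.1f}")
print(f"LR for distant measurements: {likelihood_ratio(1.5180, 1.5230):.2g}")
```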

5.
J Forensic Sci; 48(1): 47-54, 2003 Jan.
Article in English | MEDLINE | ID: mdl-12570198

ABSTRACT

Errors in sample handling or test interpretation may cause false positives in forensic DNA testing. This article uses a Bayesian model to show how the potential for a false positive affects the evidentiary value of DNA evidence and the sufficiency of DNA evidence to meet traditional legal standards for conviction. The Bayesian analysis is contrasted with the "false positive fallacy," an intuitively appealing but erroneous alternative interpretation. The findings show the importance of having accurate information about both the random match probability and the false positive probability when evaluating DNA evidence. It is argued that ignoring or underestimating the potential for a false positive can lead to serious errors of interpretation, particularly when the suspect is identified through a "DNA dragnet" or database search, and that ignorance of the true rate of error creates an important element of uncertainty about the value of DNA evidence.


Subjects
Clinical Laboratory Techniques/standards, Forensic Medicine/standards, Bayes Theorem, DNA Fingerprinting/standards, False Positive Reactions, Humans, Likelihood Functions
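The general Bayesian logic described in item 5 can be shown with a small numerical sketch. The numbers and the simplifying assumptions (a match is always reported when the suspect is the source; a false report can arise either by coincidence or by error) are illustrative, not values or formulas quoted from the article.

```python
# Minimal numerical sketch (illustrative numbers, not from the article):
# how a non-zero false positive probability limits the value of a
# reported DNA match in a Bayesian update.
def posterior_prob_source(prior_odds, rmp, fpp):
    """Posterior probability of common source given a reported match.

    rmp: random match probability (coincidental true match)
    fpp: probability of a false positive report (handling/interpretation error)
    """
    p_report_given_source = 1.0                        # simplifying assumption
    p_report_given_not_source = rmp + (1 - rmp) * fpp  # coincidence or error
    lr = p_report_given_source / p_report_given_not_source
    posterior_odds = prior_odds * lr
    return posterior_odds / (1 + posterior_odds)

# With a one-in-a-billion RMP, the effective strength of the evidence is
# capped by the false positive probability, not by the RMP.
print(posterior_prob_source(prior_odds=0.01, rmp=1e-9, fpp=0.0))    # ~0.99999999
print(posterior_prob_source(prior_odds=0.01, rmp=1e-9, fpp=1e-3))   # ~0.91
```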
6.
J Forensic Sci; 47(5): 968-75, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12353583

ABSTRACT

A consignment of individual packages is thought to contain illegal material, such as drugs, in some or all of the packages. A sample from the consignment is inspected and the quantity of drugs in each package of the sample is measured. It is desired to estimate the total quantity of drugs in the consignment. Sampling variation is present in the original measurements and it is not sufficient just to adjust the sample mean pro rata. An analysis is described which takes account of the uncertainty concerning the proportion of the packages that contain drugs and provides a probabilistic summary of the quantity of drugs in the consignment. In particular, a probabilistic lower bound for the quantity of drugs in the consignment is given, which is dependent on the required standard of proof. This is in contrast to the approach based on confidence intervals which assumes that in the long run, the interval will contain the correct quantity the appropriate proportion of the time, but gives no measure of uncertainty associated with the particular consignment under consideration.
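A Monte Carlo sketch of this kind of probabilistic lower bound is given below. The priors, sample data, consignment size, and the normal approximation for the mean quantity are all assumptions for illustration; they are not the paper's analysis, which should be consulted for the actual model.

```python
# Minimal Monte Carlo sketch (assumed priors and data, not the paper's
# analysis): a probabilistic lower bound on the total quantity of drugs
# in a consignment, combining uncertainty about the proportion of
# packages containing drugs with uncertainty about the mean quantity.
import numpy as np

rng = np.random.default_rng(1)

n_total = 1000                            # packages in the consignment (assumed)
sample = rng.normal(2.0, 0.3, size=20)    # measured grams per sampled package
n_with_drugs, n_sampled = 20, 20          # all inspected packages contained drugs

draws = 100_000
# Posterior for the proportion of packages containing drugs (Beta(1,1) prior).
prop = rng.beta(1 + n_with_drugs, 1 + n_sampled - n_with_drugs, size=draws)
# Uncertainty about the mean quantity per package (normal approximation
# based on the sample mean and its standard error).
mean_q = rng.normal(sample.mean(),
                    sample.std(ddof=1) / np.sqrt(len(sample)),
                    size=draws)

total = prop * n_total * mean_q
# The quoted quantile depends on the required standard of proof; here the
# total exceeds the reported bound with 99% posterior probability.
print(f"99% lower bound on total quantity: {np.percentile(total, 1):.0f} g")
```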
