1.
Sci Justice ; 58(3): 200-218, 2018 May.
Article in English | MEDLINE | ID: mdl-29685302

ABSTRACT

When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern relates to the precision of the likelihood-ratio estimate. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments.
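The ELUB idea can be illustrated in a few lines: a likelihood ratio is pulled toward the neutral value of one by clamping it to empirical lower and upper bounds. This is only a sketch of the clamping step; the bound values below are hypothetical, whereas in practice they would be estimated from calibration data.

```python
def clamp_lr(lr, lower, upper):
    """Shrink a likelihood ratio toward 1 by clamping it to empirical
    lower and upper bounds, in the spirit of the ELUB procedure."""
    return min(max(lr, lower), upper)

# Hypothetical bounds; real ELUB bounds come from calibration data.
LOWER, UPPER = 0.05, 40.0

print(clamp_lr(1000.0, LOWER, UPPER))  # capped at the upper bound: 40.0
print(clamp_lr(2.0, LOWER, UPPER))     # inside the bounds: unchanged
```

The effect is that very strong reported LRs are limited to what the calibration data can support, which directly addresses the overestimation concern when sample sizes are small.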

2.
J Biomed Inform ; 76: 69-77, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29042246

ABSTRACT

In order for clinicians to manage disease progression and make effective decisions about drug dosage, treatment regimens or scheduling follow up appointments, it is necessary to be able to identify both short and long-term trends in repeated biomedical measurements. However, this is complicated by the fact that these measurements are irregularly sampled and influenced by both genuine physiological changes and external factors. In their current forms, existing regression algorithms often do not fulfil all of a clinician's requirements for identifying short-term (acute) events while still being able to identify long-term, chronic, trends in disease progression. Therefore, in order to balance both short term interpretability and long term flexibility, an extension to broken-stick regression models is proposed in order to make them more suitable for modelling clinical time series. The proposed probabilistic broken-stick model can robustly estimate both short-term and long-term trends simultaneously, while also accommodating the unequal length and irregularly sampled nature of clinical time series. Moreover, since the model is parametric and completely generative, its first derivative provides a long-term non-linear estimate of the annual rate of change in the measurements more reliably than linear regression. The benefits of the proposed model are illustrated using estimated glomerular filtration rate as a case study used to manage patients with chronic kidney disease.
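The broken-stick idea — one trend before a breakpoint, another after — can be sketched with ordinary least squares on either side of a fixed knot. The paper's model is probabilistic, robust, and estimates the knot; this simplified fixed-knot version only illustrates the piecewise-linear structure, on a synthetic eGFR-like series.

```python
def ols_line(points):
    """Ordinary least squares for y = a + b*t; returns (a, b)."""
    n = len(points)
    mt = sum(t for t, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((t - mt) * (y - my) for t, y in points)
         / sum((t - mt) ** 2 for t, _ in points))
    return my - b * mt, b

def broken_stick(ts, ys, knot):
    """Fit one line either side of a fixed knot (breakpoint)."""
    left = [(t, y) for t, y in zip(ts, ys) if t <= knot]
    right = [(t, y) for t, y in zip(ts, ys) if t > knot]
    return ols_line(left), ols_line(right)

# Synthetic series: stable eGFR, then a decline of 5 units/year after year 4.
ts = list(range(10))
ys = [90.0 if t <= 4 else 90.0 - 5.0 * (t - 4) for t in ts]
(a_l, b_l), (a_r, b_r) = broken_stick(ts, ys, knot=4)
print(b_l, b_r)  # left slope ~0 (stable), right slope ~-5 (chronic decline)
```

The right-hand slope is exactly the kind of long-term rate-of-change estimate the abstract describes, while the left segment preserves the short-term picture before the change.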


Subject(s)
Algorithms , Glomerular Filtration Rate , Models, Theoretical , Probability , Humans , Renal Insufficiency, Chronic/physiopathology
3.
Elife ; 6, 2017 Feb 20.
Article in English | MEDLINE | ID: mdl-28218891

ABSTRACT

Diagnosis and treatment of circadian rhythm sleep-wake disorders both require assessment of circadian phase of the brain's circadian pacemaker. The gold-standard univariate method is based on collection of a 24-hr time series of plasma melatonin, a suprachiasmatic nucleus-driven pineal hormone. We developed and validated a multivariate whole-blood mRNA-based predictor of melatonin phase which requires few samples. Transcriptome data were collected under normal, sleep-deprivation and abnormal sleep-timing conditions to assess robustness of the predictor. Partial least square regression (PLSR), applied to the transcriptome, identified a set of 100 biomarkers primarily related to glucocorticoid signaling and immune function. Validation showed that PLSR-based predictors outperform published blood-derived circadian phase predictors. When given one sample as input, the R2 of predicted vs observed phase was 0.74, whereas for two samples taken 12 hr apart, R2 was 0.90. This blood transcriptome-based model enables assessment of circadian phase from a few samples.
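The PLSR predictor itself is not reproducible from the abstract, but a detail worth making concrete is that circadian phase is an angle: a regression model typically predicts sin(phase) and cos(phase) and the angle is recovered with atan2, and errors must be computed on a 24-h circle. The sketch below shows only this angular bookkeeping, not the paper's transcriptome model.

```python
import math

def to_hours(angle_rad):
    """Convert an angle in radians to clock hours on a 24-h dial."""
    return (angle_rad % (2 * math.pi)) * 24 / (2 * math.pi)

def predict_phase(sin_hat, cos_hat):
    """Combine regression outputs for sin(phase) and cos(phase)
    into a single phase estimate in hours."""
    return to_hours(math.atan2(sin_hat, cos_hat))

def circular_error_hr(pred, obs):
    """Absolute phase error on a 24-h circle (23:00 vs 01:00 is 2 h, not 22)."""
    d = abs(pred - obs) % 24
    return min(d, 24 - d)

phase = 3.0  # hours after some reference time
angle = phase * 2 * math.pi / 24
print(round(predict_phase(math.sin(angle), math.cos(angle)), 6))  # round trip: 3.0
```

Evaluating predicted vs observed phase with a circular error like this avoids the artefact where a prediction of 23:30 against a true 00:30 is scored as a 23-hour miss.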


Subject(s)
Biomarkers/blood , Circadian Rhythm , Gene Expression Profiling , Melatonin/biosynthesis , Humans
4.
Mach Vis Appl ; 28(3): 393-407, 2017.
Article in English | MEDLINE | ID: mdl-32103860

ABSTRACT

Images of the kidneys acquired using dynamic contrast-enhanced magnetic resonance renography (DCE-MRR) contain unwanted, complex organ motion due to respiration. This gives rise to motion artefacts that hinder the clinical assessment of kidney function. However, due to the rapid change in contrast agent within the DCE-MR image sequence, commonly used intensity-based image registration techniques are likely to fail. While semi-automated approaches involving human experts are a possible alternative, they pose significant drawbacks, including inter-observer variability and the bottleneck introduced through manual inspection of the multiplicity of images produced during a DCE-MRR study. To address this issue, we present a novel automated, registration-free movement correction approach based on windowed and reconstruction variants of dynamic mode decomposition (WR-DMD). Our proposed method is validated on ten different healthy volunteers' kidney DCE-MRI data sets. The results, using block-matching-block evaluation on the image sequence produced by WR-DMD, show the elimination of 99% of mean motion magnitude when compared to the original data sets, thereby demonstrating the viability of automatic movement correction using WR-DMD.
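The core of any DMD variant is fitting a linear map between consecutive snapshots through a truncated SVD and reconstructing the sequence from the resulting modes and eigenvalues. The sketch below shows this basic exact-DMD step on a synthetic rank-2 oscillatory sequence; the windowing and reconstruction machinery of WR-DMD, and its application to image data, are not reproduced here.

```python
import numpy as np

def dmd_reconstruct(X, r):
    """Minimal exact DMD: fit X[:, k+1] ~ A @ X[:, k] through a rank-r SVD,
    then rebuild the whole sequence from the DMD modes and eigenvalues."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W       # DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0].astype(complex), rcond=None)[0]
    k = np.arange(X.shape[1])
    return np.real(Phi @ (b[:, None] * eigvals[:, None] ** k[None, :]))

# Rank-2 oscillatory sequence (a stand-in for a periodic image feature).
k = np.arange(30)
X = np.stack([np.cos(0.4 * k), np.sin(0.4 * k), np.cos(0.4 * k + 1.0)])
X_hat = dmd_reconstruct(X, r=2)
print(np.allclose(X_hat, X, atol=1e-6))  # the rank-2 dynamics are recovered
```

Keeping only the dominant modes is what lets a DMD-based method separate slowly evolving contrast dynamics from higher-frequency respiratory motion.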

5.
J Innov Health Inform ; 22(2): 293-301, 2015 Apr 14.
Article in English | MEDLINE | ID: mdl-26245243

ABSTRACT

INTRODUCTION: Renal function is reported using estimates of glomerular filtration rate (eGFR). However, eGFR values are recorded without reference to the particular serum creatinine (SCr) assays used to derive them, and newer assays were introduced at different time points across the laboratories in the United Kingdom. These changes may cause systematic bias in eGFR reported in routinely collected data, even though laboratory-reported eGFR values have a correction factor applied. DESIGN: An algorithm to detect changes in SCr that in turn affect the eGFR calculation method was developed. It compares the mapping of SCr values on to eGFR values across a time series of paired eGFR and SCr measurements. SETTING: Routinely collected primary care data from 20,000 people with the richest renal function data from the quality improvement in chronic kidney disease trial. RESULTS: The algorithm identified a change in eGFR calculation method in 114 (90%) of the 127 included practices. This change was identified in 4736 (23.7%) patient time series analysed. This change in calculation method was found to cause a significant step change in the reported eGFR values, producing a systematic bias. The eGFR values could not be recalibrated by applying the Modification of Diet in Renal Disease equation to the laboratory reported SCr values. CONCLUSIONS: This algorithm can identify laboratory changes in eGFR calculation methods and changes in SCr assay. Failure to account for these changes may misconstrue renal function changes over time. Researchers using routine eGFR data should account for these effects.
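The detection idea — comparing reported eGFR against what an equation predicts from the paired SCr value — can be sketched as follows. This uses the 4-variable MDRD study equation (IDMS-traceable creatinine in mg/dL) as the reference mapping; the trial's actual algorithm, which analyses the mapping across each patient's whole time series, is not reproduced.

```python
def mdrd_egfr(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD study equation (IDMS-traceable creatinine, mg/dL)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def flag_method_change(scr_series, egfr_series, age, tol=2.0):
    """Flag time points where the reported eGFR no longer matches what the
    MDRD equation predicts from the paired SCr value."""
    return [abs(mdrd_egfr(s, age) - e) > tol
            for s, e in zip(scr_series, egfr_series)]

scr = [1.0, 1.0, 1.0]
reported = [mdrd_egfr(1.0, 60), mdrd_egfr(1.0, 60), mdrd_egfr(1.0, 60) + 8.0]
print(flag_method_change(scr, reported, age=60))  # [False, False, True]
```

A sudden, sustained mismatch between the two series is exactly the step change the abstract describes: the SCr assay or eGFR formula changed, not the patient's kidneys.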


Subject(s)
Automation, Laboratory , Creatinine/blood , Electronic Health Records , Health Information Exchange , Kidney Failure, Chronic/blood , Kidney Failure, Chronic/therapy , Kidney Function Tests/methods , Quality Improvement , Aged , Aged, 80 and over , Algorithms , England , Female , Glomerular Filtration Rate/physiology , Humans , Longitudinal Studies , Male , Middle Aged , Primary Health Care
6.
Stud Health Technol Inform ; 180: 1105-7, 2012.
Article in English | MEDLINE | ID: mdl-22874368

ABSTRACT

BACKGROUND: Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. METHODS: Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. RESULTS: We have developed a requirements analysis approach: i-ScheDULEs. The first components of the modeling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). DISCUSSION: These requirements and their associated models should become part of research study protocols.


Subject(s)
Biomedical Research/methods , Database Management Systems , Electronic Health Records , Health Records, Personal , Information Storage and Retrieval/methods , Medical Record Linkage/methods , Vocabulary, Controlled , Models, Theoretical , United Kingdom
7.
Inform Prim Care ; 19(2): 57-63, 2011.
Article in English | MEDLINE | ID: mdl-22417815

ABSTRACT

BACKGROUND: Personalised medicine involves customising management to meet patients' needs. In chronic kidney disease (CKD) at the population level there is steady decline in renal function with increasing age; and progressive CKD has been defined as marked variation from this rate of decline. OBJECTIVE: To create visualisations of individual patients' renal function and display smoothed trend lines and confidence intervals for their renal function and other important covariates. METHOD: We applied advanced pattern recognition techniques developed in biometrics to routinely collected primary care data from the Quality Improvement in Chronic Kidney Disease (QICKD) trial. We plotted trend lines, using regression, and confidence intervals for individual patients. We also created a visualisation which allowed renal function to be compared with six other covariates: glycated haemoglobin (HbA1c), body mass index (BMI), blood pressure (BP), and therapy. The outputs were reviewed by an expert panel. RESULTS: We successfully extracted and displayed data. We demonstrated that estimated glomerular filtration rate (eGFR) is a noisy variable, and showed that a large number of people would exceed the 'progressive CKD' criteria. We created a data display that could be readily automated. This display was well received by our expert panel but requires extensive development before testing in a clinical setting. CONCLUSIONS: It is feasible to utilise data visualisation methods developed in biometrics to look at CKD data. The criteria for defining 'progressive CKD' need revisiting, as many patients exceed them. Further development work and testing is needed to explore whether this type of data modelling and visualisation might improve patient care.
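The trend line plus confidence interval for an individual patient can be sketched with ordinary least squares: the slope of eGFR against time, with a standard-error-based interval that widens when the series is noisy. This is a plain OLS sketch, not the smoothed biometrics-style estimator the study used.

```python
def slope_with_ci(ts, ys, t_crit=2.0):
    """OLS slope for y = a + b*t with an approximate 95% confidence
    interval (t_crit ~ 2 for moderately sized series)."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    stt = sum((t - mt) ** 2 for t in ts)
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / stt
    a = my - b * mt
    sse = sum((y - (a + b * t)) ** 2 for t, y in zip(ts, ys))
    se = (sse / (n - 2) / stt) ** 0.5
    return b, (b - t_crit * se, b + t_crit * se)

# Noiseless decline of 4 eGFR units per year: the CI collapses onto the slope.
ts = [0, 1, 2, 3, 4, 5]
ys = [90 - 4 * t for t in ts]
b, (lo, hi) = slope_with_ci(ts, ys)
print(b, lo, hi)
```

Because eGFR is noisy, a patient's point estimate of decline can exceed a 'progressive CKD' threshold while the interval still comfortably includes ordinary age-related decline — which is why displaying the interval, not just the trend, matters.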


Subject(s)
Biometry/methods , Kidney Failure, Chronic/therapy , Precision Medicine , Primary Health Care , Aged , Aged, 80 and over , Aging/physiology , Biomarkers/analysis , Female , Glomerular Filtration Rate , Humans , Kidney Failure, Chronic/physiopathology , Male , Middle Aged , Pattern Recognition, Automated , Pilot Projects , Quality Improvement
8.
IEEE Trans Pattern Anal Mach Intell ; 32(6): 1097-111, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20431134

ABSTRACT

A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises data from more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Also, signature and fingerprint data have been acquired both with desktop PC and mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using a desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign including baseline results of individual modalities from the new database is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.


Subject(s)
Biometric Identification , Data Interpretation, Statistical , Database Management Systems , Databases, Factual , Dermatoglyphics , Face , Female , Humans , Iris , Male , Reproducibility of Results , Voice
9.
IEEE Trans Pattern Anal Mach Intell ; 29(3): 492-8, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17224618

ABSTRACT

Biometric authentication performance is often depicted by a detection error trade-off (DET) curve. We show that this curve is dependent on the choice of available samples, the demographic composition, and the number of users specific to a database. We propose a two-step bootstrap procedure to take into account the three mentioned sources of variability. This extends the bootstrap subset technique of Bolle et al. Preliminary experiments on the NIST2005 and XM2VTS benchmark databases are encouraging, e.g., the average result across all 24 systems evaluated on NIST2005 indicates that one can predict, with more than 75 percent of DET coverage, an unseen DET curve with eight times more users. Furthermore, our finding suggests that with more data available, the confidence intervals become smaller and, hence, more useful.
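The two-step structure — resample users first, then resample each chosen user's scores — can be sketched as below. For brevity the bootstrapped statistic here is just the mean match score rather than a full DET curve, and the data are made up; the structure of the resampling is the point.

```python
import random

def two_step_bootstrap(scores_by_user, rng):
    """One bootstrap replicate: resample users with replacement, then
    resample each chosen user's scores with replacement (two-step)."""
    users = list(scores_by_user)
    replicate = []
    for _ in users:
        u = rng.choice(users)              # step 1: which users are in the replicate
        scores = scores_by_user[u]
        replicate.extend(rng.choices(scores, k=len(scores)))  # step 2: their scores
    return replicate

rng = random.Random(0)
data = {"u1": [0.20, 0.30], "u2": [0.80, 0.70], "u3": [0.50, 0.40]}
means = sorted(sum(r) / len(r) for r in
               (two_step_bootstrap(data, rng) for _ in range(200)))
ci = (means[4], means[194])  # ~95% percentile interval for the mean score
print(ci)
```

Resampling at the user level as well as the sample level is what lets the interval reflect between-user variability — the dominant source of uncertainty when a database has few users.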


Subject(s)
Algorithms , Artificial Intelligence , Biometry/methods , Face/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Speech Recognition Software , Computer Simulation , Humans , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity