1.
J Med Syst; 46(5): 22, 2022 Mar 25.
Article in English | MEDLINE | ID: mdl-35338425

ABSTRACT

Cardiac structure contouring is a time-consuming and tedious manual activity used for radiotherapeutic dose-toxicity planning. We developed an automatic cardiac structure segmentation pipeline for use in low-dose non-contrast planning CT, based on deep learning algorithms for small datasets. Fifty CT scans were retrospectively selected, and the whole heart, ventricles and atria were contoured. A two-stage deep learning pipeline was trained on 41 non-contrast planning CTs, tuned with 3 CT scans and validated on 6 CT scans. In the first stage, an InceptionResNetV2 network was used to identify the slices that contained cardiac structures. The second stage consisted of three deep learning models, trained on the images containing cardiac structures, that segmented those structures. The three models predicted the segmentations/contours on axial, coronal and sagittal images, and their outputs were combined to create the final prediction. The final accuracy of the pipeline was quantified on 6 volumes by calculating the Dice similarity coefficient (DC), 95% Hausdorff distance (95% HD) and volume ratios between predicted and ground-truth volumes. Median DC values of 0.96, 0.88, 0.92, 0.80 and 0.82, and median 95% HD values of 1.86, 2.98, 2.02, 6.16 and 6.46, were achieved for the whole heart, right and left ventricle, and right and left atrium, respectively. The median differences in volume were -4%, -1%, +5%, -16% and -20% for the whole heart, right and left ventricle, and right and left atrium, respectively. The automatic contouring pipeline achieves good results for the whole heart and ventricles. Robust automatic contouring with deep learning methods seems viable for local centers with small datasets.
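For orientation, a minimal sketch of the reported evaluation metrics, assuming binary masks stored as NumPy arrays; the function names are illustrative and not taken from the paper's code, and distances come out in voxels unless scaled by the voxel spacing:

```python
# Hedged sketch of the evaluation metrics named in the abstract (DC,
# 95% HD, volume ratio) for binary segmentation masks. Illustrative
# names; not the authors' implementation.
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of boundary voxels (mask minus its erosion)."""
    eroded = ndimage.binary_erosion(mask)
    return np.argwhere(mask & ~eroded)

def hausdorff_95(pred: np.ndarray, truth: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance, in voxels.
    Multiply by the voxel spacing to obtain millimetres."""
    p = surface_points(pred.astype(bool))
    t = surface_points(truth.astype(bool))
    d = cdist(p, t)  # all pairwise surface-to-surface distances
    directed = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(directed, 95))

def volume_difference_pct(pred: np.ndarray, truth: np.ndarray) -> float:
    """Relative volume difference in percent (e.g. -4 for whole heart)."""
    return 100.0 * (pred.sum() - truth.sum()) / truth.sum()
```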


Subject(s)
Deep Learning; Algorithms; Heart/diagnostic imaging; Heart Ventricles/diagnostic imaging; Humans; Retrospective Studies
2.
J Digit Imaging; 35(2): 240-247, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35083620

ABSTRACT

Organs-at-risk contouring is time-consuming and labour-intensive. Automation by deep learning algorithms would considerably decrease the workload of radiotherapists and technicians. However, the variety of metrics used for the evaluation of deep learning algorithms makes the results of many papers difficult to interpret and compare. In this paper, a qualitative evaluation of five established metrics is performed to assess whether their values correlate with clinical usability. A total of 377 CT volumes with heart delineations were randomly selected for training and evaluation. A deep learning algorithm was used to predict the contours of the heart. A total of 101 CT slices from the validation set, with the predicted contours, were shown to three experienced radiologists. They independently examined each slice and judged whether they would accept or adjust the prediction, and whether it contained (small) mistakes. For each slice, the scores of this qualitative evaluation were then compared with the Sørensen-Dice coefficient (DC), the Hausdorff distance (HD), pixel-wise accuracy, sensitivity and precision. The statistical analysis of the qualitative evaluation and the metrics showed a significant correlation. Of the slices with a DC over 0.96 (N = 20) or a 95% HD under 5 voxels (N = 25), none were rejected by the readers. Contours with a lower DC or a higher HD occurred among both rejected and accepted contours. The qualitative evaluation shows that common quantitative metrics are difficult to use as indicators of clinical usability. We may need to change the reporting of quantitative metrics to better reflect clinical acceptance.
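A hedged sketch of the per-slice pixel-wise metrics, assuming binary masks as NumPy arrays; the thresholds in the helper merely echo the reported observation, and all names are illustrative:

```python
# Sketch of the pixel-wise metrics compared against reader acceptance.
# Not the authors' code; a minimal illustration under the assumption
# of per-slice binary masks.
import numpy as np

def pixel_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise accuracy, sensitivity and precision for one slice."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # recall over ground-truth pixels
        "precision": tp / (tp + fp),
    }

def in_no_rejection_region(dc: float, hd95_voxels: float) -> bool:
    """Region in which the study observed no reader rejections
    (DC > 0.96 or 95% HD < 5 voxels). Outside it, both accepted and
    rejected contours occurred, so this is one-directional evidence,
    not a classifier for clinical acceptance."""
    return dc > 0.96 or hd95_voxels < 5.0
```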


Subject(s)
Deep Learning; Algorithms; Benchmarking; Humans; Organs at Risk; Tomography, X-Ray Computed/methods
3.
Forensic Sci Rev; 30(1): 21-32, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29273569

ABSTRACT

This paper surveys the literature on forensic face recognition (FFR), with a particular focus on the strength of evidence as used in a court of law. FFR is the use of biometric face recognition for several applications in forensic science, including ID verification and open-set identification, investigation and intelligence, and evaluation of the strength of evidence. We present FFR from operational, tactical, and strategic perspectives, discuss criticism of FFR, and provide an overview of research efforts from multiple perspectives that relate to the domain. Finally, we sketch possible future directions for FFR.
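In forensic science, the strength of evidence is commonly expressed as a likelihood ratio. The following is an illustrative sketch of a score-based formulation; the Gaussian score model is an assumption made here for illustration and is not a method taken from the review:

```python
# Illustrative sketch: likelihood ratio of a face-comparison score
# under same-source vs different-source hypotheses, with each score
# distribution modelled as a fitted Gaussian (an assumption).
import numpy as np
from scipy.stats import norm

def likelihood_ratio(score: float,
                     same_source_scores: np.ndarray,
                     diff_source_scores: np.ndarray) -> float:
    """LR = p(score | same source) / p(score | different source).
    LR > 1 supports the same-source hypothesis; LR < 1 the opposite."""
    p_same = norm.pdf(score, np.mean(same_source_scores),
                      np.std(same_source_scores))
    p_diff = norm.pdf(score, np.mean(diff_source_scores),
                      np.std(diff_source_scores))
    return float(p_same / p_diff)
```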


Subject(s)
Biometric Identification; Face/anatomy & histology; Datasets as Topic; Expert Testimony; Forensic Sciences; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Professional Competence; Research/trends
4.
J Acoust Soc Am; 109(5 Pt 1): 2085-97, 2001 May.
Article in English | MEDLINE | ID: mdl-11386560

ABSTRACT

In both speech synthesis and sound coding, it is often beneficial to have a measure that predicts whether, and to what extent, two sounds are different. This paper addresses the problem of estimating the perceptual effects of small modifications to the spectral envelope of a harmonic sound. A recently proposed auditory model, which transforms the physical spectrum into a pattern of specific loudness as a function of critical-band rate, is investigated. A distance measure based on the concept of partial loudness is presented, which treats detectability in terms of a partial-loudness threshold. This approach is adapted to the problem of estimating discrimination thresholds related to modifications of the spectral envelope of synthetic vowels. Data obtained from subjective listening tests, using a representative set of stimuli in a 3IFC (three-interval forced-choice) adaptive procedure, show that the model makes reasonably good predictions of the discrimination threshold. Systematic deviations from the predicted thresholds may be related to individual differences in auditory filter selectivity. The partial-loudness measure is compared with previously proposed distance measures, namely the Euclidean distance between excitation patterns and between specific-loudness patterns, applied to the same experimental data. An objective test measure shows that the partial-loudness measure and the Euclidean distance between excitation patterns are equally appropriate as distance measures for predicting audibility thresholds; the Euclidean distance between specific-loudness patterns performs worse than the other two.
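A minimal sketch of the simpler of the compared measures, assuming the auditory model's outputs are sampled on a common critical-band-rate axis as NumPy arrays; the names are illustrative:

```python
# Sketch of the Euclidean distance measures referred to in the
# abstract. The patterns (excitation or specific loudness) are assumed
# to be produced by the auditory model and sampled on the same
# critical-band-rate axis; computing them requires the model itself.
import numpy as np

def euclidean_distance(pattern_a: np.ndarray,
                       pattern_b: np.ndarray) -> float:
    """Euclidean distance between two patterns sampled at the same
    critical-band rates, used as a predictor of audibility."""
    return float(np.linalg.norm(pattern_a - pattern_b))

# The partial-loudness measure differs in kind: it estimates how loud
# the difference between the two sounds would be when heard in the
# presence of the reference sound, and declares the modification
# detectable once that partial loudness exceeds a threshold.
```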


Subject(s)
Auditory Threshold/physiology; Speech Perception/physiology; Humans; Loudness Perception/physiology; Noise; Perceptual Masking/physiology; Phonetics; Psychoacoustics; Speech/physiology
5.
J Acoust Soc Am; 103(1): 566-71, 1998 Jan.
Article in English | MEDLINE | ID: mdl-9440341

ABSTRACT

An alternative to the Liljencrants-Fant (LF) glottal-pulse model is presented. Because this alternative is derived from the Rosenberg model, it is called the Rosenberg++ model. The derivation uses a general framework for glottal-pulse models. The Rosenberg++ model is described by the same set of T or R parameters as the LF model, but has the advantage of being computationally more efficient. It is compared with the LF model in a psychoacoustic experiment, from which it is concluded that, in a practical situation, it can produce synthetic speech that is perceptually equivalent to speech generated with the LF model.
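The abstract does not give the Rosenberg++ equations; for orientation, a sketch of the classical Rosenberg pulse from which it is derived, with opening time Tp and closing time Tn expressed as fractions of the pitch period:

```python
# Sketch of the classical Rosenberg glottal flow pulse (the base model
# from which Rosenberg++ is derived; the Rosenberg++ form itself is
# not given in the abstract).
import numpy as np

def rosenberg_pulse(t: np.ndarray, tp: float, tn: float) -> np.ndarray:
    """Rosenberg glottal flow over one normalized period 0 <= t < 1.

    Rising phase:  0.5 * (1 - cos(pi * t / tp))   for 0 <= t < tp
    Falling phase: cos(pi * (t - tp) / (2 * tn))  for tp <= t < tp + tn
    Closed phase:  0                              elsewhere
    """
    g = np.zeros_like(t)
    rise = (t >= 0.0) & (t < tp)
    fall = (t >= tp) & (t < tp + tn)
    g[rise] = 0.5 * (1.0 - np.cos(np.pi * t[rise] / tp))
    g[fall] = np.cos(np.pi * (t[fall] - tp) / (2.0 * tn))
    return g

# Example: one period sampled at 200 points, with 40% of the period
# spent opening and 16% closing (illustrative parameter values).
t = np.linspace(0.0, 1.0, 200, endpoint=False)
pulse = rosenberg_pulse(t, tp=0.40, tn=0.16)
```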


Subject(s)
Auditory Perception/physiology; Electronic Data Processing; Models, Biological; Humans