Results 1 - 13 of 13
1.
Radiother Oncol ; 180: 109483, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36690302

ABSTRACT

BACKGROUND AND PURPOSE: The aim of this study was to develop and evaluate a prediction model for 2-year overall survival (OS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients who received definitive radiotherapy, based on clinical variables and image features from pre-treatment CT scans. MATERIALS AND METHODS: NSCLC patients who received stereotactic radiotherapy were prospectively collected at the UMCG and split into a training set and a hold-out test set of 189 and 81 patients, respectively. External validation was performed on 228 NSCLC patients who were treated with radiation or concurrent chemoradiation at the Maastro clinic (Lung1 dataset). A hybrid model that integrated both image and clinical features was implemented using deep learning. Image features were learned from cubic patches containing lung tumours extracted from pre-treatment CT scans. Relevant clinical variables were selected by univariable and multivariable analyses. RESULTS: Multivariable analysis showed that age and clinical stage were significant prognostic clinical factors for 2-year OS. Using these two clinical variables in combination with image features from pre-treatment CT scans, the hybrid model achieved a median AUC of 0.76 [95% CI: 0.65-0.86] and 0.64 [95% CI: 0.58-0.70] on the complete UMCG and Maastro test sets, respectively. The Kaplan-Meier survival curves showed significant separation between low and high mortality risk groups on these two test sets (log-rank test: p-value < 0.001 and p-value = 0.012, respectively). CONCLUSION: We demonstrated that a hybrid model can achieve reasonable performance by utilizing both clinical and image features for 2-year OS prediction. Such a model has the potential to identify patients with high mortality risk and guide clinical decision making.
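
The abstract does not give implementation details; as a rough illustration only, the sketch below shows one way such a hybrid network could fuse features from a 3D CT patch with clinical variables (age, stage) for binary 2-year OS prediction. All layer sizes, names, and the patch size are assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a hybrid image + clinical model (not the authors' code).
import torch
import torch.nn as nn

class HybridOSModel(nn.Module):
    def __init__(self, n_clinical=2):
        super().__init__()
        # Small 3D CNN encoder for a cubic tumour patch, e.g. 1 x 64 x 64 x 64.
        self.image_encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),          # -> 32 image features
        )
        # Fuse image features with clinical variables (e.g. age, clinical stage).
        self.classifier = nn.Sequential(
            nn.Linear(32 + n_clinical, 16), nn.ReLU(),
            nn.Linear(16, 1),                               # logit for 2-year OS
        )

    def forward(self, patch, clinical):
        features = self.image_encoder(patch)
        return self.classifier(torch.cat([features, clinical], dim=1))

model = HybridOSModel()
logit = model(torch.randn(4, 1, 64, 64, 64), torch.randn(4, 2))  # batch of 4 patients
```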


Subjects
Non-Small-Cell Lung Carcinoma, Deep Learning, Lung Neoplasms, Humans, Non-Small-Cell Lung Carcinoma/therapy, Non-Small-Cell Lung Carcinoma/drug therapy, Lung Neoplasms/pathology, Neoplasm Staging, X-Ray Computed Tomography/methods, Retrospective Studies
2.
Eur Radiol ; 32(9): 6384-6396, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35362751

ABSTRACT

OBJECTIVE: To develop an automatic COVID-19 Reporting and Data System (CO-RADS)-based classification in a multi-demographic setting. METHODS: This retrospective study, approved by multiple institutional review boards, included 2720 chest CT scans (mean age, 58 years [range 18-100 years]) from Italian and Russian patients. Three board-certified radiologists from three countries assessed randomly selected subcohorts from each population and provided CO-RADS-based annotations. CT radiomic features were extracted from the selected subcohorts after preprocessing steps such as lung lobe segmentation and automatic noise reduction. We compared three machine learning models, logistic regression (LR), multilayer perceptron (MLP), and random forest (RF), for the automated CO-RADS classification. Model evaluation was carried out in two scenarios: first, training on a mixed multi-demographic subcohort and testing on an independent hold-out dataset; second, training on a single demographic group and external validation on the other group. RESULTS: The overall inter-observer agreement for the CO-RADS scoring between the radiologists was substantial (k = 0.80). Irrespective of the validation scenario, suspected COVID-19 CT scans were identified with an accuracy of 84%. SHapley Additive exPlanations (SHAP) interpretation showed that the "wavelet_(LH)_GLCM_Imc1" feature had a positive impact on COVID-19 prediction both with and without noise reduction. The application of noise reduction improved the overall performance across all classifiers. CONCLUSION: Using an automated model based on the COVID-19 Reporting and Data System (CO-RADS), we achieved clinically acceptable performance in a multi-demographic setting. This approach can serve as a standardized tool for automated COVID-19 assessment. KEY POINTS: • Automatic CO-RADS scoring of large-scale multi-demographic chest CTs with a mean AUC of 0.93 ± 0.04. • The validation procedure resembles TRIPOD type 2b and 3 categories, enhancing the quality of the experimental design by testing the cross-dataset domain shift between institutions and aiding clinical integration. • Identification of COVID-19 pneumonia in the presence of community-acquired pneumonia and other comorbidities with an AUC of 0.92.
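
For illustration, a comparison of the three classifier families named in the abstract on a radiomics feature table could look roughly like the sketch below. The feature matrix, labels, and hyperparameters are placeholders, not the study's pipeline.

```python
# Illustrative comparison of logistic regression, MLP, and random forest classifiers
# on a radiomics feature matrix (placeholder data, not the study's dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # placeholder radiomics features
y = rng.integers(0, 2, size=200)        # placeholder CO-RADS-derived labels

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000)),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC {auc.mean():.2f} +/- {auc.std():.2f}")
```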


Subjects
COVID-19, Pneumonia, Adolescent, Adult, Aged, Aged 80 and over, Demography, Humans, Middle Aged, Retrospective Studies, X-Ray Computed Tomography/methods, Young Adult
3.
J Digit Imaging ; 35(3): 538-550, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35182291

ABSTRACT

The objective of this study was to evaluate the feasibility of a disease-specific deep learning (DL) model based on minimum intensity projection (minIP) for automated emphysema detection in low-dose computed tomography (LDCT) scans. LDCT scans of 240 individuals from a population-based cohort in the Netherlands (ImaLife study, mean age ± SD = 57 ± 6 years) were retrospectively chosen for training and internal validation of the DL model. For independent testing, LDCT scans of 125 individuals from a lung cancer screening cohort in the USA (NLST study, mean age ± SD = 64 ± 5 years) were used. A dichotomous emphysema diagnosis based on radiologists' annotations was used to develop the model. The automated pipeline included minIP processing (slab thickness range: 1 mm to 11 mm), classification, and detection map generation. The data split for the pipeline evaluation involved class-balanced and imbalanced settings. The proposed DL pipeline showed the highest performance (area under the receiver operating characteristic curve) for the 11 mm slab thickness in both the balanced (ImaLife = 0.90 ± 0.05) and the imbalanced dataset (NLST = 0.77 ± 0.06). For the ImaLife subcohort, increasing the minIP slab thickness from 1 to 11 mm increased the DL model's sensitivity from 75% to 88% and decreased the number of false-negative predictions from 10 to 5. The minIP-based DL model can automatically detect emphysema in LDCT scans. The performance of thicker minIP slabs was better than that of thinner slabs. LDCT can be leveraged for emphysema detection by applying disease-specific augmentation.
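
Minimum intensity projection over a slab is conceptually simple; as a minimal sketch, the code below computes minIP slices for a given slab thickness from a CT volume. The isotropic 1 mm slice spacing and the centred-slab convention are assumptions for illustration, not details from the paper.

```python
# Sketch: minimum intensity projection (minIP) over an axial slab.
import numpy as np

def min_ip(volume, slab_thickness_mm, slice_spacing_mm=1.0):
    """Return a minIP volume where each slice is the voxel-wise minimum
    over a slab centred on that slice (edge slabs are clipped)."""
    half = max(int(round(slab_thickness_mm / slice_spacing_mm)) // 2, 0)
    n_slices = volume.shape[0]
    out = np.empty_like(volume)
    for z in range(n_slices):
        lo, hi = max(0, z - half), min(n_slices, z + half + 1)
        out[z] = volume[lo:hi].min(axis=0)
    return out

ct = np.random.randint(-1000, 400, size=(120, 64, 64)).astype(np.int16)  # dummy volume (HU)
minip_11mm = min_ip(ct, slab_thickness_mm=11)   # 11 mm slab, the best-performing setting
```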


Assuntos
Enfisema , Enfisema Pulmonar , Tomografia Computadorizada por Raios X , Inteligência Artificial , Enfisema/diagnóstico por imagem , Humanos , Enfisema Pulmonar/diagnóstico por imagem , Estudos Retrospectivos , Tomografia Computadorizada por Raios X/métodos
4.
Eur J Radiol ; 146: 110068, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34871936

ABSTRACT

OBJECTIVE: To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS: One hundred and eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. Detection performance was evaluated with the free-response receiver operating characteristic (FROC) curve, sensitivity, and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid), and Lung-RADS. RESULTS: The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules ≥ 4 mm and ≤ 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9%, respectively; P = 0.001). Sixty-three nodules were identified only by the DL-CAD system, and 27 nodules only by double reading. The DL-CAD system reached performance similar to double reading for Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed higher sensitivity for Lung-RADS 2 nodules (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS: The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 FP per scan, and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
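
For context, the nodule-level sensitivity and FP/scan rate reported here are straightforward to compute once each CAD finding has been matched against the reference standard. The sketch below assumes a simple centre-distance matching rule, which is an illustration rather than the study's actual matching criterion.

```python
# Illustrative computation of sensitivity and FPs/scan from detections and a reference standard.
def evaluate(detections, reference, n_scans, match_radius_mm=5.0):
    """detections/reference: lists of (scan_id, x, y, z) tuples with coordinates in mm."""
    hit = set()            # indices of reference nodules that were found
    false_positives = 0
    for scan_id, x, y, z in detections:
        matched = False
        for i, (ref_scan, rx, ry, rz) in enumerate(reference):
            distance = ((x - rx) ** 2 + (y - ry) ** 2 + (z - rz) ** 2) ** 0.5
            if ref_scan == scan_id and distance <= match_radius_mm:
                hit.add(i)
                matched = True
                break
        if not matched:
            false_positives += 1
    sensitivity = len(hit) / len(reference)
    return sensitivity, false_positives / n_scans

print(evaluate([(1, 10, 10, 10), (1, 90, 90, 90)], [(1, 11, 10, 9)], n_scans=1))
```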


Subjects
Deep Learning, Lung Neoplasms, Solitary Pulmonary Nodule, China/epidemiology, Early Detection of Cancer, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation, Reproducibility of Results, Sensitivity and Specificity, Solitary Pulmonary Nodule/diagnostic imaging, X-Ray Computed Tomography
5.
PLoS One ; 16(10): e0259036, 2021.
Article in English | MEDLINE | ID: mdl-34705870

ABSTRACT

The color of particular parts of a flower is often employed as one of the features to differentiate between flower types, and color is therefore also used in flower-image classification. Color labels, such as 'green', 'red', and 'yellow', are used by taxonomists and lay people alike to describe the color of plants. Flower-image datasets usually consist only of images and do not contain flower descriptions. In this research, we built a flower-image dataset, focused on orchid species, which consists of human-friendly textual descriptions of features of specific flowers on the one hand, and digital photographs showing what a flower looks like on the other. Using this dataset, a new automated color detection model was developed. It is the first research of its kind to use color labels and deep learning for color detection in flower recognition. As deep learning often excels at pattern recognition in digital images, we applied transfer learning with various amounts of layer unfreezing to five different neural network architectures (VGG16, Inception, ResNet50, Xception, NASNet) to determine which architecture and which transfer-learning scheme performs best. In addition, various color-scheme scenarios were tested, including the use of primary and secondary colors together, and the effectiveness of multi-class, combined binary, and ensemble classifiers for multi-class classification was studied. The best overall performance was achieved by the ensemble classifier. The results show that the proposed method can detect the color of the flower and labellum very well without having to perform image segmentation. The results of this study can serve as a foundation for the development of an image-based plant recognition system that is able to offer an explanation for a provided classification.
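
A transfer-learning setup with partial unfreezing of a pretrained backbone, as described here, could look roughly like the Keras sketch below. The number of unfrozen layers, the classification head, and the class count are placeholders, not the study's configuration.

```python
# Sketch: transfer learning with partial unfreezing of a pretrained VGG16 backbone.
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

N_COLOR_CLASSES = 10          # placeholder number of color labels
UNFREEZE_LAST = 4             # how many backbone layers to fine-tune

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers[:-UNFREEZE_LAST]:
    layer.trainable = False   # freeze all but the last few layers of the backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_COLOR_CLASSES, activation="softmax"),
])
model.compile(optimizer=optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=..., epochs=...)
```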


Subjects
Color, Deep Learning, Flowers, Plants/classification, Algorithms
6.
Phys Med Biol ; 66(11), 2021 May 26.
Article in English | MEDLINE | ID: mdl-33906186

ABSTRACT

Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches achieved a breakthrough by using anatomical information, which is a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation that systematically summarizes anatomical information categories and the corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodological overview of using anatomical information with DL, drawn from over 70 papers. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.


Subjects
Deep Learning, Computer-Assisted Image Processing
7.
Med Phys ; 48(2): 733-744, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33300162

ABSTRACT

PURPOSE: Early detection of lung cancer is important because it can increase patients' chances of survival. To detect nodules accurately during screening, radiologists commonly take the axial, coronal, and sagittal planes into account in clinical evaluation, rather than solely the axial plane. Inspired by this clinical practice, this paper aims to develop an accurate deep learning framework for nodule detection based on a combination of multiple planes. METHODS: The nodule detection system is designed in two stages: multiplanar nodule candidate detection and multiscale false-positive (FP) reduction. In the first stage, a deeply supervised encoder-decoder network is trained on axial, coronal, and sagittal slices for the candidate detection task, and all possible nodule candidates from the three planes are merged. To further refine the results, a three-dimensional multiscale dense convolutional neural network that extracts multiscale contextual information is applied to remove non-nodules. From the public LIDC-IDRI dataset, 888 computed tomography scans with 1186 nodules accepted by at least three of four radiologists were selected to train and evaluate the proposed system via a tenfold cross-validation scheme. The free-response receiver operating characteristic curve was used for performance assessment. RESULTS: The proposed system achieves a sensitivity of 94.2% with 1.0 FP/scan and a sensitivity of 96.0% with 2.0 FPs/scan. Although small nodules (i.e., <6 mm) are difficult to detect, our CAD system reaches a sensitivity of 93.4% (95.0%) for these small nodules at an overall FP rate of 1.0 (2.0) FPs/scan. At the nodule candidate detection stage, the results show that the multiplanar system is capable of detecting more nodules than a single-plane approach. CONCLUSION: Our approach achieves good performance on this dataset not only for small nodules but also for large lesions. This demonstrates the effectiveness of the developed CAD system for lung nodule detection.
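
A central step described here is merging candidates detected independently on the axial, coronal, and sagittal planes. The sketch below shows one simple way to merge 3D candidate coordinates that fall within a distance threshold; the merging rule and threshold are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch: merge nodule candidates found on different planes if they are close in 3D.
import numpy as np

def merge_candidates(candidates_per_plane, merge_distance_mm=5.0):
    """candidates_per_plane: list of arrays of shape (n_i, 3) with (x, y, z) in mm."""
    merged = []
    for candidates in candidates_per_plane:
        for point in candidates:
            for i, existing in enumerate(merged):
                if np.linalg.norm(point - existing) <= merge_distance_mm:
                    merged[i] = (existing + point) / 2.0   # fuse into the midpoint
                    break
            else:
                merged.append(point.astype(float))
    return np.array(merged)

axial = np.array([[10.0, 20.0, 30.0], [50.0, 50.0, 50.0]])
coronal = np.array([[11.0, 21.0, 29.0]])                   # same nodule as the first axial hit
sagittal = np.array([[80.0, 10.0, 15.0]])
print(merge_candidates([axial, coronal, sagittal]))        # -> 3 merged candidates
```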


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung, Lung Neoplasms/diagnostic imaging, Neural Networks (Computer), Computer-Assisted Radiographic Image Interpretation, Solitary Pulmonary Nodule/diagnostic imaging, X-Ray Computed Tomography
8.
Comput Methods Programs Biomed ; 196: 105620, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32615493

ABSTRACT

BACKGROUND AND OBJECTIVE: To investigate the effect of the slab thickness of maximum intensity projections (MIPs) on the candidate detection performance of a deep learning-based computer-aided detection (DL-CAD) system for pulmonary nodule detection in CT scans. METHODS: The public LUNA16 dataset includes 888 CT scans with 1186 nodules annotated by four radiologists. From those scans, MIP images were reconstructed with slab thicknesses of 5 to 50 mm (at 5 mm intervals) and 3 to 13 mm (at 2 mm intervals). The architecture of the nodule candidate detection part of the DL-CAD system was trained separately on MIP images of each slab thickness. Based on ten-fold cross-validation, the sensitivity and the F2 score were determined to evaluate the performance of each slab thickness at the nodule candidate detection stage. The free-response receiver operating characteristic (FROC) curve was used to assess the performance of the whole DL-CAD system, which combined the results from the 16 MIP slab thickness settings. RESULTS: At the nodule candidate detection stage, the combination of results from the 16 MIP slab thickness settings showed a high sensitivity of 98.0% with 46 false positives (FPs) per scan. For a single MIP slab thickness, the highest sensitivity before false positive reduction, 90.0% with 8 FPs/scan, was reached at 10 mm. The sensitivity increased (82.8% to 90.0%) for slab thicknesses of 1 to 10 mm and decreased (88.7% to 76.6%) for slab thicknesses of 15 to 50 mm. The number of FPs decreased with increasing slab thickness and stabilized at 5 FPs/scan for slab thicknesses of 30 mm or more. After false positive reduction, the DL-CAD system utilizing the 16 MIP slab thickness settings achieved a sensitivity of 94.4% with 1 FP/scan. CONCLUSIONS: The utilization of multi-MIP images could improve performance at the nodule candidate detection stage as well as for the whole DL-CAD system. For a single slab thickness, the highest sensitivity at the nodule candidate detection stage was reached at 10 mm, similar to the slab thickness usually applied by radiologists.
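
The FROC analysis mentioned here amounts to sweeping a score threshold over the candidates and recording sensitivity versus FPs per scan at each threshold. The sketch below is a generic illustration of that computation, not the LUNA16 evaluation script.

```python
# Sketch: compute FROC operating points (FPs/scan vs. sensitivity) from scored candidates.
import numpy as np

def froc_points(scores, is_true_positive, n_reference_nodules, n_scans):
    """scores: candidate confidence scores; is_true_positive: matching boolean array."""
    scores = np.asarray(scores)
    is_tp = np.asarray(is_true_positive, dtype=bool)
    points = []
    for threshold in np.unique(scores):
        keep = scores >= threshold
        sensitivity = is_tp[keep].sum() / n_reference_nodules
        fps_per_scan = (~is_tp[keep]).sum() / n_scans
        points.append((fps_per_scan, sensitivity))
    return sorted(points)

points = froc_points(scores=[0.9, 0.8, 0.7, 0.6, 0.4],
                     is_true_positive=[True, False, True, False, True],
                     n_reference_nodules=3, n_scans=2)
print(points)   # list of (FPs/scan, sensitivity) pairs, one per threshold
```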


Subjects
Deep Learning, Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung Neoplasms/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation, Sensitivity and Specificity, Solitary Pulmonary Nodule/diagnostic imaging
9.
Eur J Radiol ; 129: 109114, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32531719

ABSTRACT

PURPOSE: The coronary artery calcium (CAC) score has been shown to be an accurate predictor of future cardiovascular events, and early detection by CAC scoring might reduce the number of deaths from cardiovascular disease (CVD). Automatically excluding scans that are negative for CAC could significantly reduce the workload of radiologists. We propose an algorithm that both excludes negative scans and segments the CAC. METHOD: The training and internal validation data were collected from the ROBINSCA study; the external validation data were collected from the ImaLife study. Both contain annotated low-dose non-contrast cardiac CT scans. Sixty scans were used for training, and for each of the internal and external validation sets, 50 CT scans of participants without CAC and 50 CT scans of participants with an Agatston score between 10 and 20 were collected. The effect of dilated convolutional layers was tested by using two CNN architectures. We used patient-level accuracy as the metric for CAC detection and the Dice coefficient as the metric for CAC segmentation. RESULTS: Of the 50 negative cases in the internal and external validation sets, 62% and 86% were classified correctly, respectively. There were no false-negative predictions. For the segmentation task, Dice coefficient scores of 0.63 and 0.84 were achieved on the internal and external validation datasets, respectively. CONCLUSIONS: Our algorithm excluded 86% of all scans without CAC. Radiologists might therefore need to spend less time on participants without CAC and could spend more time on participants who need their attention.
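
The segmentation metric used here is the standard Dice coefficient; a minimal sketch of its computation from binary masks follows (illustration only, with dummy masks).

```python
# Sketch: Dice coefficient between a predicted and a reference CAC segmentation mask.
import numpy as np

def dice_coefficient(prediction, reference):
    prediction = prediction.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    denominator = prediction.sum() + reference.sum()
    return 1.0 if denominator == 0 else 2.0 * intersection / denominator

pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1   # 4 predicted voxels
ref = np.zeros((4, 4), dtype=np.uint8); ref[1:3, 1:4] = 1     # 6 reference voxels
print(dice_coefficient(pred, ref))   # 2 * 4 / (4 + 6) = 0.8
```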


Assuntos
Calcinose/diagnóstico por imagem , Doença da Artéria Coronariana/diagnóstico por imagem , Aprendizado Profundo , Interpretação de Imagem Radiográfica Assistida por Computador/métodos , Tomografia Computadorizada por Raios X/métodos , Idoso , Vasos Coronários/diagnóstico por imagem , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Fatores de Risco
10.
IEEE Trans Med Imaging ; 39(3): 797-805, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31425026

ABSTRACT

Accurate pulmonary nodule detection is a crucial step in lung cancer screening. Computer-aided detection (CAD) systems are not routinely used by radiologists for pulmonary nodule detection in clinical practice despite their potential benefits. Maximum intensity projection (MIP) images improve the detection of pulmonary nodules in radiological evaluation of computed tomography (CT) scans. Inspired by the clinical methodology of radiologists, we explore the feasibility of applying MIP images to improve the effectiveness of automatic lung nodule detection using convolutional neural networks (CNNs). We propose a CNN-based approach that takes MIP images of different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices as input. Such an approach augments the two-dimensional (2-D) CT slice images with more representative spatial information that helps discriminate nodules from vessels by their morphologies. Our proposed method achieves a sensitivity of 92.7% with 1 false positive per scan and a sensitivity of 94.2% with 2 false positives per scan for lung nodule detection on 888 scans in the LIDC-IDRI dataset. The use of thick MIP images helps the detection of small pulmonary nodules (3 mm-10 mm) and results in fewer false positives. Experimental results show that utilizing MIP images can increase the sensitivity and lower the number of false positives, which demonstrates the effectiveness and significance of the proposed MIP-based CNN framework for automatic pulmonary nodule detection in CT scans. The proposed method also shows the potential for CNNs to benefit from incorporating clinical practice into nodule detection.
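
The input construction described here, MIPs of several slab thicknesses plus the 1 mm axial slice, can be sketched as stacking the projections as channels of a 2D input. The slab handling, slice spacing, and channel order below are simplifying assumptions, not the paper's exact preprocessing.

```python
# Sketch: build a multi-channel 2-D input from a 1 mm axial slice and MIPs of
# several slab thicknesses (maximum intensity projection along the axial axis).
import numpy as np

def max_ip_slice(volume, index, slab_thickness_mm, slice_spacing_mm=1.0):
    half = max(int(round(slab_thickness_mm / slice_spacing_mm)) // 2, 0)
    lo, hi = max(0, index - half), min(volume.shape[0], index + half + 1)
    return volume[lo:hi].max(axis=0)

def build_input(volume, index, thicknesses_mm=(5, 10, 15)):
    channels = [volume[index]]                                    # 1 mm axial section
    channels += [max_ip_slice(volume, index, t) for t in thicknesses_mm]
    return np.stack(channels, axis=0)                             # shape: (4, H, W)

ct = np.random.randint(-1000, 400, size=(120, 64, 64)).astype(np.int16)  # dummy volume (HU)
x = build_input(ct, index=60)                                     # 4-channel CNN input
```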


Subjects
Lung Neoplasms/diagnostic imaging, Neural Networks (Computer), Computer-Assisted Radiographic Image Interpretation/methods, Solitary Pulmonary Nodule/diagnostic imaging, X-Ray Computed Tomography/methods, Factual Databases, Early Detection of Cancer/methods, Humans, Three-Dimensional Imaging/methods, Lung Neoplasms/pathology, Sensitivity and Specificity, Solitary Pulmonary Nodule/pathology
11.
Sci Justice ; 55(6): 499-508, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26654086

ABSTRACT

Recently, in the forensic biometric community, there has been growing interest in computing a metric called the likelihood ratio when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score, and a likelihood-ratio computation method is therefore used to convert the score into a likelihood ratio. The likelihood ratio is the probability of the score given the hypothesis of the prosecution, Hp (the two biometric specimens arose from the same source), divided by the probability of the score given the hypothesis of the defense, Hd (the two biometric specimens arose from different sources). Given a set of training scores under Hp and a set of training scores under Hd, several methods exist to convert a score into a likelihood ratio. In this work, we focus on the issue of sampling variability in the training sets and carry out a detailed empirical study to quantify its effect on commonly proposed likelihood-ratio computation methods. We study the effect of sampling variability by varying: 1) the shapes of the probability density functions that model the distributions of scores in the two training sets; 2) the sizes of the training sets; and 3) the score for which a likelihood ratio is computed. For this purpose, we introduce a simulation framework that can be used to study several properties of a likelihood-ratio computation method and to quantify the effect of sampling variability on the computed likelihood ratio. It is shown empirically that the sampling variability can be considerable, particularly when the training sets are small. Furthermore, a given likelihood-ratio computation method can behave very differently for different shapes of the probability density functions of the scores in the training sets and for different scores for which likelihood ratios are computed.
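
One common score-to-likelihood-ratio approach in this line of work fits a probability density to the training scores under each hypothesis and takes the ratio of the two densities at the test score. The sketch below uses Gaussian kernel density estimation as an example of such a method, with simulated training scores; it does not reproduce the specific methods compared in the paper.

```python
# Sketch: convert a comparison score to a likelihood ratio using kernel density
# estimates of the score distributions under Hp (same source) and Hd (different sources).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
scores_hp = rng.normal(loc=2.0, scale=1.0, size=300)    # simulated training scores under Hp
scores_hd = rng.normal(loc=-1.0, scale=1.0, size=300)   # simulated training scores under Hd

density_hp = gaussian_kde(scores_hp)
density_hd = gaussian_kde(scores_hd)

def likelihood_ratio(score):
    return density_hp(score)[0] / density_hd(score)[0]

print(likelihood_ratio(1.5))   # > 1 supports Hp, < 1 supports Hd
```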


Subjects
Likelihood Functions, Forensic Sciences, Humans
12.
IEEE Trans Pattern Anal Mach Intell ; 36(1): 127-39, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24231871

ABSTRACT

The increase in the dimensionality of data sets often leads to problems during estimation, which is denoted the curse of dimensionality. One of the problems of second-order statistics (SOS) estimation in high-dimensional data is that the resulting covariance matrices are not full rank, so their inversion, needed for example in verification systems based on the likelihood ratio, is an ill-posed problem known as the singularity problem. A classical solution to this problem is the projection of the data onto a lower-dimensional subspace using principal component analysis (PCA), under the assumption that any further estimation on this dimension-reduced data is free from the effects of the high dimensionality. Using theory on SOS estimation in high-dimensional spaces, we show that the PCA-based solution is far from optimal in verification systems if the high dimensionality is the sole source of error. For moderate dimensionality it is already outperformed by solutions based on Euclidean distances, and it breaks down completely if the dimensionality becomes very high. We propose a new method, the fixed-point eigenwise correction, which does not have these disadvantages and performs close to optimal.
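
To make the singularity problem concrete: with fewer training samples than dimensions, the sample covariance matrix is rank deficient and cannot be reliably inverted, and PCA projection onto a lower-dimensional subspace is the classical workaround discussed here. The minimal numerical illustration below demonstrates only that point; the proposed fixed-point eigenwise correction itself is not reproduced.

```python
# Sketch: singular sample covariance in high dimensions and the classical PCA workaround.
import numpy as np
from numpy.linalg import inv, matrix_rank
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_dims = 50, 200                      # fewer samples than dimensions
data = rng.normal(size=(n_samples, n_dims))

cov = np.cov(data, rowvar=False)                 # shape (200, 200)
print(matrix_rank(cov))                          # < 200: covariance is singular, not reliably invertible

reduced = PCA(n_components=20).fit_transform(data)
cov_reduced = np.cov(reduced, rowvar=False)      # full rank, so it can be inverted
precision = inv(cov_reduced)                     # usable, e.g., in a likelihood-ratio verifier
```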

13.
Sensors (Basel) ; 12(5): 5246-72, 2012.
Article in English | MEDLINE | ID: mdl-22778583

ABSTRACT

In a biometric authentication system using protected templates, a pseudonymous identifier is the part of a protected template that can be directly compared. Each compared pair of pseudonymous identifiers results in a decision testing whether both identifiers are derived from the same biometric characteristic. Compared to an unprotected system, most existing biometric template protection methods cause some degree of degradation in biometric performance. Fusion is therefore a promising way to enhance the biometric performance in template-protected biometric systems. Compared to feature-level and score-level fusion, decision-level fusion has not only the lowest fusion complexity, but also the greatest interoperability across different biometric features, template protection and recognition algorithms, template formats, and comparison score rules. However, performance improvement via decision-level fusion is not guaranteed: it is influenced by both the dependency among the fused tests and the performance gap between them. In this paper, we investigate several fusion scenarios (multi-sample, multi-instance, multi-sensor, multi-algorithm, and their combinations) at the binary decision level and evaluate their biometric performance and fusion efficiency on a multi-sensor fingerprint database with 71,994 samples.
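
Decision-level fusion combines the binary accept/reject outcomes of several comparisons. The sketch below illustrates the common AND, OR, and majority-vote rules, which are typical choices for this kind of fusion; the abstract does not list the specific rules evaluated in the paper.

```python
# Sketch: fuse binary accept/reject decisions from multiple biometric comparisons.
def fuse_decisions(decisions, rule="majority"):
    """decisions: list of booleans, one per comparison (sample/instance/sensor/algorithm)."""
    if rule == "and":
        return all(decisions)                       # accept only if every test accepts
    if rule == "or":
        return any(decisions)                       # accept if any single test accepts
    if rule == "majority":
        return sum(decisions) > len(decisions) / 2  # accept if most tests accept
    raise ValueError(f"unknown fusion rule: {rule}")

print(fuse_decisions([True, False, True], rule="majority"))   # True
print(fuse_decisions([True, False, True], rule="and"))        # False
```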


Subjects
Biometry, Algorithms, Decision Support Techniques