1.
IEEE Trans Med Imaging ; PP: 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530714

ABSTRACT

Pulmonary nodules may be an early manifestation of lung cancer, the leading cause of cancer-related deaths among both men and women. Numerous studies have established that deep learning methods can yield high performance in the detection of lung nodules in chest X-rays. However, the lack of gold-standard public datasets slows research progress and prevents benchmarking of methods for this task. To address this, we organized a public research challenge, NODE21, aimed at the detection and generation of lung nodules in chest X-rays. While the detection track assesses state-of-the-art nodule detection systems, the generation track determines the utility of nodule generation algorithms to augment training data and hence improve the performance of the detection systems. This paper summarizes the results of the NODE21 challenge and performs extensive additional experiments to examine the impact of the synthetically generated nodule training images on the detection algorithm performance.
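As a rough illustration of the generation track's premise (synthetic nodule images augmenting a detector's training data), the sketch below mixes real and synthetic samples into one training set. All names and proportions are illustrative assumptions, not taken from the NODE21 protocol:

```python
import random

def build_training_set(real_images, synthetic_images, synth_fraction=0.5, seed=0):
    """Mix real and synthetic nodule images into one detector training set.

    `synth_fraction` sets how many synthetic samples are added relative
    to the number of real samples (an illustrative knob, not NODE21's).
    """
    rng = random.Random(seed)
    n_synth = int(len(real_images) * synth_fraction)
    # sample without replacement from the synthetic pool
    extra = rng.sample(synthetic_images, min(n_synth, len(synthetic_images)))
    combined = list(real_images) + extra
    rng.shuffle(combined)
    return combined

# Hypothetical pools: 100 real chest X-rays, 80 synthetic ones
real = [("real", i) for i in range(100)]
synth = [("synth", i) for i in range(80)]
train = build_training_set(real, synth, synth_fraction=0.5)
```

The challenge's extensive experiments then amount to sweeping `synth_fraction` (and the generator used) and measuring detection performance on a fixed test set.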

2.
Eur Radiol ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536463

ABSTRACT

OBJECTIVE: To investigate the effect of uncertainty estimation on the performance of a Deep Learning (DL) algorithm for estimating malignancy risk of pulmonary nodules. METHODS AND MATERIALS: In this retrospective study, we integrated an uncertainty estimation method into a previously developed DL algorithm for nodule malignancy risk estimation. Uncertainty thresholds were developed using CT data from the Danish Lung Cancer Screening Trial (DLCST), containing 883 nodules (65 malignant) collected between 2004 and 2010. We used thresholds on the 90th and 95th percentiles of the uncertainty score distribution to categorize nodules into certain and uncertain groups. External validation was performed on clinical CT data from a tertiary academic center containing 374 nodules (207 malignant) collected between 2004 and 2012. DL performance was measured using area under the ROC curve (AUC) for the full set of nodules, for the certain cases and for the uncertain cases. Additionally, nodule characteristics were compared to identify trends for inducing uncertainty. RESULTS: The DL algorithm performed significantly worse in the uncertain group compared to the certain group of DLCST (AUC 0.62 (95% CI: 0.49, 0.76) vs 0.93 (95% CI: 0.88, 0.97); p < .001) and the clinical dataset (AUC 0.62 (95% CI: 0.50, 0.73) vs 0.90 (95% CI: 0.86, 0.94); p < .001). The uncertain group included larger benign nodules as well as more part-solid and non-solid nodules than the certain group. CONCLUSION: The integrated uncertainty estimation showed excellent performance for identifying uncertain cases in which the DL-based nodule malignancy risk estimation algorithm had significantly worse performance. CLINICAL RELEVANCE STATEMENT: Deep Learning algorithms often lack the ability to gauge and communicate uncertainty. For safe clinical implementation, uncertainty estimation is of pivotal importance to identify cases where the deep learning algorithm harbors doubt in its prediction. 
KEY POINTS: • Deep learning (DL) algorithms often lack uncertainty estimation, a capability that could reduce the risk of errors and improve safety during clinical adoption of the DL algorithm. • Uncertainty estimation identifies pulmonary nodules for which the discriminative performance of the DL algorithm is significantly worse. • Uncertainty estimation can further enhance the benefits of the DL algorithm and improve its safety and trustworthiness.
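The uncertainty-thresholding step described in the methods, categorizing nodules as certain or uncertain at the 90th or 95th percentile of the uncertainty-score distribution, can be sketched as follows. The nearest-rank percentile rule and the example scores are assumptions, not the paper's exact procedure:

```python
def split_by_uncertainty(scores, percentile=90):
    """Split cases into 'certain' and 'uncertain' groups using a
    percentile threshold on the uncertainty-score distribution
    (the abstract uses the 90th and 95th percentiles)."""
    ranked = sorted(scores)
    # nearest-rank percentile cut; cases above it are 'uncertain'
    cut = ranked[min(len(ranked) - 1, int(len(ranked) * percentile / 100))]
    certain = [s for s in scores if s <= cut]
    uncertain = [s for s in scores if s > cut]
    return certain, uncertain, cut

# Hypothetical uncertainty scores for 100 nodules
certain, uncertain, cut = split_by_uncertainty(list(range(100)), percentile=90)
```

Discriminative performance (AUC) would then be computed separately on the two groups, as in the DLCST and clinical validations above.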

3.
Med Phys ; 51(4): 2834-2845, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38329315

ABSTRACT

BACKGROUND: Automated estimation of pulmonary function test (PFT) results from computed tomography (CT) could advance the use of CT in screening, diagnosis, and staging of restrictive pulmonary diseases. Estimating lung function per lobe, which cannot be done with PFTs, would be helpful for risk assessment for pulmonary resection surgery and bronchoscopic lung volume reduction. PURPOSE: To automatically estimate PFT results from CT and furthermore disentangle the individual contribution of pulmonary lobes to a patient's lung function. METHODS: We propose I3Dr, a deep learning architecture for estimating global measures from an image that can also estimate the contributions of individual parts of the image to this global measure. We apply it to estimate the separate contributions of each pulmonary lobe to a patient's total lung function from CT, while requiring only CT scans and patient-level lung function measurements for training. I3Dr consists of a lobe-level and a patient-level model. The lobe-level model extracts all anatomical pulmonary lobes from a CT scan and processes them in parallel to produce lobe-level lung function estimates that sum up to a patient-level estimate. The patient-level model directly estimates patient-level lung function from a CT scan and is used to re-scale the output of the lobe-level model to increase performance. After demonstrating the viability of the proposed approach, the I3Dr model is trained and evaluated for PFT result estimation using a large data set of 8 433 CT volumes for training, 1 775 CT volumes for validation, and 1 873 CT volumes for testing. RESULTS: First, we demonstrate the viability of our approach by showing that a model trained with a collection of digit images to estimate their sum implicitly learns to assign correct values to individual digits.
Next, we show that our models can estimate lobe-level quantities, such as COVID-19 severity scores, pulmonary volume (PV), and functional pulmonary volume (FPV) from CT while only provided with patient-level quantities during training. Lastly, we train and evaluate models for producing spirometry and diffusion capacity of carbon monoxide (DLCO) estimates at the patient and lobe level. For producing Forced Expiratory Volume in one second (FEV1), Forced Vital Capacity (FVC), and DLCO estimates, I3Dr obtains mean absolute errors (MAE) of 0.377 L, 0.297 L, and 2.800 mL/min/mm Hg, respectively. We release the resulting algorithms for lung function estimation to the research community at https://grand-challenge.org/algorithms/lobe-wise-lung-function-estimation/. CONCLUSIONS: I3Dr can estimate global measures from an image, as well as the contributions of individual parts of the image to this global measure. It offers a promising approach for estimating PFT results from CT scans and disentangling the individual contribution of pulmonary lobes to a patient's lung function. The findings presented in this work may advance the use of CT in screening, diagnosis, and staging of restrictive pulmonary diseases as well as in risk assessment for pulmonary resection surgery and bronchoscopic lung volume reduction.
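The re-scaling step, in which the patient-level model's direct output adjusts the lobe-level estimates so that they sum to the patient-level prediction, can be sketched as below. The lobe values (litres of FEV1) are invented for illustration:

```python
def rescale_lobe_estimates(lobe_estimates, patient_estimate):
    """Re-scale per-lobe lung-function estimates so that their sum matches
    the patient-level model's direct estimate (the I3Dr re-scaling idea)."""
    total = sum(lobe_estimates)
    if total == 0:
        return list(lobe_estimates)
    factor = patient_estimate / total
    return [e * factor for e in lobe_estimates]

# Hypothetical FEV1 contributions (litres) for the five lobes
lobes = [0.9, 0.7, 0.5, 0.8, 0.6]   # sums to 3.5 L
patient_level = 3.2                  # direct patient-level estimate
adjusted = rescale_lobe_estimates(lobes, patient_level)
```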


Subject(s)
Lung Diseases , Lung , Humans , Lung/diagnostic imaging , Lung/surgery , Tomography, X-Ray Computed/methods , Vital Capacity , Machine Learning
4.
Sci Rep ; 14(1): 3724, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38355772

ABSTRACT

Here we present a sample of 12 massive quiescent galaxy candidates at [Formula: see text] observed with the James Webb Space Telescope (JWST) Near Infrared Spectrograph (NIRSpec). These galaxies were pre-selected from Hubble Space Telescope imaging, and 10 of our sources could not be spectroscopically confirmed by ground-based spectroscopy. By combining spectroscopic data from NIRSpec with multi-wavelength imaging data from the JWST Near Infrared Camera (NIRCam), we analyse their stellar populations and formation histories. We find that all of our galaxies are classified as quiescent based on the reconstruction of their star formation histories but show a variety of quenching timescales and ages. All our galaxies are massive ([Formula: see text] M[Formula: see text]), with masses comparable to those of massive galaxies in the local Universe. We find that the oldest galaxy in our sample formed [Formula: see text] M[Formula: see text] of stellar mass within the first few hundred million years of the Universe and had been quenched for more than a billion years by the time of observation at [Formula: see text] ([Formula: see text] billion years after the Big Bang). Our results point to very early formation of massive galaxies, requiring a high conversion rate of baryons to stars in the early Universe.

5.
Nature ; 628(8007): 277-281, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38354832

ABSTRACT

The formation of galaxies by gradual hierarchical co-assembly of baryons and cold dark matter halos is a fundamental paradigm underpinning modern astrophysics [1,2] and predicts a strong decline in the number of massive galaxies at early cosmic times [3-5]. Extremely massive quiescent galaxies (stellar masses of more than 10^11 M⊙) have now been observed as early as 1-2 billion years after the Big Bang [6-13]. These galaxies place strong constraints on theoretical models, as they had formed 300-500 Myr earlier, and only some models can form massive galaxies this early [12,14]. Here we report JWST spectroscopic observations of a massive quiescent galaxy, ZF-UDS-7329, at redshift 3.205 ± 0.005. It has eluded deep ground-based spectroscopy [8], it is significantly redder than is typical, and its spectrum reveals features typical of much older stellar populations. Detailed modelling shows that its stellar population formed around 1.5 billion years earlier (z ≈ 11), at an epoch when dark matter halos of sufficient hosting mass had not yet assembled in the standard scenario [4,5]. This observation may indicate the presence of undetected populations of early galaxies and the possibility of significant gaps in our understanding of early stellar populations, galaxy formation and the nature of dark matter.

9.
Commun Med (Lond) ; 3(1): 156, 2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37891360

ABSTRACT

BACKGROUND: Outside a screening program, early-stage lung cancer is generally diagnosed after the detection of incidental nodules in clinically ordered chest CT scans. Despite the advances in artificial intelligence (AI) systems for lung cancer detection, clinical validation of these systems is lacking in a non-screening setting. METHOD: We developed a deep learning-based AI system and assessed its performance for the detection of actionable benign nodules (requiring follow-up), small lung cancers, and pulmonary metastases in CT scans acquired in two Dutch hospitals (internal and external validation). A panel of five thoracic radiologists labeled all nodules, and two additional radiologists verified the nodule malignancy status and searched for any missed cancers using data from the national Netherlands Cancer Registry. The detection performance was evaluated by measuring the sensitivity at predefined false positive rates on a free receiver operating characteristic curve and was compared with the panel of radiologists. RESULTS: On the external test set (100 scans from 100 patients), the sensitivity of the AI system for detecting benign nodules, primary lung cancers, and metastases was 94.3% (82/87, 95% CI: 88.1-98.8%), 96.9% (31/32, 95% CI: 91.7-100%), and 92.0% (104/113, 95% CI: 88.5-95.5%), respectively, at a clinically acceptable operating point of 1 false positive per scan (FP/s). These sensitivities are comparable to or higher than those of the radiologists, albeit with a slightly higher FP/s (average difference of 0.6). CONCLUSIONS: The AI system reliably detects benign and malignant pulmonary nodules in clinically indicated CT scans and can potentially assist radiologists in this setting.


Early-stage lung cancer can be diagnosed after identifying an abnormal spot on a chest CT scan ordered for other medical reasons. These spots or lung nodules can be overlooked by radiologists, as they are not necessarily the focus of an examination and can be as small as a few millimeters. Software using Artificial Intelligence (AI) technology has proven to be successful for aiding radiologists in this task, but its performance is understudied outside a lung cancer screening setting. We therefore developed and validated AI software for the detection of cancerous nodules or non-cancerous nodules that would need attention. We show that the software can reliably detect these nodules in a non-screening setting and could potentially aid radiologists in daily clinical practice.
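The evaluation described above, sensitivity at a predefined false-positive rate on a free receiver operating characteristic (FROC) curve, can be sketched as follows. This is a minimal, assumed implementation of the standard FROC operating-point readout, not the study's evaluation code:

```python
def sensitivity_at_fp_rate(detections, n_lesions, n_scans, fp_per_scan=1.0):
    """Sensitivity at a fixed false-positive rate, FROC-style.

    detections: list of (confidence, is_true_positive) over all scans.
    Detections are accepted in descending confidence order until the
    false-positive budget (fp_per_scan * n_scans) is exhausted.
    """
    ranked = sorted(detections, key=lambda d: -d[0])
    max_fp = fp_per_scan * n_scans
    tp = fp = 0
    for score, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
            if fp > max_fp:
                break
    return tp / n_lesions

# Toy example: 2 scans, 3 true lesions, interleaved false positives
dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, False), (0.4, True)]
sens = sensitivity_at_fp_rate(dets, n_lesions=3, n_scans=2, fp_per_scan=1.0)
```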

10.
Radiology ; 308(3): e230275, 2023 09.
Article in English | MEDLINE | ID: mdl-37724961

ABSTRACT

Background A priori identification of patients at risk of artificial intelligence (AI) failure in diagnosing cancer would contribute to the safer clinical integration of diagnostic algorithms. Purpose To evaluate AI prediction variability as an uncertainty quantification (UQ) metric for identifying cases at risk of AI failure in diagnosing cancer at MRI and CT across different cancer types, data sets, and algorithms. Materials and Methods Multicenter data sets and publicly available AI algorithms from three previous studies that evaluated detection of pancreatic cancer on contrast-enhanced CT images, detection of prostate cancer on MRI scans, and prediction of pulmonary nodule malignancy on low-dose CT images were analyzed retrospectively. Each task's algorithm was extended to generate an uncertainty score based on ensemble prediction variability. AI accuracy percentage and partial area under the receiver operating characteristic curve (pAUC) were compared between certain and uncertain patient groups in a range of percentile thresholds (10%-90%) for the uncertainty score using permutation tests for statistical significance. The pulmonary nodule malignancy prediction algorithm was compared with 11 clinical readers for the certain group (CG) and uncertain group (UG). Results In total, 18 022 images were used for training and 838 images were used for testing. AI diagnostic accuracy was higher for the cases in the CG across all tasks (P < .001). At an 80% threshold of certain predictions, accuracy in the CG was 21%-29% higher than in the UG and 4%-6% higher than in the overall test data sets. The lesion-level pAUC in the CG was 0.25-0.39 higher than in the UG and 0.05-0.08 higher than in the overall test data sets (P < .001). 
For pulmonary nodule malignancy prediction, accuracy of AI was on par with clinicians for cases in the CG (AI results vs clinician results, 80% [95% CI: 76, 85] vs 78% [95% CI: 70, 87]; P = .07) but worse for cases in the UG (AI results vs clinician results, 50% [95% CI: 37, 64] vs 68% [95% CI: 60, 76]; P < .001). Conclusion An AI-prediction UQ metric consistently identified reduced performance of AI in cancer diagnosis. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Babyn in this issue.
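A minimal sketch of the UQ metric this study describes, uncertainty as ensemble prediction variability, is shown below. The five-member malignancy probabilities are invented for illustration:

```python
from statistics import mean, pstdev

def ensemble_uncertainty(member_predictions):
    """Uncertainty score from ensemble prediction variability: the spread
    (population std-dev) of the members' predicted probabilities.
    Returns (mean_prediction, uncertainty_score)."""
    return mean(member_predictions), pstdev(member_predictions)

# Hypothetical malignancy probabilities from a 5-member ensemble
agreeing   = [0.91, 0.88, 0.93, 0.90, 0.89]   # low variability -> certain
conflicted = [0.15, 0.85, 0.40, 0.70, 0.55]   # high variability -> uncertain

mean_a, unc_a = ensemble_uncertainty(agreeing)
mean_c, unc_c = ensemble_uncertainty(conflicted)
```

Thresholding the uncertainty score at a chosen percentile then yields the certain and uncertain groups whose accuracy and pAUC the study compares.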


Subject(s)
Lung Neoplasms , Mental Disorders , Male , Humans , Artificial Intelligence , Retrospective Studies , Magnetic Resonance Imaging , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
11.
Sci Rep ; 13(1): 14147, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37644032

ABSTRACT

Accurate identification of emphysema subtypes and severity is crucial for effective management of COPD and the study of disease heterogeneity. Manual analysis of emphysema subtypes and severity is laborious and subjective. To address this challenge, we present a deep learning-based approach for automating the Fleischner Society's visual score system for emphysema subtyping and severity analysis. We trained and evaluated our algorithm using 9650 subjects from the COPDGene study. Our algorithm achieved a predictive accuracy of 52%, outperforming a previously published method's accuracy of 45%. In addition, agreement between our method's predicted scores and the visual scores was good, whereas the previous method obtained only moderate agreement. Our approach employs a regression training strategy to generate categorical labels while simultaneously producing high-resolution localized activation maps for visualizing the network predictions. By leveraging these dense activation maps, our method can compute the percentage of emphysema involvement per lung in addition to categorical severity scores. Furthermore, the proposed method extends its predictive capabilities beyond centrilobular emphysema to include paraseptal emphysema subtypes.
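Two ideas from the abstract, mapping a continuous regression output to a categorical severity label and computing percentage emphysema involvement from a dense activation map, can be sketched as below. The thresholds and clamping rule are assumptions for illustration, not the published method:

```python
def severity_from_regression(score, n_categories=6):
    """Map a continuous regression output to a categorical Fleischner-style
    severity label by rounding and clamping to the valid range."""
    return max(0, min(n_categories - 1, round(score)))

def emphysema_percentage(activation_map, lung_mask, threshold=0.5):
    """Percentage of lung voxels flagged as emphysema in a dense activation
    map (flat lists of per-voxel values; a sketch of the idea only)."""
    lung = [a for a, m in zip(activation_map, lung_mask) if m]
    if not lung:
        return 0.0
    involved = sum(1 for a in lung if a >= threshold)
    return 100.0 * involved / len(lung)

sev = severity_from_regression(3.4)                         # -> category 3
pct = emphysema_percentage([0.9, 0.2, 0.7, 0.1], [1, 1, 1, 0])
```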


Subject(s)
Emphysema , Pulmonary Emphysema , Humans , Pulmonary Emphysema/diagnostic imaging , Neural Networks, Computer , Algorithms , Tomography, X-Ray Computed
12.
Radiology ; 308(2): e223308, 2023 08.
Article in English | MEDLINE | ID: mdl-37526548

ABSTRACT

Background Prior chest CT provides valuable temporal information (eg, changes in nodule size or appearance) to accurately estimate malignancy risk. Purpose To develop a deep learning (DL) algorithm that uses a current and prior low-dose CT examination to estimate 3-year malignancy risk of pulmonary nodules. Materials and Methods In this retrospective study, the algorithm was trained using National Lung Screening Trial data (collected from 2002 to 2004), wherein patients were imaged at most 2 years apart, and evaluated with two external test sets from the Danish Lung Cancer Screening Trial (DLCST) and the Multicentric Italian Lung Detection Trial (MILD), collected in 2004-2010 and 2005-2014, respectively. Performance was evaluated using area under the receiver operating characteristic curve (AUC) on cancer-enriched subsets with size-matched benign nodules imaged 1 and 2 years apart from DLCST and MILD, respectively. The algorithm was compared with a validated DL algorithm that only processed a single CT examination and the Pan-Canadian Early Lung Cancer Detection Study (PanCan) model. Results The training set included 10 508 nodules (422 malignant) in 4902 trial participants (mean age, 64 years ± 5 [SD]; 2778 men). The size-matched external test sets included 129 nodules (43 malignant) and 126 nodules (42 malignant). The algorithm achieved AUCs of 0.91 (95% CI: 0.85, 0.97) and 0.94 (95% CI: 0.89, 0.98). It significantly outperformed the DL algorithm that only processed a single CT examination (AUC, 0.85 [95% CI: 0.78, 0.92; P = .002]; and AUC, 0.89 [95% CI: 0.84, 0.95; P = .01]) and the PanCan model (AUC, 0.64 [95% CI: 0.53, 0.74; P < .001]; and AUC, 0.63 [95% CI: 0.52, 0.74; P < .001]). Conclusion A DL algorithm using current and prior low-dose CT examinations was more effective at estimating 3-year malignancy risk of pulmonary nodules than established models that only use a single CT examination. Clinical trial registration nos. 
NCT00047385, NCT00496977, NCT02837809 © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Horst and Nishino in this issue.


Subject(s)
Deep Learning , Lung Neoplasms , Multiple Pulmonary Nodules , Male , Humans , Middle Aged , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Retrospective Studies , Early Detection of Cancer , Canada , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Tomography, X-Ray Computed/methods
13.
Eur Radiol ; 33(11): 8279-8288, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37338552

ABSTRACT

OBJECTIVE: To study trends in the incidence of reported pulmonary nodules and stage I lung cancer in chest CT. METHODS: We analyzed the trends in the incidence of detected pulmonary nodules and stage I lung cancer in chest CT scans in the period between 2008 and 2019. Imaging metadata and radiology reports from all chest CT studies were collected from two large Dutch hospitals. A natural language processing algorithm was developed to identify studies with any reported pulmonary nodule. RESULTS: Between 2008 and 2019, a total of 74,803 patients underwent 166,688 chest CT examinations at both hospitals combined. During this period, the annual number of chest CT scans increased from 9955 scans in 6845 patients in 2008 to 20,476 scans in 13,286 patients in 2019. The proportion of patients in whom nodules (old or new) were reported increased from 38% (2595/6845) in 2008 to 50% (6654/13,286) in 2019. The proportion of patients in whom significant new nodules (≥ 5 mm) were reported increased from 9% (608/6954) in 2010 to 17% (1660/9883) in 2017. The number of patients with new nodules and corresponding stage I lung cancer diagnosis tripled and their proportion doubled, from 0.4% (26/6954) in 2010 to 0.8% (78/9883) in 2017. CONCLUSION: The identification of incidental pulmonary nodules in chest CT has steadily increased over the past decade and has been accompanied by more stage I lung cancer diagnoses. CLINICAL RELEVANCE STATEMENT: These findings stress the importance of identifying and efficiently managing incidental pulmonary nodules in routine clinical practice. KEY POINTS: • The number of patients who underwent chest CT examinations substantially increased over the past decade, as did the number of patients in whom pulmonary nodules were identified. • The increased use of chest CT and more frequently identified pulmonary nodules were associated with more stage I lung cancer diagnoses.
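The natural language processing step, identifying studies with any reported pulmonary nodule from radiology reports, might look like the crude rule-based sketch below. The patterns are illustrative stand-ins; the paper's algorithm is not described at this level of detail:

```python
import re

# Hypothetical patterns: a nodule mention, and a simple negation of it
NODULE_PATTERN = re.compile(r"\b(nodule|nodular|nodules)\b", re.IGNORECASE)
NEGATION_PATTERN = re.compile(r"\bno (?:\w+\s){0,3}?nodules?\b", re.IGNORECASE)

def report_mentions_nodule(report_text):
    """Flag a radiology report as nodule-positive if it mentions a nodule
    and that mention is not a simple negation ('no pulmonary nodules')."""
    if not NODULE_PATTERN.search(report_text):
        return False
    # drop simple negated mentions, then re-check
    stripped = NEGATION_PATTERN.sub("", report_text)
    return bool(NODULE_PATTERN.search(stripped))
```

A production system would need to handle hedging, history sections, and synonyms; this sketch only conveys the task's shape.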


Subject(s)
Lung Neoplasms , Multiple Pulmonary Nodules , Solitary Pulmonary Nodule , Humans , Incidence , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/epidemiology , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/epidemiology , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/epidemiology
14.
Eur J Epidemiol ; 38(4): 445-454, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36943671

ABSTRACT

Trials show that low-dose computed tomography (CT) lung cancer screening in long-term (ex-)smokers reduces lung cancer mortality. However, many individuals were exposed to unnecessary diagnostic procedures. This project aims to improve the efficiency of lung cancer screening by identifying high-risk participants, and improving risk discrimination for nodules. This study is an extension of the Dutch-Belgian Randomized Lung Cancer Screening Trial, with a focus on personalized outcome prediction (NELSON-POP). New data will be added on genetics, air pollution, malignancy risk for lung nodules, and CT biomarkers beyond lung nodules (emphysema, coronary calcification, bone density, vertebral height and body composition). The roles of polygenic risk scores and air pollution in screen-detected lung cancer diagnosis and survival will be established. The association between the AI-based nodule malignancy score and lung cancer will be evaluated at baseline and incident screening rounds. The association of chest CT imaging biomarkers with outcomes will be established. Based on these results, multisource prediction models for pre-screening and post-baseline-screening participant selection and nodule management will be developed. The new models will be externally validated. We hypothesize that we can identify 15-20% participants with low-risk of lung cancer or short life expectancy and thus prevent ~140,000 Dutch individuals from being screened unnecessarily. We hypothesize that our models will improve the specificity of nodule management by 10% without loss of sensitivity as compared to assessment of nodule size/growth alone, and reduce unnecessary work-up by 40-50%.


Subject(s)
Lung Neoplasms , Multiple Pulmonary Nodules , Humans , Early Detection of Cancer/methods , Lung , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/genetics , Mass Screening/methods , Multiple Pulmonary Nodules/pathology , Prognosis
15.
Med Image Anal ; 86: 102771, 2023 05.
Article in English | MEDLINE | ID: mdl-36848720

ABSTRACT

Automatic lesion segmentation on thoracic CT enables rapid quantitative analysis of lung involvement in COVID-19 infections. However, obtaining a large amount of voxel-level annotations for training segmentation networks is prohibitively expensive. Therefore, we propose a weakly-supervised segmentation method based on dense regression activation maps (dRAMs). Most weakly-supervised segmentation approaches exploit class activation maps (CAMs) to localize objects. However, because CAMs were trained for classification, they do not align precisely with the object segmentations. Instead, we produce high-resolution activation maps using dense features from a segmentation network that was trained to estimate a per-lobe lesion percentage. In this way, the network can exploit knowledge regarding the required lesion volume. In addition, we propose an attention neural network module to refine dRAMs, optimized together with the main regression task. We evaluated our algorithm on 90 subjects. Results show our method achieved 70.2% Dice coefficient, substantially outperforming the CAM-based baseline at 48.6%. We published our source code at https://github.com/DIAGNijmegen/bodyct-dram.
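The weak-supervision idea, supervising a dense activation map only with per-lobe lesion percentages, can be sketched as a simple regression loss. This toy version omits the attention refinement module and operates on flat lists for clarity:

```python
def lobe_regression_loss(dense_map, lobe_masks, target_percentages):
    """Weak supervision in the spirit of dense regression activation maps:
    the mean activation inside each lobe should match that lobe's annotated
    lesion percentage; loss is the mean squared error over lobes.

    dense_map: per-voxel activations in [0, 1] (flat list, a sketch).
    lobe_masks: one boolean list per lobe.
    target_percentages: fraction of each lobe affected (0..1).
    """
    loss = 0.0
    for mask, target in zip(lobe_masks, target_percentages):
        voxels = [a for a, m in zip(dense_map, mask) if m]
        predicted = sum(voxels) / len(voxels) if voxels else 0.0
        loss += (predicted - target) ** 2
    return loss / len(lobe_masks)

# Two toy lobes of two voxels each, both 50% affected
loss = lobe_regression_loss([1.0, 0.0, 0.5, 0.5],
                            [[1, 1, 0, 0], [0, 0, 1, 1]],
                            [0.5, 0.5])
```

At inference, thresholding the trained dense map (rather than a classification CAM) yields the lesion segmentation.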


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Algorithms , Image Processing, Computer-Assisted/methods
16.
Med Image Anal ; 84: 102680, 2023 02.
Article in English | MEDLINE | ID: mdl-36481607

ABSTRACT

In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
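The Dice coefficient used to rank the submissions is sketched below for binary masks; this is the standard overlap definition, not the benchmark's evaluation code:

```python
def dice_score(pred_mask, true_mask):
    """Dice coefficient between two binary masks (flat lists of 0/1):
    2 * |P ∩ T| / (|P| + |T|); defined as 1.0 when both masks are empty."""
    intersection = sum(p and t for p, t in zip(pred_mask, true_mask))
    size = sum(pred_mask) + sum(true_mask)
    return 2.0 * intersection / size if size else 1.0

d = dice_score([1, 1, 0, 0], [1, 0, 1, 0])   # one overlapping voxel
```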


Subject(s)
Benchmarking , Liver Neoplasms , Humans , Retrospective Studies , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver/diagnostic imaging , Liver/pathology , Algorithms , Image Processing, Computer-Assisted/methods
17.
Respirology ; 27(10): 818-833, 2022 10.
Article in English | MEDLINE | ID: mdl-35965430

ABSTRACT

In recent years, pulmonary imaging has seen enormous progress, with the introduction, validation and implementation of new hardware and software. There is a general trend from mere visual evaluation of radiological images to quantification of abnormalities and biomarkers, and assessment of 'non-visual' markers that contribute to establishing diagnosis or prognosis. Important catalysts to these developments in thoracic imaging include new indications (such as computed tomography [CT] lung cancer screening) and the COVID-19 pandemic. This review focuses on developments in CT, radiomics, artificial intelligence (AI) and x-ray velocimetry for imaging of the lungs. Recent developments in CT include the potential for ultra-low-dose CT imaging for lung nodules, and the advent of a new generation of CT systems based on photon-counting detector technology. Radiomics has demonstrated potential towards predictive and prognostic tasks, particularly in lung cancer, previously not achievable by visual inspection by radiologists, by exploiting high-dimensional patterns (mostly texture related) in medical imaging data. Deep learning technology has revolutionized the field of AI and, as a result, the performance of AI algorithms is approaching human performance for an increasing number of specific tasks. X-ray velocimetry integrates x-ray (fluoroscopic) imaging with unique image processing to produce quantitative four-dimensional measurement of lung tissue motion, and accurate calculations of lung ventilation.


Subject(s)
COVID-19 , Lung Neoplasms , Artificial Intelligence , COVID-19/diagnostic imaging , Early Detection of Cancer/methods , Humans , Lung Neoplasms/diagnostic imaging , Pandemics , Rheology , Tomography, X-Ray Computed/methods , X-Rays
18.
IEEE Trans Artif Intell ; 3(2): 129-138, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35582210

ABSTRACT

Amidst the ongoing pandemic, the assessment of computed tomography (CT) images for COVID-19 presence can exceed the workload capacity of radiologists. Several studies addressed this issue by automating COVID-19 classification and grading from CT images with convolutional neural networks (CNNs). Many of these studies reported initial results of algorithms that were assembled from commonly used components. However, the choice of the components of these algorithms was often pragmatic rather than systematic, and systems were not compared to each other across papers in a fair manner. We systematically investigated the effectiveness of using 3-D CNNs instead of 2-D CNNs for seven commonly used architectures, including DenseNet, Inception, and ResNet variants. For the architecture that performed best, we furthermore investigated the effect of initializing the network with pretrained weights, providing automatically computed lesion maps as additional network input, and predicting a continuous instead of a categorical output. A 3-D DenseNet-201 with these components achieved an area under the receiver operating characteristic curve (AUC) of 0.930 on our test set of 105 CT scans and an AUC of 0.919 on a publicly available set of 742 CT scans, a substantial improvement in comparison with a previously published 2-D CNN. This article provides insights into the performance benefits of various components for COVID-19 classification and grading systems. We have created a challenge on grand-challenge.org to allow for a fair comparison between the results of this and future research.

19.
Cognition ; 225: 105128, 2022 08.
Article in English | MEDLINE | ID: mdl-35462323

ABSTRACT

To distribute resources in a fair way, identifying an appropriate outcome is not enough: We must also find a way to produce it. To solve this problem, young children spontaneously use number words and counting in fairness tasks. We hypothesized that children are also sensitive to other people's use of counting, as it reveals that the distributor was motivated to produce the outcome they believed was fair. Across four experiments, we show that U.S. children (N = 184 from the New Haven area; ages four to six; approximately 58% White, 16% Black, 18% Hispanic, 4% Asian, and 4% other) believe that agents who count when distributing resources are more fair than agents who produce the same outcome without counting, even when both agents invest the same amount of effort. Conversely, when the same two agents produce an unfair outcome, children condemn the agent who counted. Our findings suggest that, from childhood, people understand that counting reflects a motivation to be precise and use this to evaluate other people's behavior in fairness contexts.


Subject(s)
Motivation , Child , Child, Preschool , Humans
20.
Eur Respir J ; 59(5)2022 05.
Article in English | MEDLINE | ID: mdl-34649976

ABSTRACT

BACKGROUND: A baseline computed tomography (CT) scan for lung cancer (LC) screening may reveal information indicating that certain LC screening participants can be screened less, and instead require dedicated early cardiac and respiratory clinical input. We aimed to develop and validate competing death (CD) risk models using CT information to identify participants with a low LC risk and a high CD risk. METHODS: Participant demographics and quantitative CT measures of LC, cardiovascular disease and chronic obstructive pulmonary disease were considered for deriving a logistic regression model for predicting 5-year CD risk using a sample from the National Lung Screening Trial (n=15 000). Multicentric Italian Lung Detection data were used to perform external validation (n=2287). RESULTS: Our final CD model outperformed an external pre-scan model (CD Risk Assessment Tool) in both the derivation (area under the curve (AUC) 0.744 (95% CI 0.727-0.761) and 0.677 (95% CI 0.658-0.695), respectively) and validation cohorts (AUC 0.744 (95% CI 0.652-0.835) and 0.725 (95% CI 0.633-0.816), respectively). By also taking LC incidence risk into consideration, we suggested a risk threshold where a subgroup (6258/23 096 (27%)) was identified with a number needed to screen to detect one LC of 216 (versus 23 in the remainder of the cohort) and ratio of 5.41 CDs per LC case (versus 0.88). The respective values in the validation cohort subgroup (774/2287 (34%)) were 129 (versus 29) and 1.67 (versus 0.43). CONCLUSIONS: Evaluating both LC and CD risks post-scan may improve the efficiency of LC screening and facilitate the initiation of multidisciplinary trajectories among certain participants.
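A minimal sketch of the two quantities at the heart of this abstract, a logistic competing-death (CD) risk model and the number needed to screen, is below. The coefficients are illustrative, not the published model:

```python
from math import exp

def logistic_risk(features, coefficients, intercept):
    """5-year competing-death risk from a fitted logistic regression model:
    risk = 1 / (1 + exp(-(intercept + sum(coef_i * feature_i)))).
    The coefficients here are placeholders, not the derived model's."""
    z = intercept + sum(c * f for c, f in zip(coefficients, features))
    return 1.0 / (1.0 + exp(-z))

def number_needed_to_screen(n_screened, n_cancers_detected):
    """Screens required to detect one lung cancer in a subgroup."""
    return n_screened / n_cancers_detected

# Neutral inputs give the base-rate 0.5 under zero intercept (illustrative)
risk = logistic_risk([0.0, 0.0], [0.5, 0.5], 0.0)
# The abstract's low-benefit subgroup: 6258 participants, NNS 216, which
# implies roughly 29 detected cancers in that subgroup.
nns = number_needed_to_screen(6258, 29)
```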


Subject(s)
Early Detection of Cancer , Lung Neoplasms , Early Detection of Cancer/methods , Humans , Lung , Lung Neoplasms/diagnosis , Mass Screening , Risk Assessment/methods , Tomography, X-Ray Computed/methods