Results 1 - 3 of 3
1.
ArXiv ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38699170

ABSTRACT

Importance: The efficacy of lung cancer screening can be significantly affected by the imaging modality used. This Virtual Lung Screening Trial (VLST) addresses the critical need for precision in lung cancer diagnostics and the potential to reduce unnecessary radiation exposure in clinical settings.
Objectives: To establish a virtual imaging trial (VIT) platform that accurately simulates real-world lung screening trials (LSTs) to assess the diagnostic accuracy of CT and CXR modalities.
Design, Setting, and Participants: Using computational models and machine learning algorithms, we created a diverse virtual patient population. The cohort, designed to mirror real-world demographics, was assessed with virtual imaging techniques that reflect historical imaging technologies.
Main Outcomes and Measures: The primary outcome was the difference in the area under the curve (AUC) between the CT and CXR modalities across lesion types and sizes.
Results: The study analyzed 298 CT and 313 CXR simulated images from 313 virtual patients, with a lesion-level AUC of 0.81 (95% CI: 0.78-0.84) for CT and 0.55 (95% CI: 0.53-0.56) for CXR. At the patient level, CT demonstrated an AUC of 0.85 (95% CI: 0.80-0.89), compared with 0.53 (95% CI: 0.47-0.60) for CXR. Subgroup analyses indicated CT's superior performance in detecting both homogeneous lesions (lesion-level AUC of 0.97) and heterogeneous lesions (lesion-level AUC of 0.71), as well as in identifying larger nodules (AUC of 0.98 for nodules > 8 mm).
Conclusions and Relevance: The VIT platform validated the superior diagnostic accuracy of CT over CXR, especially for smaller nodules, underscoring its potential to replicate real clinical imaging trials. These findings advocate for the integration of virtual trials in the evaluation and improvement of imaging-based diagnostic tools.
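
The primary endpoint above is an AUC with a 95% CI at both lesion and patient level. As a minimal sketch of how such a readout can be computed, assuming per-lesion binary labels and detection scores are available as NumPy arrays (the variable names and the percentile-bootstrap CI are illustrative assumptions, not details from the paper):

# Hypothetical sketch: lesion-level AUC with a bootstrap 95% CI for two
# modalities. y_true holds binary lesion labels; scores_ct / scores_cxr
# hold per-lesion detection scores from each simulated modality.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_with_ci(y_true, scores, n_boot=2000):
    """Point-estimate AUC plus a percentile bootstrap 95% CI."""
    auc = roc_auc_score(y_true, scores)
    boots = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample lesions with replacement
        if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes present
            continue
        boots.append(roc_auc_score(y_true[idx], scores[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)

# Synthetic data standing in for the virtual-trial readouts:
y_true = rng.integers(0, 2, 300)
scores_ct = y_true * 0.6 + rng.normal(0, 0.3, 300)
scores_cxr = y_true * 0.1 + rng.normal(0, 0.3, 300)
print("CT :", auc_with_ci(y_true, scores_ct))
print("CXR:", auc_with_ci(y_true, scores_cxr))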

2.
Inform Med Unlocked ; 30: 100945, 2022.
Article in English | MEDLINE | ID: mdl-35434261

ABSTRACT

Since the onset of the COVID-19 pandemic, several research studies have proposed Deep Learning (DL)-based automated COVID-19 detection, reporting high cross-validation accuracy when distinguishing COVID-19 patients from normal cases or other common pneumonia. Although the reported outcomes are very high in most cases, these results were obtained without an independent test set from a separate data source. When independent test sets are not used, DL models are likely to overfit the training data distribution and to learn dataset-specific artifacts rather than the actual disease characteristics and underlying pathology. This study assesses the promise of such DL methods and datasets by investigating the key challenges and issues, examining the compositions of the available public image datasets, and designing different experimental setups. A convolutional neural network, called CVR-Net (COVID-19 Recognition Network), is proposed for conducting comprehensive experiments to validate our hypothesis. The presented end-to-end CVR-Net is a multi-scale, multi-encoder ensemble model that aggregates the outputs from two different encoders and their different scales to obtain the final prediction probability. Three classification tasks (2-, 3-, and 4-class) are designed in which the train-test datasets come from single, multiple, and independent sources. The obtained binary classification accuracy is 99.8% for a single train-test data source, falling to 98.4% and 88.7% when multiple and independent train-test data sources, respectively, are used. Similar outcomes are observed in the multi-class categorization tasks for single, multiple, and independent data sources, highlighting the challenges of developing DL models with the existing public datasets when no independent test set from a separate dataset is available. These results indicate the need for better-designed datasets for developing DL tools applicable in actual clinical settings: a dataset should include an independent test set; if it comes from a single machine or hospital, it should have a more balanced set of images across all prediction classes; and it should be balanced across several hospitals and demographics. Our source code and model are publicly available to the research community for further improvements.
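
The abstract describes CVR-Net as a multi-scale, multi-encoder ensemble that aggregates the outputs of two encoders into a final prediction probability. The sketch below illustrates only the two-encoder probability-averaging idea in PyTorch; the layer shapes and the simple averaging rule are assumptions for illustration, not the authors' published architecture (their released code is the authoritative reference):

# Simplified sketch of the multi-encoder idea behind CVR-Net: two CNN
# encoders each produce class probabilities, and the final prediction
# averages them. The real model also fuses multiple scales per encoder;
# this minimal version only shows the ensemble aggregation step.
import torch
import torch.nn as nn

class TwoEncoderEnsemble(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        def make_encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, num_classes),
            )
        self.encoder_a = make_encoder()
        self.encoder_b = make_encoder()

    def forward(self, x):
        # Average the per-encoder class probabilities.
        p_a = torch.softmax(self.encoder_a(x), dim=1)
        p_b = torch.softmax(self.encoder_b(x), dim=1)
        return (p_a + p_b) / 2

model = TwoEncoderEnsemble(num_classes=3)
probs = model(torch.randn(4, 1, 224, 224))  # batch of 4 grayscale CXRs
print(probs.shape)  # torch.Size([4, 3]); each row sums to 1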

3.
Comput Biol Med ; 120: 103738, 2020 05.
Article in English | MEDLINE | ID: mdl-32421644

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic segmentation of skin lesions is considered a crucial step in Computer-aided Diagnosis (CAD) systems for melanoma detection. Despite its significance, skin lesion segmentation remains an unsolved challenge because of lesions' variability in color, texture, and shape and their indistinguishable boundaries.
METHODS: In this study, we present a new automatic semantic segmentation network for robust skin lesion segmentation, named Dermoscopic Skin Network (DSNet). To reduce the number of parameters and keep the network lightweight, we used depth-wise separable convolution in lieu of standard convolution to project the learned discriminating features onto the pixel space at different stages of the encoder. Additionally, we implemented both a U-Net and a Fully Convolutional Network (FCN8s) to compare against the proposed DSNet.
RESULTS: We evaluated the proposed model on two publicly available datasets, ISIC-2017 and PH2. The obtained mean Intersection over Union (mIoU) is 77.5% on ISIC-2017 and 87.0% on PH2, outperforming the ISIC-2017 challenge winner by 1.0% mIoU. The proposed network also outperformed U-Net and FCN8s by 3.6% and 6.8% mIoU, respectively, on the ISIC-2017 dataset.
CONCLUSION: Our network for skin lesion segmentation outperforms the other methods discussed in this article and provides better segmentation masks on two different test datasets, which can lead to better performance in melanoma detection. Our trained model, along with the source code and predicted masks, is publicly available.
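
DSNet's parameter savings come from replacing standard convolutions with depth-wise separable ones. Below is a minimal PyTorch sketch of that building block with an illustrative parameter-count comparison; the channel sizes are assumptions for the example, not taken from the paper:

# Minimal sketch of a depth-wise separable convolution, the building
# block DSNet uses in place of standard convolution to cut parameters:
# a per-channel (depthwise) 3x3 convolution followed by a 1x1 pointwise
# convolution that mixes channels.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)  # one 3x3 filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # channel mixing

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# Parameter comparison for a hypothetical 64 -> 128 channel stage:
sep = DepthwiseSeparableConv(64, 128)
std = nn.Conv2d(64, 128, kernel_size=3, padding=1)
print(n_params(sep), "vs", n_params(std))  # 8960 vs 73856 weights, ~8x fewer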


Subject(s)
Melanoma , Skin Diseases , Skin Neoplasms , Dermoscopy , Humans , Melanoma/diagnostic imaging , Neural Networks, Computer , Skin/diagnostic imaging , Skin Neoplasms/diagnostic imaging