Results 1 - 11 of 11
1.
Sci Rep ; 14(1): 13689, 2024 06 13.
Article in English | MEDLINE | ID: mdl-38871803

ABSTRACT

This study aims to correlate adaptive optics-transscleral flood illumination (AO-TFI) images of the retinal pigment epithelium (RPE) in central serous chorioretinopathy (CSCR) with standard clinical images and to compare cell morphological features with those of healthy eyes. After stitching 125 AO-TFI images acquired in CSCR eyes (including 6 active CSCR, 15 resolved CSCR, and 3 healthy contralateral eyes), 24 montages were correlated with blue-autofluorescence, infrared, and optical coherence tomography images. All 68 AO-TFI images acquired in pathological areas exhibited significant RPE contrast changes. Among the 52 areas that appeared healthy in clinical images, AO-TFI revealed a normal RPE mosaic in 62% of the images and an altered RPE pattern in 38%. Morphological features of the RPE cells were quantified in 54 AO-TFI images depicting clinically normal areas (from 12 CSCR eyes). Comparison with data from 149 AO-TFI images acquired in 33 healthy eyes revealed significantly increased morphological heterogeneity. In CSCR, AO-TFI not only enabled high-resolution imaging of outer retinal alterations but also revealed RPE abnormalities undetectable by all other imaging modalities. Further studies are required to estimate the prognostic value of these abnormalities. Imaging of the RPE using AO-TFI holds great promise for improving our understanding of CSCR pathogenesis.


Subject(s)
Central Serous Chorioretinopathy , Retinal Pigment Epithelium , Tomography, Optical Coherence , Humans , Retinal Pigment Epithelium/diagnostic imaging , Retinal Pigment Epithelium/pathology , Central Serous Chorioretinopathy/diagnostic imaging , Central Serous Chorioretinopathy/pathology , Male , Female , Middle Aged , Tomography, Optical Coherence/methods , Adult , Fluorescein Angiography/methods , Optical Imaging/methods , Sclera/diagnostic imaging , Sclera/pathology
2.
J Biomed Inform ; 150: 104583, 2024 02.
Article in English | MEDLINE | ID: mdl-38191010

ABSTRACT

OBJECTIVE: The primary objective of our study is to address the challenge of confidentially sharing medical images across different centers. This is often a critical necessity in both clinical and research environments, yet restrictions typically exist due to privacy concerns. Our aim is to design a privacy-preserving data-sharing mechanism that allows medical images to be stored as encoded and obfuscated representations in the public domain without revealing any useful or recoverable content from the images. In tandem, we aim to provide authorized users with compact private keys that can be used to reconstruct the corresponding images. METHOD: Our approach utilizes a neural auto-encoder. The convolutional filter outputs are passed through sparsifying transformations to produce multiple compact codes. Each code is responsible for reconstructing different attributes of the image. The key privacy-preserving element in this process is obfuscation through the use of specific pseudo-random noise. When applied to the codes, it becomes computationally infeasible for an attacker to guess the correct representation for all the codes, thereby preserving the privacy of the images. RESULTS: The proposed framework was implemented and evaluated using chest X-ray images for different medical image analysis tasks, including classification, segmentation, and texture analysis. Additionally, we thoroughly assessed the robustness of our method against various attacks using both supervised and unsupervised algorithms. CONCLUSION: This study provides a novel, optimized, and privacy-assured data-sharing mechanism for medical images, enabling multi-party sharing in a secure manner. While we have demonstrated its effectiveness with chest X-ray images, the mechanism can be utilized in other medical imaging modalities as well.
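The keyed-noise obfuscation idea described above can be illustrated with a minimal sketch. This is not the paper's actual encoder: the function names, the additive-noise form, and the use of a seeded generator as the "private key" are simplifying assumptions for illustration only.

```python
import random

def keyed_noise(seed, n, scale=10.0):
    # Deterministic pseudo-random noise stream derived from a private key (seed).
    rng = random.Random(seed)
    return [rng.uniform(-scale, scale) for _ in range(n)]

def obfuscate(code, key):
    # Public representation: the compact code plus keyed noise.
    # Without the key, guessing the correct code is computationally infeasible.
    noise = keyed_noise(key, len(code))
    return [c + n for c, n in zip(code, noise)]

def recover(obfuscated, key):
    # An authorized holder of the key regenerates and subtracts the same noise.
    noise = keyed_noise(key, len(obfuscated))
    return [o - n for o, n in zip(obfuscated, noise)]
```

In the paper's setting the codes come from a neural auto-encoder; here any list of floats stands in for a code.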


Subject(s)
Algorithms , Privacy , Information Dissemination
3.
Comput Methods Programs Biomed ; 240: 107706, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37506602

ABSTRACT

BACKGROUND AND OBJECTIVE: Generalizable and trustworthy deep learning models for PET/CT image segmentation necessitate large, diverse, multi-institutional datasets. However, legal, ethical, and patient privacy issues challenge the sharing of datasets between different centers. To overcome these challenges, we developed a federated learning (FL) framework for multi-institutional PET/CT image segmentation. METHODS: A dataset consisting of 328 head and neck (HN) cancer patients who underwent clinical PET/CT examinations, gathered from six different centers, was enrolled. A pure transformer network was implemented as the core segmentation algorithm using dual-channel PET/CT images. We evaluated different frameworks (single center-based, centralized baseline, and seven different FL algorithms) using 68 PET/CT images (20% of each center's data). The implemented FL algorithms include clipping with the quantile estimator (ClQu), zeroing with the quantile estimator (ZeQu), federated averaging (FedAvg), lossy compression (LoCo), robust aggregation (RoAg), secure aggregation (SeAg), and Gaussian differentially private FedAvg with adaptive quantile clipping (GDP-AQuCl). RESULTS: The Dice coefficient was 0.80±0.11 for both the centralized and SeAg FL algorithms. All FL approaches matched the performance of the centralized learning model, with no statistically significant differences. Among the FL algorithms, SeAg and GDP-AQuCl performed better than the other techniques, although the differences were not statistically significant. All algorithms except the center-based approach resulted in relative errors below 5% for SUVmax and SUVmean. The centralized and FL algorithms significantly outperformed the single center-based baseline. CONCLUSIONS: The developed FL algorithms, which matched the performance of the centralized method, exhibited promising performance for HN tumor segmentation from PET/CT images.
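Of the aggregation schemes evaluated above, federated averaging (FedAvg) is the simplest: each round, the server averages the clients' model parameters weighted by their local dataset sizes. A minimal sketch follows; the flat-list parameter representation is an assumption for illustration (real models aggregate per-layer tensors).

```python
def fed_avg(client_weights, client_sizes):
    # FedAvg aggregation: weighted average of client model parameters,
    # where each client's contribution is proportional to its dataset size.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_w = [0.0] * n_params
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_w[i] += w * n / total
    return global_w
```

The privacy-oriented variants above (clipping, zeroing, secure aggregation, differential privacy) modify what each client sends or how the server combines it, but keep this weighted-average core.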


Subject(s)
Deep Learning , Neoplasms , Humans , Algorithms , Image Processing, Computer-Assisted/methods , Neoplasms/diagnostic imaging , Positron Emission Tomography Computed Tomography/methods
4.
Patterns (N Y) ; 4(3): 100689, 2023 Mar 10.
Article in English | MEDLINE | ID: mdl-36960445

ABSTRACT

The success rate of clinical trials (CTs) is low, with the protocol design itself considered a major risk factor. We aimed to investigate the use of deep learning methods to predict the risk of CTs based on their protocols. Considering protocol changes and their final status, a retrospective risk assignment method was proposed to label CTs according to low, medium, and high risk levels. Then, transformer and graph neural networks were designed and combined in an ensemble model to learn to infer the ternary risk categories. The ensemble model achieved robust performance (area under the receiver operating characteristic curve [AUROC] of 0.8453 [95% confidence interval: 0.8409-0.8495]), similar to the individual architectures but significantly outperforming a baseline based on bag-of-words features (AUROC 0.7548 [0.7493-0.7603]). We demonstrate the potential of deep learning in predicting the risk of CTs from their protocols, paving the way for customized risk mitigation strategies during protocol design.

5.
Eur J Nucl Med Mol Imaging ; 50(4): 1034-1050, 2023 03.
Article in English | MEDLINE | ID: mdl-36508026

ABSTRACT

PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting, without direct sharing of data, using federated learning (FL) for AC/SC of PET images. METHODS: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset came from 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with a baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as with center-based (CB) models, where a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 from each center).
RESULTS: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while the FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between the different strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between CZ and FL-based methods with respect to the reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved performance superior to the center-based models and comparable to the centralized model. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
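The SUV conversion and the ARE% metric used above can be written out directly. A sketch with conventional units (Bq/mL tissue concentration, Bq injected dose, grams body weight); the exact normalization used in the study is not stated here, so treat this as the standard textbook form rather than the authors' implementation.

```python
def suv(activity_bq_ml, injected_dose_bq, body_weight_g):
    # Standardized uptake value: tissue activity concentration
    # normalized by injected dose per unit body mass.
    return activity_bq_ml / (injected_dose_bq / body_weight_g)

def are_percent(predicted, reference):
    # Percent absolute relative error, e.g. predicted AC/SC SUV vs CT-ASC SUV.
    return abs(predicted - reference) / abs(reference) * 100.0
```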


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography/methods , Magnetic Resonance Imaging/methods
6.
Clin Nucl Med ; 47(7): 606-617, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35442222

ABSTRACT

PURPOSE: The generalizability and trustworthiness of deep learning (DL)-based algorithms depend on the size and heterogeneity of training datasets. However, because of patient privacy concerns and ethical and legal issues, sharing medical images between different centers is restricted. Our objective was to build a federated DL-based framework for PET image segmentation utilizing a multicentric dataset and to compare its performance with the centralized DL approach. METHODS: PET images from 405 head and neck cancer patients from 9 different centers formed the basis of this study. All tumors were segmented manually. PET images converted to SUV maps were resampled to isotropic voxels (3 × 3 × 3 mm3) and then normalized. PET image subvolumes (12 × 12 × 12 cm3) consisting of whole tumors and background were analyzed. Data from each center were divided into train/validation (80% of patients) and test (20% of patients) sets. The modified R2U-Net was used as the core DL model. A parallel federated DL model was developed and compared with the centralized approach, where the datasets were pooled on one server. Segmentation metrics, including the Dice similarity and Jaccard coefficients, and percent relative errors (RE%) of SUVpeak, SUVmean, SUVmedian, SUVmax, metabolic tumor volume, and total lesion glycolysis were computed and compared with manual delineations. RESULTS: The performance of the centralized versus federated DL methods was nearly identical for the segmentation metrics: Dice (0.84 ± 0.06 vs 0.84 ± 0.05) and Jaccard (0.73 ± 0.08 vs 0.73 ± 0.07). For the quantitative PET parameters, we obtained comparable RE% for SUVmean (6.43% ± 4.72% vs 6.61% ± 5.42%), metabolic tumor volume (12.2% ± 16.2% vs 12.1% ± 15.89%), and total lesion glycolysis (6.93% ± 9.6% vs 7.07% ± 9.85%), and negligible RE% for SUVmax and SUVpeak. No significant differences in performance (P > 0.05) between the 2 frameworks (centralized vs federated) were observed.
CONCLUSION: The developed federated DL model achieved quantitative performance comparable to the centralized DL model. Federated DL models could provide robust and generalizable segmentation while addressing patient privacy and legal and ethical issues in clinical data sharing.
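The Dice and Jaccard overlap metrics reported above are straightforward to compute from binary segmentation masks; a minimal sketch over flattened 0/1 masks:

```python
def dice_jaccard(pred, truth):
    # pred, truth: binary masks (0/1) flattened to equal-length sequences.
    intersection = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    # Dice = 2|A ∩ B| / (|A| + |B|); Jaccard = |A ∩ B| / |A ∪ B|.
    dice = 2 * intersection / (p_sum + t_sum)
    jaccard = intersection / (p_sum + t_sum - intersection)
    return dice, jaccard
```

The two are monotonically related (J = D / (2 - D)), which is why they rank segmentations identically, as in the near-identical scores reported above.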


Subject(s)
Deep Learning , Head and Neck Neoplasms , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography
7.
J Digit Imaging ; 35(3): 469-481, 2022 06.
Article in English | MEDLINE | ID: mdl-35137305

ABSTRACT

A small dataset commonly affects the generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we propose an analytical method for producing a large, realistic, and diverse dataset. Clinical brain PET/CT/MR images of 35 patients were included: full-dose (FD) PET, low-dose (LD) PET corresponding to only 5% of the events acquired in the FD scan, non-attenuation-corrected (NAC) and CT-based measured attenuation-corrected (MAC) PET images, CT images, and T1 and T2 MR sequences. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to produce natural-looking composites using frequency-domain information from images of two separate patients, together with a blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, including LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without the synthesized images. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis, was performed. The quantitative comparison between the small registered dataset of 35 patients and the large dataset of 350 synthesized plus 35 real images demonstrated improvements in RMSE and SSIM of 29% and 8% for the LD to FD task, 40% and 7% for LD+MR to FD, 16% and 8% for NAC to MAC, and 24% and 11% for MRI to CT, respectively.
The qualitative and quantitative analyses demonstrated that the proposed method improved the performance of all four DNN models, producing images of higher quality with lower quantitative bias and variance relative to the reference images.
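Laplacian (pyramid) blending itself is a classical technique: each image is decomposed into band-pass detail levels, the levels are mixed under a progressively smoothed mask, and the result is recomposed. A minimal sketch follows, using 2×2 average pooling in place of proper Gaussian filtering (a deliberate simplification; real implementations blur before subsampling, and image dimensions here must be divisible by 2^levels):

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling: a crude stand-in for Gaussian blur + subsample.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour 2x upsampling.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_blend(a, b, mask, levels):
    # Blend images a and b under mask by combining band-pass (Laplacian)
    # details at each pyramid level; the mask is smoothed implicitly as
    # it is downsampled alongside the images.
    if levels == 0:
        return mask * a + (1 - mask) * b
    a_small, b_small, m_small = downsample(a), downsample(b), downsample(mask)
    # Laplacian detail = image minus its upsampled coarse approximation.
    detail_a = a - upsample(a_small)
    detail_b = b - upsample(b_small)
    blended_detail = mask * detail_a + (1 - mask) * detail_b
    coarse = laplacian_blend(a_small, b_small, m_small, levels - 1)
    return upsample(coarse) + blended_detail
```

Blending the details per band, rather than the pixels directly, is what avoids visible seams at the mask boundary.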


Subject(s)
Deep Learning , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neuroimaging/methods , Positron Emission Tomography Computed Tomography
8.
Neuroimage ; 245: 118697, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34742941

ABSTRACT

PURPOSE: Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patient comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS: Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of the FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images, to compare the performance of the DNN in sinogram space (SS) versus image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, and statistical analysis for 83 brain regions. RESULTS: SSIM and PSNR values of 0.97 ± 0.01 and 33.70 ± 0.32 were obtained for IS, and 0.98 ± 0.01 and 39.36 ± 0.21 for SS, compared to 0.86 ± 0.02 and 31.12 ± 0.22 for the reference LD images. The absolute average SUV bias was 0.96 ± 0.95% for SS and 1.40 ± 0.72% for IS. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS, compared to IS (R2 = 0.97, MSE = 0.028). The Bland-Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by the SS images.
The voxel-wise t-test analysis revealed voxels with statistically significantly lower values in the LD, IS, and SS images compared to the FD images. CONCLUSION: The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach exhibited higher image quality and lower bias than images predicted from LD images.
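PSNR, one of the image-quality metrics reported above, has a compact definition: 10·log10 of the squared peak intensity over the mean squared error. A minimal sketch for flattened images on a known intensity range (the `data_range` default is an assumption; it should match the images' actual dynamic range):

```python
import math

def psnr(reference, test, data_range=1.0):
    # Peak signal-to-noise ratio in dB between two equal-length
    # flattened images; higher is better, identical images give infinity.
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(data_range ** 2 / mse)
```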


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Neurodegenerative Diseases/diagnostic imaging , Neuroimaging/methods , Positron Emission Tomography Computed Tomography , Aged , Databases, Factual , Female , Humans , Male , Signal-To-Noise Ratio
9.
Front Digit Health ; 3: 745674, 2021.
Article in English | MEDLINE | ID: mdl-34796360

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic revealed the urgent need to accelerate vaccine development worldwide. Rapid vaccine development poses numerous risks for each category of vaccine technology. Using the Risklick artificial intelligence (AI), we estimated the risks associated with all types of COVID-19 vaccines during the early phase of vaccine development. We then performed a postmortem analysis of the probability and impact matrix calculations by comparing the 2020 prognosis to the contemporary situation. We used the Risklick AI to evaluate the risks, and their incidence, associated with vaccine development in the early stage of the COVID-19 pandemic. Our analysis revealed the diversity of risks among the vaccine technologies currently used by pharmaceutical companies providing vaccines. This analysis highlighted current and potential future pitfalls connected to vaccine production during the COVID-19 pandemic. Hence, the Risklick AI appears to be an essential tool in vaccine development for formally anticipating risks and increasing overall performance from the production to the distribution of the vaccines. The Risklick AI could therefore be extended to other fields of research and development and represents a novel opportunity for the calculation of production-associated risks.

10.
J Med Internet Res ; 23(9): e30161, 2021 09 17.
Article in English | MEDLINE | ID: mdl-34375298

ABSTRACT

BACKGROUND: The COVID-19 global health crisis has led to an exponential surge in published scientific literature. In an attempt to tackle the pandemic, extremely large COVID-19-related corpora are being created, sometimes containing inaccurate information, at a scale no longer amenable to human analysis. OBJECTIVE: In the context of searching for scientific evidence in the deluge of COVID-19-related literature, we present an information retrieval methodology for effective identification of relevant sources to answer biomedical queries posed using natural language. METHODS: Our multistage retrieval methodology combines probabilistic weighting models and reranking algorithms based on deep neural architectures to boost the ranking of relevant documents. The similarity between COVID-19 queries and documents is computed, and a series of postprocessing methods is applied to the initial ranking list to improve the match between the query and the biomedical information source and to boost the position of relevant documents. RESULTS: The methodology was evaluated in the context of the TREC-COVID challenge, achieving results competitive with the top-ranking teams participating in the competition. In particular, the combination of bag-of-words and deep neural language models significantly outperformed an Okapi Best Match 25 (BM25)-based baseline, retrieving, on average, 83% of the relevant documents in the top 20. CONCLUSIONS: These results indicate that multistage retrieval supported by deep learning could enhance the identification of literature for COVID-19-related questions posed using natural language.
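The Okapi BM25 baseline mentioned above scores each document by saturated term frequency weighted by inverse document frequency, normalized by document length. A minimal sketch over pre-tokenized documents (the k1 and b defaults are the conventional values, not necessarily those used in the study):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    # docs: list of token lists. Returns one BM25 score per document.
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency: in how many documents each term appears.
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term-frequency saturation with document-length normalization.
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores
```

In the multistage pipeline described above, a ranker like this produces the initial candidate list that the neural models then rerank.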


Subject(s)
COVID-19 , Algorithms , Humans , Information Storage and Retrieval , Language , SARS-CoV-2
11.
Pharmacology ; 106(5-6): 244-253, 2021.
Article in English | MEDLINE | ID: mdl-33910199

ABSTRACT

INTRODUCTION: The SARS-CoV-2 pandemic has led to one of the most critical and boundless waves of publications in the history of modern science. The necessity to find and pursue relevant information, and to quantify its quality, is broadly acknowledged. Modern information retrieval techniques combined with artificial intelligence (AI) appear as one of the key strategies for COVID-19 living evidence management. Nevertheless, most AI projects that retrieve COVID-19 literature still require manual tasks. METHODS: In this context, we present a novel, automated search platform, called Risklick AI, which aims to automatically gather COVID-19 scientific evidence and to enable scientists, policy makers, and healthcare professionals to find the most relevant information tailored to their question of interest in real time. RESULTS: Here, we compare the capacity of Risklick AI to find COVID-19-related clinical trials and scientific publications with that of clinicaltrials.gov and PubMed in the field of pharmacology and clinical intervention. DISCUSSION: The results demonstrate that Risklick AI finds COVID-19 references more effectively, in terms of both precision and recall, than the baseline platforms. Hence, Risklick AI could become a useful assistant to scientists fighting the COVID-19 pandemic.


Subject(s)
Artificial Intelligence/trends , COVID-19/therapy , Data Interpretation, Statistical , Drug Development/trends , Evidence-Based Medicine/trends , Pharmacology/trends , Artificial Intelligence/statistics & numerical data , COVID-19/diagnosis , COVID-19/epidemiology , Clinical Trials as Topic/statistics & numerical data , Drug Development/statistics & numerical data , Evidence-Based Medicine/statistics & numerical data , Humans , Pharmacology/statistics & numerical data , Registries