Results 1 - 20 of 46
1.
Childs Nerv Syst ; 2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38642113

ABSTRACT

BACKGROUND: Texture analysis extracts many quantitative image features, offering a valuable, cost-effective, and non-invasive approach to individualized medicine. Furthermore, multimodal machine learning could have a large impact on precision medicine, as texture biomarkers can reflect tissue microstructure. This study aims to investigate imaging-based biomarkers of radio-induced neurotoxicity in pediatric patients with metastatic medulloblastoma, using radiomic and dosiomic analysis. METHODS: This single-center study retrospectively enrolled children diagnosed with metastatic medulloblastoma (MB) and treated with hyperfractionated craniospinal irradiation (CSI). Histological confirmation of medulloblastoma and baseline follow-up magnetic resonance imaging (MRI) were mandatory. Treatment involved helical tomotherapy (HT) delivering a dose of 39 Gray (Gy) to the brain and spinal axis and a posterior fossa boost up to 60 Gy. Clinical outcomes, such as local and distant brain control and neurotoxicity, were recorded. Radiomic and dosiomic features were extracted from tumor regions on T1, T2, and FLAIR (fluid-attenuated inversion recovery) MRI maps and on the radiotherapy dose distribution. Different machine learning feature selection and reduction approaches were performed for supervised and unsupervised clustering. RESULTS: Forty-eight metastatic medulloblastoma patients (29 males and 19 females) with a mean age of 12 ± 6 years were enrolled. For each patient, 332 features were extracted. A greater level of abstraction of the input data, obtained by combining selection of the best-performing features with dimensionality reduction, returned the best performance. The resulting one-component radiomic signature yielded an accuracy of 0.73, with sensitivity, specificity, and precision of 0.83, 0.64, and 0.68, respectively. CONCLUSIONS: The machine learning radiomic-dosiomic approach effectively stratified pediatric medulloblastoma patients who experienced radio-induced neurotoxicity. The strategy needs further validation on an external dataset before its potential clinical use in ab initio management paradigms of medulloblastoma.
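As an illustration of the kind of pipeline summarized above, the sketch below selects the most informative features, reduces them to a one-component signature, and evaluates a classifier with accuracy, sensitivity, specificity, and precision. The selector, reducer, classifier, and all data are illustrative assumptions; the abstract does not specify the authors' exact methods.

```python
# Minimal sketch (not the authors' exact pipeline): select informative radiomic/dosiomic
# features, reduce them to a single component, and evaluate a neurotoxicity classifier.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 332))          # 48 patients x 332 features (placeholder data)
y = rng.integers(0, 2, size=48)         # 1 = radio-induced neurotoxicity (placeholder labels)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),   # keep the best-performing features
    ("reduce", PCA(n_components=1)),            # one-component signature
    ("clf", LogisticRegression(max_iter=1000)),
])

y_pred = cross_val_predict(pipe, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("accuracy   ", accuracy_score(y, y_pred))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("precision  ", precision_score(y, y_pred, zero_division=0))
```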

2.
Biomed Phys Eng Express ; 10(4)2024 May 07.
Article in English | MEDLINE | ID: mdl-38653209

ABSTRACT

Objective. Radiomics is a promising analysis tool that extracts quantitative information from medical images. However, the extracted radiomic features are highly sensitive to variations in the image acquisition and reconstruction parameters used. This limited robustness hinders the generalizable validity of radiomics-assisted models. Our aim is to investigate a possible harmonization strategy based on matching image quality to improve feature robustness. Approach. We acquired CT scans of a phantom with two scanners across different dose levels and percentages of iterative reconstruction. The detectability index was used as a comprehensive task-based image quality metric. A statistical analysis based on the Intraclass Correlation Coefficient (ICC) was performed to determine whether matching image quality/appearance could enhance the robustness of radiomic features extracted from the phantom images. Additionally, an Artificial Neural Network was trained on these features to automatically classify the scanner used for image acquisition. Main results. We found that the ICC of the features across protocols providing a similar detectability index improves with respect to the ICC across protocols providing different detectability indices. This improvement was particularly noticeable in features relevant for distinguishing between scanners. Significance. This preliminary study demonstrates that harmonization based on image quality/appearance matching could improve the robustness of radiomic features and that heterogeneous protocols can be used to obtain a similar image appearance in terms of the detectability index. Thus, protocols with a lower dose level could be selected to reduce the radiation dose delivered to the patient while still obtaining a more robust quantitative analysis.
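A minimal sketch of the robustness metric named above: a two-way ICC computed for one feature measured on the same phantom regions under several protocols. The data layout, toy numbers, and the specific ICC(3,1) form are assumptions for illustration, not the study's code.

```python
# Intraclass correlation of one radiomic feature across CT protocols.
# Rows = targets (e.g. phantom ROIs), columns = protocols/scanners.
import numpy as np

def icc_two_way(values):
    """ICC(3,1): two-way mixed model, single measurement, consistency."""
    values = np.asarray(values, dtype=float)
    n, k = values.shape
    grand = values.mean()
    row_means = values.mean(axis=1)
    col_means = values.mean(axis=0)
    ss_total = ((values - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()      # between-target variance
    ss_cols = n * ((col_means - grand) ** 2).sum()      # between-protocol variance
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# One feature, 8 phantom ROIs, 3 protocols matched in detectability index (toy numbers).
feature = np.array([[1.02, 1.05, 0.99],
                    [2.10, 2.14, 2.08],
                    [0.55, 0.57, 0.54],
                    [3.30, 3.28, 3.35],
                    [1.80, 1.83, 1.78],
                    [2.60, 2.55, 2.62],
                    [0.95, 0.97, 0.93],
                    [1.45, 1.47, 1.44]])
print("ICC(3,1) =", round(icc_two_way(feature), 3))   # values close to 1 indicate robustness
```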


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Neural Networks, Computer , Phantoms, Imaging , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Radiomics
3.
Phys Med ; 120: 103329, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38492331

ABSTRACT

GOAL: In-beam Positron Emission Tomography (PET) is a technique for in-vivo, non-invasive treatment monitoring in proton therapy. Various analysis methods exist for detecting anatomical changes in patients with PET, but their clinical interpretation is problematic. The goal of this work is to investigate whether the gamma-index analysis, widely used for dose comparisons, is an appropriate tool for comparing in-beam PET distributions. Focusing on a head-and-neck patient, we investigate whether the gamma-index map and the passing rate are sensitive to progressive anatomical changes. METHODS/MATERIALS: We simulated a treatment course of a proton therapy patient using FLUKA Monte Carlo simulations. Gradual emptying of the sinonasal cavity was modeled through a series of artificially modified CT scans. The in-beam PET activity distributions from three fields were evaluated, simulating a planar dual-head geometry. We applied the 3D gamma evaluation method to compare the PET images with a reference image without changes. Various tolerance criteria and parameters were tested, and the results were compared with the CT scans. RESULTS: Based on 210 MC simulations, we identified appropriate parameters for the gamma-index analysis. Tolerance values of 3 mm/3% and 2 mm/2% were well suited to the comparison of simulated in-beam PET distributions. The gamma passing rate decreased with increasing volume change for all fields. CONCLUSION: The gamma-index analysis was found to be a useful tool for comparing simulated in-beam PET images, sensitive to sinonasal cavity emptying. Monitoring the gamma passing rate over the treatment course is useful for detecting anatomical changes as they occur.
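To make the comparison concrete, here is a deliberately simplified 1D gamma-index sketch applied to activity profiles; the study works with full 3D distributions, and the profiles, shift, and global tolerance below are toy assumptions.

```python
# Simplified 1D gamma-index comparison of two activity profiles. Criteria such as
# 3 mm / 3% combine a distance-to-agreement and an intensity tolerance.
import numpy as np

def gamma_index_1d(x, ref, ev, dta_mm=3.0, tol_percent=3.0):
    """Per-point gamma of the evaluated profile `ev` against the reference `ref`,
    both sampled at positions `x` (mm). Global intensity tolerance."""
    tol = tol_percent / 100.0 * ref.max()
    gammas = np.empty_like(ref)
    for i, (xi, ri) in enumerate(zip(x, ref)):
        dist2 = ((x - xi) / dta_mm) ** 2     # spatial term against every evaluated point
        diff2 = ((ev - ri) / tol) ** 2       # intensity term against every evaluated point
        gammas[i] = np.sqrt(np.min(dist2 + diff2))
    return gammas

x = np.linspace(0, 100, 201)                      # positions in mm
ref = np.exp(-((x - 50) / 15) ** 2)               # reference activity profile (toy)
ev = np.exp(-((x - 52) / 15) ** 2)                # "later fraction" with a 2 mm shift (toy)
g = gamma_index_1d(x, ref, ev, dta_mm=3.0, tol_percent=3.0)
print("gamma passing rate:", np.mean(g <= 1.0))   # drops as anatomical changes grow
```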


Subject(s)
Proton Therapy , Humans , Proton Therapy/methods , Monte Carlo Method , Positron-Emission Tomography/methods , Tomography, X-Ray Computed/methods , Computer Simulation , Etoposide , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods
4.
Phys Med Biol ; 69(6)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38373343

ABSTRACT

Objective. This study addresses a fundamental limitation of in-beam positron emission tomography (IB-PET) in proton therapy: the lack of direct anatomical representation in the images it produces. We aim to overcome this shortcoming by pioneering the application of deep learning techniques to create synthetic control CT images (sCT) by combining IB-PET and planning CT scan data. Approach. We conducted simulations involving six patients who underwent irradiation with proton beams. Leveraging the architecture of a visual transformer (ViT) neural network, we developed a model to generate sCT images of these patients using the planning CT scans and the inter-fractional simulated PET activity maps during irradiation. To evaluate the model's performance, a comparison was conducted between the sCT images produced by the ViT model and the authentic control CT images, which served as the benchmark. Main results. The structural similarity index had a mean value across all patients of 0.91, while the mean absolute error measured 22 Hounsfield Units (HU). Root mean squared error and peak signal-to-noise ratio values were 56 HU and 30 dB, respectively. The Dice similarity coefficient was 0.98. These values are comparable to or exceed those found in the literature. More than 70% of the synthetic morphological changes were found to be geometrically compatible with those reported in the real control CT scan. Significance. Our study presents an innovative approach to recovering the anatomical information hidden in IB-PET for proton therapy. Our ViT-based model successfully generates sCT images from inter-fractional PET data and planning CT scans. Our model's performance stands on par with existing models relying on input from cone-beam CT or magnetic resonance imaging, which contain more anatomical information than activity maps.
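A minimal sketch of how the quoted similarity metrics (SSIM, MAE, RMSE, PSNR, Dice) can be computed between a synthetic and a real control CT; the toy slices and masks below are placeholders, not the study's data or evaluation code.

```python
# Image-similarity metrics between a synthetic CT and the real control CT.
# Arrays are toy 2D slices in Hounsfield Units; masks are assumed binary.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(1)
real_ct = rng.normal(0, 300, size=(128, 128))             # placeholder control CT slice (HU)
synth_ct = real_ct + rng.normal(0, 25, size=(128, 128))   # placeholder ViT output (HU)
data_range = real_ct.max() - real_ct.min()

ssim = structural_similarity(real_ct, synth_ct, data_range=data_range)
mae = np.mean(np.abs(real_ct - synth_ct))
rmse = np.sqrt(np.mean((real_ct - synth_ct) ** 2))
psnr = peak_signal_noise_ratio(real_ct, synth_ct, data_range=data_range)

# Dice similarity coefficient on binary masks (e.g. body or structure contours).
mask_real = real_ct > 0
mask_synth = synth_ct > 0
dice = 2 * np.logical_and(mask_real, mask_synth).sum() / (mask_real.sum() + mask_synth.sum())

print(f"SSIM {ssim:.2f}  MAE {mae:.1f} HU  RMSE {rmse:.1f} HU  PSNR {psnr:.1f} dB  DSC {dice:.2f}")
```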


Subject(s)
Image Processing, Computer-Assisted , Proton Therapy , Humans , Image Processing, Computer-Assisted/methods , Proton Therapy/methods , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Positron-Emission Tomography , Radiotherapy Planning, Computer-Assisted/methods
5.
Brain Inform ; 11(1): 2, 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38194126

ABSTRACT

BACKGROUND: The integration of the information encoded in multiparametric MRI images can enhance the performance of machine-learning classifiers. In this study, we investigate whether the combination of structural and functional MRI might improve the performance of a deep learning (DL) model trained to discriminate subjects with Autism Spectrum Disorders (ASD) from typically developing controls (TD). MATERIAL AND METHODS: We analyzed both structural and functional MRI brain scans publicly available within the ABIDE I and II data collections. We considered 1383 male subjects aged between 5 and 40 years, including 680 subjects with ASD and 703 TD from 35 different acquisition sites. We extracted morphometric and functional brain features from the MRI scans with the FreeSurfer and CPAC analysis packages, respectively. Then, due to the multisite nature of the dataset, we implemented a data harmonization protocol. The ASD vs. TD classification was carried out with a multiple-input DL model, consisting of a neural network which generates a fixed-length feature representation of the data of each modality (FR-NN) and a dense neural network for classification (C-NN). Specifically, we implemented a joint fusion approach to multiple-source data integration. The main advantage of the latter is that the loss is propagated back to the FR-NN during training, thus creating informative feature representations for each data modality. Then, the C-NN, with a number of layers and neurons per layer to be optimized during model training, performs the ASD-TD discrimination. The performance was evaluated by computing the Area under the Receiver Operating Characteristic curve within a nested 10-fold cross-validation. The brain features that drive the DL classification were identified by the SHAP explainability framework. RESULTS: AUC values of 0.66±0.05 and 0.76±0.04 were obtained in the ASD vs. TD discrimination when only structural or only functional features were considered, respectively. The joint fusion approach led to an AUC of 0.78±0.04. The set of structural and functional connectivity features identified as the most important for the two-class discrimination supports the idea that brain changes tend to occur in individuals with ASD in regions belonging to the Default Mode Network and to the Social Brain. CONCLUSIONS: Our results demonstrate that the multimodal joint fusion approach outperforms the classification results obtained with data acquired by a single MRI modality, as it efficiently exploits the complementarity of structural and functional brain information.
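A minimal PyTorch sketch of the joint-fusion idea described above: one feature-representation branch per modality, concatenation, and a dense classifier trained end to end so the loss updates both branches. Layer sizes, feature counts, and the toy batch are illustrative assumptions, not the authors' optimized configuration.

```python
# Joint-fusion architecture: per-modality FR networks feeding a dense classifier.
import torch
import torch.nn as nn

class JointFusionNet(nn.Module):
    def __init__(self, n_struct, n_func, repr_dim=64):
        super().__init__()
        self.fr_struct = nn.Sequential(nn.Linear(n_struct, 128), nn.ReLU(),
                                       nn.Linear(128, repr_dim), nn.ReLU())
        self.fr_func = nn.Sequential(nn.Linear(n_func, 256), nn.ReLU(),
                                     nn.Linear(256, repr_dim), nn.ReLU())
        self.classifier = nn.Sequential(nn.Linear(2 * repr_dim, 32), nn.ReLU(),
                                        nn.Linear(32, 1))   # logit for ASD vs. TD

    def forward(self, x_struct, x_func):
        z = torch.cat([self.fr_struct(x_struct), self.fr_func(x_func)], dim=1)
        return self.classifier(z)

model = JointFusionNet(n_struct=200, n_func=500)
x_s, x_f = torch.randn(8, 200), torch.randn(8, 500)   # a toy batch of 8 subjects
y = torch.randint(0, 2, (8, 1)).float()
loss = nn.BCEWithLogitsLoss()(model(x_s, x_f), y)
loss.backward()                                        # gradients reach both FR branches
print("toy loss:", float(loss))
```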

6.
Brain Inform ; 10(1): 32, 2023 Nov 25.
Article in English | MEDLINE | ID: mdl-38006422

ABSTRACT

Machine Learning (ML) is nowadays an essential tool in the analysis of Magnetic Resonance Imaging (MRI) data, in particular in the identification of brain correlates in neurological and neurodevelopmental disorders. ML requires datasets of appropriate size for training, which in neuroimaging are typically obtained by collecting data from multiple acquisition centers. However, analyzing large multicentric datasets can introduce bias due to differences between acquisition centers. ComBat harmonization is commonly used to address batch effects, but it can lead to data leakage when the entire dataset is used to estimate the model parameters. In this study, structural and functional MRI data from the Autism Brain Imaging Data Exchange (ABIDE) collection were used to classify subjects with Autism Spectrum Disorders (ASD) compared to Typically Developing controls (TD). We compared the classical approach (external harmonization), in which harmonization is performed before the train/test split, with a harmonization calculated only on the train set (internal harmonization), and with the dataset with no harmonization. The results showed that harmonization using the whole dataset achieved higher discrimination performance, while non-harmonized data and harmonization using only the train set showed similar results, for both structural and connectivity features. We also showed that the higher performance of external harmonization is not due to the larger sample size available for estimating the harmonization model; hence, the improved performance with the entire dataset may be ascribed to data leakage. To prevent this leakage, it is recommended to define the harmonization model solely using the train set.
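The leakage-safe pattern recommended above can be sketched as follows, with a deliberately simplified per-site location/scale alignment standing in for ComBat (which additionally uses covariates and empirical Bayes shrinkage); the data, site labels, and the SimpleSiteHarmonizer class are illustrative assumptions.

```python
# Internal harmonization: estimate site parameters on the train set only.
import numpy as np
from sklearn.model_selection import train_test_split

class SimpleSiteHarmonizer:
    def fit(self, X, sites):
        # Per-site mean and standard deviation, estimated from the given (train) data.
        self.params = {s: (X[sites == s].mean(0), X[sites == s].std(0) + 1e-8)
                       for s in np.unique(sites)}
        return self

    def transform(self, X, sites):
        Xh = X.copy()
        for s, (mu, sd) in self.params.items():
            Xh[sites == s] = (X[sites == s] - mu) / sd
        return Xh

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))            # toy brain features
sites = rng.integers(0, 5, size=300)      # toy acquisition-site labels
y = rng.integers(0, 2, size=300)          # toy ASD/TD labels

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(X, sites, y, random_state=0)
harm = SimpleSiteHarmonizer().fit(X_tr, s_tr)        # internal harmonization: train set only
X_tr_h, X_te_h = harm.transform(X_tr, s_tr), harm.transform(X_te, s_te)
# Fitting `harm` on the full X before the split would be "external" harmonization
# and would leak test-set information into the training data.
```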

7.
Phys Med ; 110: 102577, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37126963

ABSTRACT

Initiatives for the collection of harmonized MRI datasets are growing continuously, raising questions about the reliability of results obtained in multi-site contexts. Here we present the assessment of the brain anatomical variability of MRI-derived measurements obtained from T1-weighted images, acquired according to the Standard Operating Procedures promoted by the RIN-Neuroimaging Network. A multicentric dataset composed of 77 brain T1w acquisitions of young healthy volunteers (mean age = 29.7 ± 5.0 years), collected at 15 sites with MRI scanners from three different vendors, was considered. In parallel, a dataset of 7 "traveling" subjects, each undergoing three acquisitions with scanners from different vendors, was also used. Intra-site, intra-vendor, and inter-site variabilities were evaluated in terms of the percentage standard deviation of volumetric and cortical thickness measures. Image quality metrics such as the contrast-to-noise and signal-to-noise ratios in gray and white matter were also assessed for all sites and vendors. The results showed a measured global variability ranging from 11% to 19% for subcortical volumes and from 3% to 10% for cortical thicknesses. Univariate distributions of the normalized volumes of subcortical regions, as well as the distributions of the thickness of cortical parcels, appeared to be significantly different among sites in 8 subcortical (out of 17) and 21 cortical (out of 68) regions of interest in the multicentric study. The Bland-Altman analysis on the "traveling" brain measurements did not detect systematic scanner biases, even though a multivariate classification approach was able to classify the scanner vendor from brain measures with an accuracy of 0.60 ± 0.14 (chance level 0.33).
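As a concrete illustration of the variability measure used above, the sketch below computes the percentage standard deviation (coefficient of variation) of a regional measure within each site and across all sites; the site labels and volumes are toy numbers, not the RIN data.

```python
# Percentage standard deviation (CV%) of a regional measure, per site and globally.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "site": np.repeat([f"site{i}" for i in range(1, 6)], 5),
    "hippocampus_norm_volume": rng.normal(loc=0.0045, scale=0.0003, size=25),
})

def cv_percent(x):
    return 100.0 * x.std(ddof=1) / x.mean()

intra_site = df.groupby("site")["hippocampus_norm_volume"].apply(cv_percent)
inter_site = cv_percent(df["hippocampus_norm_volume"])
print(intra_site.round(1))
print("global CV% :", round(inter_site, 1))
```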


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Young Adult , Adult , Reproducibility of Results , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neuroimaging , Signal-To-Noise Ratio
8.
Eur Phys J Plus ; 138(4): 326, 2023.
Article in English | MEDLINE | ID: mdl-37064789

ABSTRACT

Computed tomography (CT) scans are used to evaluate the severity of lung involvement in patients affected by COVID-19 pneumonia. Here, we present an improved version of the LungQuant automatic segmentation software (LungQuant v2), which implements a cascade of three deep neural networks (DNNs) to segment the lungs and the lung lesions associated with COVID-19 pneumonia. The first network (BB-net) defines a bounding box enclosing the lungs, the second one (U-net1) outputs the mask of the lungs, and the final one (U-net2) generates the mask of the COVID-19 lesions. With respect to the previous version (LungQuant v1), three main improvements are introduced: the BB-net, a new term in the loss function of the U-net for lesion segmentation, and a post-processing procedure to separate the right and left lungs. The three DNNs were optimized, trained and tested on publicly available CT scans. We evaluated the system segmentation capability on an independent test set consisting of ten fully annotated CT scans, the COVID-19-CT-Seg benchmark dataset. The test performances are reported by means of the volumetric dice similarity coefficient (vDSC) and the surface dice similarity coefficient (sDSC) between the reference and the segmented objects. LungQuant v2 achieves a vDSC (sDSC) equal to 0.96 ± 0.01 (0.97 ± 0.01) and 0.69 ± 0.08 (0.83 ± 0.07) for the lung and lesion segmentations, respectively. The output of the segmentation software was then used to assess the percentage of infected lung, obtaining a Mean Absolute Error (MAE) equal to 2%.
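A minimal numpy sketch of the volumetric Dice similarity coefficient used to score the predicted masks against the reference annotations; the 3D masks below are toy placeholders, and the surface DSC (not shown) additionally compares mask surfaces within a distance tolerance.

```python
# Volumetric Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def volumetric_dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

reference = np.zeros((64, 64, 64), dtype=bool)
reference[16:48, 16:48, 16:48] = True                  # toy "lesion" reference mask
predicted = np.roll(reference, shift=2, axis=0)        # toy prediction, slightly shifted
print("vDSC =", round(volumetric_dice(predicted, reference), 3))
```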

9.
Eur Radiol Exp ; 7(1): 18, 2023 04 10.
Article in English | MEDLINE | ID: mdl-37032383

ABSTRACT

BACKGROUND: The role of computed tomography (CT) in the diagnosis and characterization of coronavirus disease 2019 (COVID-19) pneumonia has been widely recognized. We evaluated the performance of a software package for quantitative analysis of chest CT, the LungQuant system, by comparing its results with independent visual evaluations by a group of 14 clinical experts. The aim of this work is to evaluate the ability of the automated tool to extract quantitative information from lung CT relevant for the design of a diagnosis support model. METHODS: LungQuant segments both the lungs and the lesions associated with COVID-19 pneumonia (ground-glass opacities and consolidations) and computes derived quantities corresponding to the qualitative characteristics used to clinically assess COVID-19 lesions. The comparison was carried out on 120 publicly available CT scans of patients affected by COVID-19 pneumonia. Scans were scored for four qualitative metrics: percentage of lung involvement, type of lesion, and two disease distribution scores. We evaluated the agreement between the LungQuant output and the visual assessments through receiver operating characteristic area under the curve (AUC) analysis and by fitting a nonlinear regression model. RESULTS: Despite the rather large heterogeneity in the qualitative labels assigned by the clinical experts for each metric, we found good agreement between the LungQuant output and the visual assessments. The AUC values obtained for the four qualitative metrics were 0.98, 0.85, 0.90, and 0.81. CONCLUSIONS: Visual clinical evaluation could be complemented and supported by computer-aided quantification, whose values match the average evaluation of several independent clinical experts. KEY POINTS: We conducted a multicenter evaluation of the deep learning-based LungQuant automated software. We translated qualitative assessments into quantifiable metrics to characterize coronavirus disease 2019 (COVID-19) pneumonia lesions. Comparing the software output to the clinical evaluations, the results were satisfactory despite the heterogeneity of the latter. An automatic quantification tool may contribute to improving the clinical workflow of COVID-19 pneumonia.


Subject(s)
COVID-19 , Deep Learning , Pneumonia , Humans , SARS-CoV-2 , Lung/diagnostic imaging , Software
10.
Phys Med ; 107: 102538, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36796177

ABSTRACT

PURPOSE: Analysis pipelines based on the computation of radiomic features from medical images are widely used exploration tools across a large variety of imaging modalities. This study aims to define a robust processing pipeline based on radiomics and Machine Learning (ML) to analyze multiparametric Magnetic Resonance Imaging (MRI) data and discriminate between high-grade (HGG) and low-grade (LGG) gliomas. METHODS: The dataset consists of 158 multiparametric MRI scans of patients with brain tumors, publicly available on The Cancer Imaging Archive and preprocessed by the BraTS organization committee. Three different types of image intensity normalization algorithms were applied, and 107 features were extracted for each tumor region, setting the intensity values according to different discretization levels. The predictive power of the radiomic features in the LGG versus HGG categorization was evaluated using random forest classifiers. The impact of the normalization techniques and of the different image discretization settings was studied in terms of classification performance. A set of MRI-reliable features was defined by selecting the features extracted according to the most appropriate normalization and discretization settings. RESULTS: The results show that using MRI-reliable features improves the performance in glioma grade classification (AUC=0.93±0.05) with respect to the use of raw features (AUC=0.88±0.08) and robust features (AUC=0.83±0.08), the latter defined as those not depending on image normalization and intensity discretization. CONCLUSIONS: These results confirm that image normalization and intensity discretization strongly impact the performance of ML classifiers based on radiomic features. Thus, special attention should be paid to the image preprocessing step before typical radiomic and ML analyses are carried out.
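A minimal sketch of the two preprocessing choices the abstract highlights, intensity normalization and intensity discretization, applied to a toy tumor ROI before feature extraction; the normalization scheme (z-score), the bin count, and the data are illustrative assumptions, not the study's exact settings.

```python
# MRI intensity normalization and fixed-bin-count discretization inside a tumor ROI.
import numpy as np

rng = np.random.default_rng(4)
image = rng.gamma(shape=2.0, scale=100.0, size=(64, 64, 32))   # toy MRI volume
mask = np.zeros_like(image, dtype=bool)
mask[20:40, 20:40, 10:20] = True                               # toy tumor ROI

roi = image[mask]
roi_norm = (roi - roi.mean()) / roi.std()                      # z-score normalization

n_bins = 32                                                    # fixed-bin-count discretization
edges = np.linspace(roi_norm.min(), roi_norm.max(), n_bins + 1)
roi_disc = np.clip(np.digitize(roi_norm, edges) - 1, 0, n_bins - 1)

# Texture features (e.g. GLCM-based) would then be computed on `roi_disc`; changing the
# normalization or `n_bins` changes those feature values, which is why only settings-stable
# ("MRI-reliable") features generalize well.
print("discretized grey levels:", np.unique(roi_disc).size)
```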


Subject(s)
Brain Neoplasms , Glioma , Multiparametric Magnetic Resonance Imaging , Humans , Glioma/diagnostic imaging , Glioma/pathology , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Machine Learning , Magnetic Resonance Imaging/methods , Retrospective Studies
11.
Front Oncol ; 12: 929949, 2022.
Article in English | MEDLINE | ID: mdl-36226070

ABSTRACT

Morphological changes that may arise over a treatment course are probably one of the most significant sources of range uncertainty in proton therapy. Non-invasive, in-vivo treatment monitoring is useful to increase treatment quality. The INSIDE in-beam Positron Emission Tomography (PET) scanner performs in-vivo range monitoring in proton and carbon therapy treatments at the National Center of Oncological Hadrontherapy (CNAO). It is currently in a clinical trial (ID: NCT03662373) and has acquired in-beam PET data during the treatment of various patients. In this work, we analyze the in-beam PET (IB-PET) data of eight patients treated with proton therapy at CNAO. The goal of the analysis is twofold. First, we assess the level of experimental fluctuations in inter-fractional range differences (sensitivity) of the INSIDE PET system by studying patients without morphological changes. Second, we use the obtained results to see whether we can observe anomalously large range variations in patients in whom morphological changes have occurred. The sensitivity of the INSIDE IB-PET scanner was quantified as the standard deviation of the range difference distributions observed for six patients that did not show morphological changes. Inter-fractional range variations with respect to a reference distribution were estimated using the Most-Likely-Shift (MLS) method. To establish the efficacy of this method, we made a comparison with the Beam's Eye View (BEV) method. For patients showing no morphological changes in the control CT, the average range variation standard deviation was found to be 2.5 mm with the MLS method and 2.3 mm with the BEV method. On the other hand, for patients in whom small anatomical changes occurred, we found larger standard deviation values. In these patients, we evaluated where anomalous range differences were found and compared them with the CT. We found that the identified regions were mostly in agreement with the morphological changes seen in the CT scan.

12.
Neuroimage Clin ; 35: 103082, 2022.
Article in English | MEDLINE | ID: mdl-35700598

ABSTRACT

Machine Learning (ML) techniques have been widely used in neuroimaging studies of Autism Spectrum Disorders (ASD), both to identify possible brain alterations related to this condition and to evaluate the predictive power of brain imaging modalities. The collection and public sharing of large imaging samples has favored an even greater diffusion of ML-based analyses. However, multi-center data collections may suffer from the batch effect, which, especially in the case of Magnetic Resonance Imaging (MRI) studies, should be curated to avoid confounding ML classifiers and masking biases. This is particularly important in the study of populations that are barely separable on the basis of MRI data, such as subjects with ASD compared to controls with typical development (TD). Here, we show how the implementation of a harmonization protocol on brain structural features unlocks the case-control ML separation capability in the analysis of a multi-center MRI dataset. This effect is demonstrated on the ABIDE data collection, involving subjects encompassing a wide age range. After data harmonization, the overall ASD vs. TD discrimination capability of a Random Forest (RF) classifier improves from a very low performance (AUC = 0.58 ± 0.04) to a still low, but reasonably significant, AUC = 0.67 ± 0.03. The performance of the RF classifier was also evaluated in the age-specific subgroups of children, adolescents and adults, obtaining AUC = 0.62 ± 0.02, AUC = 0.65 ± 0.03 and AUC = 0.69 ± 0.06, respectively. Specific and consistent patterns of anatomical differences related to the ASD condition were identified for the three age subgroups.


Subject(s)
Autism Spectrum Disorder , Magnetic Resonance Imaging , Adolescent , Adult , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/pathology , Brain/diagnostic imaging , Brain/pathology , Child , Humans , Machine Learning , Magnetic Resonance Imaging/methods , Neuroimaging
13.
Mol Psychiatry ; 27(4): 2114-2125, 2022 04.
Article in English | MEDLINE | ID: mdl-35136228

ABSTRACT

Small average differences in the left-right asymmetry of cerebral cortical thickness have been reported in individuals with autism spectrum disorder (ASD) compared to typically developing controls, affecting widespread cortical regions. The possible impacts of these regional alterations in terms of structural network effects have not previously been characterized. Inter-regional morphological covariance analysis can capture network connectivity between different cortical areas at the macroscale level. Here, we used cortical thickness data from 1455 individuals with ASD and 1560 controls, across 43 independent datasets of the ENIGMA consortium's ASD Working Group, to assess hemispheric asymmetries of intra-individual structural covariance networks, using graph theory-based topological metrics. Compared with typical features of small-world architecture in controls, the ASD sample showed significantly altered average asymmetry of networks involving the fusiform, rostral middle frontal, and medial orbitofrontal cortex, involving higher randomization of the corresponding right-hemispheric networks in ASD. A network involving the superior frontal cortex showed decreased right-hemisphere randomization. Based on comparisons with meta-analyzed functional neuroimaging data, the altered connectivity asymmetry particularly affected networks that subserve executive functions, language-related and sensorimotor processes. These findings provide a network-level characterization of altered left-right brain asymmetry in ASD, based on a large combined sample. Altered asymmetrical brain development in ASD may be partly propagated among spatially distant regions through structural connectivity.


Subject(s)
Autism Spectrum Disorder , Brain , Brain Mapping , Cerebral Cortex/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Neural Pathways
14.
Int J Comput Assist Radiol Surg ; 17(2): 229-237, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34698988

ABSTRACT

PURPOSE: This study aims at exploiting artificial intelligence (AI) for the identification, segmentation and quantification of COVID-19 pulmonary lesions. The limited data availability and the annotation quality are relevant factors in training AI methods. We investigated the effects of using multiple datasets, heterogeneously populated and annotated according to different criteria. METHODS: We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets. The first one (U-net1) is devoted to the identification of the lung parenchyma; the second one (U-net2) acts on a bounding box enclosing the segmented lungs to identify the areas affected by COVID-19 lesions. Different public datasets were used to train the U-nets and to evaluate their segmentation performances, which have been quantified in terms of Dice Similarity Coefficients. The accuracy of the LungQuant system in predicting the CT Severity Score (CT-SS) was also evaluated. RESULTS: Both the volumetric DSC (vDSC) and the accuracy showed a dependency on the annotation quality of the released data samples. On an independent dataset (COVID-19-CT-Seg), both the vDSC and the surface DSC (sDSC) were measured between the masks predicted by the LungQuant system and the reference ones. vDSC (sDSC) values of 0.95±0.01 and 0.66±0.13 (0.95±0.02 and 0.76±0.18, with 5 mm tolerance) were obtained for the segmentation of lungs and COVID-19 lesions, respectively. The system achieved an accuracy of 90% in CT-SS identification on this benchmark dataset. CONCLUSION: We analysed the impact of using data samples with different annotation criteria in training an AI-based quantification system for pulmonary involvement in COVID-19 pneumonia. In terms of vDSC measures, the U-net segmentation strongly depends on the quality of the lesion annotations. Nevertheless, the CT-SS can be accurately predicted on independent test sets, demonstrating the satisfactory generalization ability of the LungQuant system.


Subject(s)
Artificial Intelligence , COVID-19 , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Thorax
15.
Med Phys ; 49(1): 23-40, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34813083

ABSTRACT

PURPOSE: In-beam positron emission tomography (PET) is one of the modalities that can be used for in vivo noninvasive treatment monitoring in proton therapy. Although PET monitoring has been frequently applied for this purpose, there is still no straightforward method to translate the information obtained from the PET images into easy-to-interpret information for clinical personnel. The purpose of this work is to propose a statistical method for analyzing in-beam PET monitoring images that can be used to locate, quantify, and visualize regions with possible morphological changes occurring over the course of treatment. METHODS: We selected a patient treated for squamous cell carcinoma (SCC) with proton therapy, performed multiple Monte Carlo (MC) simulations of the expected PET signal at the start of treatment, and studied how the PET signal may change along the treatment course due to morphological changes. We performed voxel-wise two-tailed statistical tests of the simulated PET images, resembling the voxel-based morphometry (VBM) method commonly used in neuroimaging data analysis, to locate regions with significant morphological changes and to quantify the change. RESULTS: The VBM-resembling method was successfully applied to the simulated in-beam PET images, despite the fact that such images suffer from image artifacts and limited statistics. Three-dimensional probability maps were obtained that allowed us to identify inter-fractional morphological changes and to visualize them superimposed on the computed tomography (CT) scan. In particular, the characteristic color patterns resulting from the two-tailed statistical tests lend themselves to triggering alarms in case of morphological changes along the course of treatment. CONCLUSIONS: The statistical method presented in this work is promising for application to PET monitoring data to reveal inter-fractional morphological changes in patients, occurring over the course of treatment. Based on simulated in-beam PET treatment monitoring images, we showed that with our method it was possible to correctly identify the regions that changed. Moreover, we could quantify the changes and visualize them superimposed on the CT scan. The proposed method may help clinical personnel in the replanning procedure in adaptive proton therapy treatments.
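A minimal sketch of a voxel-wise two-tailed test in the spirit of the VBM-like analysis described above; the two groups of toy "PET maps", the injected local change, and the uncorrected threshold are assumptions for illustration only.

```python
# Voxel-wise two-tailed t-test between two groups of simulated activity maps.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
shape = (32, 32, 16)
reference = rng.normal(100, 10, size=(20, *shape))   # 20 simulated reference PET maps
changed = rng.normal(100, 10, size=(20, *shape))     # 20 maps after a morphological change
changed[:, 10:16, 10:16, 6:10] += 25                 # localized activity increase (toy cavity)

t_map, p_map = ttest_ind(changed, reference, axis=0)  # two-tailed by default
significant = p_map < 0.001                           # uncorrected threshold for illustration
print("significant voxels:", int(significant.sum()), "of", int(np.prod(shape)))
# The sign of t_map distinguishes activity increases from decreases (the two "tails"),
# and the thresholded map would be overlaid on the CT as a 3D probability map.
```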


Subject(s)
Proton Therapy , Humans , Monte Carlo Method , Positron-Emission Tomography , Tomography, X-Ray Computed
17.
Phys Med ; 91: 140-150, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34801873

ABSTRACT

Artificial Intelligence (AI) techniques have been implemented in the field of Medical Imaging for more than forty years. Medical physicists, clinicians and computer scientists have been collaborating since the beginning to develop software solutions that enhance the informative content of medical images, including AI-based support systems for image interpretation. Despite the recent massive progress in this field due to the current emphasis on radiomics, Machine Learning and Deep Learning, there are still some barriers to overcome before these tools are fully integrated into clinical workflows to finally enable a precision-medicine approach to patient care. Nowadays, as Medical Imaging has entered the Big Data era, innovative solutions to efficiently deal with huge amounts of data and to exploit large, distributed computing resources are urgently needed. In the framework of a collaboration agreement between the Italian Association of Medical Physicists (AIFM) and the National Institute for Nuclear Physics (INFN), we propose a model of an intensive computing infrastructure, especially suited for training AI models, equipped with secure storage systems compliant with data protection regulations, which will accelerate the development and extensive validation of AI-based solutions in the Medical Imaging field of research. This solution can be developed and made operational by physicists and computer scientists working in complementary fields of research in physics, such as High Energy Physics and Medical Physics, who have all the necessary skills to tailor AI technology to the needs of the Medical Imaging community and to shorten the pathway towards the clinical applicability of AI-based decision support systems.


Subject(s)
Artificial Intelligence , Cloud Computing , Humans , Italy , Nuclear Physics , Precision Medicine
18.
Brain Sci ; 11(4)2021 Apr 14.
Article in English | MEDLINE | ID: mdl-33919984

ABSTRACT

Autism spectrum disorders (ASDs) are a heterogeneous group of neurodevelopmental conditions characterized by impairments in social interaction and communication and by restricted patterns of behavior, interests, and activities. Although the etiopathogenesis of idiopathic ASD has not been fully elucidated, compelling evidence suggests an interaction between genetic liability and environmental factors in producing early alterations of structural and functional brain development that are detectable by magnetic resonance imaging (MRI) at the group level. This work presents the results of a network-based approach to characterize not only variations in the values of the extracted features but also variations in their mutual relationships, which might reflect underlying brain structural differences between autistic subjects and healthy controls. We applied a network-based analysis to sMRI data from the Autism Brain Imaging Data Exchange I (ABIDE-I) database, containing 419 features extracted with the FreeSurfer software. Two networks were generated: one from subjects with autistic disorder (AUT) (DSM-IV-TR) and one from typically developing controls (TD), adopting a subsampling strategy to overcome class imbalance (235 AUT, 418 TD). We compared the distributions of several node centrality measures and observed significant inter-class differences in the averaged centralities. Moreover, a single-node analysis allowed us to identify the most relevant features that distinguished the groups.
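A minimal networkx sketch of the node-centrality comparison described above: one toy graph per group standing in for the feature-based networks, with a few averaged centrality measures printed for each; the graphs, sizes, and metrics chosen are illustrative assumptions.

```python
# Compare averaged node centralities between one network per group.
import numpy as np
import networkx as nx

def feature_network(n_features=30, density=0.15, seed=0):
    # Stand-in for a network built over FreeSurfer-derived features.
    return nx.gnp_random_graph(n_features, density, seed=seed)

g_aut = feature_network(seed=1)   # network built from the AUT group
g_td = feature_network(seed=2)    # network built from the TD group

for name, g in [("AUT", g_aut), ("TD", g_td)]:
    degree = np.mean([d for _, d in g.degree()])
    betweenness = np.mean(list(nx.betweenness_centrality(g).values()))
    clustering = nx.average_clustering(g)
    print(f"{name}: mean degree {degree:.2f}  mean betweenness {betweenness:.3f}  "
          f"clustering {clustering:.3f}")
```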

19.
Neuroimage ; 226: 117573, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33221451

ABSTRACT

Magnetic resonance fingerprinting (MRF) is highly promising as a quantitative MRI technique due to its accuracy, robustness, and efficiency. Previous studies have found high repeatability and reproducibility of 2D MRF acquisitions in the brain. Here, we have extended our investigations to 3D MRF acquisitions covering the whole brain using spiral projection k-space trajectories. Our travelling-head study acquired test/retest brain data from 12 healthy volunteers on 8 MRI systems (3 systems at 3 T and 5 at 1.5 T, all from a single vendor), using a study design that did not require all subjects to be scanned at all sites. The pulse sequence and reconstruction algorithm were the same for all acquisitions. After registration of the MRF-derived PD, T1 and T2 maps to an anatomical atlas, coefficients of variation (CVs) were computed to assess test/retest repeatability and inter-site reproducibility in each voxel, while a General Linear Model (GLM) was used to determine the voxel-wise variability attributable to all confounders, which included test/retest, subject, field strength and site. Our analysis demonstrated high repeatability (CVs of 0.7-1.3% for T1, 2.0-7.8% for T2, 1.4-2.5% for normalized PD) and reproducibility (CVs of 2.0-5.8% for T1, 7.4-10.2% for T2, 5.2-9.2% for normalized PD) in gray and white matter. Both repeatability and reproducibility improved when compared to similar experiments using 2D acquisitions. Three-dimensional MRF obtains highly repeatable and reproducible estimations of T1 and T2, supporting the translation of MRF-based fast quantitative imaging into clinical applications.


Subject(s)
Brain/diagnostic imaging , Imaging, Three-Dimensional/methods , Multiparametric Magnetic Resonance Imaging/methods , Adult , Female , Healthy Volunteers , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Reproducibility of Results
20.
J Pers Med ; 10(4)2020 Dec 12.
Article in English | MEDLINE | ID: mdl-33322765

ABSTRACT

Autism Spectrum Disorder (ASD) and Childhood Apraxia of Speech (CAS) are developmental disorders with distinct diagnostic criteria and different epidemiology. However, a common genetic background as well as overlapping clinical features between ASD and CAS have recently been reported. To date, brain structural language-related abnormalities have been detected in both conditions, but no study has directly compared young children with ASD, CAS and typical development (TD). In the current work, we aim: (i) to test the hypothesis that ASD and CAS display neurostructural differences in comparison with TD through morphometric Magnetic Resonance Imaging (MRI)-based measures (ASD vs. TD and CAS vs. TD); (ii) to investigate early possible disease-specific brain structural patterns in the two clinical groups (ASD vs. CAS); (iii) to evaluate the predictive power of machine-learning (ML) techniques in differentiating the three samples (ASD, CAS, TD). We retrospectively analyzed the T1-weighted brain MRI scans of 68 children (age range: 34-74 months) grouped into three cohorts: (1) 26 children with ASD (mean age ± standard deviation: 56 ± 11 months); (2) 24 children with CAS (57 ± 10 months); (3) 18 children with TD (55 ± 13 months). Furthermore, an ML analysis based on a linear-kernel Support Vector Machine (SVM) was performed. All but one brain structure displayed significantly higher volumes in both ASD and CAS children than in TD peers. Specifically, ASD alterations involved fronto-temporal regions together with the basal ganglia and cerebellum, while CAS alterations were more focused and shifted toward frontal regions, suggesting a possible speech-related distribution of anomalies. Caudate, superior temporal and hippocampus volumes directly distinguished the two conditions, with greater values in ASD compared to CAS. The ML analysis identified significant differences in brain features between ASD and TD children, whereas only some trends in ML classification capability were detected in CAS as compared to TD peers. Similarly, the MRI structural underpinnings of the two clinical groups were not significantly different when evaluated with the linear-kernel SVM. Our results may represent a first step towards understanding the shared and specific neural substrates of the ASD and CAS conditions, which may subsequently contribute to early differential diagnosis and to tailoring specific early interventions.
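A minimal scikit-learn sketch of the classification step named above, a linear-kernel SVM evaluated with cross-validation on regional brain volumes for a pairwise comparison such as ASD vs. TD; the group sizes mirror the abstract, but the feature values and pipeline details are placeholder assumptions.

```python
# Linear-kernel SVM for a pairwise group comparison on regional brain volumes.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(7)
X_asd = rng.normal(1.05, 0.1, size=(26, 40))   # 26 ASD children x 40 regional volumes (toy)
X_td = rng.normal(1.00, 0.1, size=(18, 40))    # 18 TD children (toy)
X = np.vstack([X_asd, X_td])
y = np.array([1] * 26 + [0] * 18)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0),
                         scoring="balanced_accuracy")
print("balanced accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```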
