Results 1 - 7 of 7
1.
Biomed Phys Eng Express ; 10(4), 2024 May 07.
Article in English | MEDLINE | ID: mdl-38653209

ABSTRACT

Objective. Radiomics is a promising analysis tool that consists in extracting quantitative information from medical images. However, the extracted radiomics features are highly sensitive to variations in the image acquisition and reconstruction parameters used. This limited robustness hinders the generalizable validity of radiomics-assisted models. Our aim is to investigate a possible harmonization strategy based on matching image quality to improve feature robustness. Approach. We acquired CT scans of a phantom with two scanners across different dose levels and iterative reconstruction percentages. The detectability index was used as a comprehensive task-based image quality metric. A statistical analysis based on the Intraclass Correlation Coefficient (ICC) was performed to determine whether matching image quality/appearance could enhance the robustness of radiomics features extracted from the phantom images. Additionally, an artificial neural network was trained on these features to automatically classify the scanner used for image acquisition. Main results. We found that the ICC of the features across protocols providing a similar detectability index is higher than the ICC of the features across protocols providing a different detectability index. This improvement was particularly noticeable in features relevant for distinguishing between scanners. Significance. This preliminary study demonstrates that harmonization based on image quality/appearance matching can improve the robustness of radiomics features, and that heterogeneous protocols can be used to obtain a similar image appearance in terms of the detectability index. Thus, protocols with a lower dose level could be selected to reduce the radiation dose delivered to the patient while obtaining a more robust quantitative analysis.
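
A hedged sketch of the ICC-based robustness analysis described above: it computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) from a subjects-by-protocols matrix of one radiomics feature and compares protocols matched by detectability index against unmatched ones. The feature values, group sizes and protocol effects below are hypothetical, not data from the study.

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    data: (n_subjects, k_protocols) matrix of one radiomics feature, e.g. phantom
    ROIs (rows) measured under different CT acquisition protocols (columns).
    """
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)

    # Two-way ANOVA sums of squares and mean squares
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_err = ((data - grand_mean) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical example: one feature, 10 phantom ROIs, three protocols each.
rng = np.random.default_rng(0)
truth = rng.normal(100, 10, size=(10, 1))
matched = truth + rng.normal(0, 2, size=(10, 3))                 # similar detectability index
unmatched = truth + rng.normal(0, 8, size=(10, 3)) + [0, 5, -5]  # different detectability index
print(f"ICC, matched protocols:   {icc_2_1(matched):.2f}")
print(f"ICC, unmatched protocols: {icc_2_1(unmatched):.2f}")
```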


Subjects
Algorithms; Image Processing, Computer-Assisted; Neural Networks, Computer; Phantoms, Imaging; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Radiomics
2.
Brain Inform ; 11(1): 2, 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38194126

ABSTRACT

BACKGROUND: The integration of the information encoded in multiparametric MRI images can enhance the performance of machine-learning classifiers. In this study, we investigate whether the combination of structural and functional MRI might improve the performance of a deep learning (DL) model trained to discriminate subjects with Autism Spectrum Disorders (ASD) from typically developing controls (TD). MATERIAL AND METHODS: We analyzed both structural and functional MRI brain scans publicly available within the ABIDE I and II data collections. We considered 1383 male subjects aged between 5 and 40 years, including 680 subjects with ASD and 703 TD from 35 different acquisition sites. We extracted morphometric and functional brain features from the MRI scans with the Freesurfer and CPAC analysis packages, respectively. Then, due to the multisite nature of the dataset, we implemented a data harmonization protocol. The ASD vs. TD classification was carried out with a multiple-input DL model, consisting of a neural network that generates a fixed-length feature representation of the data of each modality (FR-NN) and a dense neural network for classification (C-NN). Specifically, we implemented a joint fusion approach to multiple-source data integration. The main advantage of this approach is that the loss is propagated back to the FR-NNs during training, thus creating informative feature representations for each data modality. The C-NN, whose number of layers and neurons per layer is optimized during model training, then performs the ASD vs. TD discrimination. The performance was evaluated by computing the area under the Receiver Operating Characteristic curve (AUC) within a nested 10-fold cross-validation. The brain features that drive the DL classification were identified with the SHAP explainability framework. RESULTS: AUC values of 0.66±0.05 and 0.76±0.04 were obtained in the ASD vs. TD discrimination when only structural or only functional features were considered, respectively. The joint fusion approach led to an AUC of 0.78±0.04. The set of structural and functional connectivity features identified as the most important for the two-class discrimination supports the idea that brain changes in individuals with ASD tend to occur in regions belonging to the Default Mode Network and to the Social Brain. CONCLUSIONS: Our results demonstrate that the multimodal joint fusion approach outperforms the classification results obtained with data acquired by a single MRI modality, as it efficiently exploits the complementarity of structural and functional brain information.
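
A minimal PyTorch-style sketch of the joint fusion idea outlined above: one feature-representation sub-network (FR-NN) per MRI modality, a dense classification head (C-NN) on the concatenated representations, and a single loss whose gradient reaches both branches during training. Layer sizes, feature counts and the loss choice are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class JointFusionASDClassifier(nn.Module):
    """Joint fusion of structural and functional MRI features for ASD vs. TD."""

    def __init__(self, n_struct: int, n_func: int, repr_dim: int = 64):
        super().__init__()
        # FR-NNs: map each modality to a fixed-length feature representation
        self.fr_struct = nn.Sequential(nn.Linear(n_struct, 128), nn.ReLU(),
                                       nn.Linear(128, repr_dim), nn.ReLU())
        self.fr_func = nn.Sequential(nn.Linear(n_func, 128), nn.ReLU(),
                                     nn.Linear(128, repr_dim), nn.ReLU())
        # C-NN: classification head on the concatenated representations
        self.classifier = nn.Sequential(nn.Linear(2 * repr_dim, 32), nn.ReLU(),
                                        nn.Linear(32, 1))

    def forward(self, x_struct: torch.Tensor, x_func: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.fr_struct(x_struct), self.fr_func(x_func)], dim=1)
        return self.classifier(fused)  # logits for ASD vs. TD

# Hypothetical feature counts (e.g. morphometric and connectivity features)
model = JointFusionASDClassifier(n_struct=220, n_func=500)
loss_fn = nn.BCEWithLogitsLoss()
logits = model(torch.randn(8, 220), torch.randn(8, 500))
loss = loss_fn(logits.squeeze(1), torch.randint(0, 2, (8,)).float())
loss.backward()  # the gradient propagates back into both FR-NNs
```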

3.
Eur Phys J Plus ; 138(4): 326, 2023.
Article in English | MEDLINE | ID: mdl-37064789

ABSTRACT

Computed tomography (CT) scans are used to evaluate the severity of lung involvement in patients affected by COVID-19 pneumonia. Here, we present an improved version of the LungQuant automatic segmentation software (LungQuant v2), which implements a cascade of three deep neural networks (DNNs) to segment the lungs and the lung lesions associated with COVID-19 pneumonia. The first network (BB-net) defines a bounding box enclosing the lungs, the second one (U-net1) outputs the mask of the lungs, and the final one (U-net2) generates the mask of the COVID-19 lesions. With respect to the previous version (LungQuant v1), three main improvements are introduced: the BB-net, a new term in the loss function of the U-net for lesion segmentation, and a post-processing procedure to separate the right and left lungs. The three DNNs were optimized, trained and tested on publicly available CT scans. We evaluated the system's segmentation capability on an independent test set consisting of ten fully annotated CT scans, the COVID-19-CT-Seg benchmark dataset. The test performances are reported by means of the volumetric Dice similarity coefficient (vDSC) and the surface Dice similarity coefficient (sDSC) between the reference and the segmented objects. LungQuant v2 achieves a vDSC (sDSC) equal to 0.96 ± 0.01 (0.97 ± 0.01) and 0.69 ± 0.08 (0.83 ± 0.07) for the lung and lesion segmentations, respectively. The output of the segmentation software was then used to assess the percentage of infected lung, obtaining a Mean Absolute Error (MAE) equal to 2%.
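
A brief sketch of the two quantities reported above that are simplest to reproduce, assuming binary NumPy masks: the volumetric Dice similarity coefficient between a predicted and a reference mask, and the percentage of infected lung derived from the lesion and lung masks. The toy volumes are placeholders for real CT masks.

```python
import numpy as np

def volumetric_dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient (vDSC) between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def infected_lung_percentage(lesion: np.ndarray, lung: np.ndarray) -> float:
    """Percentage of lung volume covered by lesions."""
    lung = lung.astype(bool)
    return 100.0 * np.logical_and(lesion.astype(bool), lung).sum() / lung.sum()

# Toy example standing in for CT-sized volumes
lung = np.zeros((4, 4, 4), dtype=bool); lung[1:3, 1:3, :] = True       # 16 voxels
lesion = np.zeros_like(lung); lesion[1:3, 1:3, 1:3] = True             # 8 voxels
lesion_pred = np.zeros_like(lung); lesion_pred[1:3, 1:3, 0:3] = True   # 12 voxels
print(f"lesion vDSC = {volumetric_dice(lesion_pred, lesion):.2f}")
print(f"infected lung = {infected_lung_percentage(lesion_pred, lung):.0f}%")
```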

4.
Eur Radiol Exp ; 7(1): 18, 2023 04 10.
Article in English | MEDLINE | ID: mdl-37032383

ABSTRACT

BACKGROUND: The role of computed tomography (CT) in the diagnosis and characterization of coronavirus disease 2019 (COVID-19) pneumonia has been widely recognized. We evaluated the performance of a software package for quantitative analysis of chest CT, the LungQuant system, by comparing its results with independent visual evaluations by a group of 14 clinical experts. The aim of this work is to evaluate the ability of the automated tool to extract quantitative information from lung CT relevant for the design of a diagnosis support model. METHODS: LungQuant segments both the lungs and the lesions associated with COVID-19 pneumonia (ground-glass opacities and consolidations) and computes derived quantities corresponding to the qualitative characteristics used to clinically assess COVID-19 lesions. The comparison was carried out on 120 publicly available CT scans of patients affected by COVID-19 pneumonia. Scans were scored for four qualitative metrics: percentage of lung involvement, type of lesion, and two disease distribution scores. We evaluated the agreement between the LungQuant output and the visual assessments through receiver operating characteristic area under the curve (AUC) analysis and by fitting a nonlinear regression model. RESULTS: Despite the rather large heterogeneity in the qualitative labels assigned by the clinical experts for each metric, we found good agreement between the expert assessments and the LungQuant output. The AUC values obtained for the four qualitative metrics were 0.98, 0.85, 0.90, and 0.81. CONCLUSIONS: Visual clinical evaluation could be complemented and supported by computer-aided quantification, whose values match the average evaluation of several independent clinical experts. KEY POINTS: We conducted a multicenter evaluation of the deep learning-based LungQuant automated software. We translated qualitative assessments into quantifiable metrics to characterize coronavirus disease 2019 (COVID-19) pneumonia lesions. Comparing the software output to the clinical evaluations, the results were satisfactory despite the heterogeneity of the latter. An automatic quantification tool may contribute to improving the clinical workflow for COVID-19 pneumonia.
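
A minimal sketch of the type of agreement analysis described above, under the assumption that the LungQuant output is a continuous score per scan (e.g. the percentage of lung involvement) and that the experts' assessments have been reduced to a binary consensus label per scan; the simulated data and the 25% threshold are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical data for 120 scans: continuous software output vs. a binary
# expert consensus (here, "involvement above 25%"), with some disagreement.
lungquant_involvement = rng.uniform(0, 80, size=120)
expert_consensus = (lungquant_involvement + rng.normal(0, 10, size=120)) > 25

auc = roc_auc_score(expert_consensus.astype(int), lungquant_involvement)
print(f"AUC of the quantitative output against the expert consensus: {auc:.2f}")
```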


Subjects
COVID-19; Deep Learning; Pneumonia; Humans; SARS-CoV-2; Lung/diagnostic imaging; Software
5.
IEEE Trans Technol Soc ; 3(4): 272-289, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36573115

ABSTRACT

This article's main contributions are twofold: 1) to demonstrate how to apply the European Union's High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI in practice in the healthcare domain and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified during the pandemic by an interdisciplinary team with members from academia, public hospitals, and industry. The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.

6.
Int J Comput Assist Radiol Surg ; 17(2): 229-237, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34698988

ABSTRACT

PURPOSE: This study aims to exploit artificial intelligence (AI) for the identification, segmentation and quantification of COVID-19 pulmonary lesions. Limited data availability and annotation quality are relevant factors in training AI methods. We investigated the effects of using multiple datasets, heterogeneously populated and annotated according to different criteria. METHODS: We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets. The first one (U-net1) is devoted to the identification of the lung parenchyma; the second one (U-net2) acts on a bounding box enclosing the segmented lungs to identify the areas affected by COVID-19 lesions. Different public datasets were used to train the U-nets and to evaluate their segmentation performances, which were quantified in terms of the Dice similarity coefficient (DSC). The accuracy of the LungQuant system in predicting the CT Severity Score (CT-SS) was also evaluated. RESULTS: Both the volumetric DSC (vDSC) and the accuracy showed a dependency on the annotation quality of the released data samples. On an independent dataset (COVID-19-CT-Seg), both the vDSC and the surface DSC (sDSC) were measured between the masks predicted by the LungQuant system and the reference ones. vDSC (sDSC) values of 0.95±0.01 and 0.66±0.13 (0.95±0.02 and 0.76±0.18, with 5 mm tolerance) were obtained for the segmentation of lungs and COVID-19 lesions, respectively. The system achieved an accuracy of 90% in CT-SS identification on this benchmark dataset. CONCLUSION: We analysed the impact of using data samples with different annotation criteria in training an AI-based quantification system for pulmonary involvement in COVID-19 pneumonia. In terms of vDSC measures, the U-net segmentation strongly depends on the quality of the lesion annotations. Nevertheless, the CT-SS can be accurately predicted on independent test sets, demonstrating the satisfactory generalization ability of the LungQuant system.
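
The surface DSC with a 5 mm tolerance quoted above counts the fraction of each mask's surface that lies within the tolerance of the other mask's surface. The sketch below is a simplified voxel-based approximation using SciPy distance transforms (reference implementations weight surface elements by area); the voxel spacing and toy masks are assumptions.

```python
import numpy as np
from scipy import ndimage

def surface_dice(pred: np.ndarray, ref: np.ndarray, spacing, tol_mm: float = 5.0) -> float:
    """Surface Dice at a tolerance in mm (simplified voxel-surface version)."""
    def surface(mask):
        mask = mask.astype(bool)
        return mask & ~ndimage.binary_erosion(mask)

    s_pred, s_ref = surface(pred), surface(ref)
    # Distance (mm) from every voxel to the nearest surface voxel of the other mask
    d_to_ref = ndimage.distance_transform_edt(~s_ref, sampling=spacing)
    d_to_pred = ndimage.distance_transform_edt(~s_pred, sampling=spacing)

    close = (d_to_ref[s_pred] <= tol_mm).sum() + (d_to_pred[s_ref] <= tol_mm).sum()
    denom = s_pred.sum() + s_ref.sum()
    return close / denom if denom else 1.0

# Toy example: a cube and the same cube shifted by 2 voxels, 1 mm isotropic spacing
ref = np.zeros((20, 20, 20), dtype=bool); ref[5:15, 5:15, 5:15] = True
pred = np.roll(ref, 2, axis=0)
print(f"sDSC @ 5 mm: {surface_dice(pred, ref, spacing=(1.0, 1.0, 1.0)):.2f}")
```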


Subjects
Artificial Intelligence; COVID-19; Humans; Lung/diagnostic imaging; SARS-CoV-2; Thorax
7.
Ecotoxicol Environ Saf ; 132: 397-402, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27379980

ABSTRACT

The effects of both continuous and alternate exposure to 2 mg L⁻¹ of enrofloxacin (EFX) on survival, growth and reproduction were evaluated over four generations of Daphnia magna. Mortality increased, reaching 100% in most groups by the end of the third generation. Growth inhibition was detected in only one group of the fourth generation. Reproduction inhibition was >50% in all groups and, in the second and third generations, groups transferred to pure medium showed a greater inhibition of reproduction than those exposed to EFX. To verify whether the effects observed in these groups could be explained by perinatal exposure to the antibacterial, a reproduction test with daphnids obtained from in vitro exposed D. magna embryos was also carried out. Perinatal exposure to EFX seemed to produce an 'all-or-nothing' toxicity effect: 31.4% of embryos died, but the surviving daphnids did not show any inhibition of reproductive activity. However, the embryonic mortality may at least partially explain the inhibition of reproduction observed in exposed groups over the multigenerational test. In conclusion, the multigenerational test with D. magna revealed population-level disruption that cannot be detected by the official tests. The increasing deterioration across generations may be interpreted as a consequence of heritable alterations. Although the concentration tested was higher than those usually detected in the natural environment, the increasing toxicity of EFX across generations and the possible additive toxicity of fluoroquinolone mixtures mean that harm to crustacean populations under real-world conditions cannot be completely ruled out.
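
For context on the reproduction endpoint mentioned above, percent inhibition of reproduction in Daphnia tests is commonly computed relative to the control group's mean offspring per female; the sketch below uses hypothetical counts, not data from the study.

```python
def reproduction_inhibition(control_offspring, exposed_offspring):
    """Percent inhibition of reproduction relative to the control group,
    based on mean offspring per female (illustrative convention)."""
    control_mean = sum(control_offspring) / len(control_offspring)
    exposed_mean = sum(exposed_offspring) / len(exposed_offspring)
    return 100.0 * (1.0 - exposed_mean / control_mean)

# Hypothetical offspring counts per female
print(reproduction_inhibition([102, 98, 110, 95], [48, 40, 52, 45]))  # ~54% inhibition
```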


Subjects
Anti-Bacterial Agents/toxicity; Daphnia/drug effects; Fluoroquinolones/toxicity; Animals; Daphnia/embryology; Daphnia/growth & development; Embryo, Nonmammalian/drug effects; Enrofloxacin; Reproduction/drug effects