Results 1 - 4 of 4
1.
Digit Health; 10: 20552076241277440, 2024.
Article in English | MEDLINE | ID: mdl-39229464

ABSTRACT

Objective: Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target datasets follow the same probability distribution; when this assumption is not satisfied, their performance degrades significantly. This is a limitation in medical image analysis, where drawing on information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation.

Methods: StAC-DA implements image- and feature-level adaptation in a sequential two-step approach. The first step performs image-level alignment: images from the source domain are translated to the target domain in pixel space by a CycleGAN-based model that includes a structure-aware network to preserve the shape of the anatomical structure during translation. The second step performs feature-level alignment: a U-Net with deep supervision is trained on the translated source-domain images and the target-domain images in an adversarial manner, so that it produces plausible segmentations for the target domain.

Results: The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, ranking first in segmentation of the ascending aorta both when adapting from the Magnetic Resonance Imaging (MRI) domain to the Computed Tomography (CT) domain and when adapting from CT to MRI.

Conclusions: The presented framework overcomes the limitations posed by differing distributions between training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.
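As a rough illustration of the feature-level alignment step, the PyTorch sketch below trains a segmenter on labeled (translated) source images while a discriminator pushes its target-domain predictions to be indistinguishable from source-domain ones. This is a minimal sketch under simplifying assumptions, not the authors' implementation: `TinySegNet` stands in for the U-Net with deep supervision, and the 0.01 adversarial weight is an illustrative value.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Stand-in for the paper's U-Net with deep supervision."""
    def __init__(self, in_ch=1, n_classes=5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        return self.head(self.body(x))  # per-pixel class logits

class Discriminator(nn.Module):
    """Judges whether a softmax segmentation map comes from source or target."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),  # patch-level logits
        )

    def forward(self, p):
        return self.net(p)

seg, disc = TinySegNet(), Discriminator()
opt_s = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_src, y_src, x_tgt):
    # 1) Supervised loss on translated source images (labels are available).
    p_src = seg(x_src)
    loss_seg = F.cross_entropy(p_src, y_src)
    # 2) Adversarial loss: make target predictions look like source predictions.
    p_tgt = seg(x_tgt)
    d_tgt = disc(F.softmax(p_tgt, dim=1))
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))  # try to fool the discriminator
    opt_s.zero_grad()
    (loss_seg + 0.01 * loss_adv).backward()
    opt_s.step()
    # 3) Discriminator update: source predictions -> real, target -> fake.
    d_src = disc(F.softmax(p_src.detach(), dim=1))
    d_tgt = disc(F.softmax(p_tgt.detach(), dim=1))
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
```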

2.
Med Eng Phys; 124: 104104, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38418017

ABSTRACT

In recent years, research has highlighted the association between increased adipose tissue surrounding the human heart and elevated susceptibility to cardiovascular diseases such as atrial fibrillation and coronary heart disease. However, manual segmentation of these fat deposits has not been widely adopted in clinical practice because of the substantial workload it entails for medical professionals and the associated costs. Consequently, the demand for more precise and time-efficient quantitative analysis has driven the emergence of novel computational methods for fat segmentation. This study presents a deep learning-based methodology for autonomous segmentation and quantification of two distinct types of cardiac fat deposits. The proposed approach leverages the pix2pix network, a conditional generative adversarial network originally designed for image-to-image translation tasks, and investigates its efficacy on the specific challenge of cardiac fat segmentation, despite it not being tailored for this purpose. The two fat deposits of interest are the epicardial and mediastinal fats, which are spatially separated by the pericardium. The experimental results demonstrated an average accuracy of 99.08% and an F1-score of 98.73 for segmentation of the epicardial fat, and an accuracy of 97.90% and an F1-score of 98.40 for the mediastinal fat, reflecting the high precision and overlap agreement achieved by the proposed methodology. In comparison to existing studies, our approach exhibited superior performance in terms of F1-score and run time, enabling images to be segmented in real time.
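For context, the pix2pix objective combines a conditional adversarial term with an L1 reconstruction term. The PyTorch sketch below shows one training step under simplifying assumptions (single-channel CT slices, a single binary fat mask, and tiny placeholder networks rather than the study's architecture); it illustrates the formulation, not the study's code.

```python
import torch
import torch.nn as nn

G = nn.Sequential(  # stand-in for the U-Net-shaped pix2pix generator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)
D = nn.Sequential(  # conditional discriminator over (slice, mask) pairs
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),  # patch-level logits
)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def pix2pix_step(ct, mask, lambda_l1=100.0):
    fake = G(ct)
    # Discriminator: real (slice, mask) pairs -> 1, generated pairs -> 0.
    d_real = D(torch.cat([ct, mask], dim=1))
    d_fake = D(torch.cat([ct, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator: fool the discriminator while staying close to the reference mask.
    d_fake = D(torch.cat([ct, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, mask)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```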


Subjects
Cardiovascular Diseases; Pericardium; Humans; Pericardium/diagnostic imaging; Tomography, X-Ray Computed/methods; Epicardial Adipose Tissue; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
3.
Comput Methods Biomech Biomed Engin; 26(9): 1008-1017, 2023 Sep.
Article in English | MEDLINE | ID: mdl-35862582

ABSTRACT

The classification of surface electromyography (sEMG) signals is fundamental in applications that control mechanical prostheses, making it necessary to work with generalist databases that improve classification accuracy. Synthetic signal generation can therefore enrich a database and make it more generalist. This work proposes using a variant of generative adversarial networks to produce synthetic sEMG biosignals. A convolutional neural network (CNN) was used to classify the movements. The results showed good performance, with a 4.07% increase in movement classification accuracy when 200 synthetic samples were included for each movement. We compared our results against other augmentation methodologies, such as Magnitude Warping and Scaling (sketched below); neither achieved the same classification performance.
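For reference, the two baseline augmentations named above are simple enough to sketch directly. The NumPy/SciPy snippet below shows one common formulation of Scaling and Magnitude Warping; the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def scaling(x, sigma=0.1, rng=np.random.default_rng()):
    """x: (timesteps, channels) sEMG window; scale each channel by a random factor."""
    factors = rng.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

def magnitude_warp(x, sigma=0.2, knots=4, rng=np.random.default_rng()):
    """Multiply each channel by a smooth curve interpolated through random knots."""
    t = np.arange(x.shape[0])
    knot_t = np.linspace(0, x.shape[0] - 1, knots + 2)
    out = np.empty_like(x)
    for ch in range(x.shape[1]):
        knot_v = rng.normal(1.0, sigma, size=knots + 2)
        out[:, ch] = x[:, ch] * CubicSpline(knot_t, knot_v)(t)
    return out

window = np.random.randn(200, 8)  # e.g., 200 samples x 8 electrodes
augmented = magnitude_warp(scaling(window))
```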


Subjects
Artificial Limbs; Neural Networks, Computer; Electromyography/methods; Movement
4.
Entropy (Basel); 23(1), 2020 Dec 24.
Article in English | MEDLINE | ID: mdl-33374104

ABSTRACT

Deep interactive evolution (DeepIE) combines the capacity of interactive evolutionary computation (IEC) to capture a user's preference with the domain-specific robustness of a trained generative adversarial network (GAN) generator, allowing the user to control the GAN output through evolutionary exploration of the latent space. However, the traditional GAN latent space exhibits feature entanglement, which limits the practicality of possible applications of DeepIE. In this paper, we implement DeepIE within a style-based generator from a StyleGAN model trained on the WikiArt dataset and propose StyleIE, a variation of DeepIE that takes advantage of the disentangled secondary latent space in the style-based generator. We performed two AB/BA crossover user tests comparing the performance of DeepIE against StyleIE for art generation, with self-rated evaluations collected through a questionnaire. The findings suggest that StyleIE and DeepIE perform equally well on open-ended tasks with relaxed constraints, but StyleIE performs better on close-ended, more constrained tasks.
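The evolutionary loop behind DeepIE can be summarized as: sample a population of latent vectors, decode them with the generator, let the user pick favourites, and breed the next population from those picks; StyleIE runs the same loop in the style-based generator's disentangled intermediate latent space. The sketch below is schematic and makes loud assumptions: `generator` stands in for a trained GAN decoder, the selection UI is reduced to console input, and mutation is plain Gaussian noise.

```python
import numpy as np

def ask_user(images):
    """Placeholder for the interactive step: in the real system the user
    clicks preferred images in a UI; here indices come from the console."""
    raw = input(f"Pick favourites 0-{len(images) - 1} (comma-separated): ")
    return np.array([int(i) for i in raw.split(",")])

def evolve(generator, latent_dim=512, pop_size=8, sigma=0.3, generations=10,
           rng=np.random.default_rng()):
    """Evolve a population of latent vectors toward the user's preference.
    Assumes the user picks at least one image each generation."""
    pop = rng.standard_normal((pop_size, latent_dim))
    for _ in range(generations):
        images = [generator(z) for z in pop]   # decode and display to the user
        parents = pop[ask_user(images)]        # keep only the preferred latents
        # Refill the population with mutated copies of the chosen parents.
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = children + sigma * rng.standard_normal(children.shape)
        pop = np.concatenate([parents, children])
    return pop
```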
