Results 1 - 20 of 131
1.
Endocrine ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982023

ABSTRACT

BACKGROUND: Given the significant morbidity, mortality, and economic burden associated with fragility fracture, it is essential to identify individuals at high risk and prevent fractures. Quantitative ultrasound (QUS) shows promise in assessing bone structure characteristics and determining the risk of fragility fracture. AIMS: To evaluate the performance of a multi-channel residual network (MResNet) based on the ultrasonic radiofrequency (RF) signal in retrospectively discriminating fragility fractures in postmenopausal women, and to compare it with the traditional QUS parameter, speed of sound (SOS), and with bone mineral density (BMD) acquired by dual-energy X-ray absorptiometry (DXA). METHODS: Using QUS, RF signals and SOS were acquired for 246 postmenopausal women. An MResNet based on the RF signal was used to categorize individuals at elevated risk of fragility fracture. DXA was employed to obtain BMD at the lumbar spine, hip, and femoral neck. The fracture history of all subjects was gathered. Odds ratios (OR) and areas under the receiver operating characteristic curve (AUC) were analyzed to evaluate the effectiveness of the various methods in discriminating fragility fracture. RESULTS: Among the 246 postmenopausal women, 170 belonged to the non-fracture group, 50 to the vertebral fracture group, and 26 to the non-vertebral fracture group. MResNet was able to discriminate any fragility fracture (OR = 2.64; AUC = 0.74), vertebral fracture (OR = 3.02; AUC = 0.77), and non-vertebral fracture (OR = 2.01; AUC = 0.69). After adjustment for clinical covariates, the performance of MResNet further improved to OR = 3.31-4.08 and AUC = 0.81-0.83 across all fracture groups, significantly surpassing QUS-SOS (OR = 1.32-1.36; AUC = 0.60) and DXA-BMD (OR = 1.23-2.94; AUC = 0.63-0.76). CONCLUSIONS: This pilot cross-sectional study demonstrates that the MResNet model based on the ultrasonic RF signal shows promising performance in discriminating fragility fractures in postmenopausal women. When clinical covariates are incorporated, the performance of the modified MResNet is further enhanced, surpassing QUS-SOS and DXA-BMD in terms of OR and AUC. These findings highlight the potential of the MResNet as a promising approach for fracture risk assessment. Future research should focus on larger and more diverse populations to validate these results and explore its clinical applications.
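As a rough illustration of the evaluation reported above (not the authors' code; the `scores` and `fractured` arrays are synthetic placeholders), a per-SD odds ratio and an AUC for a continuous risk score can be computed as follows:

```python
# Illustrative sketch: per-SD odds ratio and AUC for a continuous fracture-risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
scores = rng.normal(size=200)                                              # hypothetical model outputs
fractured = (scores + rng.normal(scale=1.5, size=200) > 0.8).astype(int)   # hypothetical labels

z = (scores - scores.mean()) / scores.std()          # standardize so the OR is "per SD"
clf = LogisticRegression().fit(z.reshape(-1, 1), fractured)
odds_ratio = float(np.exp(clf.coef_[0, 0]))          # OR per SD increase in the score
auc = roc_auc_score(fractured, scores)               # discrimination of fracture status
print(f"OR per SD = {odds_ratio:.2f}, AUC = {auc:.2f}")
```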

2.
Med Image Anal ; 97: 103213, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38850625

ABSTRACT

Multi-modal data can provide complementary information about Alzheimer's disease (AD) and its development from different perspectives. Such information is closely related to the diagnosis, prevention, and treatment of AD, and hence it is necessary and critical to study AD through multi-modal data. Existing learning methods, however, usually ignore the influence of feature heterogeneity and directly fuse features in the last stages. Furthermore, most of these methods focus only on local fusion features or global fusion features, neglecting the complementariness of features at different levels and thus not sufficiently leveraging the information embedded in multi-modal data. To overcome these shortcomings, we propose a novel framework for AD diagnosis that fuses gene, imaging, protein, and clinical data. Our framework learns feature representations in a common feature space for the different modalities through a feature induction learning (FIL) module, thereby alleviating the impact of feature heterogeneity. Furthermore, local and global salient multi-modal feature interaction information at different levels is extracted through a novel dual multilevel graph neural network (DMGNN). We extensively validate the proposed method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the experimental results demonstrate that our method consistently outperforms other state-of-the-art multi-modal fusion methods. The code is publicly available on GitHub (https://github.com/xiankantingqianxue/MIA-code.git).
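The abstract does not detail the FIL module, so the following is only a generic sketch of the underlying idea of mapping heterogeneous modalities into a common feature space with per-modality encoders before fusion; all dimensions and names are assumptions:

```python
# Generic sketch (not the paper's FIL module): project heterogeneous modalities
# (gene, imaging, protein, clinical) into one shared feature space before fusion.
import torch
import torch.nn as nn

class SharedSpaceEncoder(nn.Module):
    def __init__(self, input_dims, shared_dim=128):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, shared_dim), nn.ReLU())
            for name, d in input_dims.items()
        })

    def forward(self, inputs):
        # Each modality is projected to the same dimensionality, reducing heterogeneity.
        return {name: self.encoders[name](x) for name, x in inputs.items()}

dims = {"gene": 2000, "imaging": 512, "protein": 300, "clinical": 20}
enc = SharedSpaceEncoder(dims)
batch = {name: torch.randn(4, d) for name, d in dims.items()}
shared = enc(batch)
fused = torch.cat(list(shared.values()), dim=1)  # simple late fusion of aligned features
print(fused.shape)                               # torch.Size([4, 512])
```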

3.
Neural Netw ; 178: 106409, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38823069

ABSTRACT

Multi-center disease diagnosis aims to build a global model for all involved medical centers. Due to privacy concerns, it is infeasible to collect data from multiple centers for training (i.e., centralized learning). Federated Learning (FL) is a decentralized framework that enables multiple clients (e.g., medical centers) to collaboratively train a global model while retaining patient data locally for privacy. In practice, however, the data across medical centers are not independently and identically distributed (non-IID), causing two challenging issues: (1) catastrophic forgetting at clients, i.e., the local model at a client forgets the knowledge received from the global model after local training, reducing performance; and (2) invalid aggregation at the server, i.e., the global model may not be favorable to some clients after model aggregation, resulting in a slow convergence rate. To mitigate these issues, an innovative Federated learning using Model Projection (FedMoP) method is proposed, which guarantees that (1) the loss of the local model on global data does not increase after local training, without accessing the global data, so that performance does not degrade; and (2) the loss of the global model on local data does not increase after aggregation, without accessing the local data, so that the convergence rate improves. Extensive experimental results show that FedMoP outperforms state-of-the-art FL methods in terms of accuracy, convergence rate and communication cost. In particular, FedMoP achieves accuracy comparable to or even higher than centralized learning. Thus, FedMoP can ensure privacy protection while matching or exceeding centralized learning in accuracy and communication cost.
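The abstract does not spell out FedMoP's projection, so the sketch below only illustrates the general idea of projecting a local update so that it does not conflict with a reference (global) gradient direction, in the spirit of A-GEM-style gradient projection; it is not the paper's algorithm:

```python
# Hedged sketch: keep a local update from increasing loss along a reference direction
# by removing any conflicting (negative) component of the local gradient.
import torch

def project_update(g_local: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """Project g_local so it has no negative component along g_ref."""
    dot = torch.dot(g_local, g_ref)
    if dot >= 0:                       # update already non-conflicting
        return g_local
    return g_local - (dot / g_ref.pow(2).sum()) * g_ref

g_local = torch.randn(10)              # flattened local gradient (placeholder)
g_ref = torch.randn(10)                # flattened reference/global gradient (placeholder)
g_safe = project_update(g_local, g_ref)
assert torch.dot(g_safe, g_ref) >= -1e-6   # no longer conflicts with the reference direction
```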

4.
Cancer Imaging ; 24(1): 63, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773670

ABSTRACT

BACKGROUND: Accurate segmentation of gastric tumors from CT scans provides useful image information for guiding the diagnosis and treatment of gastric cancer. However, automated gastric tumor segmentation from 3D CT images faces several challenges. The large variation in anisotropic spatial resolution limits the ability of 3D convolutional neural networks (CNNs) to learn features from different views. The background texture of gastric tumors is complex, and their size, shape and intensity distribution are highly variable, which makes it more difficult for deep learning methods to capture tumor boundaries. In particular, while multi-center datasets increase sample size and representation ability, they suffer from inter-center heterogeneity. METHODS: In this study, we propose a new cross-center 3D tumor segmentation method named Hierarchical Class-Aware Domain Adaptive Network (HCA-DAN), which includes a new 3D neural network that efficiently bridges an Anisotropic neural network and a Transformer (AsTr) for extracting multi-scale context features from CT images with anisotropic resolution, and a hierarchical class-aware domain alignment (HCADA) module for adaptively aligning multi-scale context features across two domains by integrating a class attention map with class-specific information. We evaluate the proposed method on an in-house CT image dataset collected from four medical centers and validate its segmentation performance in both in-center and cross-center test scenarios. RESULTS: Our baseline segmentation network (i.e., AsTr) achieves the best results among the compared 3D segmentation models, with mean dice similarity coefficients (DSC) of 59.26%, 55.97%, 48.83% and 67.28% in the four in-center test tasks, and DSCs of 56.42%, 55.94%, 46.54% and 60.62% in the four cross-center test tasks. In addition, the proposed cross-center segmentation network (i.e., HCA-DAN) obtains excellent results compared to other unsupervised domain adaptation methods, with DSCs of 58.36%, 56.72%, 49.25%, and 62.20% in the four cross-center test tasks. CONCLUSIONS: Comprehensive experimental results demonstrate that the proposed method outperforms the compared methods on this multi-center database and is promising for routine clinical workflows.


Subject(s)
Imaging, Three-Dimensional , Neural Networks, Computer , Stomach Neoplasms , Tomography, X-Ray Computed , Humans , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology , Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed/methods , Deep Learning
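A minimal sketch of the Dice similarity coefficient (DSC) used to report segmentation overlap above; the `pred` and `target` masks below are hypothetical:

```python
# Dice similarity coefficient between two binary 3D segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64, 64), dtype=np.uint8)
pred[20:40, 20:40, 20:40] = 1            # hypothetical predicted tumor mask
target = np.zeros_like(pred)
target[25:45, 22:42, 20:40] = 1          # hypothetical ground-truth mask
print(f"DSC = {dice_coefficient(pred, target):.2%}")
```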
5.
IEEE Trans Med Imaging ; PP, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38607706

ABSTRACT

Multimodal neuroimaging provides complementary information critical for accurate early diagnosis of Alzheimer's disease (AD). However, the inherent variability between multimodal neuroimages hinders the effective fusion of multimodal features. Moreover, achieving reliable and interpretable diagnoses with multimodal fusion remains challenging. To address these issues, we propose a novel multimodal diagnosis network based on multi-fusion and disease-induced learning (MDL-Net) to enhance early AD diagnosis by efficiently fusing multimodal data. Specifically, MDL-Net introduces a multi-fusion joint learning (MJL) module, which effectively fuses multimodal features and enhances the feature representation from global, local, and latent learning perspectives. MJL consists of three modules: a global-aware learning (GAL) module, a local-aware learning (LAL) module, and an outer latent-space learning (LSL) module. GAL learns global relationships among the modalities via a self-adaptive Transformer (SAT). LAL constructs local-aware convolutions to learn local associations. The LSL module introduces latent information through an outer product operation to further enhance the feature representation. MDL-Net integrates a disease-induced region-aware learning (DRL) module via gradient weighting to enhance interpretability, iteratively learning weight matrices to identify AD-related brain regions. We conduct extensive experiments on public datasets, and the results confirm the superiority of the proposed method. Our code will be available at: https://github.com/qzf0320/MDL-Net.
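As a hedged illustration of how an outer product operation can inject pairwise latent interactions between two modality features (an assumption for illustration, not the exact LSL module):

```python
# Outer-product fusion sketch: pairwise interactions between two feature vectors.
import torch

img_feat = torch.randn(8, 32)                              # hypothetical imaging features (batch, dim)
other_feat = torch.randn(8, 32)                            # hypothetical second-modality features
outer = torch.einsum("bi,bj->bij", img_feat, other_feat)   # all pairwise feature interactions
latent = outer.flatten(start_dim=1)                        # (batch, 32*32) interaction vector
print(latent.shape)                                        # torch.Size([8, 1024])
```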

6.
BMC Bioinformatics ; 25(1): 141, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38566002

ABSTRACT

Accurate and efficient prediction of drug-target interaction (DTI) is critical to advancing drug development and reducing the cost of drug discovery. Recently, deep learning methods have enhanced DTI prediction precision and efficacy, but several challenges remain. The first challenge lies in efficiently learning drug and protein feature representations alongside their interaction features to enhance DTI prediction. Another important challenge is improving the generalization capability of the DTI model in real-world scenarios. To address these challenges, we propose CAT-DTI, a model based on cross-attention and a Transformer, with domain adaptation capability. CAT-DTI effectively captures drug-target interactions while adapting to out-of-distribution data. Specifically, we use a convolutional neural network combined with a Transformer to encode the distance relationships between amino acids within protein sequences and employ a cross-attention module to capture the drug-target interaction features. Generalization to new DTI prediction scenarios is achieved by leveraging a conditional domain adversarial network that aligns DTI representations under diverse distributions. Experimental results in in-domain and cross-domain scenarios demonstrate that the CAT-DTI model improves overall DTI prediction performance compared with previous methods.


Subject(s)
Drug Development , Drug Discovery , Drug Interactions , Amino Acid Sequence , Amino Acids
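A small illustration of a cross-attention step between drug and protein token embeddings, as used conceptually in CAT-DTI; the dimensions and tokenization are assumptions, not the paper's exact configuration:

```python
# Cross-attention sketch: drug tokens query the protein sequence to form interaction features.
import torch
import torch.nn as nn

d_model = 128
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

drug_tokens = torch.randn(2, 60, d_model)      # e.g., encoded molecular tokens (placeholder)
protein_tokens = torch.randn(2, 400, d_model)  # e.g., encoded residue tokens (placeholder)

# Attention weights indicate which residues each drug token attends to.
interaction, attn_weights = cross_attn(query=drug_tokens, key=protein_tokens,
                                        value=protein_tokens)
print(interaction.shape, attn_weights.shape)   # (2, 60, 128), (2, 60, 400)
```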
7.
EPMA J ; 15(1): 39-51, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38463622

ABSTRACT

Purpose: We developed an Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid early diagnosis and monitoring of infantile fundus diseases and health conditions and to address the urgent needs of ophthalmologists. Methods: We developed IRIDS by combining convolutional neural networks and Transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely retinopathy of prematurity (ROP) at three severities (mild, moderate, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS also includes depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. IRIDS employed a five-fold cross-validation approach to generate the classification results. Results: Several baseline models achieved the following metrics: accuracy, precision, recall, F1-score (F1), kappa, and area under the receiver operating characteristic curve (AUC) with best values of 94.62% (95% CI, 94.34%-94.90%), 94.07% (95% CI, 93.32%-94.82%), 90.56% (95% CI, 88.64%-92.48%), 92.34% (95% CI, 91.87%-92.81%), 91.15% (95% CI, 90.37%-91.93%), and 99.08% (95% CI, 99.07%-99.09%), respectively. IRIDS showed promising results compared to ophthalmologists, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset, utilizing the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieved performance that warrants further investigation for the detection of retinal abnormalities. Conclusions: IRIDS accurately identifies nine infantile fundus diseases and conditions. It may aid non-ophthalmologist personnel in underserved areas in infantile fundus disease screening, thereby helping to prevent severe complications. IRIDS serves as an example of integrating artificial intelligence into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM / 3PM) in the treatment of infantile fundus diseases. Supplementary Information: The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
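The five-fold cross-validated metrics reported above (accuracy, F1, kappa, AUC) can be computed along these lines; the sketch uses synthetic data and a placeholder classifier, not the IRIDS models or images:

```python
# Stratified five-fold cross-validation with the metrics named in the abstract.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))                                   # placeholder features
y = (X[:, 0] + rng.normal(scale=0.8, size=500) > 0).astype(int)  # placeholder labels

accs, f1s, kappas, aucs = [], [], [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    pred = (prob >= 0.5).astype(int)
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred))
    kappas.append(cohen_kappa_score(y[test_idx], pred))
    aucs.append(roc_auc_score(y[test_idx], prob))

print(f"acc={np.mean(accs):.3f} f1={np.mean(f1s):.3f} "
      f"kappa={np.mean(kappas):.3f} auc={np.mean(aucs):.3f}")
```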

8.
IEEE Trans Cybern ; PP, 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38324437

ABSTRACT

The study of the mechanism of nicotine addiction is of great significance for both nicotine withdrawal and brain science. Detecting addiction-related brain connectivity using functional magnetic resonance imaging (fMRI) is a critical step in studying this mechanism. However, it is challenging to estimate addiction-related brain connectivity accurately because of the low signal-to-noise ratio of fMRI and small sample sizes. In this work, a prior-embedding graph generative adversarial network (PG-GAN) is proposed to capture addiction-related brain connectivity accurately. In a dual-generator scheme, an addiction-related connectivity generator learns the feature map of addiction connections, while a reconstruction generator is used for sample reconstruction. Moreover, a bidirectional mapping mechanism is designed to maintain the consistency of the sample distribution in the latent space so that addiction-related brain connectivity can be estimated more accurately. The proposed model uses prior-knowledge embeddings to reduce the search space so that it can better learn the latent distribution despite the small sample size. Experimental results demonstrate the effectiveness of the proposed PG-GAN.

9.
IEEE Trans Cybern ; 54(6): 3652-3665, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38236677

ABSTRACT

Alzheimer's disease (AD) is characterized by alterations of the brain's structural and functional connectivity during its progressive degenerative processes. Existing auxiliary diagnostic methods have accomplished the classification task, but few of them can accurately evaluate the changing characteristics of brain connectivity. In this work, a prior-guided adversarial learning with hypergraph (PALH) model is proposed to predict abnormal brain connections using triple-modality medical images. Concretely, a prior distribution from anatomical knowledge is estimated to guide multimodal representation learning using an adversarial strategy. Also, the pairwise collaborative discriminator structure is further utilized to narrow the difference in representation distribution. Moreover, the hypergraph perceptual network is developed to effectively fuse the learned representations while establishing high-order relations within and between multimodal images. Experimental results demonstrate that the proposed model outperforms other related methods in analyzing and predicting AD progression. More importantly, the identified abnormal connections are partly consistent with previous neuroscience discoveries. The proposed model can evaluate the characteristics of abnormal brain connections at different stages of AD, which is helpful for cognitive disease study and early treatment.


Subject(s)
Alzheimer Disease , Brain , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/physiopathology , Humans , Brain/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Machine Learning , Neural Networks, Computer , Aged
10.
ArXiv ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38168455

ABSTRACT

Effective connectivity estimation plays a crucial role in understanding the interactions and information flow between different brain regions. However, the functional time series used for estimating effective connectivity are derived with specific software, which may introduce large computational errors because of different parameter settings and degrade the ability to model complex causal relationships between brain regions. In this paper, a brain diffuser with hierarchical transformer (BDHT) is proposed to estimate effective connectivity for mild cognitive impairment (MCI) analysis. To the best of our knowledge, the proposed brain diffuser is the first generative model to apply diffusion models to generating and analyzing multimodal brain networks. Specifically, the BDHT leverages structural connectivity to guide the reverse processes in an efficient way. This makes the denoising process more reliable and guarantees the accuracy of effective connectivity estimation. To improve denoising quality, the hierarchical denoising transformer is designed to learn multi-scale features in topological space. By stacking multi-head attention and graph convolutional networks, the graph convolutional transformer (GraphConformer) module is devised to enhance structure-function complementarity and improve noise estimation. Experimental evaluations of the denoising diffusion model demonstrate its effectiveness in estimating effective connectivity. The proposed model achieves superior performance in terms of accuracy and robustness compared to existing approaches. Moreover, the proposed model can identify altered directional connections and provide a comprehensive understanding of pathogenesis for MCI treatment.

11.
BMC Biol ; 22(1): 1, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38167069

ABSTRACT

BACKGROUND: Cell senescence is a sign of aging and plays a significant role in the pathogenesis of age-related disorders. For cell therapy, senescence may compromise the quality and efficacy of cells, posing potential safety risks. Mesenchymal stem cells (MSCs) are currently undergoing extensive research for cell therapy, thus necessitating the development of effective methods to evaluate senescence. Senescent MSCs exhibit a distinctive morphology that can be used for detection. However, morphological assessment during MSC production is often subjective and uncertain. New tools are required for the reliable evaluation of senescent single cells on a large scale in live imaging of MSCs. RESULTS: We have developed a successful morphology-based Cascade region-based convolutional neural network (Cascade R-CNN) system for detecting senescent MSCs, which can automatically locate single cells of different sizes and shapes in multicellular images and assess their senescence state. Additionally, we tested the applicability of the Cascade R-CNN system for MSC senescence and examined the correlation of morphological changes with other senescence indicators. CONCLUSIONS: Deep learning has here been applied for the first time to detect senescent MSCs, showing promising performance in both chronic and acute MSC senescence. The system can be a labor-saving and cost-effective option for screening MSC culture conditions and anti-aging drugs, as well as a powerful tool for non-invasive, real-time morphological image analysis integrated into cell production.


Subject(s)
Deep Learning , Mesenchymal Stem Cells , Cell Proliferation , Cellular Senescence , Cells, Cultured
12.
Med Image Anal ; 91: 103014, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37913578

ABSTRACT

Cell classification underpins intelligent cervical cancer screening, a cytology examination that effectively decreases both the morbidity and mortality of cervical cancer. This task, however, is rather challenging, mainly because it is difficult to collect a training dataset sufficiently representative of the unseen test data, as cell appearance and shape vary widely across cancerous statuses. As a result, a classifier, even when trained properly, often misclassifies cells that are underrepresented in the training dataset, eventually leading to a wrong screening result. To address this, we propose a new learning algorithm, called worse-case boosting, that enables classifiers to learn effectively from under-representative datasets in cervical cell classification. The key idea is to learn more from worse-case data, i.e., data for which the classifier has a larger gradient norm than for other training data and which are therefore more likely to be underrepresented, by dynamically assigning them more training iterations and larger loss weights to boost the classifier's generalizability on underrepresented data. We realize this idea by sampling worse-case data according to gradient-norm information and then enhancing their loss values to update the classifier. We demonstrate the effectiveness of this new learning algorithm on two publicly available cervical cell classification datasets (to the best of our knowledge, the two largest ones), and positive results (a 4% accuracy improvement) are obtained in extensive experiments. The source code is available at: https://github.com/YouyiSong/Worse-Case-Boosting.


Subject(s)
Uterine Cervical Neoplasms , Female , Humans , Early Detection of Cancer , Algorithms , Software
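A hedged sketch of the worse-case idea described above, upweighting the hardest samples in a batch; per-sample loss is used here as a cheap stand-in for the paper's gradient-norm criterion:

```python
# Upweight the "worse-case" subset of a batch before the gradient step.
import torch
import torch.nn as nn

model = nn.Linear(20, 5)                       # placeholder classifier
criterion = nn.CrossEntropyLoss(reduction="none")
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 20)                        # hypothetical batch of features
y = torch.randint(0, 5, (64,))                 # hypothetical labels

per_sample = criterion(model(x), y)            # one loss value per sample
worst_idx = per_sample.topk(16).indices        # "worse-case" subset of the batch
weights = torch.ones_like(per_sample)
weights[worst_idx] = 2.0                       # larger loss weight for worse-case data

loss = (weights * per_sample).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```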
13.
Article in English | MEDLINE | ID: mdl-38082801

ABSTRACT

Accurate segmentation of gastric tumors from computed tomography (CT) images provides useful image information for guiding the diagnosis and treatment of gastric cancer. Researchers typically collect datasets from multiple medical centers to increase sample size and representation, but this raises the issue of data heterogeneity. To this end, we propose a new cross-center 3D tumor segmentation method named unsupervised scale-aware and boundary-aware domain adaptive network (USBDAN), which includes a new 3D neural network that efficiently bridges an Anisotropic neural network and a Transformer (AsTr) for extracting multi-scale features from CT images with anisotropic resolution, and a scale-aware and boundary-aware domain alignment (SaBaDA) module for adaptively aligning multi-scale features between two domains and enhancing tumor boundary delineation based on location-related information drawn from each sample across all domains. We evaluate the proposed method on an in-house CT image dataset collected from four medical centers. Our results demonstrate that the proposed method outperforms several state-of-the-art methods.


Subject(s)
Stomach Neoplasms , Humans , Stomach Neoplasms/diagnostic imaging , Anisotropy , Awareness , Electric Power Supplies , Hospitals
14.
Article in English | MEDLINE | ID: mdl-38082863

ABSTRACT

The 12-lead electrocardiogram (ECG) is widely used in the diagnosis of cardiovascular disease (CVD). With the increasing number of CVD patients, accurate automatic diagnosis from the ECG has become a research hotspot. Deep learning-based methods can reduce the influence of human subjectivity and improve diagnostic accuracy. In this paper, we propose a 12-lead ECG automatic diagnosis method based on the fusion of channel and temporal features. Specifically, we design a gated CNN-Transformer network in which the CNN block extracts signal embeddings to reduce data complexity. A dual-branch Transformer structure then extracts channel and temporal features, respectively, from these low-dimensional embeddings. Finally, the features from the two branches are fused by a gating unit to achieve automatic CVD diagnosis from the 12-lead ECG. The proposed end-to-end approach is more competitive than other deep learning algorithms, achieving an overall diagnostic accuracy of 85.3% on the CPSC-2018 12-lead ECG dataset.


Subject(s)
Cardiovascular Diseases , Neural Networks, Computer , Humans , Algorithms , Cardiovascular Diseases/diagnosis , Electrocardiography
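A generic sketch of a gating unit fusing two feature branches (channel vs. temporal); this illustrates gated fusion in general and is not necessarily the paper's exact formulation:

```python
# Gated fusion of two branch features via a learned, element-wise convex combination.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, channel_feat: torch.Tensor, temporal_feat: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([channel_feat, temporal_feat], dim=-1)))
        return g * channel_feat + (1 - g) * temporal_feat   # per-feature weighting

fusion = GatedFusion(dim=256)
fused = fusion(torch.randn(8, 256), torch.randn(8, 256))    # placeholder branch outputs
print(fused.shape)                                          # torch.Size([8, 256])
```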
15.
Article in English | MEDLINE | ID: mdl-38083355

ABSTRACT

Thyroid nodules, an early sign of thyroid cancer, are the most common nodular lesions. As a non-invasive imaging method, ultrasound is widely used in the diagnosis of benign and malignant thyroid nodules. Because there is no obvious difference in appearance between the two types of nodules and their contrast with the surrounding muscle tissue is low, benign and malignant nodules are difficult to distinguish. Therefore, a dense nodal Swin Transformer (DST) method for the diagnosis of thyroid nodules is proposed in this paper. The image is partitioned into patches, and feature maps of different sizes are constructed in four stages, taking into account the information at each feature level. In each stage block, a dense connection mechanism is used to make full use of multi-layer features and effectively improve diagnostic performance. Experimental results on multi-center ultrasound data collected from 17 hospitals show that the accuracy of the proposed method is 87.27%, the sensitivity is 88.63%, and the specificity is 85.16%, which verifies that the proposed algorithm has the potential to assist clinical practice.


Subject(s)
Thyroid Neoplasms , Thyroid Nodule , Humans , Thyroid Nodule/diagnostic imaging , Thyroid Nodule/pathology , Sensitivity and Specificity , Diagnosis, Differential , Ultrasonography/methods
16.
Article in English | MEDLINE | ID: mdl-38083477

ABSTRACT

Fibromyalgia syndrome (FMS) is a rheumatic disorder that seriously affects the normal life of patients. Because the clinical manifestations of FMS are complex, it is challenging to detect, and an automatic FMS diagnosis model is therefore urgently needed to assist physicians. Brain functional connectivity networks (BFCNs) constructed from resting-state functional magnetic resonance imaging (rs-fMRI) to describe brain function have been widely used to distinguish individuals with relevant diseases from normal controls (NC). We therefore propose a novel model based on a BFCN and a graph convolutional network (GCN) for automatic FMS diagnosis. First, a novel fused BFCN is constructed by combining Pearson's correlation (PC) and low-rank (LR) BFCNs, which retains information while reducing data redundancy. The BFCN features are then combined with the subjects' non-imaging information to obtain nodes and adjacency matrices, building a graph with edge attention. Finally, the graph is fed to the GCN layer for FMS diagnosis. Our model is evaluated on an in-house FMS dataset and achieves 82.48% accuracy. The experimental results show that our method outperforms state-of-the-art competing methods.


Subject(s)
Fibromyalgia , Physicians , Humans , Fibromyalgia/diagnostic imaging , Brain/diagnostic imaging
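A minimal sketch of building a Pearson-correlation BFCN from ROI time series; the rs-fMRI array is synthetic, not the FMS dataset:

```python
# Pearson-correlation brain functional connectivity network from ROI time series.
import numpy as np

n_timepoints, n_rois = 200, 90
ts = np.random.default_rng(0).normal(size=(n_timepoints, n_rois))  # placeholder ROI signals

bfcn = np.corrcoef(ts, rowvar=False)      # (n_rois, n_rois) Pearson correlation matrix
np.fill_diagonal(bfcn, 0.0)               # drop self-connections
edges = np.abs(bfcn) > 0.3                # simple threshold to obtain an adjacency matrix
print(bfcn.shape, int(edges.sum()) // 2, "edges")
```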
17.
Article in English | MEDLINE | ID: mdl-38083514

ABSTRACT

Contrast-enhanced ultrasound (CEUS) video plays an important role in post-ablation treatment response assessment in patients with hepatocellular carcinoma (HCC). However, assessing treatment response from CEUS video is challenging because of high inter-frame data repeatability, the small ablation area, and the poor imaging quality of CEUS video. To address these issues, we propose a two-stage diagnostic framework for post-ablation treatment response assessment in patients with HCC using CEUS video. The first stage is a localization stage, used to locate the ablation area; here we propose a Yolov5-SFT to improve localization of the ablation area and a similarity comparison module (SCM) to reduce data repeatability. The second stage is an assessment stage, used to evaluate postoperative efficacy; here we design an EfficientNet-SK to improve assessment accuracy. The experimental results on self-collected data show that the proposed framework outperforms the other selected algorithms and can effectively assist doctors in the assessment of post-ablation treatment response.


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Humans , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/surgery , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , Contrast Media , Tomography, X-Ray Computed , Ultrasonography/methods
18.
Article in English | MEDLINE | ID: mdl-38083611

ABSTRACT

Coronavirus disease 2019 (COVID-19) is an acute disease that can rapidly progress to a very serious state, so it is of great significance to realize automatic COVID-19 diagnosis. However, because the computed tomography (CT) characteristics of community-acquired pneumonia (CP) and COVID-19 differ only slightly, existing models are poorly suited to the three-class classification of healthy control, CP, and COVID-19. Current models also rarely optimize over data from multiple centers. Therefore, we propose a diagnosis model for COVID-19 patients based on a graph-enhanced 3D convolutional neural network (CNN) and cross-center domain feature adaptation. Specifically, we first design a 3D CNN with a graph convolution module to enhance the global feature extraction capability of the CNN. Meanwhile, we use a domain-adaptive feature alignment method to optimize the feature distance between different centers, which effectively enables multi-center COVID-19 diagnosis. Our experiments achieve quite promising COVID-19 diagnosis results: the accuracy on the mixed dataset is 98.05%, and the accuracies on the cross-center tasks are 85.29% and 87.53%.


Subject(s)
COVID-19 Testing , COVID-19 , Humans , COVID-19/diagnosis , Neural Networks, Computer
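The abstract does not specify the alignment criterion, so the sketch below shows a common choice, maximum mean discrepancy (MMD) with an RBF kernel, for pulling the feature distributions of two centers together; the names and weighting are assumptions:

```python
# RBF-kernel MMD between feature batches from two centers, usable as an alignment loss.
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

feat_center_a = torch.randn(32, 128)   # features from one center (placeholder)
feat_center_b = torch.randn(32, 128)   # features from another center (placeholder)
alignment_loss = rbf_mmd(feat_center_a, feat_center_b)
# total_loss = classification_loss + lambda_align * alignment_loss   # typical usage
print(alignment_loss.item())
```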
19.
Article in English | MEDLINE | ID: mdl-37971911

ABSTRACT

Fusing structural-functional images of the brain has shown great potential to analyze the deterioration of Alzheimer's disease (AD). However, it is a big challenge to effectively fuse the correlated and complementary information from multimodal neuroimages. In this work, a novel model termed cross-modal transformer generative adversarial network (CT-GAN) is proposed to effectively fuse the functional and structural information contained in functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI). The CT-GAN can learn topological features and generate multimodal connectivity from multimodal imaging data in an efficient end-to-end manner. Moreover, the swapping bi-attention mechanism is designed to gradually align common features and effectively enhance the complementary features between modalities. By analyzing the generated connectivity features, the proposed model can identify AD-related brain connections. Evaluations on the public ADNI dataset show that the proposed CT-GAN can dramatically improve prediction performance and detect AD-related brain regions effectively. The proposed model also provides new insights into detecting AD-related abnormal neural circuits.


Subject(s)
Alzheimer Disease , Diffusion Tensor Imaging , Humans , Diffusion Tensor Imaging/methods , Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Learning
20.
Article in English | MEDLINE | ID: mdl-37815971

ABSTRACT

Integrating the brain structural and functional connectivity features is of great significance in both exploring brain science and analyzing cognitive impairment clinically. However, it remains a challenge to effectively fuse structural and functional features in exploring the complex brain network. In this paper, a novel brain structure-function fusing-representation learning (BSFL) model is proposed to effectively learn fused representation from diffusion tensor imaging (DTI) and resting-state functional magnetic resonance imaging (fMRI) for mild cognitive impairment (MCI) analysis. Specifically, the decomposition-fusion framework is developed to first decompose the feature space into the union of the uniform and unique spaces for each modality, and then adaptively fuse the decomposed features to learn MCI-related representation. Moreover, a knowledge-aware transformer module is designed to automatically capture local and global connectivity features throughout the brain. Also, a uniform-unique contrastive loss is further devised to make the decomposition more effective and enhance the complementarity of structural and functional features. The extensive experiments demonstrate that the proposed model achieves better performance than other competitive methods in predicting and analyzing MCI. More importantly, the proposed model could be a potential tool for reconstructing unified brain networks and predicting abnormal connections during the degenerative processes in MCI.


Subject(s)
Cognitive Dysfunction , Diffusion Tensor Imaging , Humans , Brain Mapping/methods , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging
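The uniform-unique contrastive loss is not detailed in the abstract; the sketch below only illustrates a generic InfoNCE-style contrastive loss that pulls paired DTI/fMRI "uniform" embeddings together and pushes mismatched pairs apart:

```python
# Generic InfoNCE-style contrastive loss over paired multimodal embeddings.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature     # similarity of every pair in the batch
    targets = torch.arange(z_a.size(0))      # matching index = positive pair
    return F.cross_entropy(logits, targets)

dti_uniform = torch.randn(16, 64)    # placeholder "uniform" DTI embeddings
fmri_uniform = torch.randn(16, 64)   # placeholder "uniform" fMRI embeddings
loss = info_nce(dti_uniform, fmri_uniform)
print(loss.item())
```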