Results 1 - 20 of 76
1.
Med Image Anal ; 97: 103228, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38850623

ABSTRACT

Accurate landmark detection in medical imaging is essential for quantifying various anatomical structures and assisting in diagnosis and treatment planning. In ultrasound cine, landmark detection is often associated with identifying keyframes, which represent the occurrence of specific events, such as measuring target dimensions at specific temporal phases. Existing methods predominantly treat landmark and keyframe detection as separate tasks without harnessing their underlying correlations. Additionally, owing to the intrinsic characteristics of ultrasound imaging, both tasks are constrained by inter-observer variability, leading to potentially higher levels of uncertainty. In this paper, we propose a Bayesian network to achieve simultaneous keyframe and landmark detection in ultrasound cine, especially under highly sparse training data conditions. We follow a coarse-to-fine landmark detection architecture and propose an adaptive Bayesian hypergraph for coordinate refinement on the results of heatmap-based regression. In addition, we propose an Order Loss for training a bi-directional Gated Recurrent Unit to identify keyframes based on the relative likelihoods within the sequence. Furthermore, to exploit the underlying correlation between the two tasks, we use a shared encoder to extract features for both tasks and enhance detection accuracy through the interaction of temporal and motion information. Experiments on two in-house datasets (multi-view transesophageal and transthoracic echocardiography) and one public dataset (transthoracic echocardiography) demonstrate that our method outperforms state-of-the-art approaches. The mean absolute errors for dimension measurements of the left atrial appendage, aortic annulus, and left ventricle are 2.40 mm, 0.83 mm, and 1.63 mm, respectively. The source code is available at github.com/warmestwind/ABHG.

2.
Med Image Anal ; 96: 103211, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38796945

ABSTRACT

In the medical field, datasets are usually integrated across sites because data acquisition is difficult and data at a single site are insufficient. The domain shift caused by the heterogeneous distributions of multi-site data makes autism spectrum disorder (ASD) hard to identify. Recently, domain adaptation has received considerable attention as a promising solution. However, domain adaptation on graph data such as brain networks has not been fully studied. It faces two major challenges: (1) complex graph structure; and (2) multiple source domains. To overcome these issues, we propose an end-to-end structure-aware domain adaptation framework for brain network analysis (BrainDAS) using resting-state functional magnetic resonance imaging (rs-fMRI). The proposed approach contains two stages: supervision-guided multi-site graph domain adaptation with dynamic kernel generation, and graph classification with attention-based graph pooling. We evaluate BrainDAS on a public dataset provided by the Autism Brain Imaging Data Exchange (ABIDE), which includes 871 subjects from 17 different sites, surpassing state-of-the-art algorithms in several different evaluation settings. Furthermore, our promising results demonstrate the interpretability and generalization of the proposed method. Our code is available at https://github.com/songruoxian/BrainDAS.


Subject(s)
Algorithms , Autism Spectrum Disorder , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Autism Spectrum Disorder/diagnostic imaging , Brain/diagnostic imaging , Nerve Net/diagnostic imaging , Image Processing, Computer-Assisted/methods
3.
IEEE J Biomed Health Inform ; 28(3): 1528-1539, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38446655

ABSTRACT

Colorectal cancer is a prevalent and life-threatening disease, and colorectal cancer liver metastasis (CRLM) exhibits the highest mortality rate. Currently, surgery stands as the most effective curative option for eligible patients. However, owing to the insufficient performance of traditional methods and the lack of multi-modality MRI feature complementarity in existing deep learning methods, the prognosis of CRLM surgical resection has not been fully explored. This paper proposes a new method, the multi-modal guided complementary network (MGCNet), which employs multi-sequence MRI to predict 1-year recurrence and recurrence-free survival in patients after CRLM resection. In light of the complexity and redundancy of features in the liver region, we designed a multi-modal guided local feature fusion module that uses tumor features to guide the dynamic fusion of prognostically relevant local features within the liver. On the other hand, to counter the loss of spatial information during multi-sequence MRI fusion, the cross-modal complementary external attention module introduces an external mask branch to establish inter-layer correlation. The results show that the model achieves an accuracy (ACC) of 0.79, an area under the curve (AUC) of 0.84, a C-index of 0.73, and a hazard ratio (HR) of 4.0, a significant improvement over state-of-the-art methods. Additionally, MGCNet exhibits good interpretability.
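The C-index reported above measures how well predicted risks order survival outcomes. As a point of reference, here is a minimal sketch of Harrell's concordance index on toy data (simplified pairwise form; the function name and data are mine, not the paper's implementation):

```python
import numpy as np

def concordance_index(times, risks, events):
    """Harrell's C-index over comparable pairs (ties in risk count as 0.5)."""
    n_conc, n_pairs = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i had an event before time j.
            if events[i] and times[i] < times[j]:
                n_pairs += 1
                if risks[i] > risks[j]:
                    n_conc += 1.0
                elif risks[i] == risks[j]:
                    n_conc += 0.5
    return n_conc / n_pairs

# Toy data: risk scores perfectly ordered with event times -> C-index = 1.0
t = np.array([2.0, 4.0, 6.0, 8.0])
r = np.array([0.9, 0.7, 0.4, 0.1])   # higher risk, earlier event
e = np.array([1, 1, 1, 1])
print(concordance_index(t, r, e))  # 1.0
```

A C-index of 0.5 corresponds to random ordering, so the 0.73 above indicates a usefully discriminative risk model.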


Subject(s)
Colorectal Neoplasms , Liver Neoplasms , Humans , Prognosis , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , Magnetic Resonance Imaging , Colorectal Neoplasms/diagnostic imaging , Colorectal Neoplasms/surgery
4.
Comput Biol Med ; 172: 108261, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38508056

ABSTRACT

Whole heart segmentation (WHS) has significant clinical value for cardiac anatomy, modeling, and analysis of cardiac function. This study aims to address WHS accuracy on cardiac CT images, as well as the fast inference speed and low graphics processing unit (GPU) memory consumption required by practical clinical applications. Thus, we propose a multi-residual two-dimensional (2D) network integrating spatial correlation for WHS. The network performs slice-by-slice segmentation on three-dimensional cardiac CT images in a 2D encoder-decoder manner. In the network, a convolutional long short-term memory skip connection module is designed to perform spatial correlation feature extraction on the feature maps at different resolutions extracted by the sub-modules of the pre-trained ResNet-based encoder. Moreover, a decoder based on the multi-residual module is designed to analyze the extracted features from the perspectives of multi-scale and channel attention, thereby accurately delineating the various substructures of the heart. The proposed method is verified on a dataset of the multi-modality WHS challenge, an in-house WHS dataset, and a dataset of the abdominal organ segmentation challenge. The Dice coefficient, Jaccard index, average symmetric surface distance, Hausdorff distance, inference time, and maximum GPU memory for WHS are 0.914, 0.843, 1.066 mm, 15.778 mm, 9.535 s, and 1905 MB, respectively. The proposed network has high accuracy, fast inference speed, minimal GPU memory consumption, strong robustness, and good generalization. It can be deployed in clinical practice for WHS and can be effectively extended to other multi-organ segmentation tasks. The source code is publicly available at https://github.com/nancy1984yan/MultiResNet-SC.
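For reference, the Dice and Jaccard metrics quoted above are computed directly from binary masks; a minimal NumPy sketch on toy masks (illustrative only, not the released code):

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice coefficient and Jaccard index for two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return dice, jaccard

# Toy example: a 4-pixel prediction against a 6-pixel ground truth.
pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1  # 4 pixels
gt = np.zeros((4, 4), dtype=np.uint8);   gt[1:3, 1:4] = 1    # 6 pixels
d, j = dice_jaccard(pred, gt)  # intersection = 4, union = 6
print(round(d, 3), round(j, 3))  # 0.8 0.667
```

Note that Dice = 2J/(1+J), so the reported 0.914 Dice and 0.843 Jaccard are mutually consistent.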


Subject(s)
Heart , Software , Heart/diagnostic imaging , Tomography, X-Ray Computed
5.
Data Brief ; 53: 110141, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38406254

ABSTRACT

A benchmark histopathological Hematoxylin and Eosin (H&E) image dataset for Cervical Adenocarcinoma in Situ (CAISHI), containing 2240 histopathological images, is established to fill the current data gap: 1010 are images of normal cervical glands and the other 1230 are images of cervical adenocarcinoma in situ (AIS). The sampling method is endoscopic biopsy. Pathological sections are obtained by H&E staining from Shengjing Hospital, China Medical University. The images have a magnification of 100× and are captured with an Axio Scope.A1 microscope. The image size is 3840 × 2160 pixels, and the format is ".png". The collection of CAISHI is subject to an ethical review by China Medical University with approval number 2022PS841K. These images are analyzed at multiple levels, including classification tasks and image retrieval tasks. A variety of computer vision and machine learning methods are used to evaluate the performance of the data. For classification tasks, a variety of classical machine learning classifiers such as k-means, support vector machines (SVM), and random forests (RF), as well as convolutional neural network classifiers such as Residual Network 50 (ResNet50), Vision Transformer (ViT), Inception version 3 (Inception-V3), and Visual Geometry Group Network 16 (VGG-16), are used. In addition, a Siamese network is used to evaluate few-shot learning tasks. For image retrieval, color features, texture features, and deep learning features are extracted and their performance is tested. CAISHI can help with the early diagnosis and screening of cervical cancer. Researchers can use this dataset to develop new computer-aided diagnostic tools that could improve the accuracy and efficiency of cervical cancer screening and advance the development of automated diagnostic algorithms.
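As a rough illustration of the classical-classifier baselines the dataset is evaluated with, here is a hedged sketch using a random forest on synthetic stand-in features (the feature matrix and labels below are fabricated for the demo and are not CAISHI data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for extracted histopathology features:
# 200 samples, 64 features, binary labels (normal gland vs. AIS).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)  # synthetic signal placed in feature 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```

In practice the feature matrix would come from color/texture descriptors or a pretrained CNN applied to the H&E patches.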

6.
IEEE Trans Med Imaging ; PP, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38373127

ABSTRACT

Medical image analysis techniques have been employed in diagnosing and screening clinical diseases. However, both poor medical image quality and illumination style inconsistency increase uncertainty in clinical decision-making, potentially resulting in clinician misdiagnosis. The majority of current image enhancement methods primarily concentrate on enhancing medical image quality by leveraging high-quality reference images, which are challenging to collect in clinical applications. In this study, we address image quality enhancement within a fully self-supervised learning setting, wherein neither high-quality images nor paired images are required. To achieve this goal, we investigate the potential of self-supervised learning combined with domain adaptation to enhance the quality of medical images without the guidance of high-quality medical images. We design a Domain Adaptation Self-supervised Quality Enhancement framework, called DASQE. More specifically, we establish multiple domains at the patch level through a designed rule-based quality assessment scheme and style clustering. To achieve image quality enhancement and maintain style consistency, we formulate the image quality enhancement as a collaborative self-supervised domain adaptation task for disentangling the low-quality factors, medical image content, and illumination style characteristics by exploring intrinsic supervision in the low-quality medical images. Finally, we perform extensive experiments on six benchmark datasets of medical images, and the experimental results demonstrate that DASQE attains state-of-the-art performance. Furthermore, we explore the impact of the proposed method on various clinical tasks, such as retinal fundus vessel/lesion segmentation, nerve fiber segmentation, polyp segmentation, skin lesion segmentation, and disease classification. The results demonstrate that DASQE is advantageous for diverse downstream image analysis tasks.

7.
Heliyon ; 10(1): e23224, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38163158

ABSTRACT

Regional wall motion abnormality (RWMA) is a common manifestation of ischemic heart disease detected through echocardiography. Currently, RWMA diagnosis relies heavily on visual assessment by doctors, which depends on individual experience and has suboptimal inter-observer reproducibility. Several RWMA diagnosis models have been proposed; however, RWMA diagnosis over more refined segments can provide more comprehensive wall motion information to better assist doctors in diagnosing ischemic heart disease. In this paper, we propose the STGA-MS model, which consists of three modules, the spatial-temporal grouping attention (STGA) module, the segment feature extraction module, and the multiscale downsampling module, for diagnosing RWMA in multiple myocardial segments. The STGA module captures global spatial and temporal information, enhancing the representation of myocardial motion characteristics. The segment feature extraction module focuses on specific segment regions, extracting relevant features. The multiscale downsampling module analyzes myocardial motion deformation across different receptive fields. Experimental results on a 2D transthoracic echocardiography dataset show that the proposed STGA-MS model achieves better performance than state-of-the-art models. It holds promise for improving the accuracy and reproducibility of RWMA diagnosis, assisting clinicians in diagnosing ischemic heart disease more reliably.

8.
Comput Methods Programs Biomed ; 245: 108032, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38244339

ABSTRACT

BACKGROUND AND OBJECTIVE: Multi-label Chest X-ray (CXR) images often contain rich label relationship information, which is beneficial to improve classification performance. However, because of the intricate relationships among labels, most existing works fail to effectively learn and make full use of the label correlations, resulting in limited classification performance. In this study, we propose a multi-label learning framework that learns and leverages the label correlations to improve multi-label CXR image classification. METHODS: In this paper, we capture the global label correlations through the self-attention mechanism. Meanwhile, to better utilize label correlations for guiding feature learning, we decompose the image-level features into label-level features. Furthermore, we enhance label-level feature learning in an end-to-end manner by a consistency constraint between global and local label correlations, and a label correlation guided multi-label supervised contrastive loss. RESULTS: To demonstrate the superior performance of our proposed approach, we conduct three times 5-fold cross-validation experiments on the CheXpert dataset. Our approach obtains an average F1 score of 44.6% and an AUC of 76.5%, achieving a 7.7% and 1.3% improvement compared to the state-of-the-art results. CONCLUSION: More accurate label correlations and full utilization of the learned label correlations help learn more discriminative label-level features. Experimental results demonstrate that our approach achieves exceptionally competitive performance compared to the state-of-the-art algorithms.


Subject(s)
Learning , Thorax , Thorax/diagnostic imaging , Algorithms , Research Design
9.
J Aging Phys Act ; 32(1): 8-17, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37652436

ABSTRACT

OBJECTIVES: To identify frailty trajectories and examine its association with allostatic load (AL) and mediating effect of physical activity (PA). METHODS: This study included 8,082 adults from the English Longitudinal Study of Aging over Waves 4-9. AL was calculated by 14 biological indicators, and a 53-item frailty index was used to evaluate frailty. Frailty trajectories were classified by group-based trajectory modeling, and the mediated effect of PA was tested by causal mediation analysis. RESULTS: Four frailty trajectories were identified: "Robustness" (n = 4,437, 54.9%), "Incident prefrailty" (n = 2,061, 25.5%), "Prefrailty to frailty" (n = 1,136, 14.1%), and "Frailty to severe frailty" (n = 448, 5.5%). High baseline AL was associated with increased odds of "Incident prefrailty," "Prefrailty to frailty," and "Frailty to severe frailty" trajectories. PA demonstrated significant mediated effects in aforementioned associations. CONCLUSIONS: AL is significantly associated with the onset and progression of frailty, and such associations are partially mediated by PA.


Subject(s)
Allostasis , Frailty , Aged , Humans , Longitudinal Studies , Frail Elderly , Exercise
10.
Comput Biol Med ; 167: 107620, 2023 12.
Article in English | MEDLINE | ID: mdl-37922604

ABSTRACT

In recent years, there has been a growing reliance on image analysis methods to bolster dentistry practices, such as image classification, segmentation, and object detection. However, the availability of related benchmark datasets remains limited. Hence, we spent six years preparing and testing a benchmark Oral Implant Image Dataset (OII-DS) to support work in this research domain. OII-DS is a benchmark oral image dataset consisting of 3834 oral CT images and 15240 oral implant images. It serves the purposes of object detection and image classification. To demonstrate the validity of OII-DS, the most representative algorithms and metrics are selected for testing and evaluating each function. For object detection, five object detection algorithms are adopted, and four evaluation criteria are used to assess the detection of each of the five object types. Additionally, mean average precision serves as the evaluation metric for multi-object detection. For image classification, 13 classifiers are tested and evaluated on each of the five categories under four evaluation criteria. Experimental results affirm the high quality of the data in OII-DS, rendering it suitable for evaluating object detection and image classification methods. Furthermore, OII-DS is openly available for non-commercial purposes at https://doi.org/10.6084/m9.figshare.22608790.
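The detection matching behind metrics such as mean average precision rests on box IoU; a small illustrative sketch (generic, not the dataset's evaluation code):

```python
def box_iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)  # clamp: empty overlap -> 0
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two partly overlapping 2x2 boxes: intersection 1, union 7.
iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))
print(round(iou, 3))  # 0.143
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), from which precision-recall curves and average precision follow.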


Subject(s)
Algorithms , Benchmarking , Image Processing, Computer-Assisted/methods
11.
Health Inf Sci Syst ; 11(1): 47, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37810417

ABSTRACT

Accurate differentiation between pulmonary arteries and veins (A/V) holds pivotal importance in the realm of diagnosing and treating pulmonary ailments. This study presents a new approach that leverages grayscale differences between A/V. Distinctions are measured using median and mean grayscale values within the vessel area. Initially, adherent regions are removed based on vessel structure. The trunk regions are segmented using gray level information near the heart region of the lung boundary. Incorrectly segmented vessels are corrected based on connectivity. For distal lung vessels, a similar distance field is established using a graph-cut method. Experimental results show the algorithm's superior segmentation accuracy, achieving 97.26% compared to the CNN-based average accuracy of 91.67%. Error branches are more concentrated, aiding subsequent manual and automatic correction. This demonstrates the algorithm's effective segmentation of pulmonary A/V.
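The grayscale statistics at the heart of the A/V distinction above are straightforward to state; a schematic NumPy sketch with made-up intensity values (illustrative only, not the authors' code):

```python
import numpy as np

def vessel_gray_stats(image, mask):
    """Median and mean grayscale value inside a labeled vessel region."""
    vals = image[mask > 0]
    return float(np.median(vals)), float(vals.mean())

# Toy CT-like patch in which arteries happen to be brighter than veins
# (hypothetical values chosen only to show the comparison).
img = np.array([[100, 100, 40],
                [100, 100, 40],
                [ 40,  40, 40]], dtype=float)
artery_mask = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
vein_mask = 1 - artery_mask

a_med, a_mean = vessel_gray_stats(img, artery_mask)  # 100.0, 100.0
v_med, v_mean = vessel_gray_stats(img, vein_mask)    # 40.0, 40.0
label = "artery" if a_med > v_med else "vein"        # classify by region statistics
```

The full method layers structural steps (adherent-region removal, trunk segmentation, connectivity correction, graph cuts for distal vessels) on top of this basic intensity comparison.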

12.
Comput Biol Med ; 165: 107388, 2023 10.
Article in English | MEDLINE | ID: mdl-37696178

ABSTRACT

Colorectal cancer (CRC) is currently one of the most common and deadly cancers. CRC is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Because histopathological images contain rich phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of image analysis in intestinal histopathology, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies with knowledge of intestinal histopathology relevant to medicine. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition of histopathological images of the intestine. Finally, we discuss the existing methods and their application prospects in this field.


Subject(s)
Medicine , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Intestines , Machine Learning
13.
Comput Biol Med ; 165: 107286, 2023 10.
Article in English | MEDLINE | ID: mdl-37633088

ABSTRACT

Accurate myocardial segmentation is crucial for the diagnosis of various heart diseases. However, segmentation results often suffer from topological errors, such as broken connections and holes, especially in cases of poor image quality. These errors are unacceptable in clinical diagnosis. We propose a Topology-Sensitive Weight (TSW) model to preserve both pixel-wise accuracy and topological correctness. Specifically, the Position Weighting Update (PWU) strategy with the Boundary-Sensitive Topology (BST) module guides the model to focus on positions where topological features are sensitive to pixel values. The Myocardial Integrity Topology (MIT) module serves as a guide for maintaining myocardial integrity. We evaluate the TSW model on the CAMUS dataset and a private echocardiographic myocardial segmentation dataset. Qualitative and quantitative experimental results show that the TSW model significantly enhances topological accuracy while maintaining pixel-wise precision.


Subject(s)
Algorithms , Heart Diseases , Humans , Image Processing, Computer-Assisted/methods , Myocardium , Echocardiography
14.
Mater Horiz ; 10(10): 4148-4162, 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37395527

ABSTRACT

Two-dimensional (2D) molybdenum disulfide exhibits a variety of intriguing behaviors depending on its layer orientation. Therefore, developing a template-free approach for controllable growth of atomic layer orientation is of great importance. Here, we demonstrate scalable, template-free, well-ordered vertically-oriented MoS2 nanowire arrays (VO-MoS2 NWAs) embedded in an Ag-MoS2 matrix, directly grown on various substrates (Si, Al, and stainless steel) via one-step sputtering. In the meta-structured film, vertically-standing few-layered MoS2 NWAs of almost micron length (∼720 nm) extend throughout the entire film bulk, while near the surface, MoS2 lamellae are oriented in parallel, which is beneficial for caging the bonds dangling from the basal planes. Owing to the unique T-type topological characteristics, chemically inert Ag@MoS2 nano-scrolls (NSCs) and nano-crystalline Ag (nc-Ag) nanoparticles (NPs) are formed in situ under the sliding shear force. Thus, incommensurate contact between (002) basal planes and nc-Ag NPs is observed. As a result, robust superlubricity (friction coefficient µ = 0.0039) under humid ambient conditions is reached. This study offers an unprecedented strategy for controlling the basal plane orientation of 2D transition metal dichalcogenides (TMDCs) independently of the substrate, using a one-step, solution-free, easily scalable process without the need for a template, which promotes the potential applications of 2D TMDCs in solid superlubricity.

15.
Comput Med Imaging Graph ; 108: 102264, 2023 09.
Article in English | MEDLINE | ID: mdl-37418789

ABSTRACT

Cardiovascular disease is the leading cause of human death worldwide, and acute coronary syndrome (ACS) is a common first manifestation of this. Studies have shown that pericoronary adipose tissue (PCAT) computed tomography (CT) attenuation and atherosclerotic plaque characteristics can be used to predict future adverse ACS events. However, radiomics-based methods have limitations in extracting features of PCAT and atherosclerotic plaques. Therefore, we propose a hybrid deep learning framework capable of extracting coronary CT angiography (CCTA) imaging features of both PCAT and atherosclerotic plaques for ACS prediction. The framework designs a two-stream CNN feature extraction (TSCFE) module to extract the features of PCAT and atherosclerotic plaques, respectively, and a channel feature fusion (CFF) module to explore correlations between their features. Finally, a trilinear-based fully-connected (FC) prediction module maps the high-dimensional representations stepwise to low-dimensional label spaces. The framework was validated on retrospectively collected cases of suspected coronary artery disease examined by CCTA. The prediction accuracy, sensitivity, specificity, and area under the curve (AUC) are all higher than those of classical image classification networks and state-of-the-art medical image classification methods. The experimental results show that the proposed method can effectively and accurately extract CCTA imaging features of PCAT and atherosclerotic plaques and explore the feature correlations to produce impressive performance. Thus, it has potential value in clinical applications for accurate ACS prediction.


Subject(s)
Acute Coronary Syndrome , Coronary Artery Disease , Plaque, Atherosclerotic , Humans , Plaque, Atherosclerotic/diagnostic imaging , Acute Coronary Syndrome/diagnostic imaging , Retrospective Studies , Coronary Angiography/methods , Coronary Artery Disease/diagnostic imaging , Computed Tomography Angiography/methods , Adipose Tissue/diagnostic imaging , Coronary Vessels
16.
Med Image Anal ; 87: 102834, 2023 07.
Article in English | MEDLINE | ID: mdl-37207524

ABSTRACT

Traditional medical image segmentation methods based on deep learning require experts to provide extensive manual delineations for model training. Few-shot learning aims to reduce the dependence on the scale of training data but usually shows poor generalizability to the new target. The trained model tends to favor the training classes rather than being absolutely class-agnostic. In this work, we propose a novel two-branch segmentation network based on unique medical prior knowledge to alleviate the above problem. Specifically, we explicitly introduce a spatial branch to provide the spatial information of the target. In addition, we build a segmentation branch based on the classical encoder-decoder structure in supervised learning and integrate prototype similarity and spatial information as prior knowledge. To achieve effective information integration, we propose an attention-based fusion module (AF) that enables the content interaction of decoder features and prior knowledge. Experiments on an echocardiography dataset and an abdominal MRI dataset show that the proposed model achieves substantial improvements over state-of-the-art methods. Moreover, some results are comparable to those of the fully supervised model. The source code is available at github.com/warmestwind/RAPNet.
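The prototype-similarity prior used by the segmentation branch is commonly realized as masked average pooling over support features followed by cosine similarity against query features; a hedged NumPy sketch of that generic idea (not the authors' implementation, shapes and names are mine):

```python
import numpy as np

def prototype_similarity(query_feats, support_feats, support_mask):
    """Cosine similarity map between query features and a foreground prototype.

    query_feats:   (C, H, W) feature map of the query image
    support_feats: (C, H, W) feature map of the support image
    support_mask:  (H, W) binary foreground mask of the support image
    """
    fg = support_mask > 0
    # Masked average pooling: one C-dim prototype for the foreground class.
    proto = support_feats[:, fg].mean(axis=1)                       # (C,)
    q = query_feats.reshape(query_feats.shape[0], -1)               # (C, H*W)
    sim = (proto @ q) / (np.linalg.norm(proto) * np.linalg.norm(q, axis=0) + 1e-8)
    return sim.reshape(query_feats.shape[1:])                       # (H, W)

# Sanity check: query pixels identical to the prototype should score ~1.
C, H, W = 8, 4, 4
rng = np.random.default_rng(0)
sf = rng.normal(size=(C, H, W))
mask = np.zeros((H, W)); mask[:2, :2] = 1
qf = np.repeat(sf[:, mask > 0].mean(axis=1)[:, None], H * W, 1).reshape(C, H, W)
sim = prototype_similarity(qf, sf, mask)
```

In a few-shot segmentation network, a similarity map like this would be concatenated with the decoder features (here, alongside the spatial prior) rather than thresholded directly.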


Subject(s)
Echocardiography , Software , Humans , Image Processing, Computer-Assisted
17.
Med Biol Eng Comput ; 61(9): 2467-2480, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37184591

ABSTRACT

3D vessel extraction has great significance in the diagnosis of vascular diseases. However, accurate extraction of vessels from computed tomography angiography (CTA) data is challenging. For one thing, vessels in different body parts have a wide range of scales and large curvatures; for another, the intensity distributions of vessels in different CTA data vary considerably. Besides, surrounding interfering tissue, like bones or veins with similar intensity, also seriously affects vessel extraction. Considering all the above imaging and structural features of vessels, we propose a new scale-adaptive hybrid parametric tracker (SAHPT) to extract arbitrary vessels of different body parts. First, a geometry-intensity parametric model is constructed to calculate the geometry-intensity response. While geometry parameters are calculated to adapt to the variation in scale, intensity parameters can also be estimated to meet non-uniform intensity distributions. Then, a gradient parametric model is proposed to calculate the gradient response based on a multiscale symmetric normalized gradient filter which can effectively separate the target vessel from surrounding interfering tissue. Last, a hybrid parametric model that combines the geometry-intensity and gradient parametric models is constructed to evaluate how well it fits a local image patch. In the extraction process, a multipath spherical sampling strategy is used to solve the problem of anatomical complexity. We have conducted many quantitative experiments using the synthetic and clinical CTA data, asserting its superior performance compared to traditional or deep learning-based baselines.


Subject(s)
Algorithms , Angiography , Angiography/methods , Tomography, X-Ray Computed/methods , Computed Tomography Angiography
18.
IEEE J Biomed Health Inform ; 27(8): 4154-4165, 2023 08.
Article in English | MEDLINE | ID: mdl-37159311

ABSTRACT

Limited training data and insufficient supervision restrict the performance of deep supervised models for brain disease diagnosis. It is therefore important to construct a learning framework that can capture more information from limited data under insufficient supervision. To address these issues, we focus on self-supervised learning and aim to generalize self-supervised learning to brain networks, which are non-Euclidean graph data. More specifically, we propose an ensemble masked graph self-supervised framework named BrainGSLs, which incorporates 1) a local topology-aware encoder that takes the partially visible nodes as input and learns their latent representations, 2) a node-edge bi-decoder that reconstructs the masked edges from the representations of both the masked and visible nodes, 3) a signal representation learning module for capturing temporal representations from BOLD signals, and 4) a classifier used for classification. We evaluate our model on three real clinical applications: diagnosis of Autism Spectrum Disorder (ASD), diagnosis of Bipolar Disorder (BD), and diagnosis of Major Depressive Disorder (MDD). The results suggest that the proposed self-supervised training leads to remarkable improvement and outperforms state-of-the-art methods. Moreover, our method is able to identify biomarkers associated with the diseases, consistent with previous studies. We also explore the correlation among these three diseases and find a strong association between ASD and BD. To the best of our knowledge, our work is the first attempt to apply the idea of self-supervised learning with a masked autoencoder to brain network analysis.


Subject(s)
Autism Spectrum Disorder , Depressive Disorder, Major , Humans , Autism Spectrum Disorder/diagnostic imaging , Brain/diagnostic imaging , Knowledge , Supervised Machine Learning
19.
Comput Biol Med ; 159: 106886, 2023 06.
Article in English | MEDLINE | ID: mdl-37062255

ABSTRACT

The extraction of vessels from computed tomography angiography (CTA) is significant in diagnosing and evaluating vascular diseases. However, due to the anatomical complexity, wide intensity distribution, and small volume proportion of vessels, vessel extraction is laborious and time-consuming, and it is easy to lead to error-prone diagnostic results in clinical practice. This study proposes a novel comprehensive vessel extraction framework, called the Local Iterative-based Vessel Extraction Network (LIVE-Net), to achieve 3D vessel segmentation while tracking vessel centerlines. LIVE-Net contains dual dataflow pathways that work alternately: an iterative tracking network and a local segmentation network. The former generates fine-grain direction and radius predictions for a vascular patch using the attention-embedded atrous pyramid network (aAPN), and the latter achieves 3D vascular lumen segmentation using the multi-order self-attention U-shape network (MOSA-UNet). LIVE-Net is trained and evaluated on two datasets: the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08) dataset and a head and neck CTA dataset from the clinic. Experimental results for both tracking and segmentation show that LIVE-Net exhibits superior performance compared with other state-of-the-art (SOTA) networks. On the CAT08 dataset, the tracked centerlines have an average overlap of 95.2%, overlap until first error of 91.2%, overlap with the clinically relevant vessels of 98.3%, and error distance inside of 0.21 mm. The corresponding tracking overlap metrics on the head and neck CTA dataset are 96.7%, 91.0%, and 99.8%, respectively. In addition, the results of the consistency experiments show strong clinical correspondence. For the segmentation of the bilateral carotid and vertebral arteries, our method achieves not only better accuracy, with an average Dice similarity coefficient (DSC) of 90.03%, Intersection over Union (IoU) of 81.97%, and 95% Hausdorff distance (95%HD) of 3.42 mm, but also higher efficiency, with an average time of 67.25 s, up to three times faster than some methods applied to the full field of view. Both the tracking and segmentation results demonstrate the potential clinical utility of our network.


Subject(s)
Computed Tomography Angiography , Tomography, X-Ray Computed , Coronary Vessels , Carotid Arteries , Image Processing, Computer-Assisted/methods
20.
Comput Biol Med ; 156: 106705, 2023 04.
Article in English | MEDLINE | ID: mdl-36863190

ABSTRACT

Left ventricular ejection fraction (LVEF) is essential for evaluating left ventricular systolic function. However, its clinical calculation requires the physician to interactively segment the left ventricle and locate the mitral annulus and apical landmarks. This process is poorly reproducible and error prone. In this study, we propose a multi-task deep learning network, EchoEFNet. The network uses ResNet50 with dilated convolutions as the backbone to extract high-dimensional features while maintaining spatial resolution. A branching network uses our designed multi-scale feature fusion decoder to segment the left ventricle and detect landmarks simultaneously. The LVEF is then calculated automatically and accurately using the biplane Simpson's method. The model was tested on the public dataset CAMUS and the private dataset CMUEcho. The experimental results showed that the geometrical metrics and percentage of correct keypoints of EchoEFNet outperformed those of other deep learning methods. The correlation between the predicted LVEF and true values on the CAMUS and CMUEcho datasets was 0.854 and 0.916, respectively.
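For context, the biplane Simpson's (method-of-disks) calculation that the network automates can be sketched as follows, with hypothetical disk diameters (schematic only; real measurements come from the two orthogonal apical views):

```python
import numpy as np

def simpson_biplane_volume(a, b, length, n_disks=20):
    """LV volume by the biplane method of disks.

    a, b:   disk diameters from two orthogonal apical views (length n_disks)
    length: long-axis length of the ventricle
    V = (pi / 4) * sum(a_i * b_i) * (L / n)
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.pi / 4.0 * np.sum(a * b) * (length / n_disks)

def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

# Hypothetical diameters in cm (volumes then come out in mL); not real data.
a_ed = np.full(20, 4.0); b_ed = np.full(20, 4.0)      # end-diastole
a_es = np.full(20, 3.0); b_es = np.full(20, 3.0)      # end-systole
edv = simpson_biplane_volume(a_ed, b_ed, length=8.0)  # ~100 mL
esv = simpson_biplane_volume(a_es, b_es, length=7.0)  # ~49 mL
print(f"LVEF = {ejection_fraction(edv, esv):.1f}%")   # LVEF = 50.8%
```

The segmentation branch supplies the disk diameters along the long axis, and the landmark branch fixes the mitral annulus and apex that define that axis, so the formula can be evaluated without manual tracing.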


Subject(s)
Deep Learning , Ventricular Function, Left , Stroke Volume , Echocardiography/methods , Heart Ventricles/diagnostic imaging