Results 1 - 20 of 27
1.
Article in English | MEDLINE | ID: mdl-38768004

ABSTRACT

Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. The use of generative models to synthesize CE-CT images from non-contrast CT images offers a promising solution. However, existing image synthesis models tend to overlook critical regions, inevitably reducing their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model called the Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN involves a crossing dual decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing CE-CT images. The two decoders are interconnected using a crossing technique to enhance each other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the performance of the proposed SGCDD-GAN, we test it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks (synthesizing ART images and synthesizing PV images), the proposed SGCDD-GAN demonstrates superior performance across the entire image and the liver region, including SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized by our SGCDD-GAN achieve remarkable accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLLs classification task, along with a pilot assessment conducted by two radiologists.
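The image-quality metrics reported above (MSE, PSNR, PCC) have standard definitions that can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' evaluation code; the `data_range` default is an assumption, and SSIM is omitted for brevity:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the assumed
    dynamic range of the images (1.0 for images scaled to [0, 1])."""
    m = mse(a, b)
    if m == 0:
        return float("inf")
    return float(10 * np.log10(data_range ** 2 / m))

def pcc(a, b):
    """Pearson correlation coefficient between flattened images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```

In practice these would be computed both over the whole image and within a liver mask, as the abstract describes.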

2.
Article in English | MEDLINE | ID: mdl-38082913

ABSTRACT

Computer-aided diagnostic methods, such as automatic and precise liver tumor detection, have a significant impact on healthcare. In recent years, deep learning-based liver tumor detection methods for multi-phase computed tomography (CT) images have achieved noticeable performance. Deep learning frameworks require a substantial amount of annotated training data, but obtaining enough training data with high-quality annotations is a major issue in medical imaging. Additionally, deep learning frameworks suffer from domain shift when they are trained on one dataset (source domain) and applied to new test data (target domain). To address the lack of training data and the domain shift in multiphase CT images, we present an adversarial learning-based strategy to mitigate the domain gap across different phases of multiphase CT scans. We introduce the use of the Fourier phase component of CT images to preserve semantic information and more reliably identify tumor tissues. Our approach eliminates the requirement for distinct annotations for each phase of the CT scans. The experimental results show that our proposed method performs noticeably better than conventional training and other methods.
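The Fourier phase component mentioned above can be extracted with a standard 2D FFT. The sketch below (NumPy, not the paper's code) shows phase extraction and a phase-only reconstruction, the classic demonstration that the phase carries most of an image's structural information:

```python
import numpy as np

def fourier_phase(img):
    """Return the phase component of an image's 2D Fourier spectrum."""
    spectrum = np.fft.fft2(img)
    return np.angle(spectrum)

def phase_only_reconstruction(img):
    """Reconstruct an image from a unit-magnitude spectrum plus the
    original phase.  Structure (edges, organ outlines) survives this
    reconstruction, which is the intuition behind using the phase to
    align semantics across CT phases."""
    phase = fourier_phase(img)
    return np.real(np.fft.ifft2(np.exp(1j * phase)))
```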


Subjects
Image Processing, Computer-Assisted; Liver Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Liver Neoplasms/diagnostic imaging
3.
Plant Cell Physiol ; 64(11): 1262-1278, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37861079

ABSTRACT

One of the fundamental questions in plant developmental biology is how cell proliferation and cell expansion coordinately determine organ growth and morphology. An amenable system to address this question is the Arabidopsis root tip, where cell proliferation and elongation occur in spatially separated domains, and cell morphologies can easily be observed using a confocal microscope. While past studies revealed numerous elements of root growth regulation including gene regulatory networks, hormone transport and signaling, cell mechanics and environmental perception, how cells divide and elongate under possible constraints from cell lineages and neighboring cell files has not been analyzed quantitatively. This is mainly due to the technical difficulties in capturing cell division and elongation dynamics at the tip of growing roots, as well as an extremely labor-intensive task of tracing the lineages of frequently dividing cells. Here, we developed a motion-tracking confocal microscope and an Artificial Intelligence (AI)-assisted image-processing pipeline that enables semi-automated quantification of cell division and elongation dynamics at the tip of vertically growing Arabidopsis roots. We also implemented a data sonification tool that facilitates human recognition of cell division synchrony. Using these tools, we revealed previously unnoted lineage-constrained dynamics of cell division and elongation, and their contribution to the root zonation boundaries.


Subjects
Arabidopsis; Humans; Arabidopsis/genetics; Microscopy; Plant Roots; Artificial Intelligence; Meristem; Cell Division
4.
Bioengineering (Basel) ; 10(8)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37627784

ABSTRACT

Multi-phase computed tomography (CT) images have gained significant popularity in the diagnosis of hepatic disease. There are several challenges in the liver segmentation of multi-phase CT images. (1) Annotation: due to the distinct contrast enhancements observed in different phases (i.e., each phase is considered a different domain), annotating all phase images in multi-phase CT images for liver or tumor segmentation is a task that consumes substantial time and labor resources. (2) Poor contrast: some phase images may have poor contrast, making it difficult to distinguish the liver boundary. In this paper, we propose a boundary-enhanced liver segmentation network for multi-phase CT images with unsupervised domain adaptation. The first contribution is that we propose DD-UDA, a dual discriminator-based unsupervised domain adaptation, for liver segmentation on multi-phase images without multi-phase annotations, effectively tackling the annotation problem. To improve accuracy by reducing distribution differences between the source and target domains, we perform domain adaptation at two levels by employing two discriminators, one at the feature level and the other at the output level. The second contribution is that we introduce an additional boundary-enhanced decoder to the encoder-decoder backbone segmentation network to effectively recognize the boundary region, thereby addressing the problem of poor contrast. In our study, we employ the public LiTS dataset as the source domain and our private MPCT-FLLs dataset as the target domain. The experimental findings validate the efficacy of our proposed methods, producing substantially improved results when tested on each phase of the multi-phase CT image even without the multi-phase annotations. 
As evaluated on the MPCT-FLLs dataset, the existing baseline (UDA) method achieved IoU scores of 0.785, 0.796, and 0.772 for the PV, ART, and NC phases, respectively, while our proposed approach exhibited superior performance, surpassing both the baseline and other state-of-the-art methods. Notably, our method achieved remarkable IoU scores of 0.823, 0.811, and 0.800 for the PV, ART, and NC phases, respectively, emphasizing its effectiveness in achieving accurate image segmentation.
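The IoU scores quoted above follow the standard intersection-over-union definition for binary masks, which can be sketched as follows (the empty-mask convention is an assumption, not taken from the paper):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, target).sum()
    return float(inter / union)
```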

5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1536-1539, 2022 07.
Article in English | MEDLINE | ID: mdl-36085648

ABSTRACT

Automatic and efficient liver tumor detection in multi-phase CT images is essential in the computer-aided diagnosis of liver tumors. Nowadays, deep learning is widely used in medical applications. Normally, deep learning-based AI systems need a large quantity of training data, but in the medical field, acquiring sufficient training data with high-quality annotations is a significant challenge. To address the lack of training data, domain adaptation-based methods have recently been developed to bridge the domain gap across datasets with different feature characteristics and data distributions. This paper presents a domain adaptation-based method for detecting liver tumors in multi-phase CT images. We transfer knowledge learned from PV-phase images to ART- and NC-phase images. To minimize the domain gap, we employ an adversarial learning scheme with the maximum square loss on mid-level output feature maps, using an anchorless detector. Experiments show that our proposed method performs much better than conventional training across the various CT phases.
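The maximum square loss mentioned above has a simple closed form, L = -(1/2N) Σ_n Σ_c p_nc², applied to softmax outputs on unlabeled target-domain samples; minimizing it pushes predictions toward confident outputs while weighting easy classes less aggressively than entropy minimization. A minimal NumPy sketch, not the authors' implementation:

```python
import numpy as np

def maximum_square_loss(probs):
    """Maximum square loss over per-sample class probabilities.

    probs: array of shape (N, C) containing softmax outputs.
    Returns -(1/2N) * sum of squared probabilities; more confident
    (closer to one-hot) predictions give a lower loss.
    """
    return float(-np.mean(np.sum(probs ** 2, axis=1)) / 2)
```

For example, one-hot predictions yield -0.5, while maximally uncertain two-class predictions yield -0.25, so gradient descent drives predictions toward confidence.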


Subjects
Acclimatization; Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Radiopharmaceuticals; Tomography, X-Ray Computed
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2097-2100, 2022 07.
Article in English | MEDLINE | ID: mdl-36086312

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are used extensively for the diagnosis of liver cancer in clinical practice. Compared with non-contrast CT (NC-CT) images (CT scans acquired without injection), CE-CT images are obtained after injecting a contrast agent, which increases the physical burden on patients. To address this limitation, we propose an improved conditional generative adversarial network (improved cGAN) to generate CE-CT images from non-contrast CT images. In the improved cGAN, we incorporate a pyramid pooling module and an elaborate feature fusion module into the generator to improve the encoder's ability to capture multi-scale semantic features and to prevent the dilution of information during decoding. We evaluate the performance of our proposed method on a contrast-enhanced CT dataset including three phases of CT images (i.e., non-contrast images and CE-CT images in the arterial and portal venous phases). Experimental results suggest that the proposed method is superior to existing GAN-based models in both quantitative and qualitative results.


Subjects
Arteries; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 198-202, 2022 07.
Article in English | MEDLINE | ID: mdl-36086447

ABSTRACT

According to the 2016 World Health Organization (WHO) classification scheme for gliomas, isocitrate dehydrogenase (IDH) status is a very important basis for diagnosis. There is a strong relationship between IDH mutation status and glioma prognosis. Therefore, it is important to predict IDH mutation status preoperatively to guide glioma treatment. In the past decade, there has been an increase in the use of machine learning, particularly deep learning, for medical diagnosis. To date, many methods using either deep learning or radiomics have been proposed for predicting glioma IDH mutation status. In this study, we propose an intra- and inter-modality fusion model, which first fuses magnetic resonance imaging-based (MRI-based) radiomics features with deep learning features within each modality (intra-modality fusion) and then fuses the prediction results from the modalities using an inter-modality regression model, to improve IDH status prediction accuracy. The effectiveness of the proposed method is validated on our private glioma dataset from the First Affiliated Hospital of Zhengzhou University (FHZU) in Zhengzhou, China. Our proposed method is superior to current state-of-the-art methods, with an accuracy of 0.77, precision of 0.77, recall of 0.77, and F1 score of 0.77, exhibiting an 8% increase in accuracy in predicting IDH mutation status for glioma treatment.
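The accuracy, precision, recall, and F1 figures quoted above follow the standard definitions derived from the binary confusion matrix; a minimal sketch (not the paper's evaluation code):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (0/1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": float(acc), "precision": float(prec),
            "recall": float(rec), "f1": float(f1)}
```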


Subjects
Brain Neoplasms; Glioma; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/genetics; Brain Neoplasms/pathology; Glioma/diagnostic imaging; Glioma/genetics; Glioma/pathology; Humans; Isocitrate Dehydrogenase/genetics; Magnetic Resonance Imaging/methods; Mutation
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1552-1555, 2022 07.
Article in English | MEDLINE | ID: mdl-36083929

ABSTRACT

Multiphase computed tomography (CT) images are widely used for the diagnosis of liver disease. Since each phase has a different contrast enhancement (i.e., a different domain), multiphase CT images must be annotated for all phases to perform liver or tumor segmentation, which is a time-consuming and labor-intensive task. In this paper, we propose a dual discriminator-based unsupervised domain adaptation (DD-UDA) framework for liver segmentation on multiphase CT images without annotations. Our framework consists of three modules: a task-specific generator and two discriminators. We perform domain adaptation at two levels, one at the feature level and the other at the output level, to improve accuracy by reducing the difference in distributions between the source and target domains. Experimental results using public data (PV phase only) as the source domain and private multiphase CT data as the target domain show the effectiveness of our proposed DD-UDA method. Clinical Relevance: This study helps to efficiently and accurately segment the liver on multiphase CT images, which is an important preprocessing step for diagnosis and surgical support. With the proposed DD-UDA method, the segmentation accuracy improved by 5%, 8%, and 6%, respectively, across the phases of CT images compared with training without UDA.


Subjects
Image Processing, Computer-Assisted; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Liver/diagnostic imaging; Tomography, X-Ray Computed/methods
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 447-450, 2022 07.
Article in English | MEDLINE | ID: mdl-36086485

ABSTRACT

Non-small cell lung cancer (NSCLC) is a malignant tumor with high morbidity and mortality and a high recurrence rate after surgery, which directly affects the life and health of patients. Recently, many studies have been based on computed tomography (CT) images, which are cheap but yield low accuracy. In contrast, using gene expression data to predict NSCLC recurrence achieves high accuracy; however, acquiring gene data is expensive and invasive and cannot meet the recurrence-prediction needs of all patients. In this paper, we propose a low-cost, high-accuracy residual multilayer perceptron (ResMLP) recurrence prediction method. First, several proposed ResMLP modules are applied to construct a deep regression estimation model. Then, we build a mapping function from mixed features (handcrafted features and deep features) to gene data via this model. Finally, the recurrence prediction task is realized by utilizing the gene estimates obtained from the regression model to learn an information representation related to recurrence. The experimental results show that the proposed method has strong generalization ability and reaches 86.38% prediction accuracy. Clinical Relevance: This study improved the preoperative NSCLC recurrence-prediction accuracy from 78.61% with the conventional method to 86.38% with our proposed method using only CT images.
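The ResMLP module named above is, at its core, a residual block wrapped around a small perceptron. The sketch below is a plain-NumPy illustration of that structure; the single hidden layer, ReLU activation, and layer sizes are assumptions rather than the paper's exact design:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def resmlp_block(x, w1, b1, w2, b2):
    """One residual MLP block: output = x + MLP(x).

    The identity skip connection lets deep stacks of perceptron
    layers train stably, which is the core idea behind a
    ResMLP-style regression model."""
    h = relu(x @ w1 + b1)   # hidden projection
    return x + (h @ w2 + b2)  # residual connection back to input size
```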


Subjects
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/genetics; Disease Progression; Genotype; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics; Neoplasm Recurrence, Local/pathology; Neural Networks, Computer
10.
IEEE J Biomed Health Inform ; 26(8): 3988-3998, 2022 08.
Article in English | MEDLINE | ID: mdl-35213319

ABSTRACT

Organ segmentation is one of the most important steps in various medical image analysis tasks. Recently, semi-supervised learning (SSL) has attracted much attention for reducing labeling cost. However, most existing SSL methods neglect the prior shape and position information specific to medical images, leading to unsatisfactory localization and non-smooth object boundaries. In this paper, we propose a novel atlas-based semi-supervised segmentation network with multi-task learning for medical organs, named MTL-ABS3Net, which incorporates anatomical priors and makes full use of unlabeled data in a self-training and multi-task learning manner. The MTL-ABS3Net consists of two components: an Atlas-Based Semi-Supervised Segmentation Network (ABS3Net) and a Reconstruction-Assisted Module (RAM). Specifically, the ABS3Net improves existing SSL methods by utilizing an atlas prior, which generates credible pseudo-labels in a self-training manner, while the RAM further assists the segmentation network by capturing anatomical structures from the original images in a multi-task learning manner. Better reconstruction quality is achieved by using an MS-SSIM loss function, which further improves the segmentation accuracy. Experimental results on liver and spleen datasets demonstrate that the performance of our method is significantly improved compared with existing state-of-the-art methods.


Subjects
Abdomen; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Spleen/diagnostic imaging
11.
Front Radiol ; 2: 856460, 2022.
Article in English | MEDLINE | ID: mdl-37492657

ABSTRACT

Hepatocellular carcinoma (HCC) is a primary liver cancer with a high mortality rate. It is one of the most common malignancies worldwide, especially in Asia, Africa, and southern Europe. Although surgical resection is an effective treatment, patients with HCC are at risk of recurrence after surgery. Preoperative early recurrence prediction for patients with liver cancer can help physicians develop treatment plans and guide patients in postoperative follow-up. However, conventional clinical-data-based methods ignore the imaging information of patients. Certain studies have used radiomic models for early recurrence prediction in HCC patients with good results, and the medical images of patients have been shown to be effective in predicting HCC recurrence. In recent years, deep learning models have demonstrated the potential to outperform radiomics-based models. In this paper, we propose a deep learning-based prediction model that contains intra-phase attention and inter-phase attention. Intra-phase attention focuses on important information across channels and space within the same phase, whereas inter-phase attention focuses on important information between different phases. We also propose a fusion model to combine the image features with clinical data. Our experimental results show that our fusion model outperforms models that use clinical data only or CT images only. Our model achieved a prediction accuracy of 81.2%, and the area under the curve was 0.869.
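Intra-phase channel attention of the kind described above can be illustrated with a toy squeeze-and-rescale operation on a (C, H, W) feature map: pool each channel globally, normalize the pooled vector into weights, and rescale the channels. This is a didactic sketch only; the softmax weighting and the overall layout are assumptions, not the paper's actual attention design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def channel_attention(feat):
    """Toy channel attention on a (C, H, W) feature map:
    global-average-pool each channel, softmax the pooled vector,
    and rescale every channel by its attention weight."""
    pooled = feat.mean(axis=(1, 2))     # squeeze: (C,)
    weights = softmax(pooled)           # normalized channel weights
    return feat * weights[:, None, None]  # excite: rescale channels
```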

12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3309-3312, 2021 11.
Article in English | MEDLINE | ID: mdl-34891948

ABSTRACT

Convolutional neural networks have become popular in medical image segmentation, and one of their most notable achievements is the ability to learn discriminative features from large labeled datasets. Two-dimensional (2D) networks typically extract multiscale features with deep convolutional feature extractors such as ResNet-101. However, 2D networks are inefficient at extracting spatial features from volumetric images. Although most 2D segmentation networks can be extended to three-dimensional (3D) networks, the extended 3D methods are resource- and time-intensive. In this paper, we propose an efficient and accurate network for fully automatic 3D segmentation. We designed a 3D multiple-contextual extractor (MCE) to simulate multiscale feature extraction and feature fusion, capturing rich global contextual dependencies from different feature levels. We also designed a light 3D ResU-Net for efficient volumetric image segmentation. The proposed multiple-contextual extractor and light 3D ResU-Net constitute a complete segmentation network. By feeding the multiple-contextual features to the light 3D ResU-Net, we realize 3D medical image segmentation with high efficiency and accuracy. To validate the 3D segmentation performance of the proposed method, we evaluated the network on semantic segmentation using a private spleen dataset and a public liver dataset. The spleen dataset contains 50 patients' CT scans, and the liver dataset contains 131 patients' CT scans.


Subjects
Image Processing, Computer-Assisted; Semantics; Humans; Imaging, Three-Dimensional; Neural Networks, Computer; Tomography, X-Ray Computed
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3561-3564, 2021 11.
Article in English | MEDLINE | ID: mdl-34892008

ABSTRACT

Non-small cell lung cancer (NSCLC) is a type of lung cancer that has a high recurrence rate after surgery. Precise preoperative prognosis of NSCLC recurrence contributes to suitable treatment preparation. Currently, many studies have been conducted to predict NSCLC recurrence based on computed tomography (CT) images or genetic data. CT images are inexpensive but inaccurate; gene data are more expensive but highly accurate. In this study, we propose genotype-guided radiomics methods, called GGR and GGR_Fusion, to build a higher-accuracy prediction model that requires only CT images. GGR is a two-step method consisting of two models: a gene estimation model using deep learning and a recurrence prediction model using the estimated genes. We further propose an improved model based on GGR, called GGR_Fusion, to increase the accuracy. GGR_Fusion uses the features extracted from the gene estimation model to enhance the recurrence prediction model. The experiments showed that the prediction performance can be improved significantly from 78.61% accuracy, AUC = 0.66 (existing radiomics method) and 79.09% accuracy, AUC = 0.68 (deep learning method) to 83.28% accuracy, AUC = 0.77 with the proposed GGR and 84.39% accuracy, AUC = 0.79 with the proposed GGR_Fusion. Clinical Relevance: This study improved the preoperative NSCLC recurrence-prediction accuracy from 78.61% with the conventional method to 84.39% with our proposed method using only CT images.


Subjects
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/genetics; Genotype; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics; Tomography, X-Ray Computed
14.
IEEE Trans Med Imaging ; 40(12): 3519-3530, 2021 12.
Article in English | MEDLINE | ID: mdl-34129495

ABSTRACT

Organ segmentation from medical images is one of the most important pre-processing steps in computer-aided diagnosis, but it is a challenging task because of limited annotated data, low contrast, and non-homogeneous textures. Compared with natural images, organs in medical images have obvious anatomical prior knowledge (e.g., organ shape and position), which can be used to improve segmentation accuracy. In this paper, we propose a novel segmentation framework that integrates anatomical priors from medical images into deep learning models through a loss function. The proposed prior loss is based on a probabilistic atlas and is called the deep atlas prior (DAP). It encodes the prior location and shape information of organs, which is important for accurate organ segmentation. Further, we combine the proposed DAP loss with conventional likelihood losses, such as the Dice loss and focal loss, into an adaptive Bayesian loss within a Bayesian framework, which consists of a prior and a likelihood. The adaptive Bayesian loss dynamically adjusts the ratio of the DAP loss to the likelihood loss over the training epochs for better learning. The proposed loss function is universal and can be combined with a wide variety of existing deep segmentation models to further enhance their performance. We verify the significance of our proposed framework with some state-of-the-art models, including fully-supervised and semi-supervised segmentation models, on a public dataset (ISBI LiTS 2017 Challenge) for liver segmentation and a private dataset for spleen segmentation.
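The adaptive Bayesian loss described above blends a prior (DAP) term with a likelihood term such as the Dice loss, shifting weight between them as training progresses. In the NumPy sketch below, the linear epoch schedule is an illustrative assumption, not the paper's exact adjustment rule:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for probability maps (0 = perfect overlap)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def adaptive_bayesian_loss(prior_loss, likelihood_loss, epoch, total_epochs):
    """Blend a prior (atlas) loss with a likelihood (Dice/focal) loss.

    Weight shifts linearly from the prior toward the data term as
    training progresses -- the intuition being that the anatomical
    prior guides early training, while the data dominates later."""
    lam = 1.0 - epoch / total_epochs
    return lam * prior_loss + (1.0 - lam) * likelihood_loss
```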


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Bayes Theorem; Liver; Spleen
15.
IEEE J Biomed Health Inform ; 25(7): 2363-2373, 2021 07.
Article in English | MEDLINE | ID: mdl-34033549

ABSTRACT

COVID-19 pneumonia is a disease that causes a life-threatening health crisis in many people by directly affecting and damaging lung cells. The segmentation of infected areas from computed tomography (CT) images can assist and provide useful information for COVID-19 diagnosis. Although several deep learning-based segmentation methods have been proposed for COVID-19 segmentation and have achieved state-of-the-art results, the segmentation accuracy is still not high enough (approximately 85%) due to the variations of COVID-19-infected areas (such as shape and size variations) and the similarities between COVID-19-infected and non-COVID-infected areas. To improve the segmentation accuracy of COVID-19-infected areas, we propose an interactive attention refinement network (Attention RefNet), which can be connected to any segmentation network and trained with it in an end-to-end fashion. We propose a skip-connection attention module to enhance the important features in both the segmentation and refinement networks, and a seed point module to enhance the important seeds (positions) for interactive refinement. The effectiveness of the proposed method was demonstrated on public datasets (COVID-19CTSeg and MICCAI) and our private multicenter dataset, where the segmentation accuracy was improved to more than 90%. We also confirmed the generalizability of the proposed network on our multicenter dataset; the proposed method still achieves high segmentation accuracy.


Subjects
COVID-19/diagnostic imaging; Deep Learning; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Databases, Factual; Humans; Lung/diagnostic imaging
16.
IEEE Trans Image Process ; 30: 4840-4854, 2021.
Article in English | MEDLINE | ID: mdl-33945478

ABSTRACT

Deep learning-based super-resolution (SR) techniques have generally achieved excellent performance in the computer vision field. Recently, it has been shown that three-dimensional (3D) SR for medical volumetric data delivers better visual results than conventional two-dimensional (2D) processing. However, deepening and widening 3D networks increases training difficulty significantly due to the large number of parameters and small number of training samples. Thus, we propose a 3D convolutional neural network (CNN) for SR of magnetic resonance (MR) and computed tomography (CT) volumetric data, called ParallelNet, which uses parallel connections. We construct a parallel connection structure based on group convolution and feature aggregation to build a 3D CNN that is as wide as possible with few parameters, so that the model thoroughly learns more feature maps with larger receptive fields. In addition, to further improve accuracy, we present an efficient version of ParallelNet (called VolumeNet), which reduces the number of parameters and deepens ParallelNet using a proposed lightweight building block called the Queue module. Unlike most lightweight CNNs based on depthwise convolutions, the Queue module is primarily constructed from separable 2D cross-channel convolutions. As a result, the number of network parameters and the computational complexity can be reduced significantly while maintaining accuracy thanks to full channel fusion. Experimental results demonstrate that the proposed VolumeNet significantly reduces the number of model parameters and achieves high-precision results compared with state-of-the-art methods in brain MR image SR, abdominal CT image SR, and the reconstruction of super-resolution 7T-like images from their 3T counterparts.
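The parameter saving from replacing dense 3D kernels with separable 2D cross-channel convolutions can be illustrated with a simple weight count. The separable layout below (a k×k 2D convolution followed by a 1×1 cross-channel convolution) is a rough stand-in for the Queue-module idea, since the abstract does not give its exact structure; treat these formulas as assumptions:

```python
def conv3d_params(c_in, c_out, k):
    """Weights in a dense k x k x k 3D convolution (biases ignored)."""
    return c_in * c_out * k ** 3

def separable_2d_params(c_in, c_out, k):
    """Weights when the 3D kernel is replaced by a separable pair:
    one k x k 2D convolution plus one 1 x 1 cross-channel convolution.
    This is an assumed layout for illustration, not the exact
    Queue-module design."""
    return c_in * c_out * k ** 2 + c_out * c_out
```

For 64-channel layers with 3x3x3 kernels, the dense 3D count is 110,592 weights versus 40,960 for the separable pair, which illustrates why such factorizations shrink 3D SR networks.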


Subjects
Deep Learning; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Algorithms; Brain/diagnostic imaging; Humans
17.
Med Phys ; 48(7): 3752-3766, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33950526

ABSTRACT

PURPOSE: Liver tumor segmentation is a crucial prerequisite for the computer-aided diagnosis of liver tumors. In the clinical diagnosis of liver tumors, radiologists usually examine multiphase CT images, as these images provide abundant and complementary information about tumors. However, most known automatic segmentation methods extract tumor features from CT images of merely a single phase, ignoring valuable multiphase information. Therefore, a method that effectively incorporates multiphase information for automatic and accurate liver tumor segmentation is highly demanded. METHODS: In this paper, we propose a phase attention residual network (PA-ResSeg) to model multiphase features for accurate liver tumor segmentation. A phase attention (PA) block is newly proposed to additionally exploit images of the arterial (ART) phase to facilitate the segmentation of the portal venous (PV) phase. The PA block consists of an intraphase attention (intra-PA) module and an interphase attention (inter-PA) module, which capture channel-wise self-dependencies and cross-phase interdependencies, respectively. Thus, it enables the network to learn more representative multiphase features by refining the PV features according to the channel dependencies and recalibrating the ART features based on the learned interdependencies between phases. We propose a PA-based multiscale fusion (MSF) architecture that embeds the PA blocks at multiple levels along the encoding path to fuse multiscale features from multiphase images. Moreover, a 3D boundary-enhanced loss (BE-loss) is proposed for training to make the network more sensitive to boundaries. RESULTS: To evaluate the performance of the proposed PA-ResSeg, we conducted experiments on a multiphase CT dataset of focal liver lesions (MPCT-FLLs). Experimental results show the effectiveness of the proposed method, which achieves a Dice per case (DPC) of 0.7787, a Dice global (DG) of 0.8682, a volumetric overlap error (VOE) of 0.3328, and a relative volume difference (RVD) of 0.0443 on the MPCT-FLLs. Furthermore, to validate the effectiveness and robustness of PA-ResSeg, we conducted extra experiments on another multiphase liver tumor dataset and obtained a DPC of 0.8290, a DG of 0.9132, a VOE of 0.2637, and an RVD of 0.0163. The proposed method shows its robustness and generalization capability across different datasets and different backbones. CONCLUSIONS: The study demonstrates that our method can effectively model information from multiphase CT images to segment liver tumors and outperforms other state-of-the-art methods. The PA-based MSF method learns more representative multiphase features at multiple scales and thereby improves segmentation performance. Besides, the proposed 3D BE-loss is conducive to tumor boundary segmentation by making the network focus on boundary regions and marginal slices. Experimental results evaluated by quantitative metrics demonstrate the superiority of our PA-ResSeg over the best-known methods.
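The VOE and RVD metrics reported above have standard definitions for binary masks; the NumPy sketch below is illustrative, and the RVD sign convention (prediction relative to reference) is an assumption, since it varies across papers:

```python
import numpy as np

def voe(pred, target):
    """Volumetric overlap error: 1 - |A intersect B| / |A union B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(1.0 - inter / union) if union else 0.0

def rvd(pred, target):
    """Relative volume difference: (|A| - |B|) / |B|,
    with A the predicted mask and B the reference mask."""
    return float((pred.sum() - target.sum()) / target.sum())
```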


Subjects
Image Processing, Computer-Assisted; Liver Neoplasms; Attention; Disease Progression; Humans; Liver Neoplasms/diagnostic imaging; Tomography, X-Ray Computed
18.
BMC Bioinformatics ; 22(1): 91, 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33637042

ABSTRACT

BACKGROUND: To effectively detect and investigate various cell-related diseases, it is essential to understand cell behaviour. The ability to detect mitotic cells is a fundamental step in diagnosing cell-related diseases. Convolutional neural networks (CNNs) have been successfully applied to object detection tasks; however, when applied to mitotic cell detection, most existing methods generate high false-positive rates because of the complex characteristics that differentiate normal cells from mitotic cells. Variations in cell size and orientation at each stage make mitotic cell detection difficult for 2D approaches, so effective extraction of spatial and temporal features from mitotic data is an important and challenging task. The computational time required for detection is another major concern for mitotic detection in 4D microscopic images. RESULTS: In this paper, we propose a backbone feature-extraction network named full-scale connected recurrent deep layer aggregation (RDLA++) for anchor-free mitotic detection. We utilize a 2.5D method that incorporates 3D spatial information extracted from several 2D images of neighbouring slices, which form a multi-stream input. CONCLUSIONS: Our proposed technique addresses the scale-variation problem and can efficiently extract spatial and temporal features from 4D microscopic images, resulting in improved detection accuracy and reduced computation time compared with other state-of-the-art methods.
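A 2.5D input of the kind described, in which several neighbouring 2D slices are stacked as a multi-stream input to inject 3D spatial context, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; clamping at the volume boundaries is an assumption:

```python
import numpy as np

def make_2p5d_input(volume, z, k=1):
    """Build a 2.5D input for slice z of a (Z, H, W) volume:
    stack the slice with its k neighbours on each side as channels,
    clamping indices at the volume boundaries."""
    Z = volume.shape[0]
    idx = [min(max(z + d, 0), Z - 1) for d in range(-k, k + 1)]
    return np.stack([volume[i] for i in idx], axis=0)   # (2k+1, H, W)

vol = np.arange(5 * 64 * 64, dtype=np.float32).reshape(5, 64, 64)
x = make_2p5d_input(vol, z=2, k=1)   # slices 1, 2, 3 stacked as channels
```

Each stacked channel can then feed one stream of the multi-stream backbone, giving the detector 3D context at roughly 2D cost.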


Subjects
Microscopy , Neural Networks, Computer , Cell Physiological Phenomena
19.
Article in English | MEDLINE | ID: mdl-31144644

ABSTRACT

Mitosis detection is one of the challenging steps in biomedical imaging research and can be used to observe cell behavior. Most existing mitosis-detection methods produce results containing many nonmitotic events (normal cells and background), i.e., false positives (FPs). To address this problem, we propose a 2.5-dimensional (2.5D) network called CasDetNet_CLSTM, which can accurately detect mitotic events in 4D microscopic images. CasDetNet_CLSTM comprises a 2.5D faster region-based convolutional neural network (Faster R-CNN) as the first network and a convolutional long short-term memory (CLSTM) network as the second. The first network selects candidate cells using information from nearby slices, whereas the second network uses temporal information to eliminate FPs and refine the results of the first network. Our experiments show that our network achieves better precision and recall than other state-of-the-art methods.
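The two-stage idea above, where a temporal model vets the candidates of a first-stage detector, can be sketched abstractly. This is only a schematic of the cascade's control flow; the `temporal_score` callable and the threshold `tau` stand in for the CLSTM and its decision rule, which the abstract does not specify:

```python
def cascade_filter(candidates, temporal_score, tau=0.5):
    """Second-stage refinement of a detection cascade: keep a stage-1
    candidate detection only if a temporal model (an arbitrary callable
    standing in for the CLSTM) scores it at or above tau.
    candidates: list of (detection, stage1_score) pairs."""
    return [(det, s1) for det, s1 in candidates if temporal_score(det) >= tau]

# Hypothetical usage: two candidates, one confirmed by temporal context.
cands = [("cell_a", 0.92), ("cell_b", 0.81)]
score = {"cell_a": 0.9, "cell_b": 0.1}.get
kept = cascade_filter(cands, score)
```

The benefit of the cascade is that the cheap first stage bounds the work the temporal second stage must do, while the second stage removes the FPs the first stage cannot.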


Subjects
Imaging, Three-Dimensional/methods , Microscopy/methods , Mitosis/physiology , Neural Networks, Computer , Cells, Cultured , Deep Learning , Humans
20.
IEEE J Biomed Health Inform ; 24(8): 2327-2336, 2020 08.
Article in English | MEDLINE | ID: mdl-31902784

ABSTRACT

Segmentation and quantification of each subtype of emphysema are helpful for monitoring chronic obstructive pulmonary disease. Because emphysema is a diffuse pulmonary disease, it is very difficult for experts to assign semantic labels to every pixel in CT images. In practice, partial annotation is a better choice for radiologists because it reduces their workload. In this paper, we propose a new end-to-end trainable semi-supervised framework for semantic segmentation of emphysema with partial annotations, in which a segmentation network is trained on both annotated and unannotated areas. In addition, we present a new loss function, referred to as the Fisher loss, to enhance the discriminative power of the model, and we successfully integrate it into the proposed framework. Our experimental results show that the proposed methods outperform the baseline supervised approach (trained with only annotated areas) as well as state-of-the-art methods for emphysema segmentation.
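The supervised part of training with partial annotations reduces to evaluating the segmentation loss only at labelled pixels. A minimal sketch of such a masked cross-entropy is given below; it illustrates the general partial-annotation setup, not the paper's specific framework or its Fisher loss:

```python
import numpy as np

def partial_ce_loss(probs, labels, annotated):
    """Cross-entropy averaged only over annotated pixels.
    probs: (K, H, W) softmax outputs; labels: (H, W) integer class ids;
    annotated: (H, W) boolean mask marking labelled pixels."""
    H, W = labels.shape
    # Gather each pixel's predicted probability for its labelled class.
    p = probs[labels, np.arange(H)[:, None], np.arange(W)[None, :]]
    nll = -np.log(np.clip(p, 1e-12, 1.0))
    return nll[annotated].mean()     # unannotated pixels contribute nothing

# Hypothetical 2-class, 1x2-pixel example: one annotated, one unannotated.
probs = np.zeros((2, 1, 2))
probs[:, 0, 0] = [1.0, 0.0]          # confident, correct prediction
probs[:, 0, 1] = [0.5, 0.5]          # uncertain prediction
labels = np.array([[0, 1]])
annotated = np.array([[True, False]])
loss = partial_ce_loss(probs, labels, annotated)
```

The unannotated areas would then be handled by the semi-supervised component of the framework rather than by this supervised term.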


Subjects
Emphysema/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Supervised Machine Learning , Deep Learning , Emphysema/pathology , Female , Humans , Lung/diagnostic imaging , Lung/pathology , Male , Tomography, X-Ray Computed