Results 1 - 20 of 26
1.
Article in English | MEDLINE | ID: mdl-38768004

ABSTRACT

Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. Using generative models to synthesize CE-CT images from non-contrast CT images offers a promising solution. However, existing image synthesis models tend to overlook the importance of critical regions, inevitably reducing their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model called the Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN involves a crossing dual-decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing CE-CT images. The two decoders are interconnected using a crossing technique to enhance each other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the performance of the proposed SGCDD-GAN, we test it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks, namely synthesizing arterial (ART) phase and portal venous (PV) phase images, the proposed SGCDD-GAN demonstrates superior performance metrics across the entire image and the liver region, including SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized by our SGCDD-GAN achieve accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLL classification task and a pilot assessment conducted by two radiologists.
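As a rough illustration of how a crossing dual-decoding generator could be wired (a minimal PyTorch sketch under assumed layer sizes, not the authors' SGCDD-GAN implementation), the attention decoder's map is injected into the synthesis path so the transformation decoder is steered toward critical regions:

```python
# Hypothetical sketch of a crossing dual-decoding generator; names and sizes are assumptions.
import torch
import torch.nn as nn

class CrossingDualDecoderGenerator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Shared encoder for the non-contrast CT input.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # Attention decoder: predicts a map highlighting critical abdominal regions.
        self.attn_decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid())
        # Transformation decoder: synthesizes the CE-CT image.
        self.trans_up1 = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.trans_up2 = nn.Sequential(
            nn.ConvTranspose2d(ch + 1, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, nc_ct):
        feat = self.encoder(nc_ct)
        attn = self.attn_decoder(feat)            # segmentation-like attention map
        x = self.trans_up1(feat)
        # "Crossing": the attention map is fed into the synthesis path to guide it.
        attn_small = nn.functional.interpolate(attn, size=x.shape[-2:],
                                               mode="bilinear", align_corners=False)
        ce_ct = self.trans_up2(torch.cat([x, attn_small], dim=1))
        return ce_ct, attn

ce, attn = CrossingDualDecoderGenerator()(torch.randn(1, 1, 256, 256))
```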

2.
Article in English | MEDLINE | ID: mdl-38083412

ABSTRACT

Compared to non-contrast computed tomography (NC-CT) scans, contrast-enhanced (CE) CT scans provide more abundant information about focal liver lesions (FLLs), which plays a crucial role in FLL diagnosis. However, CE-CT scans require the patient to be injected with a contrast agent, which increases the physical and economic burden on the patient. In this paper, we propose a spatial attention-guided generative adversarial network (SAG-GAN) that can directly generate the corresponding CE-CT images from a patient's NC-CT images. In the SAG-GAN, we devise a spatial attention-guided generator, which utilizes a lightweight spatial attention module to highlight synthesis-task-related areas in the NC-CT image and neglect unrelated areas. To assess the performance of our approach, we test it on two tasks: synthesizing CE-CT images in the arterial phase and in the portal venous phase. Both qualitative and quantitative results demonstrate that SAG-GAN is superior to existing GAN-based image synthesis methods.
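A lightweight spatial attention module of the kind the abstract describes could look like the following CBAM-style sketch (an assumption about the design, not the paper's exact module):

```python
# Minimal spatial attention block: a single-channel map reweights feature locations.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Summarise channels with average- and max-pooling, then predict a map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn     # task-related areas are emphasised, others suppressed

out = SpatialAttention()(torch.randn(2, 64, 128, 128))   # same shape as the input
```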


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
3.
IEEE J Biomed Health Inform ; 27(10): 4878-4889, 2023 10.
Article in English | MEDLINE | ID: mdl-37585324

ABSTRACT

Accurate segmentation of the hepatic veins can improve the precision of liver disease diagnosis and treatment. Since the hepatic venous system is a small, sparsely distributed target with highly variable morphology, data labeling is difficult, and automatic hepatic vein segmentation is therefore extremely challenging. We propose a lightweight contextual and morphological awareness network and design a novel morphology-aware module based on an attention mechanism together with a 3D reconstruction module. The morphology-aware module obtains a slice-similarity awareness mapping, which enhances the continuous area of the hepatic veins in two adjacent slices through attention weighting. The 3D reconstruction module connects the 2D encoder and the 3D decoder to obtain 3D contextual learning ability with a very small number of parameters. Compared with other state-of-the-art methods, the proposed method improves the Dice coefficient on the two datasets while using few parameters. A small number of parameters reduces hardware requirements and potentially yields stronger generalization, which is an advantage in clinical deployment.
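The slice-similarity awareness idea can be illustrated with a small sketch that computes a per-pixel cosine similarity between the features of two adjacent slices and uses it as an attention weight (a hypothetical formulation, not the published module):

```python
# Slice-similarity awareness between adjacent slice features (assumed formulation).
import torch
import torch.nn.functional as F

def slice_similarity_map(feat_a, feat_b):
    """feat_a, feat_b: (C, H, W) features of adjacent slices; returns an (H, W) map
    that is high where the slices are locally similar, e.g. along a continuing vein."""
    a = F.normalize(feat_a, dim=0)
    b = F.normalize(feat_b, dim=0)
    return (a * b).sum(dim=0)              # per-pixel cosine similarity in [-1, 1]

def enhance_continuous_regions(feat_a, feat_b):
    sim = torch.sigmoid(slice_similarity_map(feat_a, feat_b))   # attention weights
    return feat_a * (1 + sim), feat_b * (1 + sim)               # emphasise continuity

fa, fb = torch.randn(32, 96, 96), torch.randn(32, 96, 96)
ea, eb = enhance_continuous_regions(fa, fb)
```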


Subjects
Hepatic Veins; Image Processing, Computer-Assisted; Humans; Hepatic Veins/diagnostic imaging
4.
Bioengineering (Basel) ; 10(8)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37627784

ABSTRACT

Multi-phase computed tomography (CT) images have gained significant popularity in the diagnosis of hepatic disease. There are several challenges in the liver segmentation of multi-phase CT images. (1) Annotation: due to the distinct contrast enhancements observed in different phases (i.e., each phase is considered a different domain), annotating all phase images in multi-phase CT for liver or tumor segmentation consumes substantial time and labor. (2) Poor contrast: some phase images may have poor contrast, making it difficult to distinguish the liver boundary. In this paper, we propose a boundary-enhanced liver segmentation network for multi-phase CT images with unsupervised domain adaptation. The first contribution is DD-UDA, a dual discriminator-based unsupervised domain adaptation for liver segmentation on multi-phase images without multi-phase annotations, which effectively tackles the annotation problem. To improve accuracy by reducing distribution differences between the source and target domains, we perform domain adaptation at two levels by employing two discriminators, one at the feature level and the other at the output level. The second contribution is an additional boundary-enhanced decoder added to the encoder-decoder backbone segmentation network to effectively recognize the boundary region, thereby addressing the problem of poor contrast. In our study, we employ the public LiTS dataset as the source domain and our private MPCT-FLLs dataset as the target domain. The experimental findings validate the efficacy of our proposed methods, producing substantially improved results on each phase of the multi-phase CT images even without multi-phase annotations. As evaluated on the MPCT-FLLs dataset, the existing baseline (UDA) method achieved IoU scores of 0.785, 0.796, and 0.772 for the PV, ART, and NC phases, respectively, while our proposed approach surpassed both the baseline and other state-of-the-art methods, achieving IoU scores of 0.823, 0.811, and 0.800 for the PV, ART, and NC phases, respectively, emphasizing its effectiveness in achieving accurate image segmentation.
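The dual-discriminator adaptation can be sketched as two adversarial terms added to the supervised source-domain loss, one on encoder features and one on segmentation outputs (the wiring below is an assumption; `seg_net`, `d_feat`, `d_out`, and the loss weights are placeholders):

```python
# Dual-discriminator UDA loss sketch; seg_net is assumed to return (features, logits).
import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCEWithLogitsLoss()

def dd_uda_generator_loss(seg_net, d_feat, d_out, src_img, src_mask, tgt_img,
                          lam_feat=0.01, lam_out=0.01):
    # Supervised segmentation loss on the labelled source domain (e.g. LiTS).
    _, src_logits = seg_net(src_img)
    seg_loss = F.cross_entropy(src_logits, src_mask)

    # Adversarial terms: make target-domain features/outputs look source-like.
    tgt_feat, tgt_logits = seg_net(tgt_img)
    df = d_feat(tgt_feat)
    do = d_out(torch.softmax(tgt_logits, dim=1))
    adv = lam_feat * bce(df, torch.ones_like(df)) + lam_out * bce(do, torch.ones_like(do))
    return seg_loss + adv
# The discriminators themselves are trained in a separate step to distinguish
# source from target, as in standard feature-/output-level adversarial adaptation.
```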

5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2097-2100, 2022 07.
Article in English | MEDLINE | ID: mdl-36086312

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are used extensively for the diagnosis of liver cancer in clinical practice. Compared with non-contrast CT (NC-CT) images (CT scans without contrast injection), CE-CT images are obtained after injecting a contrast agent, which increases the physical burden on patients. To address this limitation, we propose an improved conditional generative adversarial network (improved cGAN) to generate CE-CT images from non-contrast CT images. In the improved cGAN, we incorporate a pyramid pooling module and an elaborate feature fusion module into the generator to improve the encoder's ability to capture multi-scale semantic features and to prevent the dilution of information during decoding. We evaluate the performance of our proposed method on a contrast-enhanced CT dataset including three phases of CT images (i.e., non-contrast images and CE-CT images in the arterial and portal venous phases). Experimental results suggest that the proposed method is superior to existing GAN-based models in both quantitative and qualitative terms.
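A pyramid pooling module in the spirit of PSPNet, the kind of multi-scale context block the abstract refers to, can be sketched as follows (channel sizes and bin settings are illustrative assumptions):

```python
# Minimal pyramid pooling module: multi-scale pooled context appended to the features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // len(bins), 1))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[-2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                align_corners=False) for stage in self.stages]
        return torch.cat([x] + pooled, dim=1)

y = PyramidPooling(64)(torch.randn(1, 64, 32, 32))   # -> (1, 128, 32, 32)
```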


Subjects
Arteries; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3561-3564, 2021 11.
Article in English | MEDLINE | ID: mdl-34892008

ABSTRACT

Non-small cell lung cancer (NSCLC) is a type of lung cancer that has a high recurrence rate after surgery. Precise preoperative prediction of NSCLC recurrence contributes to suitable treatment planning. Currently, many studies have been conducted to predict the recurrence of NSCLC based on computed tomography (CT) images or genetic data. CT images are inexpensive but less accurate; genetic data are more expensive but highly accurate. In this study, we propose genotype-guided radiomics methods, called GGR and GGR_Fusion, to build higher-accuracy prediction models that require only CT images. GGR is a two-step method consisting of two models: a gene estimation model using deep learning and a recurrence prediction model using the estimated genes. We further propose GGR_Fusion, an improved model based on GGR, to boost accuracy: GGR_Fusion uses the features extracted from the gene estimation model to enhance the recurrence prediction model. The experiments showed that the prediction performance can be improved significantly, from 78.61% accuracy, AUC = 0.66 (existing radiomics method) and 79.09% accuracy, AUC = 0.68 (deep learning method), to 83.28% accuracy, AUC = 0.77 with the proposed GGR and 84.39% accuracy, AUC = 0.79 with the proposed GGR_Fusion. Clinical Relevance: This study improved the preoperative prediction accuracy of NSCLC recurrence from 78.61% with the conventional method to 84.39% with our proposed method, using only CT images.
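The two-step GGR idea and the GGR_Fusion variant can be illustrated with a toy pipeline on synthetic data (the linear models below are stand-ins for the authors' deep networks; `ct_features`, `genes`, and `recurrence` are placeholders):

```python
# Conceptual two-step GGR sketch: estimate genes from CT features, then predict recurrence.
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)
ct_features = rng.normal(size=(120, 64))      # radiomics/deep features from CT
genes = rng.normal(size=(120, 10))            # gene expression (training only)
recurrence = rng.integers(0, 2, size=120)     # recurrence labels

# Step 1: learn to estimate gene-related variables from CT features.
gene_model = Ridge().fit(ct_features, genes)
est_genes = gene_model.predict(ct_features)

# Step 2 (GGR): predict recurrence from the estimated genes.
ggr = LogisticRegression(max_iter=1000).fit(est_genes, recurrence)

# GGR_Fusion: fuse CT features with the gene-estimation representation.
fused = np.hstack([ct_features, est_genes])
ggr_fusion = LogisticRegression(max_iter=1000).fit(fused, recurrence)
# At test time only CT images are needed: genes are estimated, never measured.
```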


Subjects
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/genetics; Genotype; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics; Tomography, X-Ray Computed
8.
Phys Med Biol ; 66(20)2021 10 08.
Article in English | MEDLINE | ID: mdl-34555816

ABSTRACT

Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modality segmentation and thereby obtain more accurate results. At present, PET-CT segmentation methods based on fully convolutional neural networks (FCNs) mainly adopt image fusion or feature fusion. These fusion strategies do not consider the uncertainty of multi-modal segmentation, and complex feature fusion consumes more computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Trained with the proposed evidence loss, the network outputs a PET result and a CT result that carry uncertainty, which serve as PET evidence and CT evidence, respectively. We then use evidence fusion to reduce the uncertainty of the single-modality evidence, and the final segmentation result is obtained from the fusion of PET evidence and CT evidence. EFNet uses a basic 3D U-Net as the backbone and only simple unidirectional feature fusion. In addition, EFNet can train and predict PET evidence and CT evidence separately, without parallel training of two branch networks. We conduct experiments on soft-tissue sarcoma and lymphoma datasets. Compared with the 3D U-Net, our proposed method improves the Dice by 8% and 5%, respectively; compared with a complex feature fusion method, it improves the Dice by 7% and 2%, respectively. Our results show that, in FCN-based PET-CT segmentation, outputting uncertainty-aware evidence and fusing it allows the network to be simplified while improving the segmentation results.
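The evidence fusion step can be sketched per pixel with a Dempster-Shafer-style combination of PET and CT belief/uncertainty masses derived from Dirichlet-style evidence (a hedged illustration; the paper's exact evidence parameterisation may differ):

```python
# Per-pixel evidence fusion for a binary (tumor / background) segmentation sketch.
import torch

def dirichlet_belief(evidence, num_classes=2):
    """evidence: non-negative per-class evidence, shape (K, H, W)."""
    alpha = evidence + 1.0
    s = alpha.sum(dim=0, keepdim=True)
    belief = evidence / s                     # per-class belief masses
    uncertainty = num_classes / s             # leftover mass = uncertainty
    return belief, uncertainty

def fuse_two_sources(b1, u1, b2, u2):
    """Reduced Dempster-style combination of two belief/uncertainty assignments."""
    b = b1 * b2 + b1 * u2 + b2 * u1
    u = u1 * u2
    norm = b.sum(dim=0, keepdim=True) + u     # equals 1 minus the conflict mass
    return b / norm, u / norm

pet_ev = torch.rand(2, 64, 64) * 5            # stand-ins for network evidence outputs
ct_ev = torch.rand(2, 64, 64) * 5
b_pet, u_pet = dirichlet_belief(pet_ev)
b_ct, u_ct = dirichlet_belief(ct_ev)
b_fused, u_fused = fuse_two_sources(b_pet, u_pet, b_ct, u_ct)
seg = b_fused.argmax(dim=0)                   # fused segmentation with reduced uncertainty
```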


Subjects
Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Image Processing, Computer-Assisted; Neoplasms/diagnostic imaging; Neural Networks, Computer; Positron Emission Tomography Computed Tomography/methods
9.
Med Phys ; 48(7): 3752-3766, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33950526

ABSTRACT

PURPOSE: Liver tumor segmentation is a crucial prerequisite for computer-aided diagnosis of liver tumors. In the clinical diagnosis of liver tumors, radiologists usually examine multiphase CT images as these images provide abundant and complementary information about tumors. However, most known automatic segmentation methods extract tumor features from CT images of merely a single phase, in which valuable multiphase information is ignored. Therefore, a method that effectively incorporates multiphase information for automatic and accurate liver tumor segmentation is highly demanded. METHODS: In this paper, we propose a phase attention residual network (PA-ResSeg) to model multiphase features for accurate liver tumor segmentation. A phase attention (PA) block is newly proposed to additionally exploit the images of the arterial (ART) phase to facilitate the segmentation of the portal venous (PV) phase. The PA block consists of an intraphase attention (intra-PA) module and an interphase attention (inter-PA) module to capture channel-wise self-dependencies and cross-phase interdependencies, respectively. Thus, it enables the network to learn more representative multiphase features by refining the PV features according to the channel dependencies and recalibrating the ART features based on the learned interdependencies between phases. We propose a PA-based multiscale fusion (MSF) architecture to embed the PA blocks in the network at multiple levels along the encoding path to fuse multiscale features from multiphase images. Moreover, a 3D boundary-enhanced loss (BE-loss) is proposed for training to make the network more sensitive to boundaries. RESULTS: To evaluate the performance of our proposed PA-ResSeg, we conducted experiments on a multiphase CT dataset of focal liver lesions (MPCT-FLLs). Experimental results show the effectiveness of the proposed method by achieving a dice per case (DPC) of 0.7787, a dice global (DG) of 0.8682, a volumetric overlap error (VOE) of 0.3328, and a relative volume difference (RVD) of 0.0443 on the MPCT-FLLs. Furthermore, to validate the effectiveness and robustness of PA-ResSeg, we conducted extra experiments on another multiphase liver tumor dataset and obtained a DPC of 0.8290, a DG of 0.9132, a VOE of 0.2637, and an RVD of 0.0163. The proposed method shows its robustness and generalization capability across different datasets and different backbones. CONCLUSIONS: The study demonstrates that our method can effectively model information from multiphase CT images to segment liver tumors and outperforms other state-of-the-art methods. The PA-based MSF method can learn more representative multiphase features at multiple scales and thereby improve the segmentation performance. In addition, the proposed 3D BE-loss is conducive to tumor boundary segmentation by forcing the network to focus on boundary regions and marginal slices. Experimental results evaluated by quantitative metrics demonstrate the superiority of our PA-ResSeg over the best-known methods.
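A phase-attention block combining channel self-attention on the PV features (intra-PA) with cross-phase recalibration of the ART features (inter-PA) might be sketched as follows (layer choices are assumptions based on the abstract, not the published code):

```python
# Illustrative phase-attention block: intra-phase channel attention + inter-phase recalibration.
import torch
import torch.nn as nn

class PhaseAttention(nn.Module):
    def __init__(self, ch, r=8):
        super().__init__()
        self.intra = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
                                   nn.Linear(ch // r, ch), nn.Sigmoid())
        self.inter = nn.Sequential(nn.Linear(2 * ch, ch // r), nn.ReLU(inplace=True),
                                   nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, pv_feat, art_feat):
        # Global average pooling gives per-phase channel descriptors (B, C).
        pv_d = pv_feat.mean(dim=(2, 3))
        art_d = art_feat.mean(dim=(2, 3))
        pv_w = self.intra(pv_d)                            # channel self-dependencies
        art_w = self.inter(torch.cat([pv_d, art_d], 1))    # cross-phase interdependencies
        pv_ref = pv_feat * pv_w[:, :, None, None]
        art_ref = art_feat * art_w[:, :, None, None]
        return pv_ref + art_ref                            # ART information fused into PV branch

fused = PhaseAttention(64)(torch.randn(1, 64, 48, 48), torch.randn(1, 64, 48, 48))
```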


Subjects
Image Processing, Computer-Assisted; Liver Neoplasms; Attention; Disease Progression; Humans; Liver Neoplasms/diagnostic imaging; Tomography, X-Ray Computed
10.
Sensors (Basel) ; 21(7)2021 Mar 28.
Article in English | MEDLINE | ID: mdl-33800532

ABSTRACT

Hyperspectral image (HSI) super-resolution (SR) is a challenging task due to its ill-posed nature and has attracted extensive attention from the research community. Previous methods concentrated on leveraging various hand-crafted image priors of a latent high-resolution hyperspectral (HR-HS) image to regularize the degradation model of the observed low-resolution hyperspectral (LR-HS) and high-resolution RGB (HR-RGB) images, and exploited different optimization strategies for searching a plausible solution, which usually leads to limited reconstruction performance. Recently, deep-learning-based methods have evolved to automatically learn the abundant image priors of a latent HR-HS image and have made great progress in HS image super-resolution. However, current deep-learning methods face difficulties in designing more complicated and deeper neural network architectures to boost performance, and they require large-scale training triplets, such as LR-HS, HR-RGB, and corresponding HR-HS images, for network training; the need for such triplets significantly limits their applicability to real scenarios. In this work, we propose a deep unsupervised fusion-learning framework that generates a latent HR-HS image using only the observed LR-HS and HR-RGB images, without preparing any other training triplets. Based on the fact that a convolutional neural network architecture is capable of capturing a large number of low-level statistics (priors) of images, we promote the automatic learning of the underlying priors of spatial structures and spectral attributes in the latent HR-HS image using only its corresponding degraded observations. Specifically, we search the parameter space of a generative neural network to learn the required HR-HS image that minimizes the reconstruction errors of the observations according to the mathematical relations between the data. Moreover, we design special convolutional layers to approximate the degradation operations between the observations and the latent HR-HS image, constructing an end-to-end unsupervised learning framework for HS image super-resolution. Experiments on two benchmark HS datasets, CAVE and Harvard, demonstrate that the proposed method is capable of producing very promising results, even under a large upscaling factor, and that it outperforms other unsupervised state-of-the-art methods by a large margin, manifesting its superiority and efficiency.
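The unsupervised objective can be illustrated by a reconstruction loss that pushes the generated HR-HS image through assumed degradation operators and compares it with both observations (average pooling and a spectral response matrix stand in for the learned degradation layers):

```python
# Sketch of the unsupervised reconstruction objective for HS image super-resolution.
import torch
import torch.nn.functional as F

def degradation_loss(hr_hs, lr_hs, hr_rgb, srf, scale=8):
    """hr_hs: (1, B, H, W) generated image; lr_hs: (1, B, H/scale, W/scale);
    hr_rgb: (1, 3, H, W); srf: (3, B) spectral response of the RGB camera."""
    # Spatial degradation: average pooling as a simple stand-in for blur + subsampling.
    lr_from_hr = F.avg_pool2d(hr_hs, kernel_size=scale)
    # Spectral degradation: project the B spectral bands onto the 3 RGB channels.
    b, h, w = hr_hs.shape[1:]
    rgb_from_hr = (srf @ hr_hs.reshape(b, h * w)).reshape(1, 3, h, w)
    return F.mse_loss(lr_from_hr, lr_hs) + F.mse_loss(rgb_from_hr, hr_rgb)

# The generator parameters are then optimised to minimise this loss for the single
# scene at hand, without any external HR-HS training target.
```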

11.
IEEE J Biomed Health Inform ; 24(8): 2327-2336, 2020 08.
Article in English | MEDLINE | ID: mdl-31902784

ABSTRACT

Segmentation and quantification of each subtype of emphysema are helpful for monitoring chronic obstructive pulmonary disease. Due to the nature of emphysema (a diffuse pulmonary disease), it is very difficult for experts to assign semantic labels to every pixel in the CT images. In practice, partial annotation is a better choice for radiologists to reduce their workload. In this paper, we propose a new end-to-end trainable semi-supervised framework for semantic segmentation of emphysema with partial annotations, in which a segmentation network is trained from both annotated and unannotated areas. In addition, we present a new loss function, referred to as the Fisher loss, to enhance the discriminative power of the model and successfully integrate it into our proposed framework. Our experimental results show that the proposed methods have superior performance over the baseline supervised approach (trained with only annotated areas) and outperform the state-of-the-art methods for emphysema segmentation.
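Training from partial annotations can be sketched with a supervised loss restricted to annotated pixels; the snippet below uses an ignore label for unannotated areas (an assumed convention) and omits the paper's Fisher loss:

```python
# Masked cross-entropy for partially annotated segmentation (label -1 = unannotated).
import torch
import torch.nn.functional as F

def partial_ce_loss(logits, labels, ignore_value=-1):
    """logits: (B, K, H, W); labels: (B, H, W) with ignore_value on unannotated pixels."""
    return F.cross_entropy(logits, labels, ignore_index=ignore_value)

logits = torch.randn(2, 4, 64, 64, requires_grad=True)
labels = torch.full((2, 64, 64), -1, dtype=torch.long)
labels[:, 16:48, 16:48] = torch.randint(0, 4, (2, 32, 32))  # partially annotated region
loss = partial_ce_loss(logits, labels)
loss.backward()     # gradients flow only from the annotated area
```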


Subjects
Emphysema/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Supervised Machine Learning; Deep Learning; Emphysema/pathology; Female; Humans; Lung/diagnostic imaging; Lung/pathology; Male; Tomography, X-Ray Computed
12.
Sensors (Basel) ; 19(24)2019 Dec 07.
Article in English | MEDLINE | ID: mdl-31817912

ABSTRACT

Hyperspectral imaging is capable of acquiring rich spectral information about scenes and has great potential for understanding the characteristics of different materials in many applications ranging from remote sensing to medical imaging. However, due to hardware limitations, existing hyper-/multi-spectral imaging devices usually cannot obtain high spatial resolution. This study aims to generate a high-resolution hyperspectral image from the available low-resolution hyperspectral and high-resolution RGB images. We propose a novel hyperspectral image super-resolution method via non-negative sparse representation of reflectance spectra with a data-guided sparsity constraint. The proposed method first learns the hyperspectral dictionary from the low-resolution hyperspectral image and then transforms it into an RGB dictionary using the camera response function, which is determined by the physical properties of the RGB imaging camera. Given the RGB vector and the RGB dictionary, the sparse representation of each pixel in the high-resolution image is calculated under the guidance of a sparsity map, which measures pixel material purity. The sparsity map is generated by analyzing the local content similarity of a focused pixel in the available high-resolution RGB image and quantifying the degree of spectral mixing, motivated by the fact that the spectrum of a pure-material pixel should have a sparse representation over the spectral dictionary. Since the proposed method adaptively adjusts the sparsity of the spectral representation based on the local content of the available high-resolution RGB image, it produces a more robust spectral representation for recovering the target high-resolution hyperspectral image. Comprehensive experiments on two public hyperspectral datasets and three real remote sensing images validate that the proposed method achieves promising performance compared to existing state-of-the-art methods.
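A toy version of the data-guided sparse coding step, where the Lasso penalty of each pixel is scaled by its sparsity-map value so purer pixels receive sparser codes (dictionary learning and the sparsity-map construction are not shown; all sizes are illustrative):

```python
# Sparsity-map-guided sparse coding of one RGB pixel on the RGB dictionary.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
hs_dict = np.abs(rng.normal(size=(31, 60)))    # hyperspectral dictionary (bands x atoms)
srf = np.abs(rng.normal(size=(3, 31)))         # camera spectral response
rgb_dict = srf @ hs_dict                       # corresponding RGB dictionary

def code_pixel(rgb_pixel, sparsity_weight, base_alpha=0.01):
    # A larger sparsity_weight (purer pixel) enforces a sparser non-negative code.
    lasso = Lasso(alpha=base_alpha * sparsity_weight, positive=True, max_iter=5000)
    lasso.fit(rgb_dict, rgb_pixel)
    return hs_dict @ lasso.coef_               # recovered hyperspectral pixel

hr_spectrum = code_pixel(rgb_pixel=np.abs(rng.normal(size=3)), sparsity_weight=2.0)
```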

13.
Comput Med Imaging Graph ; 75: 74-83, 2019 07.
Article in English | MEDLINE | ID: mdl-31220699

ABSTRACT

Extraction or segmentation of organ vessels is an important task for surgical planning and computer-aided diagnosis. It is a challenging task due to the extremely small size of the vessel structures, low SNR, and varying contrast in medical image data. We propose an automatic and robust vessel segmentation approach that uses a multi-pathway deep learning network. The proposed method trains a deep network for binary classification based on training patches extracted on three planes (sagittal, coronal, and transverse) centered on the focused voxels, and is therefore expected to provide more reliable recognition performance by exploiting the 3D structure. Furthermore, because intensity values vary widely across medical imaging devices, we transform a raw medical image into a probability map that serves as input to the network. We then extract vessels with the proposed network, which is robust and sufficiently general to handle images with different contrast obtained by various imaging systems. The deep network provides a vessel probability map for the voxels in the target medical data, which is post-processed to generate the final segmentation result. To validate the effectiveness and efficiency of the proposed method, we conducted experiments on 20 cases (public datasets) with different contrast levels and different device value ranges. The results demonstrate impressive performance in comparison with state-of-the-art methods. We propose the first 3D liver vessel segmentation network using deep learning. The multi-pathway network improves segmentation results, and using the probability map as input makes the method robust against intensity changes in clinical data.
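The tri-planar patch extraction and the probability-map input can be sketched as follows (min-max normalisation is used here as a simple stand-in for the paper's probability transform):

```python
# Tri-planar patch extraction around a focused voxel on a normalised volume.
import numpy as np

def to_probability_map(volume):
    v = volume.astype(np.float32)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

def triplanar_patches(volume, z, y, x, half=16):
    p = to_probability_map(volume)
    axial    = p[z, y - half:y + half, x - half:x + half]
    coronal  = p[z - half:z + half, y, x - half:x + half]
    sagittal = p[z - half:z + half, y - half:y + half, x]
    return np.stack([axial, coronal, sagittal])   # (3, 2*half, 2*half), one patch per pathway

vol = np.random.rand(128, 256, 256)
patches = triplanar_patches(vol, 64, 128, 128)
```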


Subjects
Blood Vessels/diagnostic imaging; Liver/diagnostic imaging; Neural Networks, Computer; Algorithms; Datasets as Topic; Deep Learning; Diagnosis, Computer-Assisted; Humans; Imaging, Three-Dimensional
14.
Neural Regen Res ; 14(5): 868-875, 2019 May.
Article in English | MEDLINE | ID: mdl-30688273

ABSTRACT

Idiopathic rapid eye movement sleep behavior disorder (iRBD) is often a precursor to neurodegenerative disease. However, voxel-based morphological studies evaluating structural abnormalities in the brains of iRBD patients are relatively rare. This study aimed to explore cerebral structural alterations using magnetic resonance imaging and to determine their association with clinical parameters in iRBD patients. Brain structural T1-weighted MRI scans were acquired from 19 polysomnogram-confirmed iRBD patients (male:female 16:3; mean age 66.6 ± 7.0 years) and 20 age-matched healthy controls (male:female 5:15; mean age 63.7 ± 5.9 years). Gray matter volume (GMV) data were analyzed based on Statistical Parametric Mapping 8, using a voxel-based morphometry method and two-sample t-test and multiple regression analysis. Compared with controls, iRBD patients had increased GMV in the middle temporal gyrus and cerebellar posterior lobe, but decreased GMV in the Rolandic operculum, postcentral gyrus, insular lobe, cingulate gyrus, precuneus, rectus gyrus, and superior frontal gyrus. iRBD duration was positively correlated with GMV in the precuneus, cuneus, superior parietal gyrus, postcentral gyrus, posterior cingulate gyrus, hippocampus, lingual gyrus, middle occipital gyrus, middle temporal gyrus, and cerebellum posterior lobe. Furthermore, phasic chin electromyographic activity was positively correlated with GMV in the hippocampus, precuneus, fusiform gyrus, precentral gyrus, superior frontal gyrus, cuneus, inferior parietal lobule, angular gyrus, superior parietal gyrus, paracentral lobule, and cerebellar posterior lobe. There were no significant negative correlations of brain GMV with disease duration or electromyographic activity in iRBD patients. These findings expand the spectrum of known gray matter modifications in iRBD patients and provide evidence of a correlation between brain dysfunction and clinical manifestations in such patients. The protocol was approved by the Ethics Committee of Huashan Hospital (approval No. KY2013-336) on January 6, 2014. This trial was registered in the ISRCTN registry (ISRCTN18238599).

15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 2789-2792, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946472

ABSTRACT

In this paper, we present an automatic approach to paranasal sinus segmentation in computed tomography (CT) images. The proposed method combines a probabilistic atlas with a fully convolutional network (FCN). The probabilistic atlas is used to automatically localize the paranasal sinus and determine its bounding box, and the FCN is then used to automatically segment the paranasal sinus within that bounding box. Compared with the conventional FCN (without the probabilistic atlas) and a state-of-the-art method using active contours with group similarity, the proposed method demonstrated improved paranasal sinus segmentation. The segmentation accuracy (Dice coefficient) was about 0.83 even for cases with unclear boundaries.
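A hypothetical sketch of this two-stage pipeline, where the aligned probabilistic atlas yields a bounding box and the FCN segments only the cropped region (`fcn` and the threshold are placeholders):

```python
# Atlas-guided localization followed by FCN segmentation inside the bounding box.
import numpy as np

def atlas_bounding_box(prob_atlas, threshold=0.1, margin=5):
    """prob_atlas: per-voxel sinus probability, already aligned to the CT volume."""
    zs, ys, xs = np.where(prob_atlas > threshold)
    lo = np.maximum([zs.min(), ys.min(), xs.min()] - np.array(margin), 0)
    hi = np.array([zs.max(), ys.max(), xs.max()]) + margin + 1
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def segment_sinus(ct_volume, prob_atlas, fcn):
    box = atlas_bounding_box(prob_atlas)
    mask = np.zeros_like(ct_volume, dtype=np.uint8)
    mask[box] = fcn(ct_volume[box])          # the FCN runs only inside the bounding box
    return mask
```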


Subjects
Tomography, X-Ray Computed
16.
Article in English | MEDLINE | ID: mdl-30010566

ABSTRACT

Fusing a low-resolution hyperspectral image with the corresponding high-resolution multispectral image to obtain a high-resolution hyperspectral image is an important technique for capturing comprehensive scene information in both the spatial and spectral domains. Existing approaches adopt a sparsity-promoting strategy and encode the spectral information of each pixel independently, which results in noisy sparse representations. We propose a novel hyperspectral image super-resolution method via a self-similarity constrained sparse representation. We explore similar patch structures across the whole image and pixels with close appearance in local regions to create global-structure groups and local-spectral super-pixels. By enforcing similarity of the sparse representations for pixels belonging to the same group and super-pixel, we alleviate the effect of outliers in the learned sparse coding. Experimental results on benchmark datasets validate that the proposed method outperforms the state-of-the-art methods in both quantitative metrics and visual effect.

17.
Int J Comput Assist Radiol Surg ; 13(1): 151-164, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29105019

ABSTRACT

PURPOSE: The bag of visual words (BoVW) model is a powerful tool for feature representation that can integrate various handcrafted features such as intensity, texture, and spatial information. In this paper, we propose a novel BoVW-based method that incorporates texture and spatial information for content-based image retrieval to assist radiologists in clinical diagnosis. METHODS: This paper presents a texture-specific BoVW method to represent focal liver lesions (FLLs). Pixels in the region of interest (ROI) are classified into nine texture categories using the rotation-invariant uniform local binary pattern method, and BoVW-based features are calculated for each texture category. In addition, a spatial cone matching (SCM)-based representation strategy is proposed to describe the spatial information of the visual words in the ROI. In a pilot study, eight radiologists with different levels of clinical experience performed diagnoses for 20 cases with and without the top six retrieved results. A total of 132 multiphase computed tomography volumes covering five pathological types were collected. RESULTS: The texture-specific BoVW was compared to other BoVW-based methods using the constructed dataset of FLLs. The results show that our proposed model outperforms the other three BoVW methods in discriminating different lesions. The SCM method, which adds spatial information to the orderless BoVW model, affected the retrieval performance. In the pilot trial, the average diagnostic accuracy of the radiologists improved from 66% to 80% when using the retrieval system. CONCLUSION: The preliminary results indicate that the texture-specific features and the SCM-based BoVW features can effectively characterize various liver lesions. The retrieval system has the potential to improve the diagnostic accuracy and the confidence of radiologists.
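The texture-specific grouping can be illustrated by assigning ROI pixels to texture categories with a rotation-invariant uniform LBP and accumulating a separate visual-word histogram per category (codebook assignment is faked with random word indices; the paper groups pixels into nine categories, whereas the raw uniform LBP below yields P + 2 codes):

```python
# Texture-specific BoVW histograms: one visual-word histogram per LBP texture code.
import numpy as np
from skimage.feature import local_binary_pattern

def texture_specific_histograms(roi, word_ids, n_words=100, P=8, R=1):
    """roi: 2D ROI image; word_ids: visual-word index of each ROI pixel (same shape)."""
    lbp = local_binary_pattern(roi, P, R, method="uniform")   # codes 0..P+1
    n_textures = P + 2
    hists = np.zeros((n_textures, n_words))
    for t in range(n_textures):
        mask = lbp == t
        hists[t] = np.bincount(word_ids[mask], minlength=n_words)
    return hists / max(hists.sum(), 1)     # concatenated per-texture BoVW representation

roi = np.random.rand(64, 64)
words = np.random.randint(0, 100, size=(64, 64))
feat = texture_specific_histograms(roi, words)
```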


Subjects
Image Enhancement; Liver Neoplasms/diagnostic imaging; Liver/diagnostic imaging; Tomography, X-Ray Computed/methods; Algorithms; Humans; Pilot Projects
18.
Int J Biomed Imaging ; 2017: 1413297, 2017.
Article in English | MEDLINE | ID: mdl-28293255

ABSTRACT

Characterization and individual trait analysis of focal liver lesions (FLLs) is a challenging task in medical image processing and in the clinical setting. The analysis of an unconfirmed FLL case can be expected to benefit greatly from accumulated FLL cases with experts' analyses, which can be achieved by content-based medical image retrieval (CBMIR). CBMIR mainly includes discriminative feature extraction and similarity calculation procedures. The Bag-of-Visual-Words (BoVW) (codebook-based) model has been proven effective for various classification and retrieval tasks. This study investigates an improved codebook model for fine-grained medical image representation with the following three advantages: (1) instead of SIFT, we exploit the local patch (structure) as the local descriptor, which retains all detailed information and is more suitable for fine-grained medical image applications; (2) in order to more accurately approximate any local descriptor in the coding procedure, the sparse coding method, instead of the K-means algorithm, is employed for codebook learning and coded vector calculation; (3) we evaluate retrieval performance for focal liver lesions (FLLs) using multiphase computed tomography (CT) scans, in which the proposed codebook model is learned separately for each phase. The effectiveness of the proposed method is confirmed by our experiments on FLL retrieval.
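A small sketch of the codebook idea, with raw local patches as descriptors and sparse coding (dictionary learning) in place of K-means for codebook learning and encoding (patch and dictionary sizes are arbitrary illustrative choices):

```python
# Patch-based codebook learned by sparse coding rather than K-means.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 49))        # 7x7 raw patches as local descriptors

dico = DictionaryLearning(n_components=64, transform_algorithm="lasso_lars",
                          alpha=1.0, max_iter=20, random_state=0).fit(patches)
codes = sparse_encode(patches, dico.components_, algorithm="lasso_lars", alpha=1.0)

# A per-image (or per-phase) representation can then be obtained by pooling the codes,
# e.g. max-pooling over all patches of the lesion ROI.
image_feature = np.abs(codes).max(axis=0)
```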

19.
Comput Biol Med ; 67: 146-60, 2015 Dec 01.
Article in English | MEDLINE | ID: mdl-26551453

ABSTRACT

Accurate segmentation of abdominal organs is a key step in developing a computer-aided diagnosis (CAD) system. A probabilistic atlas based on human anatomical structure, used as a priori information in a Bayesian framework, has been widely used for organ segmentation, and registering the probabilistic atlas to the patient volume is the main challenge. An additional disadvantage is that a conventional probabilistic atlas may be biased toward a specific patient study because of its single reference. Taking these issues into consideration, this paper presents a template matching framework based on an iterative probabilistic atlas for liver and spleen segmentation. First, a bounding box based on anatomical localization, which refers to the statistical geometric location of the organ, is detected for the candidate organ. Then, the probabilistic atlas is used as a template to find the organ within this bounding box using template matching. We applied our method to 60 datasets including normal and pathological cases. For the liver, the Dice/Tanimoto volume overlaps were 0.930/0.870 and the root-mean-squared error (RMSE) was 2.906 mm. For the spleen, quantification led to 0.922 Dice/0.857 Tanimoto overlaps and 1.992 mm RMSE. The algorithm is robust in segmenting normal and abnormal spleens and livers, including cases with tumors and large morphological changes. Comparing our method with conventional and recently developed atlas-based methods, our results show an improvement in segmentation accuracy for multiple organs (p < 0.00001).
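The template-matching step can be sketched with normalised cross-correlation of the probabilistic atlas inside the anatomically derived bounding box (pre-processing and the Bayesian refinement are omitted; the arrays below are stand-ins):

```python
# Probabilistic atlas used as a template via normalised cross-correlation.
import numpy as np
from skimage.feature import match_template

volume = np.random.rand(80, 120, 120)        # stand-in for the candidate bounding box
atlas = np.random.rand(60, 90, 90)           # probabilistic atlas of the organ (0..1)

scores = match_template(volume, atlas)       # normalised cross-correlation map
z, y, x = np.unravel_index(np.argmax(scores), scores.shape)

# The atlas placed at the best-matching offset then serves as the spatial prior
# for voxel-wise (Bayesian) segmentation of the organ.
organ_prior = np.zeros_like(volume)
organ_prior[z:z + 60, y:y + 90, x:x + 90] = atlas
```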


Subjects
Liver/diagnostic imaging; Models, Anatomic; Pattern Recognition, Automated/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Abdominal/methods; Spleen/diagnostic imaging; Adult; Aged; Algorithms; Computer Simulation; Female; Humans; Imaging, Three-Dimensional/methods; Male; Middle Aged; Models, Biological; Models, Statistical; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique; Tomography, X-Ray Computed/methods
20.
Comput Med Imaging Graph ; 45: 75-83, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26363253

ABSTRACT

PURPOSE: Assessing the region treated with locoregional therapy (LT) provides valuable information for predicting hepatocellular carcinoma (HCC) recurrence. The commonly used assessment method is inefficient because it only compares two-dimensional CT images manually. In our previous work, we automatically aligned two CT volumes using registration algorithms to evaluate therapeutic efficiency. Non-rigid registration is applied to capture local deformation; however, it usually destroys internal structure. Taking this into consideration, this paper proposes a novel non-rigid registration approach for evaluating LT of HCC while maintaining image integrity. METHOD: In our registration algorithm, a global affine transformation combined with a localized cubic B-spline is used to estimate the significant non-rigid motion between the two livers. The proposed method extends classical non-rigid registration based on mutual information (MI) with an anatomical structure term that constrains the local deformation. The energy function is defined as the total energy associated with the anatomical structure and the deformation information. The optimal transformation is obtained by finding the equilibrium state in which the total energy is minimized, indicating that the anatomical landmarks have found their correspondences. The same transformation is then used to automatically transform the ablative region to its optimal position. RESULTS: Registration accuracy is evaluated using clinical data. Improved results are obtained with respect to all criteria by our proposed method (MI-LC) compared with MI-based non-rigid registration. The landmark distance error (LDE) of MI-LC is decreased by an average of 3.93 mm compared to MI-based registration. Moreover, regardless of how many landmarks are applied in our proposed method, a significant reduction in LDE values is confirmed for MI-LC-based registration compared with MI-based registration. CONCLUSION: Our proposed approach guarantees the continuity, accuracy, and smoothness of structures by constraining the anatomical features. The results clearly indicate that our method can retain the local deformation of the image while assuring anatomical structural stability.
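The combined energy can be illustrated as a mutual-information image term plus a weighted anatomical-structure (landmark distance) term, evaluated for a candidate warped image (the optimiser and the affine + cubic B-spline parameterisation are not shown):

```python
# Energy sketch: -MI(fixed, warped moving) + lambda * mean landmark distance.
import numpy as np

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def total_energy(fixed, warped_moving, fixed_landmarks, warped_landmarks, lam=0.1):
    """Low when the images align (high MI) and corresponding landmarks coincide."""
    image_term = -mutual_information(fixed, warped_moving)
    structure_term = np.mean(np.linalg.norm(fixed_landmarks - warped_landmarks, axis=1))
    return image_term + lam * structure_term

fixed = np.random.rand(64, 64)
moving = np.random.rand(64, 64)
lm_f = np.random.rand(5, 2) * 64
lm_m = lm_f + np.random.randn(5, 2)          # slightly displaced correspondences
e = total_energy(fixed, moving, lm_f, lm_m)  # to be minimised over the transform
```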


Subjects
Antineoplastic Agents/administration & dosage; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/drug therapy; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/drug therapy; Subtraction Technique; Algorithms; Drug Monitoring/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity; Tomography, X-Ray Computed/methods; Treatment Outcome