Results 1 - 9 of 9
1.
Comput Biol Med ; 174: 108461, 2024 May.
Article in English | MEDLINE | ID: mdl-38626509

ABSTRACT

BACKGROUND: Positron emission tomography (PET) is extensively employed for diagnosing and staging various tumors, including liver cancer, lung cancer, and lymphoma. Accurate subtype classification of tumors plays a crucial role in formulating effective treatment plans for patients. Notably, lymphoma comprises subtypes such as diffuse large B-cell lymphoma and Hodgkin's lymphoma, while lung cancer encompasses adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. Similarly, liver cancer consists of subtypes such as cholangiocarcinoma and hepatocellular carcinoma. Consequently, subtype classification of tumors based on PET images holds immense clinical significance. However, in clinical practice, the number of cases available for each subtype is often limited and imbalanced, so the primary challenge lies in achieving precise subtype classification from a small dataset. METHOD: This paper presents a novel approach for tumor subtype classification in small datasets using RA-DL (Radiomics-DeepLearning) attention. To address the limited sample size, a Support Vector Machine (SVM) is employed as the subtype classifier instead of a deep learning classifier. Because texture information is important for tumor subtype recognition, radiomics features are extracted from the tumor regions during the feature extraction stage and compressed with an autoencoder to reduce redundancy. In addition to radiomics features, deep features are also extracted from the tumors to leverage the feature extraction capabilities of deep learning. In contrast to existing methods, our approach uses the RA-DL attention mechanism to guide the deep network toward complementary deep features that enhance the expressive capacity of the final features while minimizing redundancy.
To address the challenges of limited and imbalanced data, our method avoids using classification labels during deep feature extraction and instead incorporates 2D Region of Interest (ROI) segmentation and image reconstruction as auxiliary tasks. Subsequently, all lesion features of a single patient are aggregated into a feature vector using a multi-instance aggregation layer. RESULT: Validation experiments were conducted on three PET datasets: a liver cancer dataset, a lung cancer dataset, and a lymphoma dataset. For lung cancer, our proposed method achieved strong performance, with Area Under Curve (AUC) values of 0.82, 0.84, and 0.83 on the three-class task. For the binary classification task on lymphoma, our method achieved AUC values of 0.95 and 0.75, and for the binary classification task on liver tumors, AUC values of 0.84 and 0.86. CONCLUSION: The experimental results indicate that our proposed method significantly outperforms alternative approaches. By extracting complementary radiomics features and deep features, our method achieves a substantial improvement in tumor subtype classification performance on small PET datasets.
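The multi-instance aggregation step described above can be sketched as follows. The mean-pooling reduction and the function name are assumptions for illustration only, since the abstract does not specify which aggregation operator the layer uses.

```python
import numpy as np

def aggregate_patient_features(lesion_features, reduce="mean"):
    """Pool per-lesion feature vectors of shape (n_lesions, n_features)
    into a single patient-level vector (multi-instance aggregation)."""
    feats = np.asarray(lesion_features, dtype=float)
    if reduce == "mean":
        return feats.mean(axis=0)
    if reduce == "max":
        return feats.max(axis=0)
    raise ValueError(f"unknown reduction: {reduce}")

# Three lesions from one patient, four features each (toy values).
lesions = [[1.0, 2.0, 0.0, 4.0],
           [3.0, 2.0, 2.0, 0.0],
           [2.0, 2.0, 4.0, 2.0]]
patient_vec = aggregate_patient_features(lesions)  # one vector per patient
```

The pooled vector would then be passed to the SVM classifier, so the classifier always sees one fixed-length input per patient regardless of lesion count.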


Subject(s)
Positron-Emission Tomography , Support Vector Machine , Humans , Positron-Emission Tomography/methods , Neoplasms/diagnostic imaging , Neoplasms/classification , Databases, Factual , Deep Learning , Image Interpretation, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/classification , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Radiomics
2.
Comput Biol Med ; 159: 106956, 2023 06.
Article in English | MEDLINE | ID: mdl-37116241

ABSTRACT

Radiotherapy is the traditional treatment for early nasopharyngeal carcinoma (NPC), and automatic, accurate segmentation of risky lesions in the nasopharynx is crucial in radiotherapy. U-Net has proven effective for medical image segmentation. However, the great differences in the structure and size of the nasopharynx among patients require a network that pays more attention to multi-scale information. In this paper, we propose a multi-scale sensitive U-Net (MSU-Net) based on a pixel-edge-region level collaborative loss (LCo-PER) for the NPC segmentation task. A series of novel feature fusion modules based on spatial continuity and multi-scale semantics is proposed for extracting multi-level features while efficiently searching for lesions of all sizes. A spatial continuity information extraction module (SCIEM) effectively uses the spatial continuity information of context slices to find small lesions, and a multi-scale semantic feature extraction module (MSFEM) extracts features at different receptive fields. LCo-PER is proposed for network training so that the model can take the size of different lesions into account. The global Dice, Precision, Recall and IOU on the testing set are 84.50%, 97.48%, 84.33% and 82.41%, respectively. The results show that our method outperforms other state-of-the-art methods for NPC segmentation, achieving higher accuracy and effective segmentation performance.
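The idea behind extracting features at several receptive fields, as in the MSFEM, can be illustrated with a minimal sketch. The box-filter branches and function names here are hypothetical stand-ins for learned convolutional branches, not the paper's actual module.

```python
import numpy as np

def box_filter(img, k):
    """Mean filter with a k x k window (stride 1, edge padding):
    a crude stand-in for a branch with a k-sized receptive field."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multi_scale_features(img, scales=(1, 3, 5)):
    """Stack responses from several receptive-field sizes along a
    channel axis, mimicking a multi-scale extraction module."""
    return np.stack([box_filter(img, k) for k in scales], axis=0)

img = np.arange(36, dtype=float).reshape(6, 6)
feats = multi_scale_features(img)  # shape (3, 6, 6): one channel per scale
```

Each channel summarizes the image at a different spatial extent, which is the property that lets a downstream network respond to both small and large lesions.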


Subject(s)
Information Storage and Retrieval , Magnetic Resonance Imaging , Humans , Semantics , Nasopharynx/diagnostic imaging , Image Processing, Computer-Assisted
3.
IEEE J Biomed Health Inform ; 27(5): 2465-2476, 2023 05.
Article in English | MEDLINE | ID: mdl-37027631

ABSTRACT

Positron emission tomography-computed tomography (PET/CT) is an essential imaging instrument for lymphoma diagnosis and prognosis. Automatic lymphoma segmentation based on PET/CT images is increasingly used in the clinical community, and U-Net-like deep learning methods have been widely applied to this task. However, their performance is limited by the lack of sufficient annotated data, due to tumor heterogeneity. To address this issue, we propose an unsupervised image generation scheme that improves the performance of an independent supervised U-Net for lymphoma segmentation by capturing metabolic anomaly appearance (MAA). First, we propose an anatomical-metabolic consistency generative adversarial network (AMC-GAN) as an auxiliary branch of the U-Net. Specifically, AMC-GAN learns representations of normal anatomical and metabolic information from co-aligned whole-body PET/CT scans. In the generator of AMC-GAN, we propose a complementary attention block to enhance the feature representation of low-intensity areas. The trained AMC-GAN is then used to reconstruct corresponding pseudo-normal PET scans, from which MAAs are captured. Finally, combined with the original PET/CT images, the MAAs serve as prior information for improving lymphoma segmentation. Experiments were conducted on a clinical dataset containing 191 normal subjects and 53 patients with lymphomas. The results demonstrate that the anatomical-metabolic consistency representations obtained from unlabeled paired PET/CT scans can support more accurate lymphoma segmentation, which suggests the potential of our approach to support physician diagnosis in practical clinical applications.
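A simplified reading of how a pseudo-normal reconstruction yields a metabolic anomaly appearance map: the positive residual between the observed PET and the reconstructed pseudo-normal scan highlights abnormal uptake. The function name and the threshold parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def metabolic_anomaly_map(pet, pseudo_normal, eps=0.0):
    """Highlight metabolically abnormal regions as the positive residual
    between the observed PET and a reconstructed pseudo-normal scan."""
    residual = np.asarray(pet, float) - np.asarray(pseudo_normal, float)
    # Keep only excess uptake above the pseudo-normal baseline.
    return np.clip(residual - eps, 0.0, None)

pet = np.array([[1.0, 1.0],
                [1.0, 5.0]])        # hot spot at (1, 1)
pseudo = np.array([[1.0, 1.0],
                   [1.0, 1.0]])     # "normal" reconstruction
maa = metabolic_anomaly_map(pet, pseudo)
```

In this reading, the map is zero wherever the reconstruction explains the uptake and positive only over candidate lesions, which is what makes it useful as a segmentation prior.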


Subject(s)
Lymphoma , Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Lymphoma/diagnostic imaging , Positron-Emission Tomography/methods
4.
Comput Med Imaging Graph ; 106: 102217, 2023 06.
Article in English | MEDLINE | ID: mdl-36958076

ABSTRACT

Segmenting the liver and tumor regions from CT scans is crucial for subsequent treatment in clinical practice and radiotherapy. Recently, liver and tumor segmentation techniques based on U-Net have gained popularity. However, there are numerous varieties of liver tumors, and they differ greatly in shape and texture, so it is unreasonable to treat all liver tumors as one class for learning. Meanwhile, texture information is crucial for the identification of liver tumors. We propose a plug-and-play Texture-based Auto Pseudo Label (TAPL) module that makes use of the texture information of tumors and enables the neural network to actively learn the texture differences between various tumors, increasing segmentation accuracy, especially for small tumors. The TAPL module consists of two parts: texture enhancement and a texture-based pseudo label generator. To highlight regions where the texture varies significantly, we enhance the textured areas of the CT image. Based on their texture information, tumors are automatically divided into several classes by the texture-based pseudo label generator. The multi-class tumor predictions produced by the neural network during the prediction step are merged into a single tumor label, which is then used as the segmentation result. Experiments on a clinical dataset and the public LiTS2017 dataset show that the proposed algorithm outperforms single-label liver tumor segmentation methods and is more friendly to small tumors.
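The pseudo-label-then-merge workflow above might be sketched as follows. Binning a scalar texture score into quantile classes is an assumed simplification of the paper's generator; the function names are hypothetical.

```python
import numpy as np

def assign_texture_classes(texture_scores, n_classes=2):
    """Bin per-tumor texture scores into pseudo classes 1..n_classes
    using quantile thresholds (a simplified pseudo-label generator)."""
    scores = np.asarray(texture_scores, dtype=float)
    edges = np.quantile(scores, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(scores, edges) + 1  # class labels start at 1

def merge_pseudo_labels(pred):
    """Collapse all texture pseudo classes back into one tumor label,
    as done at prediction time."""
    return (np.asarray(pred) > 0).astype(int)

scores = [0.1, 0.2, 0.9, 1.1]             # toy per-tumor texture scores
classes = assign_texture_classes(scores)   # texture-based pseudo classes
merged = merge_pseudo_labels(classes)      # single tumor label for output
```

The network trains against the multi-class pseudo labels (so it must attend to texture differences), while the final output is still the ordinary binary tumor mask.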


Subject(s)
Deep Learning , Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/methods
5.
Comput Biol Med ; 157: 106726, 2023 05.
Article in English | MEDLINE | ID: mdl-36924732

ABSTRACT

Deep learning-based methods have become the dominant methodology in medical image processing with the advancement of deep learning in natural image classification, detection, and segmentation. Deep learning-based approaches have proven to be quite effective in single-lesion recognition and segmentation. Multiple-lesion recognition is more difficult than single-lesion recognition because of the small variation between lesions or the wide range of lesions involved. Several studies have recently explored deep learning-based algorithms for the multiple-lesion recognition challenge. This paper provides an in-depth overview and analysis of deep learning-based methods for multiple-lesion recognition developed in recent years, covering multiple-lesion recognition in diverse body areas as well as recognition of whole-body multiple diseases. By critically assessing these efforts, we discuss the challenges that still persist in multiple-lesion recognition tasks. Finally, we outline open problems and potential future research directions, in the hope that this review will help researchers develop approaches that drive additional advances.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Algorithms
6.
Comput Biol Med ; 153: 106538, 2023 02.
Article in English | MEDLINE | ID: mdl-36646023

ABSTRACT

Tumor image segmentation is an important basis for doctors to diagnose and formulate treatment plans. PET-CT is an extremely important technology for assessing the systemic status of disease because the two modalities provide complementary information. However, current PET-CT tumor segmentation methods generally focus on the fusion of PET and CT features, and fusing features can weaken the characteristics of each individual modality. Therefore, enhancing the modal features of the lesions can yield optimized feature sets, which is necessary to improve the segmentation results. This paper proposes an attention module that integrates the PET-CT diagnostic visual field and the modality characteristics of the lesion: the multiple receptive-field lesion attention module. It makes full use of spatial-domain, frequency-domain, and channel attention, combining a large receptive-field lesion localization module and a small receptive-field lesion enhancement module, which together constitute the multiple receptive-field lesion attention module. In addition, a network embedded with the multiple receptive-field lesion attention module is proposed for tumor segmentation. Experiments were conducted on a private liver tumor dataset as well as two publicly available datasets: the soft tissue sarcoma dataset and the head and neck tumor segmentation dataset. The experimental results show that the proposed method achieves excellent performance on multiple datasets, with a significant improvement over DenseUNet; the tumor segmentation results on the above three PET/CT datasets were improved by 7.25%, 6.5%, and 5.29% in Dice per case. Compared with the latest PET-CT liver tumor segmentation research, the proposed method improves by 8.32%.
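Of the three attention types combined in the module, the channel-attention component can be sketched in a squeeze-style form. This is a generic illustration of channel reweighting, not the paper's architecture, and all names are hypothetical.

```python
import numpy as np

def channel_attention(features):
    """Reweight feature channels by a sigmoid gate on their global
    average response (a minimal squeeze-style channel attention)."""
    # features: array of shape (channels, H, W)
    squeeze = features.mean(axis=(1, 2))       # global average pooling
    weights = 1.0 / (1.0 + np.exp(-squeeze))   # sigmoid gate per channel
    return features * weights[:, None, None]   # broadcast over H, W

x = np.ones((2, 2, 2))
x[1] *= 0.0                  # second channel carries no response
out = channel_attention(x)   # informative channel is kept, dead one stays zero
```

Spatial- and frequency-domain attention would follow the same gate-and-reweight pattern, but computed over pixel positions or spectral components instead of channels.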


Subject(s)
Liver Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Image Processing, Computer-Assisted
7.
Comput Biol Med ; 153: 106534, 2023 02.
Article in English | MEDLINE | ID: mdl-36608464

ABSTRACT

Lymphoma segmentation plays an important role in the diagnosis and treatment of lymphocytic tumors. Most existing automatic segmentation methods struggle to give a precise tumor boundary and location, while semi-automatic methods are usually combined with manually added cues such as bounding boxes or points to locate the tumor. Inspired by this, we propose a cruciform structure guided and boundary-optimized lymphoma segmentation network (CGBS-Net). The method uses a cruciform structure extracted from PET images as an additional input to the network, while using a boundary gradient loss function to optimize the boundary of the tumor. Our method has two main stages: in the first stage, we use the proposed axial context-based cruciform structure extraction (CCE) method to extract the cruciform structures of all tumor slices. In the second stage, we use PET/CT and the corresponding cruciform structure as input to the designed network (CGBO-Net) to extract tumor structure and boundary information. The Dice, Precision, Recall, IOU and RVD are 90.7%, 89.4%, 92.5%, 83.1% and 4.5%, respectively. Validated on the lymphoma dataset and publicly available head and neck data, our proposed approach outperforms other state-of-the-art semi-automatic segmentation methods and produces promising segmentation results.


Subject(s)
Lymphoma , Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Tomography, X-Ray Computed , Lymphoma/diagnostic imaging , Head , Image Processing, Computer-Assisted/methods
9.
Phys Med Biol ; 66(20)2021 10 08.
Article in English | MEDLINE | ID: mdl-34555816

ABSTRACT

Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modal segmentation and thus obtain more accurate segmentation results. At present, PET-CT segmentation methods based on fully convolutional neural networks (FCN) mainly adopt image fusion and feature fusion. Current fusion strategies do not consider the uncertainty of multi-modal segmentation, and complex feature fusion consumes more computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Through the proposed evidence loss, the network outputs a PET result and a CT result that each carry uncertainty; these are used as PET evidence and CT evidence. We then use evidence fusion to reduce the uncertainty of the single-modal evidence, and the final segmentation result is obtained from the fusion of PET evidence and CT evidence. EFNet uses a basic 3D U-Net as the backbone and only simple unidirectional feature fusion. In addition, EFNet can train and predict PET evidence and CT evidence separately, without parallel training of two branch networks. We conducted experiments on the soft-tissue sarcoma and lymphoma datasets. Compared with 3D U-Net, our proposed method improves the Dice by 8% and 5%, respectively. Compared with the complex feature fusion method, our proposed method improves the Dice by 7% and 2%, respectively. Our results show that in FCN-based PET-CT segmentation methods, outputting uncertainty evidence and performing evidence fusion can simplify the network and improve the segmentation results.
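Evidential fusion of per-voxel PET and CT outputs can be illustrated with Dempster's rule over three masses (tumor, background, uncertain). This is a generic sketch of evidence combination under those assumptions, not necessarily EFNet's exact fusion rule.

```python
def fuse_evidence(m1, m2):
    """Combine two mass assignments (tumor, background, uncertain)
    with Dempster's rule of combination for one voxel."""
    t1, b1, u1 = m1
    t2, b2, u2 = m2
    conflict = t1 * b2 + b1 * t2          # contradictory mass
    k = 1.0 - conflict                    # normalization (assumes k > 0)
    t = (t1 * t2 + t1 * u2 + u1 * t2) / k
    b = (b1 * b2 + b1 * u2 + u1 * b2) / k
    u = (u1 * u2) / k
    return t, b, u

# Confident PET evidence for tumor, more uncertain CT evidence.
pet = (0.7, 0.1, 0.2)
ct = (0.4, 0.2, 0.4)
t, b, u = fuse_evidence(pet, ct)
```

Note how the fused uncertainty mass shrinks relative to either input: agreement between the modalities converts uncertain mass into committed belief, which is the effect the paper exploits to sharpen single-modal segmentations.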


Subject(s)
Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Image Processing, Computer-Assisted , Neoplasms/diagnostic imaging , Neural Networks, Computer , Positron Emission Tomography Computed Tomography/methods