1.
Article in English | MEDLINE | ID: mdl-37021898

ABSTRACT

Precise classification of histopathological images is crucial for computer-aided diagnosis in clinical practice. Magnification-based learning networks have attracted considerable attention for their ability to improve performance in histopathological classification. However, the fusion of pyramids of histopathological images at different magnifications is an under-explored area. In this paper, we propose a novel deep multi-magnification similarity learning (DSML) approach that aids the interpretation of a multi-magnification learning framework and makes it easy to visualize feature representations from low dimensions (e.g., cell level) to high dimensions (e.g., tissue level), overcoming the difficulty of understanding how information propagates across magnifications. DSML uses a similarity cross-entropy loss function to simultaneously learn the similarity of information across magnifications. To verify the effectiveness of DSML, experiments with different network backbones and different magnification combinations were designed, and its interpretability was investigated through visualization. Our experiments were performed on two histopathological datasets: a clinical nasopharyngeal carcinoma dataset and the public BCSS2021 breast cancer dataset. The results show that our method achieved outstanding classification performance, with a higher area under the curve, accuracy, and F-score than comparable methods. Moreover, the reasons behind multi-magnification effectiveness are discussed.
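The abstract does not give the exact formulation of the similarity cross-entropy loss, but a minimal PyTorch sketch of one plausible reading follows: each magnification branch produces class logits, and each branch's softened prediction serves as the cross-entropy target for the other. The symmetric averaging and the weighting scheme in the usage note are assumptions.

```python
import torch
import torch.nn.functional as F

def similarity_cross_entropy(logits_a: torch.Tensor,
                             logits_b: torch.Tensor) -> torch.Tensor:
    # Cross-entropy between one magnification branch's prediction and the
    # other branch's softened output (treated as a fixed target), averaged
    # over both directions so the loss is symmetric (an assumption).
    p_a = F.softmax(logits_a, dim=1).detach()
    p_b = F.softmax(logits_b, dim=1).detach()
    ce_ab = -(p_b * F.log_softmax(logits_a, dim=1)).sum(dim=1).mean()
    ce_ba = -(p_a * F.log_softmax(logits_b, dim=1)).sum(dim=1).mean()
    return 0.5 * (ce_ab + ce_ba)

# Usage sketch: combine with the ordinary classification loss; `lam` is a
# hypothetical trade-off weight not specified in the abstract.
# total = F.cross_entropy(logits_high, labels) \
#       + lam * similarity_cross_entropy(logits_low, logits_high)
```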

2.
IEEE J Biomed Health Inform ; 27(7): 3258-3269, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099476

ABSTRACT

Anatomical resection (AR) based on anatomical sub-regions is a promising method of precise surgical resection that has been proven to improve long-term survival by reducing local recurrence. The fine-grained segmentation of an organ's surgical anatomy (FGS-OSA), i.e., segmenting an organ into multiple anatomic regions, is critical for localizing tumors in AR surgical planning. However, automatically obtaining FGS-OSA results with computer-aided methods faces the challenge of appearance ambiguities among sub-regions (i.e., inter-sub-region appearance ambiguities) caused by similar Hounsfield unit (HU) distributions in different sub-regions of an organ's surgical anatomy, invisible boundaries, and similarities between anatomical landmarks and other anatomical information. In this paper, we propose a novel fine-grained segmentation framework termed the "anatomic relation reasoning graph convolutional network" (ARR-GCN), which incorporates prior anatomic relations into framework learning. In ARR-GCN, a graph is constructed over the sub-regions to model the classes and their relations. Further, to obtain discriminative initial node representations in the graph space, a sub-region center module is designed. Most importantly, to explicitly learn the anatomic relations, the prior anatomic relations among sub-regions are encoded as an adjacency matrix and embedded into the intermediate node representations to guide framework learning. ARR-GCN was validated on two FGS-OSA tasks: i) liver segment segmentation and ii) lung lobe segmentation. On both tasks, ARR-GCN outperformed other state-of-the-art segmentation methods and showed a promising ability to suppress ambiguities among sub-regions.
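As a rough illustration of the adjacency-matrix embedding step (not the authors' code), the sketch below implements one graph-convolution layer in which a fixed 0/1 prior adjacency matrix over sub-region classes propagates information between node representations. The shapes, self-loop handling, and row normalization are assumptions.

```python
import torch
import torch.nn as nn

class PriorRelationGCNLayer(nn.Module):
    def __init__(self, num_classes: int, dim: int, prior_adj: torch.Tensor):
        super().__init__()
        # prior_adj: (num_classes, num_classes) binary matrix encoding which
        # sub-regions are anatomically related (assumed given as prior).
        adj = prior_adj + torch.eye(num_classes)      # add self-loops
        deg = adj.sum(dim=1, keepdim=True)
        self.register_buffer("adj_norm", adj / deg)   # row-normalize
        self.linear = nn.Linear(dim, dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (num_classes, dim) initial node representations, e.g. from
        # a sub-region center module. Prior-related neighbors are aggregated,
        # then linearly transformed with a nonlinearity.
        return torch.relu(self.linear(self.adj_norm @ nodes))
```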


Subject(s)
Liver , Humans , Liver/anatomy & histology , Liver/diagnostic imaging , Liver/surgery , Neoplasms
3.
Radiother Oncol ; 170: 198-204, 2022 May.
Article in English | MEDLINE | ID: mdl-35351537

ABSTRACT

BACKGROUND AND PURPOSE: Geometric information such as distance is essential for dose calculations in radiotherapy. However, state-of-the-art dose prediction methods use only binary masks without distance information. This study aims to develop a deep learning dose prediction method for nasopharyngeal carcinoma radiotherapy that takes advantage of distance information as well as mask information. MATERIALS AND METHODS: A novel transformation based on boundary distance was proposed to facilitate the prediction of dose distributions. Radiotherapy datasets of 161 nasopharyngeal carcinoma patients were retrospectively collected, including binary masks of organs-at-risk (OARs) and targets, planning CT, and clinical plans. The patients were randomly divided into 130, 11, and 20 cases for training, validation, and testing, respectively. Furthermore, 40 patients from an external cohort were used to test the generalizability of the models. RESULTS: The proposed method showed superior performance: its predicted dose error and dose-volume histogram (DVH) error were 7.51% and 11.6% lower, respectively, than those of the mask-based method. For inverse planning, compared with mask-based methods, our method performed similarly on the GTVnx and OARs and outperformed them on the GTVnd and the CTV, whose pass rates increased from 89.490% and 90.016% to 96.694% and 91.189%, respectively. CONCLUSION: Preliminary results on nasopharyngeal carcinoma radiotherapy cases showed that the proposed distance-guided dose prediction method achieved better performance than mask-based methods. Further studies with more patients and other cancer sites are warranted to fully validate the proposed method.
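The paper's exact boundary-distance transformation is not specified in the abstract; a common choice, sketched below under that assumption, is a signed Euclidean distance map computed from each binary OAR/target mask, which the dose prediction network can consume alongside or instead of the raw mask.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_boundary_distance(mask: np.ndarray,
                             spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Signed Euclidean distance to the structure boundary: negative inside
    the structure, positive outside (the sign convention is an assumption)."""
    mask = mask.astype(bool)
    # Distance from each outside voxel to the nearest structure voxel.
    dist_out = distance_transform_edt(~mask, sampling=spacing)
    # Distance from each inside voxel to the nearest background voxel.
    dist_in = distance_transform_edt(mask, sampling=spacing)
    return dist_out - dist_in
```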


Subject(s)
Deep Learning , Nasopharyngeal Neoplasms , Radiotherapy, Intensity-Modulated , Humans , Nasopharyngeal Carcinoma/radiotherapy , Nasopharyngeal Neoplasms/pathology , Nasopharyngeal Neoplasms/radiotherapy , Organs at Risk/pathology , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Retrospective Studies
4.
Am J Pathol ; 192(3): 553-563, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34896390

ABSTRACT

Visual inspection of hepatocellular carcinoma regions by experienced pathologists in whole-slide images (WSIs) is a challenging, labor-intensive, and time-consuming task because of the large scale and high resolution of WSIs. Therefore, a weakly supervised framework based on a multiscale attention convolutional neural network (MSAN-CNN) was introduced into this process. Herein, patch-based images with image-level normal/tumor annotations (rather than pixel-level annotations) were fed into a classification neural network. To further improve cancer region detection, multiscale attention was introduced into the classification network. A total of 100 cases were obtained from The Cancer Genome Atlas and divided into 70 training and 30 testing cases that were fed into the MSAN-CNN framework. The experimental results showed that this framework significantly outperforms the single-scale detection method on the area under the curve, accuracy, sensitivity, and specificity metrics. When compared with the diagnoses made by three pathologists, MSAN-CNN performed better than the junior- and intermediate-level pathologists and only slightly worse than the senior pathologist. Furthermore, MSAN-CNN provided much faster detection than the pathologists. Therefore, a weakly supervised framework based on MSAN-CNN has great potential to assist pathologists in the fast and accurate detection of hepatocellular carcinoma cancer regions on WSIs.
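The abstract names MSAN-CNN but not its exact architecture; the hedged PyTorch sketch below shows one standard way such a model could weight pooled features from several input scales with a learned attention gate before classification. All module names and shapes are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionFusion(nn.Module):
    def __init__(self, dim: int, num_classes: int = 2):
        super().__init__()
        self.attn = nn.Linear(dim, 1)              # one score per scale
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_scales, dim) pooled backbone features, one per
        # magnification/scale of the same patch.
        scores = self.attn(feats).squeeze(-1)              # (batch, num_scales)
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)
        fused = (weights * feats).sum(dim=1)               # attention-weighted sum
        return self.classifier(fused)                      # normal/tumor logits
```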


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Attention , Humans , Neural Networks, Computer , Pathologists
5.
Quant Imaging Med Surg ; 11(12): 4709-4720, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34888183

ABSTRACT

BACKGROUND: In radiotherapy of nasopharyngeal carcinoma (NPC), magnetic resonance imaging (MRI) is widely used to delineate the tumor area more accurately. While MRI offers higher soft-tissue contrast, patient positioning and couch correction based on bony image fusion with computed tomography (CT) are also necessary. There is thus an urgent need for high image contrast between bone and soft tissue to facilitate target delineation and patient positioning in NPC radiotherapy. In this paper, our aim is to develop a novel image conversion between the CT and MRI modalities to obtain clear bone and soft-tissue images simultaneously, here called bone-enhanced MRI (BeMRI). METHODS: Thirty-five patients were retrospectively selected for this study. All patients underwent clinical CT simulation and 1.5T MRI within the same week at Shenzhen Second People's Hospital. To synthesize BeMRI, two deep learning networks, U-Net and CycleGAN, were constructed to transform MRI to synthetic CT (sCT) images. Each network used 28 patients' images as the training set, while the remaining 7 patients were used as the test set (~1/5 of the dataset). The bone structure was then extracted from the sCT by a threshold-based method and embedded in the corresponding part of the MRI image to generate the BeMRI image. To evaluate the performance of these networks, the following metrics were applied: mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). RESULTS: In our experiments, both deep learning models achieved good performance and effectively extracted bone structure from MRI. Specifically, the supervised U-Net model achieved the best results, with the lowest overall average MAE of 125.55 (P<0.05), the highest SSIM of 0.89, and the highest PSNR of 23.84. These results indicate that BeMRI can display bone structure at higher contrast than conventional MRI. CONCLUSIONS: A new image modality, BeMRI, a composite of CT and MRI, was proposed. With high image contrast for both bone structure and soft tissues, BeMRI should facilitate tumor localization and patient positioning and eliminate the need to frequently switch between separate MRI and CT images during NPC radiotherapy.
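A minimal numpy sketch of the compositing step described above follows: bone is extracted from the synthetic CT by thresholding and pasted into the registered MRI. The 200 HU threshold, the intensity remapping, and the assumption that the sCT and MRI are voxel-aligned are all illustrative choices, not details from the paper.

```python
import numpy as np

def compose_bemri(mri: np.ndarray, sct_hu: np.ndarray,
                  bone_threshold_hu: float = 200.0) -> np.ndarray:
    """Replace MRI voxels at bone locations (thresholded from the sCT) with
    bright intensities so bone stands out against soft tissue."""
    bone_mask = sct_hu > bone_threshold_hu
    bemri = mri.astype(np.float32).copy()
    bone_vals = sct_hu[bone_mask]
    if bone_vals.size:
        # Map bone HU into the upper range of the MRI intensity scale
        # (this remapping is an assumption for visualization).
        lo, hi = bone_vals.min(), bone_vals.max()
        scale = (bone_vals - lo) / max(hi - lo, 1e-6)
        bemri[bone_mask] = float(mri.max()) * (0.8 + 0.2 * scale)
    return bemri
```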

6.
Comput Methods Programs Biomed ; 200: 105818, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33218708

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic functional region annotation of the liver should be very useful for preoperative planning of liver resection in the clinical domain. However, many traditional computer-aided annotation methods based on anatomical landmarks or the vascular tree often fail to extract accurate liver segments; moreover, they are difficult to fully automate and thus remain time-consuming. To address these issues, this study aims to develop a fully automated approach to functional region annotation of the liver using 2.5D class-aware deep neural networks with spatial adaptation. METHODS: A total of 112 CT scans were fed into our 2.5D class-aware deep neural network with spatial adaptation for automatic functional region annotation of the liver. The proposed model was built upon the ResU-net architecture: it adaptively selected a stack of adjacent CT slices as input and generated masks corresponding to the center slice, automatically annotating liver functional regions from abdominal CT images. Furthermore, to reduce class-level ambiguity between different slices, anatomy class-specific information was used. RESULTS: The final algorithm showed large overlap with manual reference segmentation, achieving a high average Dice similarity coefficient of 0.882 over the hepatic segments. The entire calculation was fast (~5 s) compared to manual annotation (~2.5 hours). CONCLUSION: The proposed models offer a feasible solution for fully automated functional region annotation of the liver from CT images. The experimental results demonstrated that the proposed method attains a high average Dice score with low computation time, which should improve liver surgical resection planning through precise, fully automated segmentation.
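To make the 2.5D input construction concrete, here is a small sketch of the idea implied above: a stack of adjacent CT slices becomes the channel dimension and the network predicts the mask of the center slice only. The context size of two slices per side and the edge handling are assumptions, not values from the paper.

```python
import numpy as np

def make_25d_input(volume: np.ndarray, center: int,
                   context: int = 2) -> np.ndarray:
    """volume: (depth, H, W) CT volume. Returns a (2*context+1, H, W) stack
    centered on `center`, with edge slices repeated near volume boundaries."""
    depth = volume.shape[0]
    idx = np.clip(np.arange(center - context, center + context + 1),
                  0, depth - 1)
    return volume[idx]  # channels-first input for the segmentation network
```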


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Liver/diagnostic imaging , Tomography, X-Ray Computed
7.
Am J Pathol ; 190(8): 1691-1700, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32360568

ABSTRACT

The pathologic diagnosis of nasopharyngeal carcinoma (NPC) by different pathologists is often inefficient and inconsistent. We therefore introduced a deep learning algorithm into this process and compared the performance of the model with that of three pathologists with different levels of experience to demonstrate its clinical value. In this retrospective study, a total of 1970 whole-slide images of 731 cases were collected and divided into training, validation, and testing sets. Inception-v3, a state-of-the-art convolutional neural network, was trained to classify images into three categories: chronic nasopharyngeal inflammation, lymphoid hyperplasia, and NPC. The mean area under the curve (AUC) of the deep learning model was 0.936 on the testing set, and its AUCs for the three image categories were 0.905, 0.972, and 0.930, respectively. In comparison with the three pathologists, the model outperformed the junior and intermediate pathologists and performed only slightly worse than the senior pathologist in terms of accuracy, specificity, sensitivity, AUC, and consistency. To our knowledge, this is the first study on the application of deep learning to NPC pathologic diagnosis. In clinical practice, the deep learning model could potentially assist pathologists by providing a second opinion on their NPC diagnoses.
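For readers who want to reproduce the general setup, a hedged torchvision sketch of a three-class Inception-v3 classifier follows. The abstract gives no training details (optimizer, tiling, augmentation), so only the model head replacement is shown; the weights API assumes torchvision >= 0.13.

```python
import torch.nn as nn
from torchvision import models

def build_npc_classifier(num_classes: int = 3) -> nn.Module:
    # Inception-v3 expects 299x299 inputs and has an auxiliary head during
    # training; both heads are replaced for the three NPC categories.
    model = models.inception_v3(
        weights=models.Inception_V3_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features,
                                   num_classes)
    return model
```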


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted , Nasopharyngeal Carcinoma/diagnosis , Nasopharyngeal Neoplasms/diagnosis , Databases, Factual , Humans , Nasopharyngeal Carcinoma/pathology , Nasopharyngeal Neoplasms/pathology , Neural Networks, Computer , Reproducibility of Results , Sensitivity and Specificity