Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-37021898

ABSTRACT

Precise classification of histopathological images is crucial for computer-aided diagnosis in clinical practice. Magnification-based learning networks have attracted considerable attention for their ability to improve performance in histopathological classification. However, fusing pyramids of histopathological images at different magnifications remains under-explored. In this paper, we propose a novel deep multi-magnification similarity learning (DMSL) approach that aids the interpretation of multi-magnification learning frameworks and makes it easy to visualize feature representations from low dimensions (e.g., cell level) to high dimensions (e.g., tissue level), thereby overcoming the difficulty of understanding cross-magnification information propagation. It uses a designed similarity cross-entropy loss function to simultaneously learn the similarity of information across magnifications. To verify the effectiveness of DMSL, we designed experiments with different network backbones and different magnification combinations, and investigated its interpretability through visualization. Our experiments were performed on two histopathological datasets: a clinical nasopharyngeal carcinoma dataset and the public BCSS2021 breast cancer dataset. The results show that our method achieved outstanding classification performance, with a higher area under the curve, accuracy, and F-score than comparable methods. Moreover, the reasons behind the effectiveness of multi-magnification learning are discussed.
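The abstract does not give the loss formula. One plausible reading of a "similarity cross-entropy loss" between magnifications is the cross-entropy between pairwise feature-similarity distributions computed by two magnification branches. The NumPy sketch below illustrates that reading only, with hypothetical function and variable names; it is not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def similarity_cross_entropy(feats_low, feats_high):
    """Cross-entropy between the pairwise-similarity distributions of two
    magnification branches (each an n_patches x dim feature matrix)."""
    p = softmax(feats_low @ feats_low.T)    # similarity distribution, low magnification
    q = softmax(feats_high @ feats_high.T)  # similarity distribution, high magnification
    # Row-wise cross-entropy H(p, q), averaged over patches.
    return float(-(p * np.log(q + 1e-12)).sum(axis=1).mean())
```

Minimizing such a term would push one branch to reproduce the similarity structure of the other, which matches the abstract's goal of aligning information across magnifications.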

2.
IEEE J Biomed Health Inform ; 27(7): 3258-3269, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099476

ABSTRACT

Anatomical resection (AR) based on anatomical sub-regions is a promising method of precise surgical resection that has been proven to improve long-term survival by reducing local recurrence. Fine-grained segmentation of an organ's surgical anatomy (FGS-OSA), i.e., segmenting an organ into multiple anatomic regions, is critical for localizing tumors in AR surgical planning. However, obtaining FGS-OSA results automatically with computer-aided methods faces the challenges of appearance ambiguities among sub-regions (i.e., inter-sub-region appearance ambiguities) caused by similar HU distributions in different sub-regions of an organ's surgical anatomy, invisible boundaries, and similarities between anatomical landmarks and other anatomical information. In this paper, we propose a novel fine-grained segmentation framework termed the "anatomic relation reasoning graph convolutional network" (ARR-GCN), which incorporates prior anatomic relations into framework learning. In ARR-GCN, a graph is constructed over the sub-regions to model the classes and their relations. Further, a sub-region center module is designed to obtain discriminative initial node representations in graph space. Most importantly, to explicitly learn the anatomic relations, the prior anatomic relations among the sub-regions are encoded as an adjacency matrix and embedded into the intermediate node representations to guide framework learning. ARR-GCN was validated on two FGS-OSA tasks: i) liver segment segmentation and ii) lung lobe segmentation. On both tasks, ARR-GCN outperformed other state-of-the-art segmentation methods and showed promising ability to suppress ambiguities among sub-regions.
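The abstract says prior anatomic relations are encoded as an adjacency matrix that guides node updates, but gives no equations. A minimal, generic graph-convolution step of that kind (NumPy; all names hypothetical, not the ARR-GCN code) might look like:

```python
import numpy as np

def gcn_layer(node_feats, adjacency, weight):
    """One graph-convolution step: each sub-region node aggregates features
    from the neighbours named by the prior anatomic-relation adjacency matrix."""
    a_hat = adjacency + np.eye(adjacency.shape[0])  # add self-loops
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)  # row-normalise
    return np.maximum(a_norm @ node_feats @ weight, 0.0)  # propagate + ReLU
```

Because `a_norm` zeroes out non-adjacent pairs, information only flows between sub-regions the prior declares related, which is one simple way an adjacency prior can "guide framework learning."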


Subject(s)
Liver , Humans , Liver/anatomy & histology , Liver/diagnostic imaging , Liver/surgery , Neoplasms
3.
Am J Pathol ; 192(3): 553-563, 2022 03.
Article in English | MEDLINE | ID: mdl-34896390

ABSTRACT

Visual inspection of hepatocellular carcinoma regions in whole-slide images (WSIs) by experienced pathologists is a challenging, labor-intensive, and time-consuming task because of the large scale and high resolution of WSIs. Therefore, a weakly supervised framework based on a multiscale attention convolutional neural network (MSAN-CNN) was introduced into this process. Herein, patch-based images with image-level normal/tumor annotations (rather than pixel-level annotations) were fed into a classification neural network. To further improve cancer-region detection, multiscale attention was introduced into the classification network. A total of 100 cases were obtained from The Cancer Genome Atlas and divided into training (70 cases) and testing (30 cases) sets that were fed into the MSAN-CNN framework. The experimental results showed that this framework significantly outperforms the single-scale detection method in terms of area under the curve, accuracy, sensitivity, and specificity. When compared with diagnoses made by three pathologists, MSAN-CNN performed better than a junior- and an intermediate-level pathologist, and slightly worse than a senior pathologist. Furthermore, MSAN-CNN detected cancer regions much faster than the pathologists. Therefore, a weakly supervised framework based on MSAN-CNN has great potential to assist pathologists in the fast and accurate detection of hepatocellular carcinoma regions in WSIs.
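The abstract does not specify how the multiscale attention combines scales. A common pattern, sketched below in NumPy with hypothetical names (a stand-in, not the MSAN-CNN code), is to score each scale's feature map and fuse them with softmax-normalised attention weights; here the scores are simply mean activations rather than learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multiscale_attention(scale_feats):
    """Fuse a list of same-shaped per-scale feature maps with attention
    weights; real networks would learn the scores instead of using means."""
    scores = np.array([f.mean() for f in scale_feats])  # one score per scale
    weights = softmax(scores)                           # normalise to sum to 1
    fused = sum(w * f for w, f in zip(weights, scale_feats))
    return fused, weights
```

The attended fusion lets the classifier emphasise whichever magnification is most informative for a given patch, which is the intuition behind multiscale attention.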


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Attention , Humans , Neural Networks, Computer , Pathologists
4.
Comput Methods Programs Biomed ; 200: 105818, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33218708

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic functional region annotation of the liver would be very useful for preoperative planning of liver resection in the clinical domain. However, many traditional computer-aided annotation methods based on anatomical landmarks or the vascular tree often fail to extract accurate liver segments; furthermore, these methods are difficult to fully automate and thus remain time-consuming. To address these issues, this study aims to develop a fully automated deep-learning approach for functional region annotation of the liver based on 2.5D class-aware deep neural networks with spatial adaptation. METHODS: A total of 112 CT scans were fed into our 2.5D class-aware deep neural network with spatial adaptation for automatic functional region annotation of the liver. The proposed model was built upon the ResU-net architecture: it adaptively selects a stack of adjacent CT slices as input, generates masks corresponding to the center slice, and thereby automatically annotates the liver functional regions in abdominal CT images. Furthermore, anatomy class-specific information was used to minimize class-level ambiguity between different slices. RESULTS: The automatic annotations showed large overlap with the manual reference segmentations. The Dice similarity coefficients of the hepatic segments were high, with an average Dice score of 0.882. The entire calculation was fast (~5 s) compared with manual annotation (~2.5 h). CONCLUSION: The proposed models offer a feasible solution for fully automated functional region annotation of the liver from CT images. The experimental results demonstrate that the proposed method attains a high average Dice score with low computational time. Therefore, this work should allow for improved liver surgical resection planning through precise segmentation and a simple, fully automated method.
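The "2.5D" input described above, a stack of adjacent slices predicting a mask for the center slice, can be sketched in a few lines. The helper below (NumPy; the name and the boundary-clamping policy are assumptions, not taken from the paper) builds such a stack from a CT volume.

```python
import numpy as np

def slices_25d(volume, center, k=3):
    """Return a stack of k adjacent axial slices centred on `center`,
    clamping indices at the volume boundary (a common 2.5D input scheme)."""
    half = k // 2
    idx = [min(max(center + off, 0), volume.shape[0] - 1)
           for off in range(-half, half + 1)]
    return volume[idx]  # shape (k, H, W); network predicts mask of volume[center]
```

Feeding the network k neighbouring slices gives it local 3D context at roughly 2D cost, which is the usual motivation for 2.5D designs.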


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Liver/diagnostic imaging , Tomography, X-Ray Computed