Results 1 - 8 of 8
1.
Ultrasound Med Biol ; 50(4): 509-519, 2024 04.
Article in English | MEDLINE | ID: mdl-38267314

ABSTRACT

OBJECTIVE: The main objective of this study was to build a rich, high-quality thyroid ultrasound image database (TUD) for computer-aided diagnosis (CAD) systems to support accurate diagnosis and prognostic modeling of thyroid disorders. Because most raw thyroid ultrasound images contain artificial markers whose strong prior location information seriously compromises the robustness of CAD systems, we propose a marker mask inpainting (MMI) method to erase the markers and improve image quality. METHODS: First, a set of thyroid ultrasound images was collected from the General Hospital of the Northern Theater Command. Then, two modules were designed in MMI: a marker detection (MD) module and a marker erasure (ME) module. The MD module detects all markers in an image and stores them in a binary mask. According to the binary mask, the ME module erases the markers and generates an unmarked image. Finally, a new TUD was built from the marked and unmarked images. The TUD was carefully annotated and statistically analyzed by professional physicians to ensure accuracy and consistency. Moreover, several normal thyroid gland images and ancillary information on benign and malignant nodules are provided. RESULTS: Several typical segmentation models were evaluated on the TUD. The experimental results revealed that the TUD can facilitate the development of more accurate CAD systems for the analysis of thyroid nodule-related lesions in ultrasound images. The effectiveness of the MMI method was confirmed in quantitative experiments. CONCLUSION: The rich, high-quality TUD promotes the development of more effective diagnostic and treatment methods for thyroid diseases. Furthermore, MMI erases artificial markers and generates unmarked images, improving the quality of thyroid ultrasound images. The TUD database is available at https://github.com/NEU-LX/TUD-Datebase.
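Illustrative note: the abstract does not describe the MD and ME modules at the implementation level. The following is a minimal sketch of the mask-based erasure idea using OpenCV inpainting; the threshold-based detector, the file names, and all parameter values are assumptions, not the authors' method.

```python
import cv2
import numpy as np

def detect_markers_by_threshold(image_gray, thresh=250):
    """Hypothetical stand-in for the MD module: flag near-saturated pixels
    (typical of caliper crosses and annotation text) as a binary mask."""
    _, mask = cv2.threshold(image_gray, thresh, 255, cv2.THRESH_BINARY)
    # Dilate slightly so the mask fully covers each marker's footprint.
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)
    return mask

def erase_markers(image_gray, mask, radius=5):
    """Stand-in for the ME module: fill the masked pixels from their
    surroundings with OpenCV's Telea inpainting."""
    return cv2.inpaint(image_gray, mask, radius, cv2.INPAINT_TELEA)

# Usage: produce an unmarked image from a marked ultrasound frame (path is illustrative).
marked = cv2.imread("thyroid_frame.png", cv2.IMREAD_GRAYSCALE)
mask = detect_markers_by_threshold(marked)
unmarked = erase_markers(marked, mask)
cv2.imwrite("thyroid_frame_unmarked.png", unmarked)
```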


Subject(s)
Thyroid Nodule , Humans , Thyroid Nodule/pathology , Diagnosis, Computer-Assisted/methods , Ultrasonography/methods , Research
2.
Bioengineering (Basel) ; 10(8)2023 Aug 12.
Article in English | MEDLINE | ID: mdl-37627842

ABSTRACT

Colorectal cancer (CRC) is a prevalent gastrointestinal tumour with high incidence and mortality rates. Early screening for CRC can improve cure rates and reduce mortality. Recently, deep convolutional neural network (CNN)-based pathological image diagnosis has been intensively studied to meet the challenge of time-consuming and labour-intensive manual analysis of high-resolution whole slide images (WSIs). Despite these achievements, deep CNN-based methods still suffer from limitations, the fundamental one being that they cannot capture global features. To address this issue, we propose a hybrid deep learning framework (RGSB-UNet) for automatic tumour segmentation in WSIs. The framework adopts a UNet architecture that consists of a newly designed residual ghost block with switchable normalization (RGS) and a bottleneck transformer (BoT) for downsampling to extract refined features, and transposed convolution and 1 × 1 convolution with ReLU for upsampling to restore the feature map to the original image resolution. The proposed framework combines the spatial-local correlation of CNNs with the long-distance feature dependencies of the BoT, ensuring its capacity to extract refined features and its robustness to varying batch sizes. Additionally, we use a class-wise dice loss (CDL) function to train the segmentation network. The proposed network achieves state-of-the-art segmentation performance under small batch sizes. Experimental results on the DigestPath2019 and GlaS datasets demonstrate that the proposed model produces superior evaluation scores and state-of-the-art segmentation results.
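Illustrative note: the exact class-wise dice loss (CDL) formulation is not given in the abstract. Below is a minimal PyTorch sketch of a per-class soft dice loss averaged over classes, which is one common reading of "class-wise dice"; the tensor shapes and smoothing constant are assumptions.

```python
import torch
import torch.nn.functional as F

def class_wise_dice_loss(logits, target, eps=1e-6):
    """Per-class soft dice loss averaged over classes.

    logits: (N, C, H, W) raw network outputs.
    target: (N, H, W) integer class labels in [0, C).
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)                 # (N, C, H, W)
    onehot = F.one_hot(target, num_classes)              # (N, H, W, C)
    onehot = onehot.permute(0, 3, 1, 2).float()          # (N, C, H, W)
    dims = (0, 2, 3)                                     # sum over batch and space
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()

# Usage with random tensors standing in for a WSI patch batch.
logits = torch.randn(2, 3, 64, 64, requires_grad=True)
labels = torch.randint(0, 3, (2, 64, 64))
loss = class_wise_dice_loss(logits, labels)
loss.backward()  # integrates with standard PyTorch training loops
```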

3.
Bioengineering (Basel) ; 10(3)2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36978784

ABSTRACT

Nuclei segmentation and classification are two basic and essential tasks in the computer-aided diagnosis of digital pathology images, and deep-learning-based methods have achieved significant success on them. Unfortunately, most existing studies accomplish the two tasks by splicing two related neural networks together, resulting in redundant computation and an unnecessarily large network. This paper therefore proposes a lightweight deep learning framework (GSN-HVNET) with an encoder-decoder structure for simultaneous segmentation and classification of nuclei. The decoder consists of three branches that output, respectively, the semantic segmentation of nuclei, the horizontal and vertical (HV) distances of nuclei pixels to their mass centers, and the class of each nucleus. The instance segmentation result is obtained by combining the outputs of the first and second branches. To reduce computational cost and improve network stability under small batch sizes, we propose two newly designed blocks, Residual-Ghost-SN (RGS) and Dense-Ghost-SN (DGS). Furthermore, considering practical usage in pathological diagnosis, we redefine the classification principle of the CoNSeP dataset. Experimental results demonstrate that the proposed model outperforms other state-of-the-art models in segmentation and classification accuracy by a significant margin while maintaining high computational efficiency.
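Illustrative note: the internals of the Residual-Ghost-SN (RGS) block are not specified in the abstract. The sketch below shows a generic residual ghost-convolution block in PyTorch; GroupNorm is used here only as a batch-size-robust stand-in for switchable normalization, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution: a primary convolution generates half of the output
    channels and a cheap depthwise convolution derives the rest, roughly
    halving the computation of a plain convolution."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        primary_ch = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.GroupNorm(8, primary_ch),   # stand-in for switchable normalization
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, out_ch - primary_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.GroupNorm(8, out_ch - primary_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

class ResidualGhostBlock(nn.Module):
    """Two stacked ghost convolutions with an identity shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(GhostConv(channels, channels),
                                  GhostConv(channels, channels))

    def forward(self, x):
        return x + self.body(x)

# Usage: 32-channel feature map from a hypothetical encoder stage.
block = ResidualGhostBlock(32)
out = block(torch.randn(1, 32, 128, 128))
```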

4.
Comput Biol Med ; 157: 106736, 2023 05.
Article in English | MEDLINE | ID: mdl-36958238

ABSTRACT

BACKGROUND AND OBJECTIVE: Abundant labeled data drives model training toward better performance, but collecting sufficient labels remains challenging. To alleviate the pressure of label collection, semi-supervised learning merges unlabeled data into the training process. However, adding unlabeled data (e.g., data from different hospitals with different acquisition parameters) changes the original distribution. Such a distribution shift perturbs the training process and can lead to confirmation bias. In this paper, we study distribution shift and increase model robustness to it, with the goal of improving practical performance in semi-supervised semantic segmentation of medical images. METHODS: To alleviate the distribution shift, we introduce adversarial training into the co-training process. We simulate the perturbations caused by distribution shift with adversarial perturbations and apply them to the supervised training to improve its robustness against the shift. Benefiting from label guidance, supervised training does not collapse under adversarial attacks. For co-training, two sub-models are trained from two views (two disjoint subsets of the dataset) to extract different kinds of knowledge independently. Co-training outperforms a single model by integrating both views of knowledge and thereby avoiding confirmation bias. RESULTS: We conduct extensive experiments on challenging medical datasets. Experimental results show desirable improvements over state-of-the-art counterparts (Yu and Wang, 2019; Peng et al., 2020; Perone et al., 2019). We achieve a DSC score of 87.37% with only 20% of the labels on the ACDC dataset, almost the same as using 100% of the labels. On the SCGM dataset, which exhibits a larger distribution shift, we achieve a DSC score of 78.65% with 6.5% of the labels, surpassing Peng et al. (2020) by 10.30%. These results show superior robustness against distribution shift in medical scenarios. CONCLUSION: Empirical results show the effectiveness of our work for handling distribution shift in medical scenarios.
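Illustrative note: the abstract does not state which attack generates the adversarial perturbations. The sketch below shows one plausible FGSM-style hardening of the supervised step in PyTorch; the epsilon value, loss choice, and training-loop wiring are assumptions, and the two co-training views are omitted.

```python
import torch
import torch.nn.functional as F

def adversarial_supervised_step(model, images, labels, optimizer, epsilon=0.03):
    """One supervised step hardened with an FGSM-style perturbation.

    The perturbation stands in for the distribution shift introduced by
    unlabeled data from other sites; label guidance keeps this step stable.
    """
    # 1. Compute the gradient of the supervised loss w.r.t. the inputs.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]

    # 2. Build the adversarial example along the gradient sign direction.
    adv_images = (images + epsilon * grad.sign()).detach()

    # 3. Train on both the clean and the perturbed labeled batch.
    optimizer.zero_grad()
    total = F.cross_entropy(model(images.detach()), labels) + \
            F.cross_entropy(model(adv_images), labels)
    total.backward()
    optimizer.step()
    return total.item()
```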


Subject(s)
Hospitals , Semantics , Supervised Machine Learning , Image Processing, Computer-Assisted
5.
Bioengineering (Basel) ; 11(1)2023 Dec 23.
Article in English | MEDLINE | ID: mdl-38247893

ABSTRACT

Semantic segmentation of Signet Ring Cells (SRCs) plays a pivotal role in the diagnosis of SRC carcinoma from pathological images. Deep learning-based methods have demonstrated significant promise in computer-aided diagnosis over the past decade. However, many existing approaches rely heavily on stacking layers, leading to redundant computation and unnecessarily large neural networks. Moreover, the lack of available ground truth data for SRCs hampers the advancement of segmentation techniques for these cells. In response, this paper introduces an efficient and accurate deep learning framework (RGGC-UNet), a UNet with an encoder-decoder structure tailored to the semantic segmentation of SRCs. We design a novel encoder built from a residual ghost block with the proposed ghost coordinate attention. The use of the ghost block and ghost coordinate attention in the encoder effectively minimizes the computational overhead of the model. For practical application in pathological diagnosis, we have enriched the DigestPath 2019 dataset with fully annotated mask labels of SRCs. Experimental outcomes underscore that the proposed model significantly surpasses other leading-edge models in segmentation accuracy while ensuring computational efficiency.
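Illustrative note: the ghost coordinate attention design is not detailed in the abstract. The sketch below implements standard coordinate attention (direction-aware pooling along height and width) in PyTorch as the underlying idea; the reduction ratio and normalization choice are assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: pool features along H and W separately, mix them
    with a shared 1x1 convolution, then re-weight the input per direction."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.shared = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                       # (N, C, H, 1)
        pooled_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (N, C, W, 1)
        mixed = self.shared(torch.cat([pooled_h, pooled_w], dim=2))
        feat_h, feat_w = torch.split(mixed, [h, w], dim=2)
        a_h = torch.sigmoid(self.attn_h(feat_h))                     # (N, C, H, 1)
        a_w = torch.sigmoid(self.attn_w(feat_w.permute(0, 1, 3, 2))) # (N, C, 1, W)
        return x * a_h * a_w

# Usage on a hypothetical encoder feature map.
attn = CoordinateAttention(64)
out = attn(torch.randn(1, 64, 32, 32))
```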

6.
Comput Biol Med ; 150: 106173, 2022 11.
Article in English | MEDLINE | ID: mdl-36257278

ABSTRACT

Automatic polyp segmentation can help physicians to effectively locate polyps (i.e., regions of interest) in clinical practice by screening colonoscopy images with the assistance of neural networks (NNs). However, two significant bottlenecks hinder its effectiveness and fall short of physicians' expectations. (1) Polyps vary in scale, orientation, and illumination, making accurate segmentation difficult. (2) Current works built on the dominant encoder-decoder architecture tend to overlook appearance details (e.g., textures) of tiny polyps, degrading the accuracy of polyp differentiation. To alleviate these bottlenecks, we investigate a hybrid semantic network (HSNet) that combines the advantages of Transformers and convolutional neural networks (CNNs), aiming to improve polyp segmentation. HSNet contains a cross-semantic attention module (CSA), a hybrid semantic complementary module (HSC), and a multi-scale prediction module (MSP). Unlike previous works on segmenting polyps, we insert the CSA module, which fills the gap between low-level and high-level features via an interactive mechanism that exchanges two types of semantics from different NN attentions. With a dual-branch structure of Transformer and CNN, we design the HSC module to capture both long-range dependencies and local appearance details. In addition, the MSP module learns weights for fusing the stage-level prediction masks of the decoder. Experimentally, we compared our work with 10 state-of-the-art works, both recent and classical, showing improved accuracy (across 7 evaluation metrics) on 5 benchmark datasets: HSNet achieves 0.926/0.877 mDic/mIoU on Kvasir-SEG, 0.948/0.905 on ClinicDB, 0.810/0.735 on ColonDB, 0.808/0.74 on ETIS, and 0.903/0.839 on Endoscene. The proposed model is available at https://github.com/baiboat/HSNet.
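Illustrative note: the abstract says the MSP module learns weights for fusing stage-level prediction masks but does not give the formulation. The sketch below shows one simple reading in PyTorch, with softmax-normalized learnable scalars weighting upsampled stage masks; the number of stages and the resolutions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScalePredictionFusion(nn.Module):
    """Fuse per-stage prediction masks with learned softmax-normalized weights."""

    def __init__(self, num_stages):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_stages))  # learned during training

    def forward(self, stage_masks, out_size):
        # Upsample every stage-level mask to the output resolution.
        upsampled = [F.interpolate(m, size=out_size, mode="bilinear", align_corners=False)
                     for m in stage_masks]
        w = torch.softmax(self.weights, dim=0)
        return sum(wi * mi for wi, mi in zip(w, upsampled))

# Usage: three decoder stages producing 1-channel polyp masks at different scales.
fusion = MultiScalePredictionFusion(num_stages=3)
masks = [torch.randn(1, 1, s, s) for s in (44, 88, 176)]
fused = fusion(masks, out_size=(352, 352))   # (1, 1, 352, 352)
```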


Subject(s)
Benchmarking , Semantic Web , Colonoscopy , Learning , Neural Networks, Computer , Image Processing, Computer-Assisted
7.
Comput Biol Med ; 149: 106034, 2022 10.
Article in English | MEDLINE | ID: mdl-36058068

ABSTRACT

In medical scenarios, obtaining pixel-level annotations for medical images is expensive and time-consuming, despite their importance for automating segmentation tasks. Because of the scarcity of labels in the training phase, semi-supervised methods are widely applied to various medical tasks. To better utilize the unlabeled data, several works have explored uncertainty estimation and shown considerable success. Despite their impressive performance, we believe the underlying information in the unlabeled data remains largely unexplored. Meanwhile, there is an extreme foreground-background class imbalance during the training phase of semantic segmentation, which may cause a vast number of easily classified samples to overwhelm the loss and lead to model collapse. In this paper, we propose an uncertainty teacher with dense focal loss, a method based on Deep Co-Training that simultaneously takes good advantage of unlabeled data and addresses the class imbalance problem. On one hand, the uncertainty teacher framework better utilizes the unlabeled data by regularizing uncertainty in the right direction, with the uncertainty estimated by Monte Carlo sampling. On the other hand, the dense focal loss helps solve the class imbalance between different classes of samples in medical image segmentation and effectively converts the multivariate entropy into multiple binary entropies. We evaluate our method on three challenging public medical datasets, and the experimental results show desirable improvements over the state of the art.
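Illustrative note: the dense focal loss is described only as converting the multivariate entropy into multiple binary entropies. The sketch below is a one-vs-rest binary focal loss in PyTorch consistent with that description; the gamma and alpha values and the reduction scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def dense_focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """One-vs-rest binary focal loss summed over classes and averaged over pixels.

    logits: (N, C, H, W) raw outputs; target: (N, H, W) integer labels.
    Down-weights easy pixels so the abundant background cannot dominate.
    """
    num_classes = logits.shape[1]
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    probs = torch.sigmoid(logits)                      # per-class binary probabilities
    # p_t: probability assigned to the true binary label of each class map.
    p_t = probs * onehot + (1.0 - probs) * (1.0 - onehot)
    alpha_t = alpha * onehot + (1.0 - alpha) * (1.0 - onehot)
    bce = F.binary_cross_entropy_with_logits(logits, onehot, reduction="none")
    focal = alpha_t * (1.0 - p_t) ** gamma * bce
    return focal.sum(dim=1).mean()                     # sum over classes, mean over pixels

# Usage with a toy batch.
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
labels = torch.randint(0, 4, (2, 64, 64))
dense_focal_loss(logits, labels).backward()
```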


Subject(s)
Deep Learning , Neural Networks, Computer , Entropy , Image Processing, Computer-Assisted/methods , Uncertainty
8.
Comput Biol Med ; 149: 106051, 2022 10.
Article in English | MEDLINE | ID: mdl-36055155

ABSTRACT

Semi-supervised learning has made significant strides in the medical domain because it alleviates the heavy burden of collecting the abundant pixel-wise annotated data required for semantic segmentation tasks. Existing semi-supervised approaches enhance the ability to extract features from unlabeled data with prior knowledge obtained from the limited labeled data. However, because labeled data are scarce, the features extracted by the models under supervised learning are limited, and the quality of predictions for unlabeled data cannot be guaranteed; both issues impede consistency training. To this end, we propose a novel uncertainty-aware scheme that makes models learn from regions purposefully. Specifically, we employ Monte Carlo sampling to estimate an uncertainty map, which serves as a weight on the losses to force the models to focus on valuable regions according to the characteristics of supervised and unsupervised learning. Simultaneously, in the backward pass, we combine the unsupervised and supervised losses to accelerate the convergence of the network by enhancing the gradient flow between the different tasks. Quantitatively, we conduct extensive experiments on three challenging medical datasets; the results show desirable improvements over state-of-the-art counterparts.
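Illustrative note: the abstract describes Monte Carlo sampling for an uncertainty map used as a loss weight, without implementation details. The sketch below shows one way to do this in PyTorch, keeping dropout active across several forward passes and down-weighting uncertain pixels; the sample count and the weighting rule are assumptions.

```python
import torch
import torch.nn.functional as F

def mc_uncertainty_map(model, images, num_samples=8):
    """Predictive-entropy uncertainty from Monte Carlo sampling with dropout kept on."""
    model.train()  # keep dropout active so the forward passes differ
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(images), dim=1)
                             for _ in range(num_samples)]).mean(dim=0)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)   # (N, H, W)
    return entropy / torch.log(torch.tensor(float(probs.shape[1])))    # normalize to [0, 1]

def uncertainty_weighted_ce(logits, target, uncertainty):
    """Down-weight pixels the model is uncertain about (one possible weighting rule)."""
    per_pixel = F.cross_entropy(logits, target, reduction="none")      # (N, H, W)
    weight = 1.0 - uncertainty                                         # confidence
    return (weight * per_pixel).mean()
```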


Subject(s)
Image Processing, Computer-Assisted , Supervised Machine Learning , Image Processing, Computer-Assisted/methods , Uncertainty