Results 1 - 3 of 3
1.
Multimed Tools Appl ; 81(17): 24265-24300, 2022.
Article in English | MEDLINE | ID: mdl-35342326

ABSTRACT

Cervical cell classification has important clinical significance in early-stage cervical cancer screening. However, public cervical cancer smear cell datasets are scarce, their class samples are unbalanced, image quality is uneven, and CNN-based classification models tend to overfit. To address these problems, we propose a cervical cell image generation model based on taming transformers (CCG-taming transformers) to provide high-quality cervical cancer datasets with sufficient samples and balanced classes. We improve the encoder structure by introducing an SE-block and a MultiRes-block to strengthen feature extraction from cervical cancer cell images; we introduce Layer Normalization to standardize the data, which facilitates the subsequent non-linear processing by the ReLU activation function in the feed-forward layers; and we introduce SMOTE-Tomek Links to balance the number of samples and class weights in the source datasets. We use Tokens-to-Token Vision Transformers (T2T-ViT) combined with transfer learning to classify the cervical cancer smear cell images and improve classification performance. Classification experiments with the proposed model are performed on three public cervical cancer datasets; the classification accuracy on the liquid-based cytology Pap smear dataset (4-class), SIPAKMeD (5-class), and Herlev (7-class) is 98.79%, 99.58%, and 99.88%, respectively. The quality of the images generated on these three datasets is very close to that of the source data: the final averaged inception score (IS), Fréchet inception distance (FID), Recall, and Precision are 3.75, 0.71, 0.32, and 0.65, respectively.
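The abstract does not give the exact configuration of the SE-block used in the encoder; as a rough illustration only, a minimal numpy sketch of squeeze-and-excitation channel recalibration, with hypothetical weight shapes and a made-up reduction ratio, might look like this:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel gating (illustrative sketch).

    feature_map: (C, H, W); w1: (C // r, C); w2: (C, C // r).
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    z = feature_map.mean(axis=(1, 2))                 # (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1).
    s = np.maximum(w1 @ z, 0.0)                       # (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))            # (C,)
    # Recalibrate: scale each channel map by its learned gate.
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))    # 8 channels, 4x4 spatial map
w1 = rng.standard_normal((2, 8)) * 0.1  # reduction ratio r = 4 (assumed)
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_block(x, w1, w2)
```

Since the gate lies in (0, 1), the block can only attenuate channels, never amplify them; in a trained network the gates learn which channels carry the most discriminative cell features.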
Our method improves the accuracy of cervical cancer smear cell classification, provides more cervical cell sample images for cervical cancer-related research, and helps gynecologists judge and diagnose different types of cervical cancer cells and analyze cells at different, hard-to-distinguish stages. To our knowledge, this paper applies the transformer to the generation and recognition of cervical cancer cell images for the first time. Supplementary Information: The online version contains supplementary material available at 10.1007/s11042-022-12670-0.
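The class-balancing step described above (SMOTE-Tomek Links) combines SMOTE oversampling with Tomek-link cleaning; the oversampling half interpolates synthetic minority samples between existing ones. A minimal sketch of that interpolation, with made-up toy data, might look like this:

```python
import numpy as np

def smote_sample(minority, rng):
    """Generate one synthetic minority sample by interpolating between
    a random minority point and its nearest minority neighbour."""
    i = rng.integers(len(minority))
    x = minority[i]
    # Nearest neighbour among the remaining minority points.
    others = np.delete(minority, i, axis=0)
    nn = others[np.argmin(((others - x) ** 2).sum(axis=1))]
    # Synthetic point on the segment between x and its neighbour.
    lam = rng.random()
    return x + lam * (nn - x)

rng = np.random.default_rng(1)
minority = rng.standard_normal((10, 2))   # toy 2-D minority class
synthetic = smote_sample(minority, rng)
```

The Tomek-link half (not shown) would then remove majority/minority nearest-neighbour pairs to clean the class boundary; in practice a library implementation such as imbalanced-learn's combined sampler would be used rather than this sketch.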

2.
Med Biol Eng Comput ; 59(9): 1815-1832, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34304370

ABSTRACT

Melanoma is one of the most dangerous skin cancers. Current melanoma segmentation is mainly based on FCNs (fully convolutional networks) and U-Net. Nevertheless, these two kinds of neural networks are prone to parameter redundancy, and their gradients vanish during backpropagation as the networks get deeper, which reduces the Jaccard index of the skin lesion segmentation model. To solve these problems and improve the survival rate of melanoma patients, an improved skin lesion segmentation model based on deformable 3D convolution and ResU-NeXt++ (D3DC-ResU-NeXt++) is proposed in this paper. The new modules in D3DC-ResU-NeXt++ can replace ordinary modules in existing 2D convolutional neural networks (CNNs) and can be trained efficiently through standard backpropagation with high segmentation accuracy. In particular, we introduce a new data preprocessing method with dilation, crop operation, resizing, and hair removal (DCRH), which improves the Jaccard index of skin lesion segmentation. Because rectified Adam (RAdam) does not easily fall into local optima and converges quickly during segmentation model training, we also adopt RAdam as the training optimizer. Experiments show that our model performs excellently on the ISIC2018 Task I dataset, achieving a Jaccard index of 86.84%. The proposed method improves the Jaccard index of skin lesion segmentation and can also assist dermatologists in determining and diagnosing types of skin lesions and the boundary between lesions and normal skin, so as to improve the survival rate of skin cancer patients. Graphical Abstract: Overview of the proposed model, an improved skin lesion segmentation model based on deformable 3D convolution and ResU-NeXt++ (D3DC-ResU-NeXt++).
D3DC-ResU-NeXt++ has strong spatial geometry processing capabilities and is used to segment the skin lesion images; DCRH and transfer learning are used to preprocess the dataset and initialize D3DC-ResU-NeXt++, respectively, which highlights the difference between the lesion area and normal skin and enhances the segmentation efficiency and robustness of the network; RAdam is used to speed up the convergence of the network and improve the efficiency of segmentation.
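The Jaccard index used as the evaluation metric above is simply intersection over union of the predicted and ground-truth masks. A minimal sketch, with masks represented as sets of pixel coordinates for brevity:

```python
def jaccard_index(pred, target):
    """Intersection over union of two binary masks,
    each given as a set of (row, col) pixel coordinates."""
    union = len(pred | target)
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return len(pred & target) / union

pred = {(0, 0), (0, 1), (1, 0)}
target = {(0, 1), (1, 0), (1, 1)}
score = jaccard_index(pred, target)  # 2 shared / 4 total = 0.5
```

On full-resolution masks this is normally computed with array operations (or a library routine such as scikit-learn's `jaccard_score`) rather than coordinate sets, but the quantity is the same.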


Subject(s)
Melanoma , Skin Neoplasms , Algorithms , Dermoscopy , Humans , Image Processing, Computer-Assisted , Melanoma/diagnostic imaging , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging
3.
Med Biol Eng Comput ; 58(6): 1251-1264, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32221797

ABSTRACT

In medicine, white blood cells (WBCs) play an important role in the human immune system. Different types of WBC abnormalities are related to different diseases, so the total count and classification of WBCs are critical for clinical diagnosis and therapy. The traditional approach to white blood cell classification is to segment the cells, extract features, and then classify them; such a method depends on good segmentation, and its accuracy is not high. Moreover, insufficient data or unbalanced samples can lower the classification accuracy of deep learning models in medical diagnosis. To solve these problems, this paper proposes a new blood cell image classification framework based on a deep convolutional generative adversarial network (DC-GAN) and a residual neural network (ResNet). In particular, we introduce a new loss function that improves the discriminative power of the deeply learned features. Experiments show that our model performs well on the classification of WBC images, reaching an accuracy of 91.7%. Graphical Abstract: Overview of the proposed method. We use the DC-GAN to generate new samples that serve as supplementary input to a ResNet; transfer learning is used to initialize the parameters of the network; and the outputs of the DC-GAN together with these parameters feed the final classification network. In particular, we introduce a modified loss function for classification that increases inter-class variations and decreases intra-class differences.
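The abstract does not spell out the modified loss; a standard formulation with the stated goal of shrinking intra-class differences is a center-loss term added to the softmax loss, penalizing each feature's distance to its class centre. A sketch with made-up features and centres (not the paper's actual loss):

```python
import numpy as np

def center_loss(features, labels, centers):
    """0.5 * mean squared distance from each feature vector to the
    centre of its assigned class (intra-class compactness penalty)."""
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

feats = np.array([[1.0, 0.0],
                  [0.9, 0.1],
                  [0.0, 1.0]])
labels = np.array([0, 0, 1])
centers = np.array([[1.0, 0.0],   # hypothetical class-0 centre
                    [0.0, 1.0]])  # hypothetical class-1 centre
loss = center_loss(feats, labels, centers)
```

In training, this term is weighted and added to the usual cross-entropy loss, and the class centres are updated alongside the network parameters; cross-entropy keeps classes apart while the centre term pulls each class's features together.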


Subject(s)
Image Processing, Computer-Assisted/methods , Leukocytes/cytology , Blood Cells/cytology , Deep Learning , Humans , Neural Networks, Computer