Results 1 - 3 of 3
1.
Comput Biol Med; 150: 106095, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36179516

ABSTRACT

A clinically comparable multi-tasking deep U-Net-based model is demonstrated in this paper. It is intended to provide gland morphometric information and cancer grade classification as referential opinions for pathologists, in order to reduce human error. It offers enhanced feature learning that aids the extraction of potent multi-scale features, effective recovery of the semantic gap during feature concatenation, and successful interception of resolution-degradation and vanishing-gradient problems while performing moderate computations. The model integrates three novel structural components into the traditional U-Net architecture: Hybrid Convolutional Learning Units in the encoder and decoder, Attention Learning Units in the skip connections, and a Multi-Scalar Dilated Transitional Unit as the transitional layer. These units combine multi-level convolutional learning through conventional, atrous, residual, depth-wise, and point-wise convolutions, further incorporated with target-specific attention learning and an enlarged effective receptive field. Pre-processing techniques such as patch sampling, augmentation (color and morphological), and stain normalization are employed to improve generalizability. To build network invariance to digital variability, exhaustive experiments are conducted on three public datasets (Colorectal Adenocarcinoma Gland (CRAG), Gland Segmentation (GlaS) challenge, and Lung Colon-25000 (LC-25K)), and robustness is then verified on an in-house private dataset, Hospital Colon (HosC). For cancer classification, the proposed model achieved Accuracy (CRAG(95%), GlaS(97.5%), LC-25K(99.97%), HosC(99.45%)), Precision (CRAG(0.9678), GlaS(0.9768), LC-25K(1), HosC(1)), F1-score (CRAG(0.968), GlaS(0.977), LC-25K(0.9997), HosC(0.9965)), and Recall (CRAG(0.9677), GlaS(0.9767), LC-25K(0.9994), HosC(0.9931)). For gland detection and segmentation, it achieved competitive results of F1-score (CRAG(0.924), GlaS(Test A(0.949), Test B(0.918)), LC-25K(0.916), HosC(0.959)); Object-Dice Index (CRAG(0.959), GlaS(Test A(0.956), Test B(0.909)), LC-25K(0.929), HosC(0.922)); and Object-Hausdorff Distance (CRAG(90.47), GlaS(Test A(23.17), Test B(71.53)), LC-25K(96.28), HosC(85.45)). In addition, activation mappings testing the interpretability of the classification decision-making process are reported using Local Interpretable Model-Agnostic Explanations, Occlusion Sensitivity, and Gradient-Weighted Class Activation Mappings. This provides further evidence that the model learns, without any prerequisite annotations, patterns comparable to those pathologists consider relevant. These activation mapping visualizations were evaluated by proficient pathologists, who assigned class-path validation scores of CRAG(9.31), GlaS(9.25), LC-25K(9.05), and HosC(9.85). Furthermore, seg-path validation scores of GlaS(Test A(9.40), Test B(9.25)), CRAG(9.27), LC-25K(9.01), and HosC(9.19), given by multiple pathologists for the final segmented outcomes, substantiate the clinical relevance and suitability for facilitation at the clinical level. The proposed model will aid pathologists in formulating an accurate diagnosis by providing a referential opinion during the morphology assessment of histopathology images; it will reduce unintentional human error in cancer diagnosis and consequently enhance patient survival rate.
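The abstract names the convolution types mixed inside its Hybrid Convolutional Learning Units but does not spell out how they are combined. The following is a minimal PyTorch sketch of one plausible reading: parallel conventional, atrous, and depth-wise/point-wise (separable) branches fused with a residual shortcut. The class name HybridConvUnit, the three-branch layout, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HybridConvUnit(nn.Module):
    """Sketch of a hybrid convolutional learning unit: parallel
    conventional, atrous, and separable branches, fused and added
    to a residual shortcut (hypothetical reading of the abstract)."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.conv = nn.Sequential(                       # conventional 3x3 branch
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.atrous = nn.Sequential(                     # atrous (dilated) branch
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.separable = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depth-wise
            nn.Conv2d(in_ch, out_ch, 1),                          # point-wise
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)     # merge the three branches
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1)
                         if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        y = torch.cat([self.conv(x), self.atrous(x), self.separable(x)], dim=1)
        return self.fuse(y) + self.shortcut(x)           # residual fusion
```

In a U-Net-style encoder/decoder, units like this would stand in for the usual double 3x3 convolution blocks, with the attention units sitting on the skip connections.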


Subjects
Adenocarcinoma , Colorectal Neoplasms , Humans , Learning , Clinical Relevance , Image Processing, Computer-Assisted
2.
Comput Biol Med; 147: 105680, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35671654

ABSTRACT

A clinically comparable Convolutional Neural Network framework for automated classification of cancer grades and tissue structures in hematoxylin and eosin-stained colon histopathological images is proposed in this paper. It comprises Enhanced Convolutional Learning Modules (ECLMs), a multi-level Attention Learning Module (ALM), and Transitional Modules (TMs). The ECLMs perform a dual mechanism to extract multi-level discriminative spatial features and model cross-channel correlations with fewer computations and effective avoidance of vanishing-gradient issues. The ALM performs focus refinement through channel-wise elemental attention learning, which accentuates the discriminative channels of the feature maps belonging to important pathological regions, and scale-wise attention learning, which recalibrates feature maps at diverse scales. The TMs concatenate the outputs of these two modules, infuse deep multi-scalar features, and eliminate resolution-degradation issues. Varied pre-processing techniques are further employed to improve the generalizability of the proposed network. For performance evaluation, four diverse publicly available datasets (Gland Segmentation challenge (GlaS), Lung Colon (LC)-25000, Kather_Colorectal_Cancer_Texture_Images (Kather-5k), and NCT_HE_CRC_100K (NCT-100k)) and a private dataset, Hospital Colon (HosC), are used, which further aids in building network invariance against the digital variability that exists in real clinical data. Multiple pathologists were involved at every stage of the research, and their verification and approval were obtained for each step's outcome. For cancer grade classification, the proposed model achieves competitive results for GlaS (Accuracy(97.5%), Precision(97.67%), F1-Score(97.67%), and Recall(97.67%)), LC-25000 (Accuracy(100%), Precision(100%), F1-Score(100%), and Recall(100%)), and HosC (Accuracy(99.45%), Precision(100%), F1-Score(99.65%), and Recall(99.31%)); for tissue structure classification, it achieves results for Kather-5k (Accuracy(98.83%), Precision(98.86%), F1-Score(98.85%), and Recall(98.85%)) and NCT-100k (Accuracy(97.7%), Precision(97.69%), F1-Score(97.71%), and Recall(97.73%)). Furthermore, the reported activation mappings of Gradient-Weighted Class Activation Mappings (Grad-CAM), Occlusion Sensitivity, and Local Interpretable Model-Agnostic Explanations (LIME) evidence that the model can itself learn the patterns considered pertinent by pathologists, without any prerequisite for annotations. These visualization results were inspected by multiple expert pathologists, who provided validation scores of GlaS(9.251), LC-25000(9.045), Kather-5k(9.248), NCT-100k(9.262), and HosC(9.853). This model will provide a secondary referential diagnosis for pathologists, easing their load and aiding them in devising an accurate diagnosis and treatment plan.
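The ALM's channel-wise attention is described only at a high level. A squeeze-and-excitation-style block is the standard way to realize channel recalibration, so a minimal sketch in that style follows; the class name ChannelAttention and the reduction ratio are illustrative assumptions, and the paper's actual module may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (a stand-in
    for the ALM's channel-wise learning, not the paper's module):
    global pooling summarizes each channel, a bottleneck MLP scores
    it, and the feature map is rescaled channel by channel."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: B x C x 1 x 1
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid())                              # per-channel weight in (0, 1)

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # recalibrated feature map
```

Channels that the MLP scores near zero are suppressed, which is how such a block can accentuate feature maps belonging to the pathologically important regions.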


Subjects
Colorectal Neoplasms , Neural Networks, Computer , Attention , Humans
3.
Heliyon; 7(10): e08134, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34632133

ABSTRACT

The COVID-19 pandemic has posed a serious risk of contagion to humans, creating a need for reliable non-contact tests such as vocal correlates of COVID-19 infection. Thirty-six volunteers of Asian ethnicity, 16 infected subjects (8M & 8F) and 20 non-infected controls (10M & 10F), participated in this study by vocalizing the vowels /a/, /e/, /i/, /o/, /u/. Voice correlates of the 16 COVID-19-positive patients, during infection and after recovery, were compared with those of the 20 non-infected controls. Compared to non-infected controls, significantly higher values of energy intensity for /o/ (p = 0.048), formant F1 for /o/ (p = 0.014), and formant F3 for /u/ (p = 0.032) were observed in male patients, while higher values of Jitter (local, abs) for /o/ (p = 0.021) and Jitter (ppq5) for /a/ (p = 0.014) were observed in female patients. However, formant F2 for /u/ (p = 0.018) and mean pitch F0 for /e/, /i/, and /o/ (p = 0.033; 0.036; 0.047) decreased for female patients under infection. Compared to recovered conditions, HNR for /e/ (p = 0.014) was higher in male patients under infection, while Jitter (rap) for /a/ (p = 0.041), Jitter (ppq5) for /a/ (p = 0.032), Shimmer (local, dB) for /i/ (p = 0.024), Shimmer (apq5) for /u/ (p = 0.019), and formant F4 for /o/ (p = 0.022) were higher in female patients under infection. However, HNR for /e/ (p = 0.041) and formant F1 for /o/ (p = 0.002) were lower in female patients compared to their recovered conditions. The obtained results support the hypothesis, since the changes in voice parameters observed in infected patients can be correlated to a combination of acoustic measures such as fundamental frequency, formant characteristics, HNR, and voice perturbations like jitter and shimmer for different vowels. Thus, voice analysis can be used for screening and prognosis of COVID-19 infection. Based on the findings of this study, a mobile application could be developed to analyze the human voice in real time to detect COVID-19 symptoms for remedial measures and necessary action.
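The acoustic measures named in the abstract (mean pitch F0, formants, HNR, jitter, shimmer) are conventionally computed with Praat. A minimal sketch using the praat-parselmouth Python bindings follows, assuming a mono WAV recording of a single sustained vowel; the file name is a placeholder and the numeric thresholds are Praat's defaults, not values taken from the study.

```python
import parselmouth                      # praat-parselmouth bindings
from parselmouth.praat import call

# Placeholder file name; assumes a mono recording of one sustained vowel.
snd = parselmouth.Sound("vowel_o.wav")

pitch = snd.to_pitch()
f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")           # mean pitch F0

formants = snd.to_formant_burg()                           # Burg formant tracking
f1 = call(formants, "Get mean", 1, 0, 0, "hertz")          # formant F1
f2 = call(formants, "Get mean", 2, 0, 0, "hertz")          # formant F2

hnr = call(snd.to_harmonicity_cc(), "Get mean", 0, 0)      # harmonics-to-noise ratio

# Glottal pulse train for the perturbation measures (Praat default thresholds).
pulses = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter_local = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, pulses], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

print(f0_mean, f1, f2, hnr, jitter_local, shimmer_local)
```

Per-vowel features extracted this way could then be compared across groups with the kinds of significance tests the study reports.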
