1.
Mod Pathol ; 34(11): 2028-2035, 2021 11.
Article in English | MEDLINE | ID: mdl-34112957

ABSTRACT

Sarcomatoid mesothelioma is an aggressive malignancy that can be challenging to distinguish from benign spindle cell mesothelial proliferations based on biopsy, and this distinction is crucial to patient treatment and prognosis. A novel deep learning-based classifier may be able to aid pathologists in making this critical diagnostic distinction. SpindleMesoNET was trained on cases of malignant sarcomatoid mesothelioma and benign spindle cell mesothelial proliferations. Performance was assessed through cross-validation on the training set, on an independent set of challenging cases referred for expert opinion ('referral' test set), and on an externally stained set from outside institutions ('externally stained' test set). SpindleMesoNET predicted the benign or malignant status of cases with AUCs of 0.932, 0.925, and 0.989 on the cross-validation, referral, and externally stained test sets, respectively. The accuracy of SpindleMesoNET on the referral set cases (92.5%) was comparable to the average accuracy of 3 experienced pathologists on the same slide set (91.7%). We conclude that SpindleMesoNET can accurately distinguish sarcomatoid mesothelioma from benign spindle cell mesothelial proliferations. A deep learning system of this type holds potential for future use as an ancillary test in diagnostic pathology.
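The AUC values reported above can be understood through the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen malignant case scores higher than a randomly chosen benign case. A minimal sketch, with toy scores that are not from the study:

```python
def auc(pos_scores, neg_scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as one half (Mann-Whitney formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy classifier scores (illustrative only): malignant cases tend to score higher.
malignant = [0.9, 0.8, 0.75, 0.6]
benign = [0.4, 0.65, 0.55, 0.2]
print(auc(malignant, benign))  # 0.9375
```

An AUC of 0.5 corresponds to chance-level discrimination; the 0.93-0.99 values above indicate near-perfect ranking of malignant over benign cases.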


Subject(s)
Deep Learning/classification , Mesothelioma, Malignant/diagnosis , Mesothelioma/diagnosis , Pleural Neoplasms/diagnosis , Area Under Curve , Cell Proliferation , Diagnosis, Differential , Humans , Image Processing, Computer-Assisted , Mesothelioma/classification , Mesothelioma, Malignant/classification , Neural Networks, Computer , Pleural Neoplasms/classification , Prognosis , ROC Curve , Sensitivity and Specificity
2.
IEEE Trans Neural Netw Learn Syst ; 32(5): 1810-1820, 2021 05.
Article in English | MEDLINE | ID: mdl-33872157

ABSTRACT

Coronavirus disease (COVID-19) has been the main agenda of the whole world ever since it emerged. X-ray imaging is a common and easily accessible tool that has great potential for COVID-19 diagnosis and prognosis. Deep learning techniques can generally provide state-of-the-art performance in many classification tasks when trained properly over large data sets. However, data scarcity can be a crucial obstacle when using them for COVID-19 detection. Alternative approaches such as representation-based classification [collaborative or sparse representation (SR)] might provide satisfactory performance with limited-size data sets, but they generally fall short in performance or speed compared to the neural network (NN)-based methods. To address this deficiency, the convolution support estimation network (CSEN) has recently been proposed as a bridge between representation-based and NN approaches by providing a noniterative real-time mapping from the query sample to the ideally sparse representation coefficient support, which is critical information for the class decision in representation-based techniques. The main premises of this study can be summarized as follows: 1) A benchmark X-ray data set, namely QaTa-Cov19, containing over 6200 X-ray images is created. The data set covers 462 X-ray images from COVID-19 patients along with three other classes: bacterial pneumonia, viral pneumonia, and normal. 2) The proposed CSEN-based classification scheme, equipped with feature extraction from a state-of-the-art deep NN solution for X-ray images (CheXNet), achieves over 98% sensitivity and over 95% specificity for COVID-19 recognition directly from raw X-ray images, based on the average performance of 5-fold cross-validation over the QaTa-Cov19 data set. 3) Beyond this strong assistive-diagnosis performance, this study further provides evidence that COVID-19 induces a unique pattern in X-rays that can be discriminated with high accuracy.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Neural Networks, Computer , X-Rays , COVID-19/classification , Deep Learning/classification , Diagnosis, Differential , Humans , Pneumonia, Bacterial/classification , Pneumonia, Bacterial/diagnostic imaging , Pneumonia, Viral/classification , Pneumonia, Viral/diagnostic imaging , Tomography, X-Ray Computed/classification
3.
J Neuropathol Exp Neurol ; 80(4): 306-312, 2021 03 22.
Article in English | MEDLINE | ID: mdl-33570124

ABSTRACT

This study aimed to develop a deep learning-based image classification model that can differentiate tufted astrocytes (TA), astrocytic plaques (AP), and neuritic plaques (NP) based on images of tissue sections stained with phospho-tau immunohistochemistry. Phospho-tau-immunostained slides from the motor cortex were scanned at 20× magnification. An automated deep learning platform, Google AutoML, was used to create a model for distinguishing TA in progressive supranuclear palsy (PSP) from AP in corticobasal degeneration (CBD) and NP in Alzheimer disease (AD). A total of 1500 images of representative tau lesions were captured from 35 PSP, 27 CBD, and 33 AD patients. Of those, 1332 images were used for training, and 168 images for cross-validation. We tested the model using 100 additional test images taken from 20 patients of each disease. In cross-validation, precision and recall for each individual lesion type were 100% and 98.0% for TA, 98.5% and 98.5% for AP, and 98.0% and 100% for NP, respectively. In a test set, all images of TA and NP were correctly predicted. Only eleven images of AP were predicted to be TA or NP. Our data indicate the potential usefulness of deep learning-based image classification methods to assist in differential diagnosis of tauopathies.
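The per-lesion precision and recall figures above follow directly from a multi-class confusion matrix. A sketch with made-up counts (not the study's data), using the abstract's three lesion classes:

```python
def precision_recall(confusion, cls):
    """confusion[i][j] = count of class-i images predicted as class j.
    Precision = TP / all predicted-as-cls; recall = TP / all truly-cls."""
    classes = list(confusion)
    tp = confusion[cls][cls]
    predicted = sum(confusion[c][cls] for c in classes)
    actual = sum(confusion[cls][c] for c in classes)
    return tp / predicted, tp / actual

# Toy 3-class matrix for tufted astrocytes (TA), astrocytic plaques (AP),
# and neuritic plaques (NP); rows = true class, columns = predicted class.
cm = {
    "TA": {"TA": 49, "AP": 1, "NP": 0},
    "AP": {"TA": 0, "AP": 50, "NP": 0},
    "NP": {"TA": 0, "AP": 0, "NP": 50},
}
p, r = precision_recall(cm, "TA")
print(round(p, 3), round(r, 3))  # 1.0 0.98
```

In this toy matrix, TA precision is perfect (nothing else was called TA) while one true TA image mislabeled as AP drops TA recall to 98%, mirroring the pattern of numbers the abstract reports.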


Subject(s)
Astrocytes/classification , Astrocytes/pathology , Deep Learning/classification , Plaque, Amyloid/classification , Plaque, Amyloid/pathology , Humans , Proof of Concept Study
4.
Neural Netw ; 136: 126-140, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33485098

ABSTRACT

With the rapid increase of data availability, time series classification (TSC) has arisen in a wide range of fields and drawn great attention from researchers. Recently, hundreds of TSC approaches have been developed, which can be classified into two categories: traditional and deep learning-based TSC methods. However, it remains challenging to improve accuracy and model generalization ability. Therefore, we investigate a novel end-to-end deep learning model named Multi-scale Attention Convolutional Neural Network (MACNN) to solve the TSC problem. We first apply multi-scale convolution to capture different scales of information along the time axis by generating different scales of feature maps. Then an attention mechanism is proposed to enhance useful feature maps and suppress less useful ones by learning the importance of each feature map automatically. MACNN addresses the limitations of single-scale convolution and equally weighted feature maps. We conduct a comprehensive evaluation on 85 UCR standard datasets, and the experimental results show that our proposed approach achieves the best performance and outperforms the other traditional and deep learning-based methods by a large margin.
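The two ingredients the abstract names, multi-scale convolution and learned per-feature-map attention, can be sketched in a few lines. This is an illustration only: the kernel values, attention scores, and pooling are stand-ins, not MACNN's learned parameters.

```python
import math

def conv1d(series, kernel):
    """Valid-mode 1D convolution (really cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

series = [0, 1, 2, 3, 2, 1, 0, 1]

# Multi-scale: different kernel sizes capture information at different time scales.
feature_maps = [conv1d(series, [1.0] * k) for k in (2, 3, 5)]

# Attention: weight each feature map by a learned importance score, then
# combine their max-over-time pooled summaries.
scores = [0.5, 2.0, 0.1]                  # stand-in for learned importance scores
weights = softmax(scores)
pooled = [max(fm) for fm in feature_maps] # max-over-time pooling
combined = sum(w * p for w, p in zip(weights, pooled))
print(round(combined, 3))
```

In MACNN the attention scores are produced by the network itself, so feature maps at uninformative scales are suppressed automatically rather than being averaged with equal weight.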


Subject(s)
Databases, Factual/classification , Deep Learning/classification , Neural Networks, Computer , Humans , Time Factors
5.
Neural Netw ; 133: 177-192, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33220642

ABSTRACT

Echo State Networks (ESNs) are efficient recurrent neural networks (RNNs) which have been successfully applied to time series modeling tasks. However, ESNs are unable to capture history information far from the current time step, since the echo state at the present step is mostly impacted by the previous one. Thus, ESNs may have difficulty in capturing the long-term dependencies of temporal data. In this paper, we propose an end-to-end model named Echo Memory-Augmented Network (EMAN) for time series classification. An EMAN consists of an echo memory-augmented encoder and a multi-scale convolutional learner. First, the time series is fed into the reservoir of an ESN to produce the echo states, which are all collected into an echo memory matrix along the time steps. After that, we design an echo memory-augmented mechanism that applies sparse learnable attention to the echo memory matrix to obtain the Echo Memory-Augmented Representations (EMARs). In this way, the input time series is encoded into the EMARs, enhancing the temporal memory of the ESN. We then use multi-scale convolutions with max-over-time pooling to extract the most discriminative features from the EMARs. Finally, a fully connected layer and a softmax layer calculate the probability distribution over categories. Experiments conducted on extensive time series datasets show that EMAN achieves state-of-the-art performance compared with existing time series classification methods. The visualization analysis also demonstrates the effectiveness of enhancing the temporal memory of the ESN.
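The short-memory limitation the abstract describes can be seen in the echo state recurrence itself: the state is a leaky tanh update, so the trace of an old input fades step by step. A minimal sketch (not EMAN itself; the toy weight matrices are invented, with spectral radius kept below 1 for the echo state property):

```python
import math

def esn_states(inputs, w_in, w_res, leak=0.5):
    """Run a tiny ESN reservoir and collect every echo state
    into a memory matrix (list of state vectors), one row per time step."""
    n = len(w_res)
    x = [0.0] * n
    memory = []
    for u in inputs:
        pre = [w_in[i] * u + sum(w_res[i][j] * x[j] for j in range(n))
               for i in range(n)]
        x = [(1 - leak) * x[i] + leak * math.tanh(pre[i]) for i in range(n)]
        memory.append(x)
    return memory

# Two-unit reservoir driven by a single impulse at t = 0.
w_in = [0.6, -0.4]
w_res = [[0.3, 0.2], [-0.1, 0.4]]
mem = esn_states([1.0, 0.0, 0.0, 0.0], w_in, w_res)

# The echo of the initial impulse decays with every step:
print([round(s[0], 4) for s in mem])
```

Collecting all of these fading states into a memory matrix and then attending over them, as EMAN does, is what lets the classifier reach back to information the most recent state has already largely forgotten.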


Subject(s)
Deep Learning/classification , Neural Networks, Computer , Memory , Nonlinear Dynamics , Time Factors
6.
Neural Comput ; 32(7): 1408-1429, 2020 07.
Article in English | MEDLINE | ID: mdl-32433898

ABSTRACT

The multispike tempotron (MST) is a powerful single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce a family of generalized neuron models (GNMs) that are a special case of the spike response model and much simpler and cheaper to simulate than the MST. We find that over a wide range of parameters, the GNM can learn at least as well as the MST does. We identify the temporal autocorrelation of the membrane potential as the most important ingredient of the GNM that enables it to classify multiple spatiotemporal patterns. We also interpret the GNM as a chemical system, thus conceptually bridging computation by neural networks with molecular information processing. We conclude the letter by proposing alternative training approaches for the GNM, including error trace learning and error backpropagation.


Subject(s)
Action Potentials/physiology , Deep Learning/classification , Neurons/physiology , Animals , Humans
7.
Am J Ophthalmol ; 216: 140-146, 2020 08.
Article in English | MEDLINE | ID: mdl-32247778

ABSTRACT

PURPOSE: We sought to assess the performance of deep learning approaches for differentiating nonglaucomatous optic neuropathy with disc pallor (NGON) vs glaucomatous optic neuropathy (GON) on color fundus photographs by the use of image recognition. DESIGN: Development of an artificial intelligence classification algorithm. METHODS: This single-institution analysis included 3815 fundus images from the picture archiving and communication system of Seoul National University Bundang Hospital, consisting of 2883 normal optic disc images, 446 NGON images, and 486 GON images. The presence of NGON and GON was interpreted by 2 expert neuro-ophthalmologists and had corroborating evidence on visual field testing and optical coherence tomography. Images were preprocessed in size and color enhancement before input. We applied the convolutional neural network (CNN) of the ResNet-50 architecture. The area under the precision-recall curve (average precision) was evaluated to assess the performance of the deep learning algorithms in classifying NGON and GON. RESULTS: The ResNet-50 model detected GON among NGON images with a sensitivity of 93.4% and specificity of 81.8%. The area under the precision-recall curve for differentiating NGON vs GON showed an average precision value of 0.874. False-positive cases were found in eyes with extensive areas of peripapillary atrophy and tilted optic discs. CONCLUSION: Artificial intelligence-based deep learning algorithms for detecting optic disc diseases showed excellent performance in differentiating NGON and GON on color fundus photographs, warranting further research for clinical application.


Subject(s)
Deep Learning , Glaucoma/diagnosis , Optic Disk/pathology , Optic Nerve Diseases/diagnosis , Algorithms , Area Under Curve , Deep Learning/classification , Female , Humans , Intraocular Pressure/physiology , Male , Neural Networks, Computer , Photography , ROC Curve , Reproducibility of Results , Sensitivity and Specificity , Visual Field Tests
8.
J Glaucoma ; 29(4): 287-294, 2020 04.
Article in English | MEDLINE | ID: mdl-32053552

ABSTRACT

PRéCIS: A spectral-domain optical coherence tomography (SD-OCT) based deep learning system detected glaucomatous structural change with high sensitivity and specificity. It outperformed the clinical diagnostic parameters in discriminating glaucomatous eyes from healthy eyes. PURPOSE: The purpose of this study was to assess the performance of a deep learning classifier for the detection of glaucomatous change based on SD-OCT. METHODS: Three hundred fifty image sets of ganglion cell-inner plexiform layer (GCIPL) and retinal nerve fiber layer (RNFL) SD-OCT for 86 glaucomatous eyes and 307 SD-OCT image sets of 196 healthy participants were recruited and split into training (197 eyes) and test (85 eyes) datasets based on a patient-wise split. The bottleneck features extracted from the GCIPL thickness map, GCIPL deviation map, RNFL thickness map, and RNFL deviation map were used as predictors for the deep learning classifier. The area under the receiver operating characteristic curve (AUC) was calculated and compared with those of conventional glaucoma diagnostic parameters including SD-OCT thickness profile and standard automated perimetry (SAP) to evaluate the accuracy of discrimination for each algorithm. RESULTS: In the test dataset, this deep learning system achieved an AUC of 0.990 [95% confidence interval (CI), 0.975-1.000] with a sensitivity of 94.7% and a specificity of 100.0%, which was significantly larger than the AUCs with all of the optical coherence tomography and SAP parameters: 0.949 (95% CI, 0.921-0.976) with average GCIPL thickness (P=0.006), 0.938 (95% CI, 0.905-0.971) with average RNFL thickness (P=0.003), and 0.889 (0.844-0.934) with mean deviation of SAP (P<0.001; DeLong test). CONCLUSION: An SD-OCT-based deep learning system can detect glaucomatous structural change with high sensitivity and specificity.


Subject(s)
Deep Learning/classification , Glaucoma/classification , Glaucoma/diagnostic imaging , Tomography, Optical Coherence , Adult , Aged , Area Under Curve , Female , Glaucoma/physiopathology , Humans , Intraocular Pressure/physiology , Male , Middle Aged , Nerve Fibers/pathology , Optic Disk/pathology , ROC Curve , Retinal Ganglion Cells/pathology , Sensitivity and Specificity , Visual Fields/physiology
9.
IEEE Trans Neural Netw Learn Syst ; 31(8): 2857-2867, 2020 08.
Article in English | MEDLINE | ID: mdl-31170082

ABSTRACT

In the postgenome era, many problems in bioinformatics have arisen due to the generation of large amounts of imbalanced data. In particular, the computational classification of precursor microRNA (pre-miRNA) involves a high imbalance in the classes. For this task, a classifier is trained to identify RNA sequences having the highest chance of being miRNA precursors. The key issue is that well-known pre-miRNAs are usually few in comparison to the hundreds of thousands of candidate sequences in a genome, which results in highly imbalanced data. This imbalance has a strong influence on most standard classifiers and, if not properly addressed, the classifier is not able to work properly in a real-life scenario. This work provides a comparative assessment of recent deep neural architectures for dealing with the large imbalanced data issue in the classification of pre-miRNAs. We present and analyze recent architectures in a benchmark framework with genomes of animals and plants, with increasing imbalance ratios up to 1:2000. We also propose a new graphical way of comparing classifier performance in the context of high class imbalance. The comparative results show that, at very high imbalance, deep belief neural networks can provide the best performance.
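At the 1:2000 imbalance the abstract benchmarks, plain accuracy is misleading: a classifier that rejects every candidate is already about 99.95% "accurate". A hedged sketch of two standard countermeasures, inverse-frequency class weights and the geometric mean of per-class recalls, which a majority-class predictor cannot game (the counts below are illustrative, not the benchmark's):

```python
def class_weights(counts):
    """Inverse-frequency weights: rare classes get proportionally larger
    weight in the training loss (the sklearn-style 'balanced' heuristic)."""
    total = sum(counts.values())
    return {c: total / (len(counts) * n) for c, n in counts.items()}

def g_mean(recalls):
    """Geometric mean of per-class recalls; zero recall on any class zeroes it."""
    prod = 1.0
    for r in recalls:
        prod *= r
    return prod ** (1.0 / len(recalls))

counts = {"pre-miRNA": 10, "other": 20000}   # 1:2000, as in the benchmark setting
w = class_weights(counts)
print(w["pre-miRNA"], round(w["other"], 2))  # 1000.5 0.5

# Predicting "other" for everything gives recall 0 on the minority class:
print(g_mean([0.0, 1.0]))  # 0.0 -- the imbalance-aware score collapses
```

Metrics of this kind (rather than raw accuracy) are what make the architectures in the study comparable at extreme imbalance ratios.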


Subject(s)
Computational Biology/classification , Computational Biology/methods , Databases, Factual/classification , Deep Learning/classification , Neural Networks, Computer , Plants/classification , Animals , Elasticity , Humans
10.
Neural Netw ; 118: 208-219, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31299625

ABSTRACT

Multimodal emotion understanding enables AI systems to interpret human emotions. With the accelerating surge of video data, emotion understanding remains challenging due to inherent data ambiguity and the diversity of video content. Although deep learning has made considerable progress in big data feature learning, deep models are typically deterministic and used in a "black-box" manner, lacking the capability to represent the inherent ambiguities in data. Since the possibility theory of fuzzy logic focuses on knowledge representation and reasoning under uncertainty, we incorporate the concepts of fuzzy logic into a deep learning framework. This paper presents a novel convolutional neuro-fuzzy network, an integration of convolutional neural networks in the fuzzy logic domain, to extract high-level emotion features from text, audio, and visual modalities. The feature sets extracted by the fuzzy convolutional layers are compared with those of convolutional neural networks at the same level using t-distributed Stochastic Neighbor Embedding. This paper demonstrates a multimodal emotion understanding framework with an adaptive neuro-fuzzy inference system that can generate new rules to classify emotions. For emotion understanding of movie clips, we concatenate audio, visual, and text features extracted using the proposed convolutional neuro-fuzzy network to train the adaptive neuro-fuzzy inference system. We go one step further to explain how deep learning arrives at a conclusion, which can guide us toward interpretable AI. To identify which visual/text/audio aspects are important for emotion understanding, we use a direct linear non-Gaussian additive model to explain the relevance in terms of causal relationships between features of deep hidden layers. The critical features extracted are input to the proposed multimodal framework to achieve higher accuracy.


Subject(s)
Deep Learning , Emotions , Fuzzy Logic , Motion Pictures , Neural Networks, Computer , Algorithms , Deep Learning/classification , Emotions/physiology , Humans , Motion Pictures/classification , Photic Stimulation/methods
11.
J Neural Eng ; 16(6): 066004, 2019 10 16.
Article in English | MEDLINE | ID: mdl-31341093

ABSTRACT

OBJECTIVE: Learning the structures and unknown correlations of a motor imagery electroencephalogram (MI-EEG) signal is important for its classification. It is also a major challenge to obtain good classification accuracy given the increased number of classes and the increased variability across people. In this study, a four-class MI task is investigated. APPROACH: An end-to-end novel hybrid deep learning scheme is developed to decode the MI task from EEG data. The proposed algorithm consists of two parts: (a) a one-versus-rest filter bank common spatial pattern is adopted to preprocess and pre-extract the features of the four-class MI signal; (b) a hybrid deep network based on a convolutional neural network and a long short-term memory (LSTM) network is proposed to extract and learn the spatial and temporal features of the MI signal simultaneously. MAIN RESULTS: The main contribution of this paper is to propose a hybrid deep network framework to improve the classification accuracy of the four-class MI-EEG signal. The hybrid deep network is a subject-independent shared neural network, which means it can be trained using the training data from all subjects to form one model. SIGNIFICANCE: The classification performance obtained by the proposed algorithm on brain-computer interface (BCI) competition IV dataset 2a is 83% accuracy with a Cohen's kappa value of 0.80. Finally, the shared hybrid deep network is evaluated on every subject respectively, and the experimental results illustrate that the shared neural network achieves satisfactory accuracy. Thus, the proposed algorithm could be of great interest for real-life BCIs.


Subject(s)
Brain/physiology , Deep Learning/classification , Electroencephalography/classification , Imagination/physiology , Movement/physiology , Neural Networks, Computer , Deep Learning/trends , Electroencephalography/trends , Humans
12.
PLoS Comput Biol ; 15(4): e1006972, 2019 04.
Article in English | MEDLINE | ID: mdl-30964861

ABSTRACT

Hierarchical processing is pervasive in the brain, but its computational significance for learning under uncertainty is disputed. On the one hand, hierarchical models provide an optimal framework and are becoming increasingly popular to study cognition. On the other hand, non-hierarchical (flat) models remain influential and can learn efficiently, even in uncertain and changing environments. Here, we show that previously proposed hallmarks of hierarchical learning, which relied on reports of learned quantities or choices in simple experiments, are insufficient to categorically distinguish hierarchical from flat models. Instead, we present a novel test which leverages a more complex task, whose hierarchical structure allows generalization between different statistics tracked in parallel. We use reports of confidence to quantitatively and qualitatively arbitrate between the two accounts of learning. Our results support the hierarchical learning framework, and demonstrate how confidence can be a useful metric in learning theory.


Subject(s)
Deep Learning/classification , Learning/classification , Adult , Brain , Choice Behavior/classification , Cognition/physiology , Female , Humans , Male , Uncertainty , Young Adult
13.
J Neural Eng ; 16(3): 031001, 2019 06.
Article in English | MEDLINE | ID: mdl-30808014

ABSTRACT

OBJECTIVE: Electroencephalography (EEG) analysis has been an important tool in neuroscience, with applications in neural engineering (e.g. brain-computer interfaces, BCIs) and even commercial settings. Many of the analytical tools used in EEG studies have used machine learning to uncover relevant information for neural classification and neuroimaging. Recently, the availability of large EEG data sets and advances in machine learning have both led to the deployment of deep learning architectures, especially in the analysis of EEG signals and in understanding the information they may contain about brain functionality. The robust automatic classification of these signals is an important step towards making the use of EEG more practical in many applications and less reliant on trained professionals. Towards this goal, a systematic review of the literature on deep learning applications to EEG classification was performed to address the following critical questions: (1) Which EEG classification tasks have been explored with deep learning? (2) What input formulations have been used for training the deep networks? (3) Are there specific deep learning network structures suitable for specific types of tasks? APPROACH: A systematic literature review of EEG classification using deep learning was performed on the Web of Science and PubMed databases, resulting in 90 identified studies. Those studies were analyzed based on type of task, EEG preprocessing methods, input type, and deep learning architecture. MAIN RESULTS: For EEG classification tasks, convolutional neural networks, recurrent neural networks, and deep belief networks outperform stacked auto-encoders and multi-layer perceptron neural networks in classification accuracy. The tasks that used deep learning fell into six general groups: emotion recognition, motor imagery, mental workload, seizure detection, event-related potential detection, and sleep scoring. For each type of task, we describe the specific input formulation, major characteristics, and end classifier recommendations found through this review. SIGNIFICANCE: This review summarizes the current practices and performance outcomes in the use of deep learning for EEG classification. Practical suggestions on the selection of many hyperparameters are provided in the hope that they will promote or guide the deployment of deep learning to EEG datasets in future research.


Subject(s)
Brain/physiology , Deep Learning/classification , Electroencephalography/classification , Neural Networks, Computer , Animals , Brain-Computer Interfaces/classification , Humans , Psychomotor Performance/physiology
14.
Neuroimage Clin ; 21: 101645, 2019.
Article in English | MEDLINE | ID: mdl-30584016

ABSTRACT

We built and validated a deep learning algorithm predicting the individual diagnosis of Alzheimer's disease (AD) and of mild cognitive impairment patients who will convert to AD (c-MCI), based on a single cross-sectional brain structural MRI scan. Convolutional neural networks (CNNs) were applied to 3D T1-weighted images from ADNI and from subjects recruited at our Institute (407 healthy controls [HC], 418 AD, 280 c-MCI, 533 stable MCI [s-MCI]). CNN performance was tested in distinguishing AD, c-MCI, and s-MCI. High levels of accuracy were achieved in all the classifications, with the highest rates achieved in the AD vs HC classification tests using both the ADNI dataset only (99%) and the combined ADNI + non-ADNI dataset (98%). CNNs discriminated c-MCI from s-MCI patients with an accuracy of up to 75%, with no difference between ADNI and non-ADNI images. CNNs provide a powerful tool for automatic individual patient diagnosis along the AD continuum. Our method performed well without any prior feature engineering and regardless of the variability of imaging protocols and scanners, demonstrating that it is usable by operators without specific training and likely to be generalizable to unseen patient data. CNNs may accelerate the adoption of structural MRI in routine practice to help in the assessment and management of patients.


Subject(s)
Alzheimer Disease/classification , Cognitive Dysfunction/classification , Deep Learning/classification , Disease Progression , Magnetic Resonance Imaging/classification , Neural Networks, Computer , Aged , Aged, 80 and over , Alzheimer Disease/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging/methods , Male , Middle Aged
15.
J Acoust Soc Am ; 144(3): EL196, 2018 09.
Article in English | MEDLINE | ID: mdl-30424627

ABSTRACT

In this paper, the effectiveness of deep learning for automatic classification of grouper species by their vocalizations has been investigated. In the proposed approach, wavelet denoising is used to reduce ambient ocean noise, and a deep neural network is then used to classify sounds generated by different species of groupers. Experimental results for four species of groupers show that the proposed approach achieves a classification accuracy of around 90% or above in all of the tested cases, a result that is significantly better than the one obtained by a previously reported method for automatic classification of grouper calls.
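The denoising idea can be sketched with a one-level Haar transform and soft thresholding of the detail coefficients. This is a hedged illustration: the abstract does not specify the paper's wavelet family or threshold rule, so both are assumptions here.

```python
def haar_denoise(signal, threshold):
    """One-level Haar decomposition, soft-threshold the details, reconstruct.
    Expects an even-length signal."""
    approx, detail = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2.0)   # local average (low-pass)
        detail.append((a - b) / 2.0)   # local difference (high-pass)

    # Soft thresholding shrinks small, noise-dominated detail coefficients
    # to zero while preserving (shrunken) large ones.
    def soft(d):
        if d > threshold:
            return d - threshold
        if d < -threshold:
            return d + threshold
        return 0.0

    detail = [soft(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])     # inverse Haar step
    return out

# A flat segment of a "call" contaminated with small alternating noise:
noisy = [1.0, 1.2, 1.1, 0.9, 1.0, 1.1, 0.9, 1.0]
print(haar_denoise(noisy, threshold=0.2))
```

The small wiggles are flattened while a genuine large transition (a detail coefficient above the threshold) would survive, which is why this preprocessing step helps the downstream network focus on the call structure rather than ambient noise.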


Subject(s)
Deep Learning/classification , Neural Networks, Computer , Sound , Vocalization, Animal/physiology , Animals , Fishes
16.
BMC Bioinformatics ; 19(1): 173, 2018 05 16.
Article in English | MEDLINE | ID: mdl-29769044

ABSTRACT

BACKGROUND: There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. RESULTS: Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can superimpose randomly sampled regions of test images and use their distribution to render statistically driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. CONCLUSION: Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology could accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
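The classify-by-distribution idea described above can be loosely sketched as a vote over sampled regions with an a priori cutoff: an image is assigned a class only when enough of its regions agree, and is otherwise flagged rather than forced into a class. The region labels and the 80% cutoff below are invented stand-ins, not the paper's embedding lookup or statistics.

```python
from collections import Counter

def classify_regions(region_labels, cutoff=0.8):
    """Return the majority class if its fraction of sampled regions clears
    the a priori cutoff; otherwise flag the image as unclassified/anomalous."""
    counts = Counter(region_labels)
    label, n = counts.most_common(1)[0]
    return label if n / len(region_labels) >= cutoff else "unclassified"

# Toy region calls for two images (hypothetical class names):
print(classify_regions(["tumor"] * 9 + ["normal"]))      # tumor
print(classify_regions(["tumor"] * 5 + ["normal"] * 5))  # unclassified
```

The key property mirrors the abstract's anomaly-detection claim: an image of a previously untrained class scatters across the embedding, fails the cutoff, and is surfaced for human review instead of receiving a confident wrong label.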


Subject(s)
Artificial Intelligence/standards , Deep Learning/classification , Machine Learning/standards , Neural Networks, Computer , Humans