Results 1 - 4 of 4
1.
Mod Pathol ; 36(6): 100129, 2023 06.
Article in English | MEDLINE | ID: mdl-36931041

ABSTRACT

We examined the performance of deep learning models on the classification of thyroid fine-needle aspiration biopsies using microscope images captured in 2 ways: with a high-resolution scanner and with a mobile phone camera. Our training set consisted of images from 964 whole-slide images captured with a high-resolution scanner. Our test set consisted of 100 slides; 20 manually selected regions of interest (ROIs) from each slide were captured in the 2 ways mentioned above. Applying a baseline machine learning algorithm trained on scanner ROIs resulted in performance deterioration when applied to the smartphone ROIs (97.8% area under the receiver operating characteristic curve [AUC], CI = [95.4%, 100.0%] for scanner images vs 89.5% AUC, CI = [82.3%, 96.6%] for mobile images, P = .019). Preliminary analysis via histogram matching showed that the baseline model was overly sensitive to slight color variations in the images (specifically, to color differences between mobile and scanner images). Adding color augmentation during training reduces this sensitivity and narrows the performance gap between mobile and scanner images (97.6% AUC, CI = [95.0%, 100.0%] for scanner images vs 96.0% AUC, CI = [91.8%, 100.0%] for mobile images, P = .309), with both modalities on par with human pathologist performance (95.6% AUC, CI = [91.6%, 99.5%]) for malignancy prediction (P = .398 for pathologist vs scanner and P = .875 for pathologist vs mobile). For indeterminate cases (pathologist-assigned Bethesda category of 3, 4, or 5), color augmentations confer some improvement (88.3% AUC, CI = [73.7%, 100.0%] for the baseline model vs 96.2% AUC, CI = [90.9%, 100.0%] with color augmentations, P = .158). In addition, we found that our model's performance levels off after 15 ROIs, a promising indication that ROI data collection would not be time-consuming for our diagnostic system. Finally, we showed that the model produces sensible Bethesda category (TBS) predictions (increasing malignancy rate with predicted TBS category, with 0% malignancy for predicted TBS 2 and 100% malignancy for TBS 6).
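The color augmentation described in the abstract can be sketched as a simple per-channel jitter applied during training. The function below is illustrative only: the name `random_color_jitter` and the jitter ranges are assumptions, not details from the paper.

```python
import numpy as np

def random_color_jitter(img, rng, scale=0.1, shift=0.05):
    """Per-channel multiplicative and additive color jitter.

    img: float RGB array of shape (H, W, 3) with values in [0, 1].
    rng: a numpy Generator, so augmentation is reproducible.
    Each channel is scaled by a random gain and offset, simulating
    the color shifts between scanner and mobile-phone captures.
    """
    gains = rng.uniform(1.0 - scale, 1.0 + scale, size=3)
    offsets = rng.uniform(-shift, shift, size=3)
    return np.clip(img * gains + offsets, 0.0, 1.0)

# Example: jitter a uniform gray patch.
rng = np.random.default_rng(0)
patch = np.full((4, 4, 3), 0.5)
augmented = random_color_jitter(patch, rng)
```

Applying such a transform with fresh random parameters at every training step discourages the model from relying on absolute color, which is the sensitivity the histogram-matching analysis exposed.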


Subject(s)
Cytology , Thyroid Neoplasms , Humans , Smartphone , Thyroid Neoplasms/diagnosis , Thyroid Neoplasms/pathology , Machine Learning
2.
Arch Pathol Lab Med ; 146(7): 872-878, 2022 07 01.
Article in English | MEDLINE | ID: mdl-34669924

ABSTRACT

CONTEXT: The use of whole slide images (WSIs) in diagnostic pathology presents special challenges for the cytopathologist. Informative areas on a direct smear from a thyroid fine-needle aspiration biopsy (FNAB) may be spread across a large area comprising blood and dead space. Manually navigating through these areas makes screening and evaluation of FNAB smears on a digital platform time-consuming and laborious. We designed a machine learning algorithm that can identify regions of interest (ROIs) on thyroid FNAB WSIs. OBJECTIVE: To evaluate the ability of the machine learning algorithm and screening software to identify a subset of informative ROIs on a thyroid FNAB WSI that can be used for final diagnosis. DESIGN: A representative slide from each of 109 consecutive thyroid FNABs was scanned. A cytopathologist reviewed each WSI and recorded a diagnosis. The machine learning algorithm screened and selected a subset of 100 ROIs from each WSI to present as an image gallery to the same cytopathologist after a washout period of 117 days. RESULTS: Concordance between the diagnoses made on WSIs and those made on the algorithm-generated ROI image gallery was evaluated using pairwise weighted κ statistics. Almost perfect concordance was seen between the 2 methods, with a κ score of 0.924. CONCLUSIONS: Our results demonstrate that the screening software is effective, with the potential to reduce cytopathologist workloads.
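The pairwise weighted κ statistic used to measure concordance can be computed as below. The abstract does not specify the weighting scheme; quadratic weights, a common choice for ordinal diagnostic categories, are assumed here, and the function name is illustrative.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Cohen's kappa with quadratic weights for ordinal ratings.

    Disagreements are penalized by the squared distance between
    categories, normalized so the maximum penalty is 1.
    """
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    conf = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        conf[i, j] += 1  # observed agreement matrix
    n = conf.sum()
    i_idx, j_idx = np.indices((n_categories, n_categories))
    weights = (i_idx - j_idx) ** 2 / (n_categories - 1) ** 2
    # Expected agreement under independence of the two raters.
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / n
    return 1.0 - (weights * conf).sum() / (weights * expected).sum()

# Perfect agreement yields kappa = 1.0.
kappa = quadratic_weighted_kappa([0, 1, 2], [0, 1, 2], n_categories=3)
```

A κ of 0.924, as reported, falls in the "almost perfect" band of the commonly used Landis-Koch interpretation scale (κ > 0.80).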


Subject(s)
Software , Thyroid Gland , Algorithms , Biopsy, Fine-Needle/methods , Humans , Machine Learning , Thyroid Gland/diagnostic imaging , Thyroid Gland/pathology
4.
Med Image Anal ; 67: 101814, 2021 01.
Article in English | MEDLINE | ID: mdl-33049578

ABSTRACT

We consider machine-learning-based thyroid-malignancy prediction from cytopathology whole-slide images (WSI). Multiple instance learning (MIL) approaches, typically used for the analysis of WSIs, divide the image (bag) into patches (instances), which are used to predict a single bag-level label. These approaches perform poorly on cytopathology slides due to a unique bag structure: sparsely located informative instances with varying characteristics of abnormality. We address these challenges by considering multiple types of labels: bag-level malignancy and ordered diagnostic scores, as well as instance-level informativeness and abnormality labels. We study their contribution beyond the MIL setting by proposing a maximum likelihood estimation (MLE) framework, from which we derive a two-stage deep-learning-based algorithm. The algorithm identifies informative instances and assigns them local malignancy scores that are incorporated into a global malignancy prediction. We derive a lower bound of the MLE, leading to an improved training strategy based on weak supervision, which we motivate through statistical analysis. The lower bound further allows us to extend the proposed algorithm to simultaneously predict multiple bag- and instance-level labels from a single output of a neural network. Experimental results demonstrate that the proposed algorithm performs competitively against several existing methods, achieves (expert) human-level performance, and allows augmentation of human decisions.
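The two-stage idea (identify informative instances, score them locally, then aggregate into a bag-level prediction) can be sketched without the neural-network and MLE machinery. The function names, the top-k selection, and the mean-pooling aggregation below are illustrative simplifications, not the paper's actual algorithm.

```python
import numpy as np

def predict_bag(instances, informativeness_fn, malignancy_fn, k=15):
    """Two-stage bag-level prediction (simplified sketch).

    Stage 1: score every instance for informativeness and keep the
             k most informative ones (cytopathology bags are mostly
             uninformative background, blood, and dead space).
    Stage 2: score each kept instance for malignancy and pool the
             local scores into a single bag-level score.
    """
    info = np.array([informativeness_fn(x) for x in instances])
    top = np.argsort(info)[-k:]  # indices of the k most informative
    local = np.array([malignancy_fn(instances[i]) for i in top])
    return local.mean()  # bag-level malignancy score

# Toy usage: instances are scalars; higher values are both more
# informative and (above a threshold) malignant.
score = predict_bag(list(range(10)), lambda x: x,
                    lambda x: float(x > 5), k=3)
```

In the paper, both scoring functions are realized by a neural network trained jointly under the MLE-derived objective; the sketch only conveys the control flow.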


Subject(s)
Image Interpretation, Computer-Assisted , Thyroid Neoplasms , Algorithms , Humans , Machine Learning , Neural Networks, Computer