Results 1 - 6 of 6
1.
J Pathol Inform; 13: 100002, 2022.
Article in English | MEDLINE | ID: mdl-35242442

ABSTRACT

Breast cancer is the second most commonly diagnosed type of cancer among women as of 2021. Grading of histopathological images is used to guide breast cancer treatment decisions, and a critical component of this grading is the mitotic score, which is related to tumor aggressiveness. Mitosis counting is an extremely tedious manual task, but automated approaches can be used to overcome its inefficiency and subjectivity. In this paper, we propose an automatic mitosis and nuclear segmentation method for a diverse set of H&E breast cancer pathology images. The method is based on a conditional generative adversarial network that segments mitoses and nuclei simultaneously. Architecture optimizations are investigated, including hyperparameters and the addition of a focal loss. The accuracy of the proposed method is evaluated using images from multiple centers and scanners, drawn from the TUPAC16, ICPR14, and ICPR12 datasets. From TUPAC16, we use 618 carefully annotated images of size 256×256 scanned at 40×; this dataset is used to train the model, and segmentation performance is measured on the test set for both nuclei and mitoses. Results on 200 held-out testing images from the TUPAC16 dataset were mean DSC = 0.784 and 0.721 for nuclei and mitoses, respectively. On 202 ICPR12 images, mitosis segmentation accuracy had a mean DSC = 0.782, indicating the model generalizes well to unseen datasets. For datasets with mitosis centroid annotations, comprising 200 TUPAC16, 202 ICPR12, and 524 ICPR14 images, a mean F1-score of 0.854 was found, indicating high mitosis detection accuracy.
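As a rough illustration of the two quantities this abstract leans on, the sketch below gives generic NumPy definitions of the binary focal loss (Lin et al.) mentioned as an architecture optimization and of the Dice similarity coefficient (DSC) used for evaluation; the alpha and gamma values are illustrative defaults, not the paper's tuned hyperparameters.

```python
import numpy as np

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Pixel-averaged binary focal loss.

    p: predicted foreground probabilities in [0, 1]
    y: binary ground-truth mask (0 or 1)
    gamma down-weights easy pixels; alpha balances the classes.
    Default values are illustrative, not the paper's settings.
    """
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```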

2.
Cancers (Basel); 13(1), 2020 Dec 22.
Article in English | MEDLINE | ID: mdl-33375043

ABSTRACT

In this work, a novel proliferation index (PI) calculator for Ki67 images called piNET is proposed. It is successfully tested on four datasets from three scanners, comprising patches, tissue microarrays (TMAs), and whole slide images (WSIs), which together represent a diverse multi-centre dataset for evaluating Ki67 quantification. Compared to state-of-the-art methods, piNET consistently performs best across all datasets, with an average PI difference of 5.603%, a PI accuracy rate of 86%, and a correlation coefficient R = 0.927. The success of the system can be attributed to several innovations. Firstly, the tool is built on deep learning, which can adapt to the wide variability of medical images, and the task is posed as a detection problem to mimic pathologists' workflow, which improves accuracy and efficiency. Secondly, the system is trained purely on tumor cells, which reduces false positives from non-tumor cells without needing the usual prerequisite tumor segmentation step for Ki67 quantification. Thirdly, the concept of learning background regions through weak supervision is introduced by providing the system with ideal and non-ideal (artifact) patches, which further reduces false positives. Lastly, a novel hotspot analysis is proposed to allow automated methods to score patches from WSIs that contain "significant" activity.
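The PI itself is a simple ratio; a minimal sketch of the textbook definition and of the PI-difference metric quoted above follows (this reproduces neither piNET's detection pipeline nor its hotspot analysis):

```python
def proliferation_index(n_positive, n_negative):
    """Ki67 PI: percentage of tumor nuclei that stain Ki67-positive."""
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no tumor nuclei detected")
    return 100.0 * n_positive / total

def pi_difference(pi_auto, pi_reference):
    """Absolute PI difference, the evaluation metric reported above."""
    return abs(pi_auto - pi_reference)
```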

3.
J Clin Neurosci; 72: 350-356, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31937502

ABSTRACT

Implementing pedicle safe zones with augmented reality has the potential to improve operating room workflow during pedicle screw insertion. These safe zones allow for image guidance when tracked instruments are unavailable. Using the correct screw trajectory as the reference angle for a successful insertion, we determine the angles that lead to medial, lateral, superior, and inferior breaches; these breach angles serve as the boundaries of the safe zones. Measuring safe zones from the surgical-site view and comparing them to the radiological view further clarifies the visual relationship between the radiological scans and the surgical site. Safe zones were measured on a spine phantom and then replicated on patients. The largest source of variance was found between each of the camera views and the radiological views, while the differences between the left and right cameras were insignificant. Overall, the camera angles appeared larger than the radiological angles. The magnification effect present at the surgical site results in increased angle sensitivity for pedicle screw insertion techniques. Because the virtual road map is drawn directly on top of the surgical site using tracked tools, the magnification effect is already taken into account during surgery. Future initiatives include the use of an augmented reality headset.
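As a toy illustration of the safe-zone idea, the sketch below checks a planned trajectory, expressed as axial and sagittal angular deviations from the correct trajectory, against the four breach boundary angles. The sign convention (medial/superior deviations positive) is an assumption made for illustration, not the paper's definition.

```python
def within_safe_zone(axial_dev, sagittal_dev,
                     medial, lateral, superior, inferior):
    """Check a planned trajectory against breach boundary angles.

    All angles are in degrees, measured relative to the correct screw
    trajectory. Deviations toward a medial/superior breach are taken
    as positive and toward a lateral/inferior breach as negative --
    an illustrative convention only.
    """
    in_axial = -abs(lateral) <= axial_dev <= abs(medial)
    in_sagittal = -abs(inferior) <= sagittal_dev <= abs(superior)
    return in_axial and in_sagittal
```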


Subject(s)
Pedicle Screws; Spine/surgery; Surgery, Computer-Assisted/instrumentation; Surgery, Computer-Assisted/methods; Augmented Reality; Female; Humans; Male; Phantoms, Imaging; Spinal Fusion/methods; Workflow
4.
J Clin Neurosci; 72: 392-396, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31892493

ABSTRACT

Computer-assisted navigation (CAN) is a technology that has been commercially available in operating rooms for many years. CAN relies on information from patient imaging (usually CT or MRI) and from the surgical site, and the method for registration between these two sets of data is crucial for safe image-guided navigation during surgery. Although existing technologies are extremely accurate, they still pose problems in the operating room. The motivation for this study is to explore the possibility of using augmented reality (AR) to improve the ease of use of surgical navigation and to provide a system that complements the existing operating room workflow. As with all commercially available surgical navigation systems, registration accuracy is of the utmost importance for maintaining patient safety. In this paper, we propose a novel method to quantify registration accuracy for AR devices in neurosurgery.
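The abstract does not describe the proposed quantification method itself; for orientation, here is a generic target registration error (TRE) computation over corresponding landmark points, a common way to summarize registration accuracy (an assumption for illustration, not the paper's metric):

```python
import numpy as np

def target_registration_error(pts_image, pts_physical):
    """RMS distance between corresponding 3-D landmark points after
    registration; lower is better. Inputs are (N, 3) arrays of the
    same landmarks in image space and registered physical space."""
    d = np.linalg.norm(np.asarray(pts_image, dtype=float)
                       - np.asarray(pts_physical, dtype=float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```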


Subject(s)
Augmented Reality; Neurosurgery/methods; Surgery, Computer-Assisted/instrumentation; Surgery, Computer-Assisted/methods; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Neurosurgical Procedures
5.
Article in English | MEDLINE | ID: mdl-31737619

ABSTRACT

Image analysis tools for cancer, such as automatic nuclei segmentation, are impacted by the inherent variation in pathology image data. Convolutional neural networks (CNNs) have demonstrated success in generalizing to variable data, illustrating great potential as a solution to the problem of data variability. In some CNN-based segmentation works for digital pathology, authors apply color normalization (CN) as a preprocessing step to reduce the color variability of the data prior to prediction, while others do not. Both approaches achieve reasonable performance, and yet the reasoning for utilizing this step has not been justified. It is therefore important to evaluate the necessity and impact of CN for deep learning frameworks, and its effect on downstream processes. In this paper, we evaluate the effect of popular CN methods on CNN-based nuclei segmentation frameworks.
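The abstract does not name the CN methods evaluated; as one widely used example, Reinhard normalization matches the per-channel LAB statistics of a source image to those of a reference image. A minimal sketch, assuming scikit-image is available:

```python
import numpy as np
from skimage import color

def reinhard_normalize(src_rgb, ref_rgb):
    """Match per-channel LAB mean/std of src_rgb to ref_rgb
    (Reinhard et al. color transfer, often used as CN)."""
    src, ref = color.rgb2lab(src_rgb), color.rgb2lab(ref_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * r_sd + r_mu
    # clip out-of-gamut values produced by the statistics transfer
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```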

6.
Article in English | MEDLINE | ID: mdl-31632956

ABSTRACT

Automated image analysis tools for Ki67 breast cancer digital pathology images would have significant value if integrated into diagnostic pathology workflows; such tools would reduce the workload of pathologists while improving efficiency and accuracy. Developing tools that are robust and reliable on multicentre data is challenging, however: differences in staining protocols, digitization equipment, staining compounds, and slide preparation create variability in image quality and color across digital pathology datasets. In this work, a novel unsupervised color separation framework based on the IHC color histogram (IHCCH) is proposed for the robust analysis of Ki67- and hematoxylin-stained images in multicentre datasets. An "overstaining" threshold is implemented to adjust for background overstaining, and an automated nuclei radius estimator is designed to improve nuclei detection. Proliferation index (PI) and F1 scores were compared between the proposed method and manually labeled ground truth for 30 TMA cores with ground-truth Ki67+ and Ki67- nuclei. The method accurately quantified the PI over this dataset, with an average PI difference of 3.25%. To verify that the method generalizes to new, diverse datasets, 50 Ki67 TMAs from the Protein Atlas were used to test the validated approach. Because the ground truth for this dataset is given as PI ranges, the automated result was compared against those ranges; the proposed method correctly classified 74 out of 80 TMA images, a 92.5% accuracy. In addition to these validation experiments, performance was compared to two color-deconvolution-based methods and to six machine learning classifiers. In all cases, the proposed work produced more consistent (reproducible) results and higher PI quantification accuracy.
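For contrast with the color-deconvolution baselines mentioned above, the sketch below unmixes hematoxylin and DAB using scikit-image's built-in hematoxylin-eosin-DAB stain matrix and thresholds the DAB channel. This is the generic baseline, not the proposed IHCCH method, and the threshold value is illustrative.

```python
import numpy as np
from skimage import color

def ki67_positive_mask(rgb, dab_thresh=0.03):
    """Generic color-deconvolution baseline for Ki67 (DAB) staining.

    Unmixes the RGB image with skimage's fixed H-E-DAB stain matrix
    and thresholds the DAB density channel; dab_thresh is an
    illustrative value, not a validated setting.
    """
    hed = color.rgb2hed(rgb)       # channels: hematoxylin, eosin, DAB
    return hed[..., 2] > dab_thresh
```

A PI estimate could then be formed by counting connected components in this mask against a corresponding hematoxylin mask, which is where the abstract's nuclei radius estimator and overstaining threshold would come into play.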
