Results 1 - 7 of 7
1.
IEEE J Biomed Health Inform ; 21(2): 441-450, 2017 Mar.
Article in English | MEDLINE | ID: mdl-26800556

ABSTRACT

In this paper, we introduce and evaluate the systems submitted to the first Overlapping Cervical Cytology Image Segmentation Challenge, held in conjunction with the IEEE International Symposium on Biomedical Imaging 2014. This challenge was organized to encourage the development and benchmarking of techniques capable of segmenting individual cells from overlapping cellular clumps in cervical cytology images, a prerequisite for the next generation of computer-aided diagnosis systems for cervical cancer. In particular, these automated systems must detect and accurately segment both the nucleus and cytoplasm of each cell, even when cells are clumped together and, hence, partially occluded. This remains an unsolved problem due to the poor contrast of cytoplasm boundaries, the large variation in the size and shape of cells, the presence of debris, and the large degree of cellular overlap. The challenge initially utilized a database of 16 high-resolution (×40 magnification) images of complex cellular fields of view, from which the isolated real cells were used to construct a database of 945 synthesized cervical cytology images with a varying number of cells and degree of overlap, in order to provide full access to the segmentation ground truth. These synthetic images provided a reliable and comprehensive framework for quantitative evaluation of this segmentation problem. Results from the submitted methods demonstrate that all the methods are effective in the segmentation of clumps containing at most three cells, with overlap coefficients up to 0.3. This highlights the intrinsic difficulty of the challenge and provides motivation for significant future improvement.
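Challenges like this one score submissions by comparing predicted cell masks against ground truth, with difficulty graded by how much the cells overlap. A minimal sketch of two such pixel-set measures, the Dice coefficient and a pairwise overlap coefficient; the challenge's exact evaluation protocol may differ:

```python
def dice(mask_a, mask_b):
    """Dice similarity between two binary masks given as sets of pixel coords."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def overlap_coefficient(mask_a, mask_b):
    """|A intersect B| / min(|A|, |B|): how strongly two cells overlap."""
    a, b = set(mask_a), set(mask_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Two toy "cells" on a pixel grid, overlapping in a 2-column strip.
cell1 = [(x, y) for x in range(4) for y in range(4)]     # 16 px
cell2 = [(x, y) for x in range(2, 6) for y in range(4)]  # 16 px
print(overlap_coefficient(cell1, cell2))  # 0.5  (8 shared / 16 min)
print(dice(cell1, cell2))                 # 0.5  (2*8 / 32)
```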


Subject(s)
Algorithms, Cervix Uteri/cytology, Image Processing, Computer-Assisted/methods, Microscopy/methods, Cervix Uteri/diagnostic imaging, Female, Humans, Papanicolaou Test/methods, Uterine Cervical Neoplasms
2.
J Pathol Inform ; 7: 28, 2016.
Article in English | MEDLINE | ID: mdl-27563487

ABSTRACT

CONTEXT: It has been shown that ovarian carcinoma subtypes are distinct pathologic entities with differing prognostic and therapeutic implications. Histotyping by pathologists has good reproducibility, but occasional cases are challenging and require immunohistochemistry and subspecialty consultation. Motivated by the need for more accurate and reproducible diagnoses, and to facilitate pathologists' workflow, we propose an automatic framework for ovarian carcinoma classification. MATERIALS AND METHODS: Our method is inspired by pathologists' workflow. We analyse imaged tissues at two magnification levels and extract clinically inspired color, texture, and segmentation-based shape descriptors using image-processing methods. We propose a carefully designed machine learning technique composed of four modules: a dissimilarity matrix, dimensionality reduction, feature selection, and a support vector machine classifier, used to separate the five ovarian carcinoma subtypes based on the extracted features. RESULTS: This paper presents the details of our implementation and its validation on a clinically derived dataset of eighty high-resolution histopathology images. The proposed system achieved a multiclass classification accuracy of 95.0% when classifying unseen tissues. Assessment of the classifier's confusion (confusion matrix) between the five ovarian carcinoma subtypes agrees with clinicians' confusion and reflects the difficulty in diagnosing endometrioid and serous carcinomas. CONCLUSIONS: Our results from this first study highlight the difficulty of ovarian carcinoma diagnosis, which originates from the intrinsic class imbalance observed among subtypes, and suggest that the automatic analysis of ovarian carcinoma subtypes could add value to clinicians' diagnostic procedure by providing a second opinion.
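The four-module pipeline described above (dissimilarity matrix, dimensionality reduction, feature selection, SVM classifier) can be sketched in miniature. The toy version below is an illustration only: it substitutes a crude column-score filter for the reduction and selection modules and a nearest-centroid rule for the SVM, on made-up two-feature descriptors:

```python
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def dissimilarity_matrix(samples):
    # Module 1: re-represent each sample by its distances to every training sample.
    return [[euclid(u, v) for v in samples] for u in samples]

def top_k_columns(matrix, labels, k):
    # Modules 2 and 3 (crude stand-in): keep the k columns whose class means differ most.
    def score(j):
        per_class = {}
        for row, y in zip(matrix, labels):
            per_class.setdefault(y, []).append(row[j])
        means = [sum(vals) / len(vals) for vals in per_class.values()]
        return max(means) - min(means)
    return sorted(range(len(matrix[0])), key=score, reverse=True)[:k]

def nearest_centroid(rows, labels, query):
    # Module 4 (stand-in for the SVM classifier): closest class centroid wins.
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(y, []).append(row)
    centroids = {y: [sum(c) / len(g) for c in zip(*g)] for y, g in groups.items()}
    return min(centroids, key=lambda y: euclid(centroids[y], query))

# Made-up two-feature descriptors for two of the five subtypes.
samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.9)]
labels = ["serous", "serous", "endometrioid", "endometrioid"]

D = dissimilarity_matrix(samples)
keep = top_k_columns(D, labels, k=2)
train = [[row[j] for j in keep] for row in D]

unseen = (0.95, 0.95)  # an "unseen tissue" descriptor
query = [euclid(unseen, samples[j]) for j in keep]
print(nearest_centroid(train, labels, query))  # endometrioid
```

The dissimilarity representation is what lets distance structure, rather than raw feature values, drive the later modules.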

3.
Int J Comput Assist Radiol Surg ; 11(8): 1409-18, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26872810

ABSTRACT

PURPOSE: Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects have to be considered in segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue). METHODS: In this paper, we propose a variational technique to augment a surgeon's endoscopic view by segmenting visible as well as occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align them to, and segment, corresponding structures in 2D intraoperative endoscopic views. Our preoperative-to-intraoperative alignment is driven by, first, spatio-temporal, signal-processing-based vessel pulsation cues and, second, machine-learning-based analysis of colour and textural visual cues. To our knowledge, this is the first work that utilizes vascular pulsation cues for guiding preoperative-to-intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous) physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention. RESULTS: We validated the utility of our technique on fifteen challenging clinical cases, achieving a 45% improvement in accuracy compared to the state-of-the-art method. CONCLUSIONS: A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages preoperative data, as a source of patient-specific prior knowledge, together with vasculature pulsation and endoscopic visual cues, in order to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.
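The vessel-pulsation cue rests on a standard signal-processing idea: image intensity over a perfused vessel oscillates at the cardiac frequency, which can be picked out of the video's temporal signal. Below is a self-contained sketch under assumed numbers (a 30 fps frame rate and a synthetic 1.2 Hz pulsation), not the paper's actual pipeline:

```python
import math

def dft_magnitude(signal, freq_hz, fs):
    # Naive single-frequency DFT (Goertzel-style): energy of `signal` at freq_hz.
    re = sum(s * math.cos(2 * math.pi * freq_hz * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * k / fs) for k, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

fs = 30.0  # assumed endoscope frame rate, frames per second
t = [k / fs for k in range(300)]  # 10 s of per-region mean intensity
# Synthetic signal: baseline + cardiac pulsation at 1.2 Hz (72 bpm) + slow drift.
signal = [100.0 + 5.0 * math.sin(2 * math.pi * 1.2 * ti) + 0.5 * ti for ti in t]

# Scan the plausible cardiac band and keep the strongest frequency.
candidates = [0.8 + 0.1 * i for i in range(15)]  # 0.8 .. 2.2 Hz
pulse_hz = max(candidates, key=lambda f: dft_magnitude(signal, f, fs))
print(round(pulse_hz, 1))  # 1.2
```

A region whose band-limited spectral peak is strong and cardiac-rate-consistent is a candidate vessel, which is the kind of cue that can then drive registration.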


Subject(s)
Endoscopy/methods, Imaging, Three-Dimensional/methods, Color, Humans, Nephrectomy/methods
4.
IEEE Trans Med Imaging ; 35(1): 1-12, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26151933

ABSTRACT

In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications, including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries in the projected camera views. In this paper, we propose a multi-modal approach to segmentation in which preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match them with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization, thereby correcting the calibration within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery, with results demonstrating high accuracy and robustness.
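Embedding camera parameters in the optimization means the calibration itself becomes an unknown that the fit corrects. A toy illustration with a hypothetical pinhole model and a brute-force one-dimensional search over the focal length (the paper's joint optimization is of course far richer):

```python
def project(point3d, f):
    # Hypothetical pinhole model: principal point at the image origin.
    x, y, z = point3d
    return (f * x / z, f * y / z)

def reprojection_error(f, points3d, points2d):
    # Sum of squared pixel errors between projected model points and observations.
    err = 0.0
    for p, (u, v) in zip(points3d, points2d):
        pu, pv = project(p, f)
        err += (u - pu) ** 2 + (v - pv) ** 2
    return err

# Preoperative model points in camera coordinates and their observed pixels,
# generated here with a "true" focal length of 500 px.
pts3d = [(0.1, 0.2, 1.0), (-0.3, 0.1, 2.0), (0.2, -0.4, 1.5), (0.05, 0.3, 0.8)]
observed = [project(p, 500.0) for p in pts3d]

# Treat the focal length as an unknown of the fit: brute-force 1-D search.
f_hat = min(range(450, 551), key=lambda f: reprojection_error(f, pts3d, observed))
print(f_hat)  # 500
```

The same idea generalizes: any miscalibrated parameter that enters the projection can be added to the search space rather than assumed fixed.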


Subject(s)
Imaging, Three-Dimensional/methods, Robotic Surgical Procedures/methods, Algorithms, Animals, Humans, Kidney/pathology, Kidney/surgery, Kidney Neoplasms/pathology, Kidney Neoplasms/surgery, Nephrectomy, Sheep
5.
Med Image Comput Comput Assist Interv ; 17(Pt 2): 324-31, 2014.
Article in English | MEDLINE | ID: mdl-25485395

ABSTRACT

Synergistic fusion of pre-operative (pre-op) and intraoperative (intra-op) imaging data provides surgeons with invaluable, insightful information that can improve their decision-making during minimally invasive robotic surgery. In this paper, we propose an efficient technique to segment multiple objects in intra-op multi-view endoscopic videos based on priors captured from pre-op data. Our approach leverages information from 3D pre-op data in the analysis of visual cues in the 2D intra-op data by formulating the problem as one of finding the 3D pose and non-rigid deformations of tissue models driven by features from 2D images. We present a closed-form solution for our formulation and demonstrate how it allows for the inclusion of a laparoscopic camera motion model. Our efficient method runs in real time on a single-core CPU, making it practical even for robotic surgery systems with limited computational resources. We validate the utility of our technique on ex vivo data as well as in vivo clinical data from laparoscopic partial nephrectomy surgery and demonstrate its robustness in segmenting stereo endoscopic videos.
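A closed-form pose solution of the kind mentioned above can be illustrated with the classic 2-D rigid Procrustes alignment, which recovers rotation and translation between matched point sets in one shot. This is a generic stand-in, not the paper's exact formulation:

```python
import math

def rigid_fit_2d(src, dst):
    """Closed-form least-squares rotation + translation mapping src onto dst
    (2-D Procrustes alignment of matched point sets)."""
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    # Cross-covariance terms of the centered point sets.
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx -= csx; sy -= csy; dx -= cdx; dy -= cdy
        a += sx * dx + sy * dy   # trace term
        b += sx * dy - sy * dx   # cross term
    theta = math.atan2(b, a)     # optimal rotation angle
    tx = cdx - (math.cos(theta) * csx - math.sin(theta) * csy)
    ty = cdy - (math.sin(theta) * csx + math.cos(theta) * csy)
    return theta, tx, ty

# Rotate a toy contour by 30 degrees, shift it, then recover the transform.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
th = math.radians(30)
dst = [(math.cos(th) * x - math.sin(th) * y + 2.0,
        math.sin(th) * x + math.cos(th) * y - 1.0) for x, y in src]
theta, tx, ty = rigid_fit_2d(src, dst)
print(round(math.degrees(theta), 3), round(tx, 3), round(ty, 3))  # 30.0 2.0 -1.0
```

Because the answer is algebraic rather than iterative, such steps cost almost nothing per frame, which is what makes real-time, single-core operation plausible.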


Subject(s)
Capsule Endoscopy/methods, Imaging, Three-Dimensional/methods, Kidney Neoplasms/pathology, Kidney Neoplasms/surgery, Nephrectomy/methods, Pattern Recognition, Automated/methods, Surgery, Computer-Assisted/methods, Animals, Image Interpretation, Computer-Assisted/methods, Preoperative Care/methods, Reproducibility of Results, Sensitivity and Specificity, Sheep, Subtraction Technique, Viscera/pathology, Viscera/surgery
6.
IEEE Trans Med Imaging ; 33(9): 1845-59, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24835214

ABSTRACT

Incorporating prior knowledge into image segmentation algorithms has proven useful for obtaining more accurate and plausible results. Two important constraints, containment and exclusion of regions, have gained attention in recent years, mainly due to their descriptive power. In this paper, we augment the level set framework with the ability to handle these two intuitive geometric relationships, containment and exclusion, along with a distance constraint between the boundaries of multi-region objects. The level set framework's important property of automatically handling topological changes of evolving contours/surfaces enables us to segment spatially-recurring objects (e.g., multiple instances of multi-region cells in a large microscopy image) while satisfying the two aforementioned constraints. In addition, the level set approach gives us a very simple and natural way to compute the distance between contours/surfaces and impose constraints on it. The downside, however, is a local optimization framework in which the final segmentation solution depends on the initialization. In effect, we sacrifice optimality (a local instead of a global solution) in exchange for lower space complexity (less memory usage), faster runtime (especially for large microscopy images), and freedom from grid artifacts. Nevertheless, the results from validating our method on several biomedical applications showed the utility and advantages of this augmented level set framework, even with rough initializations distant from the desired boundaries. We also compared our framework with its counterpart methods in the discrete domain and reported the pros and cons of each of these methods in terms of metrication error and efficiency in memory usage and runtime.
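The containment, exclusion, and boundary-distance constraints have simple discrete analogues on pixel sets, which convey the intuition even though the paper formulates them continuously via level set functions:

```python
def contains(outer, inner):
    # Containment constraint: every pixel of `inner` lies inside `outer`.
    return inner <= outer  # set inclusion on pixel coordinates

def excludes(a, b):
    # Exclusion constraint: the two regions share no pixels.
    return not (a & b)

def min_boundary_gap(outer, inner):
    # Distance constraint: smallest Euclidean distance from the inner region to
    # the exterior of the outer region (a discrete stand-in for the level set
    # distance between the two boundaries).
    exterior = {(x, y) for x in range(-2, 13) for y in range(-2, 13)} - outer
    return min(((ix - ex) ** 2 + (iy - ey) ** 2) ** 0.5
               for (ix, iy) in inner for (ex, ey) in exterior)

# A toy multi-region cell: a nucleus strictly inside a cytoplasm region.
cytoplasm = {(x, y) for x in range(10) for y in range(10)}
nucleus = {(x, y) for x in range(3, 7) for y in range(3, 7)}
print(contains(cytoplasm, nucleus))          # True
print(excludes(nucleus, {(20, 20)}))         # True
print(min_boundary_gap(cytoplasm, nucleus))  # 4.0
```

In the continuous setting these predicates become penalty terms on signed distance functions, so a single energy can enforce them while the contours evolve.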


Subject(s)
Diagnostic Imaging/methods, Image Processing, Computer-Assisted/methods, Algorithms, Brain/anatomy & histology, Brain/blood supply, Heart/anatomy & histology, Humans, Lung/anatomy & histology, Lung/blood supply
7.
Article in English | MEDLINE | ID: mdl-24505699

ABSTRACT

We propose a method for targeted segmentation that identifies and delineates only those spatially-recurring objects that conform to specific geometrical, topological and appearance priors. By adopting a "tribes"-based, global genetic algorithm, we show how we incorporate such priors into a faithful objective function unconcerned about its convexity. We evaluated our framework on a variety of histology and microscopy images to segment potentially overlapping cells with complex topology. Our experiments confirmed the generality, reproducibility and improved accuracy of our approach compared to competing methods.
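A genetic algorithm of the kind used above optimizes a non-convex objective, with no convexity requirement, by evolving a population of candidate solutions. A generic real-valued sketch with tournament selection, blend crossover, and Gaussian mutation (not the paper's "tribes"-based variant):

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60, seed=0):
    """Minimal real-valued GA maximizing `fitness` over a 1-D interval."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            return max(rng.sample(pop, 3), key=fitness)  # tournament of 3
        nxt = [max(pop, key=fitness)]                    # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            child = (a + b) / 2                          # blend crossover
            child += rng.gauss(0, 0.05 * (hi - lo))      # Gaussian mutation
            nxt.append(min(max(child, lo), hi))          # clamp to bounds
        pop = nxt
    return max(pop, key=fitness)

# Maximize a bumpy 1-D objective standing in for a segmentation score;
# the true optimum is at x = 2.
best = genetic_search(lambda x: -(x - 2.0) ** 2, bounds=(-10, 10))
print(round(best, 2))
```

Because nothing here differentiates the objective, the same loop works for the paper's "faithful" but non-convex segmentation energies; the price is stochastic, approximate convergence.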


Subject(s)
Algorithms, Cell Tracking/methods, Cells, Cultured/cytology, Image Interpretation, Computer-Assisted/methods, Microscopy/methods, Pattern Recognition, Automated/methods, Subtraction Technique, Animals, Humans, Image Enhancement/methods