Results 1 - 20 of 32
1.
Clin Oral Investig ; 28(2): 133, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38315246

ABSTRACT

OBJECTIVE: The objective of this study was to compare the detection of caries in bitewing radiographs by multiple dentists with an automatic method and to evaluate the detection performance in the absence of a reliable ground truth. MATERIALS AND METHODS: Four experts and three novices marked caries using bounding boxes in 100 bitewing radiographs. The same dataset was processed by an automatic object detection deep learning method. All annotators were compared in terms of the number of errors and intersection over union (IoU) using pairwise comparisons, with respect to the consensus standard, and with respect to the annotator of the training dataset of the automatic method. RESULTS: The number of lesions marked by experts in 100 images varied between 241 and 425. Pairwise comparisons showed that the automatic method outperformed all dentists except the original annotator in the mean number of errors, while being among the best in terms of IoU. With respect to a consensus standard, the performance of the automatic method was best in terms of the number of errors and slightly below average in terms of IoU. Compared with the original annotator, the automatic method had the highest IoU and only one expert made fewer errors. CONCLUSIONS: The automatic method consistently outperformed novices and performed as well as highly experienced dentists. CLINICAL SIGNIFICANCE: The consensus in caries detection between experts is low. An automatic method based on deep learning can improve both the accuracy and repeatability of caries detection, providing a useful second opinion even for very experienced dentists.


Subject(s)
Dental Caries Susceptibility; Dental Caries; Humans; Radiography, Bitewing; Dental Caries/diagnostic imaging
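
As a concrete reference for the intersection over union (IoU) measure used in the comparisons above, here is a minimal Python sketch of IoU between two annotators' bounding boxes; the coordinates and the 0.5 matching threshold mentioned in the comment are illustrative assumptions, not values taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two annotators marking roughly the same lesion (illustrative coordinates).
print(iou((10, 20, 50, 60), (15, 25, 55, 65)))  # ~0.62; an IoU >= 0.5 would typically count as a match
```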
2.
Clin Oral Investig ; 27(12): 7463-7471, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37968358

ABSTRACT

OBJECTIVE: The aim of this work was to assemble a large annotated dataset of bitewing radiographs and to use convolutional neural networks to automate the detection of dental caries in bitewing radiographs with human-level performance. MATERIALS AND METHODS: A dataset of 3989 bitewing radiographs was created, and 7257 carious lesions were annotated using minimal bounding boxes. The dataset was then divided into three parts for the training (70%), validation (15%), and testing (15%) of multiple object detection convolutional neural networks (CNN). The tested CNN architectures included YOLOv5, Faster R-CNN, RetinaNet, and EfficientDet. To further improve the detection performance, model ensembling was used, and nested predictions were removed during post-processing. The models were compared in terms of the F1 score and average precision (AP) at various thresholds of the intersection over union (IoU). RESULTS: The twelve tested architectures had F1 scores of 0.72-0.76. Their performance was improved by ensembling, which increased the F1 score to 0.79-0.80. The best-performing ensemble detected caries with a precision of 0.83, recall of 0.77, F1 score of 0.80, and AP of 0.86 at IoU = 0.5. Small carious lesions were predicted with slightly lower accuracy (AP 0.82) than medium or large lesions (AP 0.88). CONCLUSIONS: The trained ensemble of object detection CNNs detected caries with satisfactory accuracy and performed at least as well as experienced dentists (see companion paper, Part II). The performance on small lesions was likely limited by inconsistencies in the training dataset. CLINICAL SIGNIFICANCE: Caries can be automatically detected using convolutional neural networks. However, detecting incipient carious lesions remains challenging.


Subject(s)
Deep Learning; Dental Caries; Humans; Dental Caries/diagnostic imaging; Dental Caries Susceptibility; Neural Networks, Computer
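
The abstract mentions that nested predictions were removed during post-processing but does not specify the rule; the sketch below implements one plausible rule (drop any box fully contained in a higher-scoring box) purely as an assumption. For reference, the reported precision of 0.83 and recall of 0.77 give F1 = 2·0.83·0.77/(0.83 + 0.77) ≈ 0.80, consistent with the stated F1 score.

```python
def is_nested(inner, outer):
    """True if box `inner` lies entirely within box `outer` (boxes as x1, y1, x2, y2)."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def drop_nested(boxes, scores):
    """Keep a detection only if it is not contained in a higher-scoring one (ties broken by index)."""
    keep = []
    for i, b in enumerate(boxes):
        nested = any((scores[j], j) > (scores[i], i) and is_nested(b, boxes[j])
                     for j in range(len(boxes)))
        if not nested:
            keep.append(i)
    return keep

# Illustrative detections: the small low-scoring box lies inside the first one and is removed.
print(drop_nested([(10, 10, 60, 60), (20, 20, 40, 40), (100, 100, 150, 150)], [0.9, 0.4, 0.8]))
```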
3.
Neurosurg Rev ; 46(1): 116, 2023 May 10.
Article in English | MEDLINE | ID: mdl-37162632

ABSTRACT

This study aims to develop a fully automated, imaging-protocol-independent system for pituitary adenoma segmentation from magnetic resonance imaging (MRI) scans that can work without user interaction, and to evaluate its accuracy and utility for clinical applications. We trained two independent artificial neural networks on MRI scans of 394 patients. The scans were acquired according to various imaging protocols over the course of 11 years on 1.5T and 3T MRI systems. The segmentation model assigned a class label to each input pixel (pituitary adenoma, internal carotid artery, normal pituitary gland, background). The slice segmentation model classified slices as clinically relevant (structures of interest in the slice) or irrelevant (anterior or posterior to the sella turcica). We used MRI data of another 99 patients to evaluate the performance of the model during training. We validated the model on a prospective cohort of 28 patients, achieving Dice coefficients of 0.910, 0.719, and 0.240 for the tumour, internal carotid artery, and normal gland labels, respectively. The slice selection model achieved 82.5% accuracy, 88.7% sensitivity, 76.7% specificity, and an AUC of 0.904. A human expert rated 71.4% of the segmentation results as accurate, 21.4% as slightly inaccurate, and 7.1% as coarsely inaccurate. Our model achieved good results, comparable with recent works of other authors, on the largest dataset to date and generalized well across various imaging protocols. We discuss future clinical applications and their considerations. Models and frameworks for clinical use have yet to be developed and evaluated.


Subject(s)
Adenoma; Pituitary Neoplasms; Humans; Pituitary Neoplasms/diagnostic imaging; Pituitary Neoplasms/surgery; Prospective Studies; Magnetic Resonance Imaging; Neural Networks, Computer; Adenoma/diagnostic imaging; Adenoma/surgery; Image Processing, Computer-Assisted/methods
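
For reference, the per-label Dice coefficients reported above can be computed from the predicted and ground-truth label maps as sketched below; the toy arrays and label coding are illustrative only.

```python
import numpy as np

def dice(pred, truth, label):
    """Dice coefficient for one class label between two integer label maps/volumes."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom > 0 else 1.0

# Illustrative toy label maps (0 = background, 1 = adenoma, 2 = carotid artery, 3 = normal gland).
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1
truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:4] = 1
print(dice(pred, truth, label=1))  # 0.8
```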
4.
Diagnostics (Basel) ; 12(12)2022 Dec 17.
Article in English | MEDLINE | ID: mdl-36553213

ABSTRACT

Primary aldosteronism (PA) is the most frequent cause of secondary hypertension. Early diagnosis of PA is essential to avoid the long-term negative effects of elevated aldosterone concentration on the cardiovascular and renal systems. In this work, we study the texture of the carotid artery vessel wall in longitudinal ultrasound images in order to automatically distinguish between PA and essential hypertension (EH). The texture is characterized using 140 Haralick and 10 wavelet features evaluated in a region of interest in the vessel wall, followed by an XGBoost classifier. Carotid ultrasound studies were carried out on 33 patients aged 42-72 years with PA, 52 patients with EH, and 33 normotensive controls. For the most clinically relevant task of distinguishing the PA and EH classes, we achieved a classification accuracy of 73%, as assessed by a leave-one-out procedure. This result is promising compared to the 57% prediction accuracy obtained using clinical characteristics alone or the 63% accuracy obtained using a combination of clinical characteristics and intima-media thickness (IMT) parameters. If the accuracy is improved and the method incorporated into standard clinical procedures, this could eventually improve the early diagnosis of PA and consequently the clinical outcome for these patients.
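
A minimal sketch of the evaluation setup described above, a leave-one-out loop around an XGBoost classifier on precomputed texture feature vectors; the Haralick/wavelet feature extraction itself is omitted, the random placeholder data and hyperparameters are assumptions, and the xgboost package is assumed to be installed.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# X: one row of 150 texture features (140 Haralick + 10 wavelet) per patient; y: 1 = PA, 0 = EH.
rng = np.random.default_rng(0)
X = rng.normal(size=(85, 150))      # placeholder for the real feature matrix
y = rng.integers(0, 2, size=85)     # placeholder labels

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = XGBClassifier(n_estimators=100, max_depth=3)
    clf.fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
print("leave-one-out accuracy:", correct / len(y))
```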

5.
Comput Biol Med ; 151(Pt A): 106171, 2022 12.
Article in English | MEDLINE | ID: mdl-36306582

ABSTRACT

In this work, we classify chemotherapeutic agents (topoisomerase inhibitors) based on their effect on U-2 OS cells. We use phase-contrast microscopy images, which are faster and easier to obtain than fluorescence images and support live cell imaging. We use a convolutional neural network (CNN) trained end-to-end directly on the input images, without requiring manual segmentations or any other auxiliary data. Our method can distinguish between the tested cytotoxic drugs with an accuracy of 98%, provided that their mechanisms of action differ, outperforming previous work. The results are even better when substance-specific concentrations are used. We show the benefit of sharing the extracted features over all classes (drugs). Finally, a 2D visualization of these features reveals clusters that correspond well to the known class labels, suggesting the possible use of our methodology in drug discovery applications for analyzing new, unseen drugs.


Subject(s)
Cell Culture Techniques; Neural Networks, Computer; Microscopy, Phase-Contrast
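
The abstract mentions a 2D visualization of the learned features without naming the embedding method; the sketch below assumes per-image CNN feature vectors have already been extracted and uses t-SNE purely as an illustrative choice.

```python
import numpy as np
from sklearn.manifold import TSNE

# `features`: one CNN feature vector per image (e.g. penultimate-layer activations); `labels`: drug class.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 256))   # placeholder for extracted CNN features
labels = rng.integers(0, 5, size=500)    # placeholder drug labels

embedding = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(features)

# `embedding` is (n_images, 2); plotting it coloured by `labels` should reveal per-drug clusters.
print(embedding.shape)
```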
6.
IEEE Trans Med Imaging ; 39(10): 3042-3052, 2020 10.
Article in English | MEDLINE | ID: mdl-32275587

ABSTRACT

The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, employed multiresolution schemes, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods successfully registered over 98% of all landmarks, and their mean landmark registration error (TRE) was 0.44% of the image diagonal. The challenge remains open to submissions, and all images are available for download.


Subject(s)
Algorithms; Histological Techniques
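
A small sketch of how a landmark registration error expressed as a fraction of the image diagonal (as in the 0.44% figure above) can be computed; the landmark coordinates and image size are illustrative.

```python
import numpy as np

def relative_tre(warped_landmarks, target_landmarks, image_shape):
    """Per-landmark registration error as a fraction of the image diagonal."""
    tre = np.linalg.norm(warped_landmarks - target_landmarks, axis=1)
    diagonal = np.linalg.norm(image_shape)
    return tre / diagonal

# Illustrative numbers: 3 landmarks in a 1000 x 2000 pixel image.
warped = np.array([[100.0, 200.0], [500.0, 900.0], [800.0, 1500.0]])
target = np.array([[103.0, 198.0], [505.0, 905.0], [801.0, 1499.0]])
rtre = relative_tre(warped, target, image_shape=(1000, 2000))
print(rtre.mean())  # mean relative TRE, cf. the reported 0.44% of the diagonal
```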
7.
IEEE Trans Pattern Anal Mach Intell ; 40(3): 755-761, 2018 03.
Article in English | MEDLINE | ID: mdl-28333621

ABSTRACT

We propose a novel approach to reconstructing curvilinear tree structures evolving over time, such as road networks in 2D aerial images or neural structures in 3D microscopy stacks acquired in vivo. To enforce temporal consistency, we simultaneously process all images in a sequence, as opposed to reconstructing structures of interest in each image independently. We formulate the problem as a Quadratic Mixed Integer Program and demonstrate the additional robustness that comes from using all available visual clues at once, instead of working frame by frame. Furthermore, when the linear structures undergo local changes over time, our approach automatically detects them.

8.
Comput Biol Med ; 87: 236-249, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28618336

ABSTRACT

In recent years, computed tomography (CT) has become a standard technique in cardiac imaging because it provides detailed information that may facilitate the diagnosis of conditions that interfere with correct heart function. However, CT-based cardiac diagnosis requires manual segmentation of the heart cavities, which is a difficult and time-consuming task. In this paper, we therefore propose a novel 2D technique to segment the endocardium and epicardium boundaries. The proposed method computes relevant information about the left ventricle and its adjacent structures using the Hermite transform. The novelty of the work is that this information is combined with active shape models and level sets to improve the segmentation. Our database consists of mid-third slices selected from 28 volumes manually segmented by expert physicians. The segmentation is assessed using the Dice coefficient and the Hausdorff distance. In addition, we introduce a novel metric, called the Ray Feature error, to evaluate our method. The results show that the proposed method accurately discriminates cardiac tissue and may therefore be a useful tool for supporting heart disease diagnosis and tailoring treatments.


Subject(s)
Heart Ventricles/pathology; Humans; Models, Biological; Tomography, X-Ray Computed/methods
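
One of the evaluation measures above, the Hausdorff distance between an automatic and a manual contour, can be sketched with SciPy as follows; the contour point sets are illustrative placeholders.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(contour_a, contour_b):
    """Symmetric Hausdorff distance between two point sets (N x 2 arrays of contour points)."""
    d_ab = directed_hausdorff(contour_a, contour_b)[0]
    d_ba = directed_hausdorff(contour_b, contour_a)[0]
    return max(d_ab, d_ba)

# Illustrative endocardial contours (automatic vs. manual), in pixels.
auto = np.array([[10.0, 0.0], [10.5, 5.0], [11.0, 10.0]])
manual = np.array([[10.0, 0.5], [10.0, 5.0], [10.0, 10.0]])
print(hausdorff(auto, manual))
```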
9.
IEEE Trans Pattern Anal Mach Intell ; 39(11): 2171-2185, 2017 11.
Article in English | MEDLINE | ID: mdl-28114003

ABSTRACT

We present an efficient matching method for generalized geometric graphs. Such graphs consist of vertices in space connected by curves and can represent many real-world structures, such as road networks in remote sensing or vessel networks in medical imaging. Graph matching can be used for very fast and possibly multimodal registration of images of these structures. We formulate the matching problem as a single-player game solved using Monte Carlo Tree Search, which automatically balances exploring new possible matches and extending existing matches. Our method can handle partial matches, topological differences, and geometrical distortion; it does not use appearance information and does not require an initial alignment. Moreover, our method is very efficient: it can match graphs with thousands of nodes, an order of magnitude more than the best competing method, and the matching takes only a few seconds.

10.
Cell Transplant ; 25(12): 2145-2156, 2016 12 13.
Article in English | MEDLINE | ID: mdl-27302978

ABSTRACT

Clinical islet transplantation programs rely on the capacities of individual centers to quantify isolated islets. Current computer-assisted methods require input from human operators. Here we describe two machine learning algorithms for islet quantification: the trainable islet algorithm (TIA) and the nontrainable purity algorithm (NPA). These algorithms automatically segment pancreatic islets and exocrine tissue on microscopic images in order to count individual islets and calculate islet volume and purity. References for islet counts and volumes were generated by the fully manual segmentation (FMS) method, which was validated against the internal DNA standard. References for islet purity were generated via the expert visual assessment (EVA) method, which was validated against the FMS method. The TIA is intended to automatically evaluate micrographs of isolated islets from future donors after being trained on micrographs from a limited number of past donors. Its training ability was first evaluated on 46 images from four donors. The pixel-to-pixel comparison, binary statistics, and islet DNA concentration indicated that the TIA was successfully trained, regardless of the color differences of the original images. Next, the TIA trained on the four donors was validated on an additional 36 images from nine independent donors. The TIA was fast (67 s/image), correlated very well with the FMS method (R² = 1.00 and 0.92 for islet volume and islet count, respectively), and had small relative errors (REs; 0.06 and 0.07 for islet volume and islet count, respectively). Validation of the NPA against the EVA method using 70 images from 12 donors showed that the NPA had a reasonable speed (69 s/image) and an acceptable RE (0.14), and correlated well with the EVA method (R² = 0.88). Our results demonstrate that fully automated analysis of clinical-grade micrographs of isolated pancreatic islets is feasible. The algorithms described herein will be freely available as a plugin for the Fiji platform.


Subject(s)
Image Processing, Computer-Assisted; Islets of Langerhans Transplantation; Islets of Langerhans/cytology; Algorithms; Animals; Automation; Humans; Machine Learning; Rats; Rats, Wistar
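
A sketch of the agreement statistics quoted above (R² and relative error between the automatic and reference islet counts); the exact definition of the relative error in the paper may differ, and the per-image counts below are illustrative.

```python
import numpy as np

def r_squared(reference, estimate):
    """Coefficient of determination of `estimate` against `reference`."""
    reference, estimate = np.asarray(reference, float), np.asarray(estimate, float)
    ss_res = np.sum((reference - estimate) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def relative_error(reference, estimate):
    """Mean absolute error relative to the mean reference value (one plausible RE definition)."""
    reference, estimate = np.asarray(reference, float), np.asarray(estimate, float)
    return np.mean(np.abs(estimate - reference)) / reference.mean()

# Illustrative per-image islet counts: manual reference (FMS) vs. automatic (TIA).
fms = np.array([120, 80, 200, 150, 95])
tia = np.array([115, 85, 190, 160, 90])
print(r_squared(fms, tia), relative_error(fms, tia))
```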
11.
Comput Biol Med ; 71: 57-66, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26894595

ABSTRACT

This paper presents a fully automated method for the identification of bone marrow infiltration in the femurs of patients with multiple myeloma in low-dose CT. We automatically find the femurs and the bone marrow within them. In the next step, we create a probabilistic, spatially dependent density model of normal tissue. At test time, we detect unexpectedly high-density voxels, which may be related to bone marrow infiltration, as outliers to this model. Based on a set of global, aggregated features representing all detections from one femur, we classify subjects as either healthy or not. The method was validated on a dataset of 127 subjects with ground truth created from a consensus of two expert radiologists, obtaining an AUC of 0.996 for the task of distinguishing healthy controls from patients with bone marrow infiltration. To the best of our knowledge, no other automatic image-based method for this task has been published before.


Subject(s)
Bone Marrow Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted; Machine Learning; Multiple Myeloma/diagnostic imaging; Tomography, X-Ray Computed/methods; Aged; Female; Humans; Male; Middle Aged; Neoplasm Metastasis
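
A toy sketch of the two stages described above: flagging voxels whose density is unexpectedly high under the normal-tissue model, and scoring subjects with an AUC; the z-score threshold, the aggregation into an outlier fraction, and the example numbers are all assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def outlier_fraction(densities, model_mean, model_std, z_threshold=3.0):
    """Fraction of femur voxels whose density is unexpectedly high under the normal-tissue model."""
    z = (densities - model_mean) / model_std
    return float(np.mean(z > z_threshold))

# Illustrative per-subject scores (aggregated over each femur) and ground-truth labels (1 = infiltrated).
scores = np.array([0.01, 0.02, 0.15, 0.20, 0.01, 0.30])
labels = np.array([0, 0, 1, 1, 0, 1])
print(roc_auc_score(labels, scores))  # cf. the reported AUC of 0.996 on 127 subjects
```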
12.
IEEE Trans Pattern Anal Mach Intell ; 37(3): 625-38, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26353266

ABSTRACT

We present a new approach for matching sets of branching curvilinear structures that form graphs embedded in R^2 or R^3 and may be subject to deformations. Unlike earlier methods, ours does not rely on local appearance similarity, nor does it require a good initial alignment. Furthermore, it can cope with non-linear deformations, topological differences, and partial graphs. To handle arbitrary non-linear deformations, we use Gaussian process regression to represent the geometrical mapping relating the two graphs. In the absence of appearance information, we iteratively establish correspondences between points, update the mapping accordingly, and use it to estimate where to find the most likely correspondences to be used in the next step. To make the computation tractable for large graphs, the set of new potential matches considered at each iteration is not selected at random, as in many RANSAC-based algorithms. Instead, we introduce a so-called Active Testing Search strategy that performs a priority search to favor the most likely matches and speed up the process. We demonstrate the effectiveness of our approach first on synthetic cases and then on angiography data, retinal fundus images, and microscopy image stacks acquired at very different resolutions.
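
A minimal sketch of representing the geometric mapping between two graphs with Gaussian process regression, as described above; the RBF kernel, the scikit-learn implementation, and the point coordinates are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Current correspondences: source branch points and their matched target positions (illustrative).
source = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
target = source + np.array([0.1, 0.2]) + 0.05 * np.sin(source)  # mildly non-linear deformation

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(source, target)  # multi-output regression: learns the 2D mapping source -> target

# Predict where an unmatched source point is expected to land in the target graph.
print(gp.predict(np.array([[0.25, 0.75]])))
```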

13.
Med Image Anal ; 18(1): 22-35, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24080528

ABSTRACT

Accurate detection of liver lesions is of great importance in hepatic surgery planning. Recent studies have shown that the detection rate of liver lesions is significantly higher in gadoxetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-enhanced MRI) than in contrast-enhanced portal-phase computed tomography (CT); however, the latter remains essential because of its high specificity, good performance in estimating liver volumes and better vessel visibility. To characterize liver lesions using both the above image modalities, we propose a multimodal nonrigid registration framework using organ-focused mutual information (OF-MI). This proposal tries to improve mutual information (MI) based registration by adding spatial information, benefiting from the availability of expert liver segmentation in clinical protocols. The incorporation of an additional information channel containing liver segmentation information was studied. A dataset of real clinical images and simulated images was used in the validation process. A Gd-EOB-DTPA-enhanced MRI simulation framework is presented. To evaluate results, warping index errors were calculated for the simulated data, and landmark-based and surface-based errors were calculated for the real data. An improvement of the registration accuracy for OF-MI as compared with MI was found for both simulated and real datasets. Statistical significance of the difference was tested and confirmed in the simulated dataset (p<0.01).


Subject(s)
Gadolinium DTPA; Liver Neoplasms/diagnosis; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Tomography, X-Ray Computed/methods; Algorithms; Contrast Media; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Sensitivity and Specificity
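
For reference, the plain mutual information criterion that OF-MI extends can be computed from a joint intensity histogram as sketched below; the organ-focused spatial weighting derived from the liver segmentation is not reproduced, and the bin count and surrogate images are illustrative.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two co-registered images, from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
ct = rng.normal(size=(64, 64))
mri = 0.7 * ct + 0.3 * rng.normal(size=(64, 64))  # correlated surrogate "MRI"
print(mutual_information(ct, mri))
```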
14.
Comput Biol Med ; 43(12): 2036-45, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24290919

ABSTRACT

We present a method for automatic surgical tool localization in 3D ultrasound images based on line filtering, voxel classification, and model fitting. This could assist biopsy needle or micro-electrode insertion, or a robotic system performing such insertion. The line-filtering method is first used to enhance the contrast of the 3D ultrasound image, and a classifier is then used to separate the tool voxels in order to reduce the number of outliers. The last step is Random Sample Consensus (RANSAC) model fitting. Experimental results on several different polyvinyl alcohol (PVA) cryogel data sets demonstrate that the failure rate of the proposed method is at least 86% lower than that of the model-fitting RANSAC algorithm alone, with axis accuracy better than 1 mm, at the expense of only a modest increase in computational effort. These results show that the system could be useful for clinical applications.


Subject(s)
Imaging, Three-Dimensional/methods; Ultrasonography/methods; Humans
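
A minimal sketch of the final model-fitting stage, RANSAC fitting of a straight 3D line to candidate tool voxels; the inlier distance, iteration count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def ransac_line_3d(points, n_iter=500, inlier_dist=1.0, seed=0):
    """Fit a 3D line to candidate tool voxels with RANSAC; returns (point_on_line, direction, inliers)."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        a, b = points[rng.choice(len(points), size=2, replace=False)]
        direction = b - a
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        direction = direction / norm
        # Distance of every point to the candidate line through `a` with direction `direction`.
        diff = points - a
        dist = np.linalg.norm(diff - np.outer(diff @ direction, direction), axis=1)
        inliers = dist < inlier_dist
        if inliers.sum() > best[2].sum():
            best = (a, direction, inliers)
    return best

# Illustrative candidate voxels: a noisy line (the tool) plus scattered outliers.
rng = np.random.default_rng(1)
t = np.linspace(0, 30, 60)[:, None]
tool = np.array([1.0, 0.2, 0.1]) * t + rng.normal(scale=0.3, size=(60, 3))
clutter = rng.uniform(0, 30, size=(40, 3))
point_on_line, direction, inliers = ransac_line_3d(np.vstack([tool, clutter]))
print(inliers.sum(), "inliers; direction ~", np.round(direction, 2))
```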
15.
Inf Process Med Imaging ; 23: 572-83, 2013.
Article in English | MEDLINE | ID: mdl-24684000

ABSTRACT

We present a general approach for solving the point-cloud matching problem for the case of mildly nonlinear transformations. Our method quickly finds a coarse approximation of the solution by exploring a reduced set of partial matches using an approach we refer to as Active Testing Search (ATS). We apply the method to the registration of graph structures by branching-point matching. It is based solely on the geometric position of the points; no additional information or knowledge of an initial alignment is used. In the second stage, we use dynamic programming to refine the solution. We tested our algorithm on angiography, retinal fundus, and neuronal data gathered using electron and light microscopy. We show that our method solves cases not solved by most approaches and is faster than the remaining ones.


Subject(s)
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Bayes Theorem; Humans; Information Storage and Retrieval/methods; Models, Biological; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity
16.
Med Phys ; 39(2): 1006-15, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22320810

ABSTRACT

PURPOSE: Deformable registration generally relies on the assumption that the sought spatial transformation is smooth. Yet, breathing motion involves sliding of the lung with respect to the chest wall, causing a discontinuity in the motion field, and the smoothness assumption can lead to poor matching accuracy. In response, alternative registration methods have been proposed, several of which rely on prior segmentations. We propose an original method for automatically extracting a particular segmentation, called a motion mask, from a CT image of the thorax. METHODS: The motion mask separates moving from less-moving regions, conveniently allowing simultaneous estimation of their motion while providing an interface where sliding occurs. The sought segmentation is subanatomical and based on physiological considerations rather than organ boundaries. We therefore first extract clear anatomical features from the image, with respect to which the mask is defined. Level sets are then used to obtain smooth surfaces interpolating these features. The resulting procedure comes down to a monitored level set segmentation of binary label images. The method was applied to sixteen inhale-exhale image pairs. To illustrate the suitability of the motion masks, they were used during deformable registration of the thorax. RESULTS: For all patients, the obtained motion masks complied with the physiological requirements and were consistent with patient anatomy between inhale and exhale. Registration using the motion mask resulted in higher matching accuracy for all patients, and the improvement was statistically significant. Registration performance was comparable to that obtained using lung masks when considering the entire lung region, but the use of motion masks led to significantly better matching near the diaphragm and mediastinum, for the bony anatomy, and for the trachea. The use of the masks was shown to facilitate the registration, allowing the complexity of the spatial transformation to be reduced considerably while maintaining matching accuracy. CONCLUSIONS: We proposed an automated segmentation method for obtaining motion masks, capable of facilitating deformable registration of the thorax. The use of motion masks during registration leads to matching accuracies comparable to the use of lung masks for the lung region, but motion masks are more suitable when registering the entire thorax.


Subject(s)
Artifacts; Pattern Recognition, Automated/methods; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Thoracic/methods; Subtraction Technique; Tomography, X-Ray Computed/methods; Algorithms; Humans; Reproducibility of Results; Sensitivity and Specificity
17.
Comput Biol Med ; 41(10): 960-70, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21890126

ABSTRACT

Colposcopy is a well-established method to detect and diagnose intraepithelial lesions and uterine cervical cancer at early stages. During the exam, color and texture changes are induced by the application of a contrast agent (e.g., 3-5% acetic acid solution or iodine). Our aim is to densely quantify the change in the acetowhite decay level for a sequence of images captured during a colposcopy exam, helping the physician's diagnosis by providing new tools that reduce subjectivity and improve reproducibility. As the change in acetowhite decay level must be calculated from the same tissue point in all images, we present an elastic image registration scheme able to robustly compensate for patient, camera, and tissue movement in cervical images. The image registration is based on a novel multi-feature entropy similarity criterion. Temporal features are then extracted using the color properties of the aligned image sequence and a dual-compartment tissue model of the cervix. An example of the use of the temporal features for pixel-wise classification is presented, and the results are compared against ground-truth histopathological annotations.


Subject(s)
Colposcopy/methods; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Acetic Acid/chemistry; Adult; Algorithms; Cervix Uteri/pathology; Databases, Factual; Female; Humans; Middle Aged; Reproducibility of Results; Uterine Cervical Neoplasms/diagnosis
18.
Med Phys ; 38(1): 166-78, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21361185

ABSTRACT

PURPOSE: Four-dimensional computed tomography (4D CT) can provide patient-specific motion information for radiotherapy planning and delivery. Motion estimation in 4D CT is challenging due to the reduced image quality and the presence of artifacts. We aim to improve the robustness of deformable registration applied to respiratory-correlated imaging of the lungs by using a global problem formulation and pursuing a restrictive parametrization of the spatiotemporal deformation model. METHODS: A spatial transformation based on free-form deformations was extended to the temporal domain by explicitly modeling the trajectory using a cyclic temporal model based on B-splines. A global registration criterion allowed the entire image sequence to be considered simultaneously and enforced the temporal coherence of the deformation throughout the respiratory cycle. To ensure a parametrization capable of capturing the dynamics of respiratory motion, a prestudy was performed on the temporal dimension separately. The temporal parameters were tuned by fitting them to diaphragm motion data acquired for a large patient group. Suitable properties were retained and applied to spatiotemporal registration of 4D CT data. Registration results were validated using large sets of landmarks and compared to consecutive spatial registrations. To illustrate the benefit of the spatiotemporal approach, we also assessed the performance in the presence of motion-induced artifacts. RESULTS: Cubic B-splines gave fitting results better than or similar to those of lower orders and were selected because of their inherently stronger regularization. The fitting and registration errors increased gradually with the temporal control point spacing, representing a trade-off between achievable accuracy and sensitivity to noise and artifacts. A piecewise smooth trajectory model, allowing a discontinuous change of speed at end-inhale, was found most suitable to account for the sudden changes of motion at this breathing phase. The spatiotemporal modeling allowed the number of parameters to be reduced by 45% while maintaining registration accuracy within 0.1 mm. The approach also reduced the sensitivity to artifacts. CONCLUSIONS: Spatiotemporal registration can provide accurate motion estimation for 4D CT and improves robustness to artifacts.


Subject(s)
Four-Dimensional Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Lung/physiology; Movement; Respiration; Artifacts; Diaphragm/diagnostic imaging; Diaphragm/physiology; Models, Biological; Time Factors
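
A small sketch of the cyclic temporal model idea, fitting a periodic cubic spline to diaphragm displacement over one breathing cycle; the sample values are invented, and the paper's piecewise model with a speed discontinuity at end-inhale is not reproduced here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Diaphragm displacement (mm) at phases of one 4D CT breathing cycle (illustrative values).
phase = np.linspace(0.0, 1.0, 11)  # 0 and 1 correspond to the same breathing phase
displacement = np.array([0.0, 3.0, 7.0, 11.0, 13.0, 14.0, 12.0, 8.0, 4.0, 1.0, 0.0])

# Periodic boundary conditions enforce temporal coherence across the cycle (y[0] must equal y[-1]).
trajectory = CubicSpline(phase, displacement, bc_type="periodic")

print(trajectory(0.37))                   # displacement at an arbitrary intermediate phase
print(trajectory(0.0) - trajectory(1.0))  # ~0: the model is cyclic by construction
```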
19.
IEEE Trans Biomed Eng ; 57(8): 1907-16, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20483680

ABSTRACT

Ultrasound guidance is used for many surgical interventions such as biopsy and electrode insertion. We present a method to localize a thin surgical tool such as a biopsy needle or a microelectrode in a 3-D ultrasound image. The proposed method starts with thresholding and model fitting using random sample consensus for robust localization of the axis. Subsequent local optimization refines its position. Two different tool image models are presented: one is simple and fast and the second uses learned a priori information about the tool's voxel intensities and the background. Finally, the tip of the tool is localized by finding an intensity drop along the axis. The simulation study shows that our algorithm can localize the tool at nearly real-time speed, even using a MATLAB implementation, with accuracy better than 1 mm. In an experimental comparison with several alternative localization methods, our method appears to be the fastest and the most robust one. We also show the results on real 3-D ultrasound data from a PVA cryogel phantom, turkey breast, and breast biopsy.


Subject(s)
Algorithms; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Animals; Breast; Computer Simulation; Cryogels; Electrodes; Female; Humans; Hydrogels; Meat; Models, Statistical; Needles; Phantoms, Imaging; Polyvinyl Alcohol; Turkeys
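
The tip-finding step described above, locating an intensity drop along the estimated axis, might look like the following sketch; the step size, drop ratio, and synthetic volume are illustrative assumptions.

```python
import numpy as np

def localize_tip(volume, point_on_axis, direction, step=0.5, n_steps=200, drop_ratio=0.5):
    """Walk along the estimated tool axis and return the position where intensity drops off (the tip)."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    positions = point_on_axis + np.arange(n_steps)[:, None] * step * direction
    idx = np.clip(np.round(positions).astype(int), 0, np.array(volume.shape) - 1)
    profile = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    reference = profile[:20].mean()  # bright tool intensity near the entry point
    below = np.nonzero(profile < drop_ratio * reference)[0]
    return positions[below[0]] if below.size else positions[-1]

# Illustrative volume: a bright synthetic "needle" along the x-axis ending at x = 40.
vol = np.zeros((64, 64, 64))
vol[10:40, 32, 32] = 1.0
print(localize_tip(vol, point_on_axis=np.array([10.0, 32.0, 32.0]), direction=[1.0, 0.0, 0.0]))
```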
20.
IEEE Trans Image Process ; 19(1): 64-73, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19709978

ABSTRACT

We address the problem of estimating the uncertainty of pixel-based image registration algorithms, given just the two images to be registered, for cases when no ground truth data is available. Our novel method uses bootstrap resampling. It is very general and applicable to almost any registration method based on minimizing a pixel-based similarity criterion; we demonstrate it using the SSD, SAD, correlation, and mutual information criteria. We show experimentally that the bootstrap method provides better estimates of the registration accuracy than the state-of-the-art Cramér-Rao bound method. Additionally, we also evaluate a fast registration accuracy estimation (FRAE) method, which is based on quadratic sensitivity analysis ideas and has negligible computational overhead. FRAE mostly works better than the Cramér-Rao bound method but is outperformed by the bootstrap method.
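
The gist of the bootstrap estimator can be illustrated on a toy 1D translation registration with an SSD criterion, as sketched below; the registration model, the number of resamples, and the synthetic signals are deliberate simplifications, not the paper's setup.

```python
import numpy as np

def register_shift(fixed, moving, sample_idx, shifts=np.arange(-5, 6)):
    """Estimate an integer 1D shift by minimizing SSD over a bootstrap sample of pixel positions."""
    best_shift, best_ssd = 0, np.inf
    for s in shifts:
        idx = np.clip(sample_idx + s, 0, len(moving) - 1)
        ssd = np.sum((fixed[sample_idx] - moving[idx]) ** 2)
        if ssd < best_ssd:
            best_shift, best_ssd = s, ssd
    return best_shift

rng = np.random.default_rng(0)
signal = np.convolve(rng.normal(size=220), np.ones(9) / 9, mode="same")
fixed, moving = signal[10:210], signal[7:207]  # true shift of 3 samples
fixed = fixed + 0.05 * rng.normal(size=200)    # add noise to the fixed image only

# Bootstrap: re-register on resampled pixel positions and report the spread of the estimates.
all_idx = np.arange(20, 180)
estimates = [register_shift(fixed, moving, rng.choice(all_idx, size=all_idx.size, replace=True))
             for _ in range(200)]
print(np.mean(estimates), np.std(estimates))   # registration estimate and its uncertainty
```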
