1.
J Med Imaging (Bellingham) ; 10(Suppl 1): S11913, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37223324

ABSTRACT

Purpose: Portable magnetic resonance imaging (pMRI) has the potential to rapidly acquire images at the patient's bedside, improving access in locations lacking MRI devices. The scanner under consideration has a magnetic field strength of 0.064 T, so image-processing algorithms are required to improve image quality. Our study evaluated pMRI images produced using a deep learning (DL)-based advanced reconstruction scheme, which reduces image blurring and noise, to determine whether diagnostic performance was similar to that of images acquired at 1.5 T. Approach: Six radiologists viewed 90 brain MRI cases (30 acute ischemic stroke (AIS), 30 hemorrhage, 30 no lesion) with T1, T2, and fluid-attenuated inversion recovery sequences, once using standard-of-care (SOC) images (1.5 T) and once using pMRI DL-based advanced reconstruction images. Observers provided a diagnosis and decision confidence. Time to review each image was recorded. Results: Receiver operating characteristic area under the curve revealed no significant overall difference (p=0.0636) between pMRI and SOC images. Examining each abnormality, for AIS there was a significant difference (p=0.0042), with SOC better than pMRI; for hemorrhage, there was no significant difference (p=0.1950). There was no significant difference in viewing time for pMRI versus SOC (p=0.0766) or by abnormality (p=0.3601). Conclusions: The DL-based reconstruction scheme to improve pMRI was successful for hemorrhage, but for AIS the scheme could still be improved. For neurocritical care, especially in remote and/or resource-poor locations, pMRI has significant clinical utility, although radiologists should be aware of the limitations of low-field MRI devices in overall quality and take them into account when diagnosing. As an initial triage tool to aid the decision of whether to transport or keep patients on site, pMRI images likely provide enough information.
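The study's primary endpoint is a receiver operating characteristic (ROC) area under the curve computed from observer confidence ratings. As an illustration of the metric only (the ratings below are invented, not the study's data), AUC equals the Mann-Whitney probability that an abnormal case receives a higher confidence score than a normal one:

```python
# Hedged sketch: ROC AUC from reader confidence ratings via the
# Mann-Whitney statistic. Ratings are illustrative, not the study's data.

def roc_auc(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie) over all pairs."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical 1-5 confidence on abnormal vs. no-lesion cases.
abnormal = [5, 4, 4, 3, 5]
no_lesion = [1, 2, 3, 2, 1]
print(roc_auc(abnormal, no_lesion))  # → 0.98
```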

2.
Med Image Anal ; 81: 102538, 2022 10.
Article in English | MEDLINE | ID: mdl-35926336

ABSTRACT

While enabling accelerated acquisition and improved reconstruction accuracy, current deep MRI reconstruction networks are typically supervised, require fully sampled data, and are limited to Cartesian sampling patterns. These factors limit their practical adoption, as fully sampled MRI is prohibitively time-consuming to acquire clinically. Further, non-Cartesian sampling patterns are particularly desirable, as they are more amenable to acceleration and show improved motion robustness. To this end, we present DDSS, a fully self-supervised approach for accelerated non-Cartesian MRI reconstruction that leverages self-supervision in both the k-space and image domains. In training, the undersampled data are split into disjoint k-space domain partitions. For the k-space self-supervision, we train a network to reconstruct the input undersampled data from both the disjoint partitions and from itself. For the image-level self-supervision, we enforce appearance consistency between reconstructions obtained from the original undersampled data and from the two partitions. Experimental results on our simulated multi-coil non-Cartesian MRI dataset demonstrate that DDSS generates high-quality reconstructions that approach the accuracy of fully supervised reconstruction, outperforming previous baseline methods. Finally, DDSS is shown to scale to highly challenging real-world clinical MRI reconstruction on data acquired with a portable low-field (0.064 T) MRI scanner, where no data are available for supervised training, demonstrating improved image quality compared with traditional reconstruction, as determined by a radiologist study.
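The k-space half of the self-supervision can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the `reconstruct` placeholder stands in for the trained network, and the dictionary-based k-space samples are invented for the example.

```python
# Sketch of the disjoint k-space partitioning used for self-supervision:
# sampled locations are split into two disjoint halves, each half feeds a
# (placeholder) reconstructor, and the loss compares its output against
# the withheld samples. All names and values here are assumptions.
import random

def split_disjoint(sampled_indices, seed=0):
    rng = random.Random(seed)
    idx = list(sampled_indices)
    rng.shuffle(idx)
    half = len(idx) // 2
    return set(idx[:half]), set(idx[half:])

def masked(kspace, keep):
    return {k: v for k, v in kspace.items() if k in keep}

def reconstruct(sub):
    # Stand-in for the trained network; returns its input unchanged.
    return sub

# Toy undersampled k-space: location -> complex sample.
kspace = {i: complex(i, -i) for i in range(8)}
part_a, part_b = split_disjoint(kspace)
assert part_a.isdisjoint(part_b)
assert part_a | part_b == set(kspace)

# k-space loss: how well the reconstruction from partition A predicts
# the samples held out in partition B.
loss_a = sum(abs(reconstruct(masked(kspace, part_a)).get(k, 0) - v)
             for k, v in masked(kspace, part_b).items())
```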


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Motion , Supervised Machine Learning
3.
Sci Rep ; 12(1): 67, 2022 01 07.
Article in English | MEDLINE | ID: mdl-34996970

ABSTRACT

Neuroimaging is crucial for assessing mass effect in brain-injured patients. Transport to an imaging suite, however, is challenging for critically ill patients. We evaluated the use of a low magnetic field, portable MRI (pMRI) for assessing midline shift (MLS). In this observational study, 0.064 T pMRI exams were performed on stroke patients admitted to the neuroscience intensive care unit at Yale New Haven Hospital. Dichotomous (present or absent) and continuous MLS measurements were obtained on pMRI exams and locally available and accessible standard-of-care imaging exams (CT or MRI). We evaluated the agreement between pMRI and standard-of-care measurements. Additionally, we assessed the relationship between pMRI-based MLS and functional outcome (modified Rankin Scale). A total of 102 patients were included in the final study (48 ischemic stroke; 54 intracranial hemorrhage). There was significant concordance between pMRI and standard-of-care measurements (dichotomous, κ = 0.87; continuous, ICC = 0.94). Low-field pMRI identified MLS with a sensitivity of 0.93 and specificity of 0.96. Moreover, pMRI MLS assessments predicted poor clinical outcome at discharge (dichotomous: adjusted OR 7.98, 95% CI 2.07-40.04, p = 0.005; continuous: adjusted OR 1.59, 95% CI 1.11-2.49, p = 0.021). Low-field pMRI may serve as a valuable bedside tool for detecting mass effect.
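The dichotomous agreement statistic reported above (κ) is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with invented paired present/absent calls (not the study's data):

```python
# Hedged sketch: Cohen's kappa for paired binary midline-shift calls.
# The pMRI / standard-of-care labels below are made up for illustration.

def cohens_kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_yes = (sum(a) / n) * (sum(b) / n)                     # chance both "present"
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)              # chance both "absent"
    p_exp = p_yes + p_no
    return (p_obs - p_exp) / (1 - p_exp)

pmri = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
soc  = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
print(round(cohens_kappa(pmri, soc), 3))  # → 0.783
```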


Subject(s)
Brain/diagnostic imaging , Magnetic Resonance Imaging , Point-of-Care Systems , Point-of-Care Testing , Stroke/diagnostic imaging , Aged , Connecticut , Female , Humans , Intensive Care Units , Male , Middle Aged , Predictive Value of Tests , Prognosis , Prospective Studies , Reproducibility of Results , Stroke/therapy
4.
Med Image Comput Comput Assist Interv ; 13436: 66-77, 2022 Sep.
Article in English | MEDLINE | ID: mdl-37576451

ABSTRACT

Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity-relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task with all methods validated over a wide range of deformation regularization strengths.

5.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 372-80, 2014.
Article in English | MEDLINE | ID: mdl-25333140

ABSTRACT

Patient-specific orthopedic knee surgery planning requires precisely segmenting multiple knee bones (femur, tibia, fibula, and patella) from 3D CT images of knee joints with severe pathologies. In this work, we propose a fully automated, highly precise, and computationally efficient segmentation approach for multiple bones. First, each bone is initially segmented using a model-based marginal space learning framework for pose estimation, followed by non-rigid boundary deformation. To recover shape details, we then refine the bone segmentation using a graph cut that incorporates shape priors derived from the initial segmentation. Finally, we remove overlap between neighboring bones using a multi-layer graph partition. In experiments, we achieve simultaneous segmentation of the femur, tibia, patella, and fibula with an overall accuracy of less than 1 mm surface-to-surface error, in less than 90 s per scan, on hundreds of 3D CT scans with pathological knee joints.
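The final step resolves voxels claimed by two or more bones. A full multi-layer graph partition is beyond a sketch; shown instead is the simplest consistent stand-in rule (highest per-bone probability wins), under made-up probabilities, to illustrate what overlap removal must produce:

```python
# Simplified stand-in for overlap removal between neighboring bones:
# each contested voxel is assigned to the bone with the highest
# probability. Data and the argmax rule are illustrative assumptions,
# not the paper's multi-layer graph partition.

def resolve_overlap(claims):
    """claims: voxel -> {bone_name: probability}. Returns voxel -> bone."""
    return {voxel: max(probs, key=probs.get) for voxel, probs in claims.items()}

claims = {
    (10, 4, 7): {"femur": 0.9, "patella": 0.3},
    (11, 4, 7): {"femur": 0.4, "patella": 0.8},
}
print(resolve_overlap(claims))
```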


Subject(s)
Bone and Bones/diagnostic imaging , Information Storage and Retrieval/methods , Knee Joint/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Artificial Intelligence , Bone and Bones/surgery , Humans , Imaging, Three-Dimensional/methods , Knee Joint/surgery , Preoperative Care/methods , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
6.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 804-11, 2014.
Article in English | MEDLINE | ID: mdl-25333193

ABSTRACT

The diversity in appearance of diseased lung tissue makes automatic segmentation of lungs from CT with severe pathologies challenging. To overcome this challenge, we rely on contextual constraints from neighboring anatomies to detect and segment lung tissue across a variety of pathologies. We propose an algorithm that combines statistical learning with these anatomical constraints to seek a segmentation of the lung consistent with adjacent structures, such as the heart, liver, spleen, and ribs. We demonstrate that our algorithm reduces the number of failed detections and increases the accuracy of the segmentation on unseen test cases with severe pathologies.


Subject(s)
Anatomic Landmarks/diagnostic imaging , Imaging, Three-Dimensional/methods , Lung Diseases/diagnostic imaging , Lung/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
7.
IEEE Trans Med Imaging ; 33(5): 1054-70, 2014 May.
Article in English | MEDLINE | ID: mdl-24770911

ABSTRACT

Routine ultrasound exams in the second and third trimesters of pregnancy involve manually measuring fetal head and brain structures in 2-D scans. The procedure requires a sonographer to find the standardized visualization planes with a probe and manually place measurement calipers on the structures of interest. The process is tedious, time consuming, and introduces user variability into the measurements. This paper proposes an automatic fetal head and brain (AFHB) system for automatically measuring anatomical structures from 3-D ultrasound volumes. The system searches the 3-D volume in a hierarchy of resolutions and by focusing on regions that are likely to contain the measured anatomy. The output is a standardized visualization of the plane with correct orientation and centering, as well as the biometric measurement of the anatomy. The system is based on a novel framework for detecting multiple structures in 3-D volumes. Since a joint model is difficult to obtain in most practical situations, the structures are detected in a sequence, one by one. The detection relies on sequential estimation techniques, frequently applied to visual tracking. The interdependence of structure poses and strong prior information embedded in our domain yields faster and more accurate results than detecting the objects individually. The posterior distribution of the structure pose is approximated at each step by sequential Monte Carlo. The samples are propagated within the sequence across multiple structures and hierarchical levels. The probabilistic model helps solve many challenges present in ultrasound images of the fetus, such as speckle noise, signal drop-out, shadows caused by bones, and appearance variations caused by differences in gestational age. This is made possible by discriminative learning on an extensive database of scans comprising more than two thousand volumes and more than thirteen thousand annotations. The average difference between ground truth and automatic measurements is below 2 mm, with a running time of 6.9 s (GPU) or 14.7 s (CPU). The accuracy of the AFHB system is within inter-user variability and the running time is fast, which meets the requirements for clinical use.
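The sequential Monte Carlo step above approximates a posterior over a structure's pose with weighted samples. A toy 1-D sketch (the Gaussian observation model and the "true" pose of 4.2 are assumptions invented for the example, not the paper's model):

```python
# Hedged sketch of sequential Monte Carlo pose estimation in 1-D:
# particles sampled from a prior are weighted by an observation
# likelihood, and the posterior mean is the weighted particle average.
import math
import random

def smc_estimate(likelihood, n=2000, lo=0.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    particles = [rng.uniform(lo, hi) for _ in range(n)]   # uniform prior
    weights = [likelihood(p) for p in particles]
    total = sum(weights)
    # Posterior mean approximated by the weighted particle average.
    return sum(p * w for p, w in zip(particles, weights)) / total

def likelihood(x):
    # Assumed Gaussian observation model around a "true" pose of 4.2.
    return math.exp(-0.5 * ((x - 4.2) / 0.5) ** 2)

est = smc_estimate(likelihood)
print(est)
```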


Subject(s)
Fetus/anatomy & histology , Head/anatomy & histology , Imaging, Three-Dimensional/methods , Ultrasonography, Prenatal/methods , Algorithms , Cephalometry , Female , Humans , Pregnancy
8.
Med Image Comput Comput Assist Interv ; 16(Pt 3): 235-42, 2013.
Article in English | MEDLINE | ID: mdl-24505766

ABSTRACT

Automatic segmentation techniques, despite demonstrating excellent overall accuracy, can often produce inaccuracies in local regions. As a result, correcting segmentations remains an important task that is often laborious, especially when done manually for 3D datasets. This work presents a powerful tool called Intelligent Learning-Based Editor of Segmentations (IntellEditS) that minimizes user effort and further improves segmentation accuracy. The tool partners interactive learning with an energy-minimization approach to editing. Based on interactive user input, a discriminative classifier is trained and applied to the edited 3D region to produce soft voxel labeling. The labels are integrated into a novel energy functional along with the existing segmentation and image data. Unlike the state of the art, IntellEditS is designed to correct segmentation results represented not only as masks but also as meshes. In addition, IntellEditS accepts intuitive boundary-based user interactions. The versatility and performance of IntellEditS are demonstrated on both MRI and CT datasets consisting of varied anatomical structures and resolutions.


Subject(s)
Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Software , Tomography, X-Ray Computed/methods , Documentation/methods , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
9.
Med Image Comput Comput Assist Interv ; 16(Pt 3): 243-50, 2013.
Article in English | MEDLINE | ID: mdl-24505767

ABSTRACT

This paper proposes a fully automatic approach for computing the Nuchal Translucency (NT) measurement in ultrasound scans of the mid-sagittal plane of a fetal head. This is an improvement upon current NT measurement methods, which require manual placement of NT measurement points or user guidance in semi-automatic segmentation of the NT region. The algorithm starts by finding the pose of the fetal head using discriminative learning-based detectors. The fetal head serves as a robust anchoring structure, and the NT region is estimated from the statistical relationship between the fetal head and the NT region. Next, the pose of the NT region is locally refined, and its inner and outer edges are approximately determined via Dijkstra's shortest path applied to the edge-enhanced image. Finally, these two region edges are used to define foreground and background seeds for accurate graph cut segmentation. The NT measurement is computed from the segmented region. Experiments show that the algorithm efficiently and effectively detects the NT region and provides an accurate NT measurement, which suggests suitability for clinical use.
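The edge-tracing step runs Dijkstra's shortest path over an edge-enhanced image, treating pixel values as step costs. A minimal 4-connected grid sketch (the cost grid is invented, not real image data):

```python
# Hedged sketch: Dijkstra's shortest path over a tiny cost grid, as used
# conceptually for tracing edges in an edge-enhanced image. Grid values
# are illustrative; low cost = strong edge response.
import heapq

def dijkstra_path_cost(cost, start, goal):
    """cost: 2-D list of nonnegative per-cell costs; 4-connected moves.
    Returns the minimum total cost of cells on a path start -> goal,
    including both endpoints."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(dijkstra_path_cost(grid, (0, 0), (0, 2)))  # → 7 (detours around the 9s)
```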


Subject(s)
Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Neck/diagnostic imaging , Neck/embryology , Nuchal Translucency Measurement/methods , Pattern Recognition, Automated/methods , Ultrasonography, Prenatal/methods , Algorithms , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
10.
Inf Process Med Imaging ; 23: 328-39, 2013.
Article in English | MEDLINE | ID: mdl-24683980

ABSTRACT

Detecting tubular structures such as airways or vessels in medical images is important for diagnosis and surgical planning. Many state-of-the-art approaches address this problem by starting from the root and progressing towards the thinnest tubular structures, usually guided by image filtering techniques. These approaches need to be tailored for each application and can fail in noisy or low-contrast regions. In this work, we address these challenges with a two-layer model that consists of a low-level likelihood measure and a high-level measure verifying tubular branches. The algorithm starts by computing a robust measure of tubular presence using a discriminative classifier at multiple image scales. The measure is then used in an efficient multi-scale shortest path algorithm to generate candidate centerline branches and corresponding radius measurements. Finally, the branches are verified by a learning-based indicator function that discards false candidate branches. Experiments on detecting airways in rotational X-ray volumes show that the technique is robust to noise and correctly finds airways even in the presence of imaging artifacts.


Subject(s)
Algorithms , Artificial Intelligence , Imaging, Three-Dimensional/methods , Lung/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Discriminant Analysis , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
11.
IEEE Trans Biomed Eng ; 59(12): 3337-47, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22929368

ABSTRACT

Motivated by the goals of automatically extracting vessel segments and constructing retinal vascular trees with anatomical realism, this paper presents and analyses an algorithm that combines vessel segmentation with grouping of the extracted vessel segments. The proposed method aims to restore the topology of the vascular trees with anatomical realism for clinical studies and diagnosis of retinal vascular diseases, which manifest abnormalities in venous and/or arterial vascular systems. Vessel segments are grouped using an extended Kalman filter that takes into account continuities in curvature, width, and intensity changes at the bifurcation or crossover point. At a junction, the proposed method applies a minimum-cost matching algorithm to resolve conflicts in grouping due to errors in tracing. The system was trained with 20 images from the DRIVE dataset and tested using the remaining 20 images. The dataset contained a mixture of normal and pathological images. In addition, six pathological fluorescein angiogram sequences were also included in this study. The results were compared against ground-truth images provided by a physician, achieving average success rates of 88.79% and 90.09%, respectively.
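At a junction, only a handful of branches compete, so the minimum-cost matching can be solved by brute force over permutations. A hedged sketch with invented continuity costs (a real system would use the Hungarian algorithm for larger problems):

```python
# Sketch: minimum-cost matching of incoming vessel segments to candidate
# parents at a junction, brute-forced over permutations (fine for the
# few branches meeting at one junction). Costs are illustrative.
from itertools import permutations

def min_cost_matching(cost):
    """cost[i][j]: cost of assigning incoming segment i to parent j.
    Returns (assignment, total_cost) with assignment[i] = chosen j."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Rows: traced segments entering the junction; columns: candidate parents.
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.6, 0.3]]
print(min_cost_matching(cost))
```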


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Retinal Vessels/anatomy & histology , Databases, Factual , Fluorescein Angiography , Humans , Retinal Diseases/pathology , Retinal Vessels/pathology
12.
Med Image Comput Comput Assist Interv ; 14(Pt 3): 166-74, 2011.
Article in English | MEDLINE | ID: mdl-22003696

ABSTRACT

We propose an automatic algorithm for phase labeling that relies on the intensity changes in anatomical regions due to the contrast agent propagation. The regions (specified by aorta, vena cava, liver, and kidneys) are first detected by a robust learning-based discriminative algorithm. The intensities inside each region are then used in multi-class LogitBoost classifiers to independently estimate the contrast phase. Each classifier forms a node in a decision tree which is used to obtain the final phase label. Combining independent classification from multiple regions in a tree has the advantage when one of the region detectors fail or when the phase training example database is imbalanced. We show on a dataset of 1016 volumes that the system correctly classifies native phase in 96.2% of the cases, hepatic dominant phase (92.2%), hepatic venous phase (96.7%), and equilibrium phase (86.4%) in 7 seconds on average.


Subject(s)
Cone-Beam Computed Tomography/methods , Contrast Media/pharmacology , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Aorta/pathology , Automation , Decision Trees , Humans , Kidney/pathology , Liver/pathology , Models, Statistical , Myocardium/pathology , Pattern Recognition, Automated , Reproducibility of Results
13.
Med Image Comput Comput Assist Interv ; 14(Pt 3): 338-45, 2011.
Article in English | MEDLINE | ID: mdl-22003717

ABSTRACT

We present a novel generic segmentation system for fully automatic multi-organ segmentation from CT medical images. It combines the advantages of learning-based approaches on a point-cloud-based shape representation (speed, robustness, point correspondences) with those of PDE-optimization-based level-set approaches (high accuracy and straightforward prevention of segment overlaps). In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level-set segmentation, which is initialized by the learning-based segmentations, contributes a 20%-40% increase in accuracy.


Subject(s)
Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Tomography, X-Ray Computed/methods , Algorithms , Artificial Intelligence , Databases, Factual , Humans , Kidney/pathology , Learning , Liver/pathology , Lung/pathology , Models, Anatomic , Models, Statistical , Principal Component Analysis , Reproducibility of Results , Software
14.
Med Image Comput Comput Assist Interv ; 14(Pt 3): 667-74, 2011.
Article in English | MEDLINE | ID: mdl-22003757

ABSTRACT

Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with the high-density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea and uses this to detect a set of automatically selected stable landmarks on regions near the lungs (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain a fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers trained on a range of manually annotated data from diseased and healthy lungs. We demonstrate fast detection (35 s per volume on average) and segmentation with 2 mm accuracy on challenging data.


Subject(s)
Cone-Beam Computed Tomography/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Lung/diagnostic imaging , Algorithms , Diagnostic Imaging/methods , Humans , Learning , Lung/pathology , Lung Neoplasms/pathology , Models, Statistical , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Software
15.
Med Image Anal ; 14(3): 407-28, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20363173

ABSTRACT

In the clinical workflow for lung cancer management, the comparison of nodules between CT scans from subsequent visits by a patient is necessary for timely classification of pulmonary nodules into benign and malignant and for analyzing nodule growth and response to therapy. The algorithm described in this paper takes (a) two temporally separated CT scans, I1 and I2, and (b) a series of nodule locations in I1, and for each location it produces an affine transformation that maps the locations and their immediate neighborhoods from I1 to I2. It does this without deformable registration and without initialization by global affine registration. Requiring the nodule locations to be specified in only one volume provides the clinician more flexibility in investigating the condition of the lung. The algorithm uses a combination of feature extraction, indexing, refinement, and decision processes. Together, these processes essentially "recognize" the neighborhoods. We show on lung CT scans that our technique works at near-interactive speed and that the median alignment error over 134 nodules is 1.70 mm, compared with 2.14 mm for the Diffeomorphic Demons algorithm and 3.57 mm for global nodule registration with local refinement. We demonstrate, on the alignment of 250 nodules, that the algorithm is robust to changes caused by cancer progression and differences in breathing states, scanning procedures, and patient positioning. Our algorithm may be used both for diagnosis and treatment monitoring of lung cancer. Because of the generic design of the algorithm, it might also be used in other applications that require fast and accurate mapping of regions.
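The per-nodule output is a 2-D-per-slice affine map of the form [x', y'] = [a x + b y + tx, c x + d y + ty]. As a hedged illustration of what such a transform encodes (the paper estimates it from many matched features; the three toy correspondences below are invented), an affine transform can be recovered exactly from three non-collinear point pairs:

```python
# Sketch: recover a 2-D affine transform from three point correspondences
# by solving two 3x3 linear systems. Points are illustrative assumptions.

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def affine_from_points(src, dst):
    """Returns ((a, b, tx), (c, d, ty)) so that x' = a x + b y + tx, etc."""
    A = [[x, y, 1.0] for x, y in src]
    row_x = solve3(A, [x for x, _ in dst])
    row_y = solve3(A, [y for _, y in dst])
    return tuple(row_x), tuple(row_y)

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (3, 3), (2, 5)]   # translation (2, 3) plus y-scale 2
rows = affine_from_points(src, dst)
print(rows)
```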


Subject(s)
Algorithms , Lung Neoplasms/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed/methods , Artificial Intelligence , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
16.
Med Image Comput Comput Assist Interv ; 11(Pt 2): 989-97, 2008.
Article in English | MEDLINE | ID: mdl-18982701

ABSTRACT

The algorithm described in this paper takes (a) two temporally-separated CT scans, I1 and I2, and (b) a series of locations in I1, and it produces, for each location, an affine transformation mapping the locations and their immediate neighborhood from I1 to I2. It does this without deformable registration by using a combination of feature extraction, indexing, refinement and decision processes. Together these essentially "recognize" the neighborhoods. We show on lung CT scans that this works at near interactive speeds, and is at least as accurate as the Diffeomorphic Demons algorithm. The algorithm may be used both for diagnosis and treatment monitoring.


Subject(s)
Algorithms , Imaging, Three-Dimensional/methods , Lung Neoplasms/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Subtraction Technique , Tomography, X-Ray Computed/methods , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
17.
IEEE Trans Inf Technol Biomed ; 12(4): 480-7, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18632328

ABSTRACT

Retinal clinicians and researchers make extensive use of images, and the current emphasis is on digital imaging of the retinal fundus. The goal of this paper is to introduce a system, known as retinal image vessel extraction and registration system, which provides the community of retinal clinicians, researchers, and study directors an integrated suite of advanced digital retinal image analysis tools over the Internet. The capabilities include vasculature tracing and morphometry, joint (simultaneous) montaging of multiple retinal fields, cross-modality registration (color/red-free fundus photographs and fluorescein angiograms), and generation of flicker animations for visualization of changes from longitudinal image sequences. Each capability has been carefully validated in our previous research work. The integrated Internet-based system can enable significant advances in retina-related clinical diagnosis, visualization of the complete fundus at full resolution from multiple low-angle views, analysis of longitudinal changes, research on the retinal vasculature, and objective, quantitative computer-assisted scoring of clinical trials imagery. It could pave the way for future screening services from optometry facilities.


Subject(s)
Fluorescein Angiography/methods , Image Enhancement/methods , Internet , Pattern Recognition, Automated/methods , Remote Consultation/methods , Retinal Vessels/anatomy & histology , Retinoscopy/methods , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods
18.
IEEE Trans Pattern Anal Mach Intell ; 29(11): 1973-89, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17848778

ABSTRACT

Our goal is an automated 2D-image-pair registration algorithm capable of aligning images taken of a wide variety of natural and man-made scenes as well as many medical images. The algorithm should handle low overlap, substantial orientation and scale differences, large illumination variations, and physical changes in the scene. An important component of this is the ability to automatically reject pairs that have no overlap or have too many differences to be aligned well. We propose a complete algorithm, including techniques for initialization, for estimating transformation parameters, and for automatically deciding if an estimate is correct. Keypoints extracted and matched between images are used to generate initial similarity transform estimates, each accurate over a small region. These initial estimates are rank-ordered and tested individually in succession. Each estimate is refined using the Dual-Bootstrap ICP algorithm, driven by matching of multiscale features. A three-part decision criterion, combining measurements of alignment accuracy, stability in the estimate, and consistency in the constraints, determines whether the refined transformation estimate is accepted as correct. Experimental results on a data set of 22 challenging image pairs show that the algorithm effectively aligns 19 of the 22 pairs and rejects 99.8% of the misalignments that occur when all possible pairs are tried. The algorithm substantially outperforms algorithms based on keypoint matching alone.


Subject(s)
Algorithms , Artificial Intelligence , Decision Support Techniques , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Cluster Analysis , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity
19.
IEEE Trans Med Imaging ; 25(12): 1531-46, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17167990

ABSTRACT

Motivated by the goals of improving detection of low-contrast and narrow vessels and eliminating false detections at nonvascular structures, a new technique is presented for extracting vessels in retinal images. The core of the technique is a new likelihood ratio test that combines matched-filter responses, confidence measures and vessel boundary measures. Matched filter responses are derived in scale-space to extract vessels of widely varying widths. A vessel confidence measure is defined as a projection of a vector formed from a normalized pixel neighborhood onto a normalized ideal vessel profile. Vessel boundary measures and associated confidences are computed at potential vessel boundaries. Combined, these responses form a six-dimensional measurement vector at each pixel. A training technique is used to develop a mapping of this vector to a likelihood ratio that measures the "vesselness" at each pixel. Results comparing this vesselness measure to matched filters alone and to measures based on the Hessian of intensities show substantial improvements, both qualitatively and quantitatively. The Hessian can be used in place of the matched filter to obtain similar but less-substantial improvements or to steer the matched filter by preselecting kernel orientations. Finally, the new vesselness likelihood ratio is embedded into a vessel tracing framework, resulting in an efficient and effective vessel centerline extraction algorithm.
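The core "vesselness" score above is a likelihood ratio, P(measurement | vessel) / P(measurement | background), learned from labeled training pixels. A one-feature sketch using hand-made class-conditional histograms (the real system maps a six-dimensional measurement vector; the training responses below are invented):

```python
# Hedged sketch: likelihood ratio from class-conditional histograms of a
# single matched-filter response. Training samples are illustrative.

def histogram(samples, bins, lo, hi):
    """Normalized histogram over [lo, hi]."""
    counts = [0] * bins
    for s in samples:
        i = min(bins - 1, int((s - lo) / (hi - lo) * bins))
        counts[i] += 1
    return [c / len(samples) for c in counts]

def likelihood_ratio(x, h_vessel, h_bg, bins=4, lo=0.0, hi=1.0, eps=1e-6):
    """LR > 1 suggests 'vessel'; eps avoids division by empty bins."""
    i = min(bins - 1, int((x - lo) / (hi - lo) * bins))
    return (h_vessel[i] + eps) / (h_bg[i] + eps)

# Illustrative matched-filter responses for labeled training pixels.
vessel_resp = [0.8, 0.9, 0.7, 0.85, 0.6]
background_resp = [0.1, 0.2, 0.15, 0.3, 0.4]
hv = histogram(vessel_resp, 4, 0.0, 1.0)
hb = histogram(background_resp, 4, 0.0, 1.0)
print(likelihood_ratio(0.8, hv, hb), likelihood_ratio(0.2, hv, hb))
```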


Subject(s)
Algorithms , Fluorescein Angiography/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Retinal Vessels/anatomy & histology , Retinoscopy/methods , Artificial Intelligence , Humans , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity