Results 1 - 20 of 25
1.
Med Image Anal ; 13(4): 621-33, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19592291

ABSTRACT

We propose a selective method of measurement for computing image similarities based on characteristic structure extraction and demonstrate its application to flexible endoscope navigation, in particular to a bronchoscope navigation system. Camera motion tracking is a fundamental function required for image-guided treatment or therapy systems. In recent years, an ultra-tiny electromagnetic sensor became commercially available, and many image-guided treatment or therapy systems use this sensor for tracking the camera position and orientation. However, due to space limitations, it is difficult to equip the tip of a bronchoscope with such a position sensor, especially in the case of ultra-thin bronchoscopes. Therefore, continuous image registration between real and virtual bronchoscopic images becomes an efficient tool for tracking the bronchoscope. Usually, image registration is done by calculating the image similarity between real and virtual bronchoscopic images. Since global schemes for measuring image similarity, such as mutual information, squared gray-level difference, or cross correlation, average differences in intensity values over an entire region, they fail to track scenes in which few characteristic structures can be observed. The proposed method divides an entire image into a set of small subblocks and selects only those in which characteristic shapes are observed. Image similarity is then calculated within the selected subblocks. Selection is done by calculating feature values within each subblock. We applied our proposed method to eight pairs of chest X-ray CT images and bronchoscopic video images. The experimental results revealed that the proposed method could track up to 1600 consecutive bronchoscopic images (about 50 s) without external position sensors. Tracking performance was greatly improved in comparison with a standard method using squared gray-level differences over the entire image.
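
As an illustration of the sub-block selection idea described above, the following is a minimal sketch, not the authors' implementation: local intensity variance stands in for the paper's feature values, and the block size and selection fraction are assumptions.

```python
import numpy as np

def selective_similarity(real_img, virtual_img, block=32, top_frac=0.25):
    """Score only the sub-blocks of the virtual (CT-derived) image that show
    characteristic structure. The feature used here (local intensity variance)
    is a stand-in for the paper's feature values."""
    h, w = virtual_img.shape
    scores, ssds = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            v = virtual_img[y:y + block, x:x + block].astype(float)
            r = real_img[y:y + block, x:x + block].astype(float)
            scores.append(v.var())              # proxy for "characteristic structure"
            ssds.append(np.mean((v - r) ** 2))  # squared gray-level difference
    scores, ssds = np.array(scores), np.array(ssds)
    keep = scores >= np.quantile(scores, 1.0 - top_frac)  # keep most structured blocks
    return ssds[keep].mean()                               # similarity over selected blocks only
```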


Subjects
Algorithms , Bronchoscopy/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Artificial Intelligence , Humans , Reproducibility of Results , Sensitivity and Specificity , User-Computer Interface
3.
Cell ; 128(6): 1187-203, 2007 Mar 23.
Article in English | MEDLINE | ID: mdl-17382886

ABSTRACT

In Drosophila, approximately 50 classes of olfactory receptor neurons (ORNs) send axons to 50 corresponding glomeruli in the antennal lobe. Uniglomerular projection neurons (PNs) relay olfactory information to the mushroom body (MB) and lateral horn (LH). Here, we combine single-cell labeling and image registration to create high-resolution, quantitative maps of the MB and LH for 35 input PN channels and several groups of LH neurons. We find (1) PN inputs to the MB are stereotyped as previously shown for the LH; (2) PN partners of ORNs from different sensillar groups are clustered in the LH; (3) fruit odors are represented mostly in the posterior-dorsal LH, whereas candidate pheromone-responsive PNs project to the anterior-ventral LH; (4) dendrites of single LH neurons each overlap with specific subsets of PN axons. Our results suggest that the LH is organized according to biological values of olfactory input.


Subjects
Drosophila/anatomy & histology , Drosophila/physiology , Mushroom Bodies/physiology , Olfactory Receptor Neurons/physiology , Animals , Brain/anatomy & histology , Brain/physiology , Brain Mapping , Female , Fruit , Male , Odorants , Olfactory Pathways/physiology , Pheromones , Presynaptic Terminals/physiology , Sex Characteristics , Smell/physiology , Synapses/physiology
4.
Neurosurgery ; 60(2 Suppl 1): ONS147-56; discussion ONS156, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17297377

ABSTRACT

OBJECTIVE: New technology has enabled the increasing use of radiosurgery to ablate spinal lesions. The first generation of the CyberKnife (Accuray, Inc., Sunnyvale, CA) image-guided radiosurgery system required implanted radiopaque markers (fiducials) to localize spinal targets. A recently developed and now commercially available spine tracking technology called Xsight (Accuray, Inc.) tracks skeletal structures and eliminates the need for implanted fiducials. The Xsight system localizes spinal targets by direct reference to the adjacent vertebral elements. This study sought to measure the accuracy of Xsight spine tracking and provide a qualitative assessment of overall system performance. METHODS: Total system error, which is defined as the distance between the centroids of the planned and delivered dose distributions and represents all possible treatment planning and delivery errors, was measured using a realistic, anthropomorphic head-and-neck phantom. The Xsight tracking system error component of total system error was also computed by retrospectively analyzing image data obtained from eleven patients with a total of 44 implanted fiducials who underwent CyberKnife spinal radiosurgery. RESULTS: The total system error of the Xsight targeting technology was measured to be 0.61 mm. The tracking system error component was found to be 0.49 mm. CONCLUSION: The Xsight spine tracking system is practically important because it is accurate and eliminates the use of implanted fiducials. Experience has shown this technology to be robust under a wide range of clinical circumstances.


Subjects
Radiosurgery/instrumentation , Radiosurgery/methods , Spine/surgery , Humans , Image Processing, Computer-Assisted , Phantoms, Imaging , Radiographic Image Enhancement , Spine/diagnostic imaging , Tomography Scanners, X-Ray Computed
5.
IEEE Trans Image Process ; 16(1): 153-61, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17283774

ABSTRACT

A new method for averaging multidimensional images is presented, which is based on signed Euclidean distance maps computed for each of the pixel values. We refer to the algorithm as "shape-based averaging" (SBA) because of its similarity to Raya and Udupa's shape-based interpolation method. The new method does not introduce pixel intensities that were not present in the input data, which makes it suitable for averaging nonnumerical data such as label maps (segmentations). Using segmented human brain magnetic resonance images, SBA is compared to label voting for the purpose of averaging image segmentations in a multiclassifier fashion. SBA, on average, performed as well as label voting in terms of recognition rates of the averaged segmentations. SBA produced more regular and contiguous structures with less fragmentation than did label voting. SBA also was more robust for small numbers of atlases and for low atlas resolutions, in particular, when combined with shape-based interpolation. We conclude that SBA improves the contiguity and accuracy of averaged image segmentations.
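
A minimal sketch of shape-based averaging as described above: per-label signed Euclidean distance maps are averaged across the input segmentations, and each voxel takes the label with the smallest (most interior) mean signed distance. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shape_based_average(label_maps, labels):
    """Average signed distance maps per label across input segmentations, then
    assign each voxel the label whose mean signed distance is smallest."""
    mean_sdm = []
    for lab in labels:
        sdms = []
        for seg in label_maps:
            inside = seg == lab
            # signed distance: negative inside the structure, positive outside
            sdm = distance_transform_edt(~inside) - distance_transform_edt(inside)
            sdms.append(sdm)
        mean_sdm.append(np.mean(sdms, axis=0))
    return np.array(labels)[np.argmin(mean_sdm, axis=0)]
```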


Subjects
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Subtraction Technique
6.
Comput Aided Surg ; 11(3): 109-17, 2006 May.
Article in English | MEDLINE | ID: mdl-16829504

ABSTRACT

This paper describes a method for tracking a bronchoscope by combining a position sensor and image registration. A bronchoscopy guidance system is a tool for providing real-time navigation information acquired from pre-operative CT images to a physician during a bronchoscopic examination. In this system, one of the fundamental functions is tracking the bronchoscope's camera motion. Recently, a very small electromagnetic position sensor has become available. It is possible to insert this sensor into a bronchoscope's working channel to obtain the bronchoscope's camera motion. However, the accuracy of its output is inadequate for bronchoscope tracking. The proposed combination of the sensor and image registration between real and virtual bronchoscopic images derived from CT images improves tracking accuracy. Furthermore, this combination has enabled a real-time bronchoscope guidance system. We evaluated the proposed method using a rubber phantom model. The experimental results showed that the proposed system tracked the bronchoscope's camera motion at 2.5 frames per second.
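
A minimal sketch of the sensor-plus-registration combination, assuming a 6-parameter camera pose and placeholder functions for the virtual-image renderer and the similarity measure; the optimizer choice is an assumption, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def track_frame(sensor_pose, real_frame, render_virtual, similarity):
    """The electromagnetic sensor supplies a coarse camera pose, which
    intensity-based registration then refines against the CT-derived virtual
    view. `render_virtual` and `similarity` are placeholders."""
    cost = lambda pose: -similarity(real_frame, render_virtual(pose))
    result = minimize(cost, x0=np.asarray(sensor_pose, dtype=float),
                      method="Powell")          # derivative-free local search
    return result.x                             # refined camera pose for this frame
```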


Subjects
Artificial Intelligence , Bronchoscopy/methods , Radiographic Image Interpretation, Computer-Assisted/instrumentation , Subtraction Technique , Electromagnetic Phenomena , Humans , Imaging, Three-Dimensional , Pattern Recognition, Automated , Phantoms, Imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Reproducibility of Results , Systems Integration
7.
Comput Aided Surg ; 11(2): 51-62, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16782639

ABSTRACT

We present a system for 3D planning and pre-operative rehearsal of mandibular distraction osteogenesis procedures. Two primary architectural components are described: a planning system that allows geometric bone manipulation to rapidly explore various modifications and configurations, and a visuohaptic simulator that allows both general-purpose training and preoperative, patient-specific procedure rehearsal. We provide relevant clinical background, then describe the underlying simulation algorithms and their application to craniofacial procedures.


Subjects
Imaging, Three-Dimensional/methods , Mandibular Diseases/diagnostic imaging , Osteogenesis, Distraction/methods , User-Computer Interface , Computer Simulation , Humans , Mandibular Diseases/surgery , Radiography
8.
Article in English | MEDLINE | ID: mdl-17354827

ABSTRACT

This paper presents a method for tracking a bronchoscope, intended as a core function of a bronchoscope navigation system, based on motion prediction and image registration from multiple initial starting points. We aim to improve the performance of registration-based bronchoscope tracking by using multiple initial guesses estimated through motion prediction. The method tracks the bronchoscopic camera by image registration between real bronchoscopic images and virtual ones derived from CT images taken prior to the bronchoscopic examination. To avoid falling into local minima, we use multiple starting points as initial guesses for the image registration. These initial guesses are computed from the motion predictions produced by a Kalman filter. We applied the proposed method to nine pairs of X-ray CT images and real bronchoscopic video images. The experimental results showed that the method can track the bronchoscope continuously without using any positional sensors.
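
A minimal sketch of the multi-start idea, assuming placeholder `register` and `similarity_of` callables and an externally supplied perturbation set; it is not the paper's exact scheme.

```python
import numpy as np

def register_multi_start(predicted_pose, perturbations, register, similarity_of):
    """The pose predicted by a Kalman filter, plus perturbed copies of it, serve
    as initial guesses; each is refined by image registration and the
    best-scoring result is kept."""
    starts = [np.asarray(predicted_pose)] + \
             [np.asarray(predicted_pose) + np.asarray(d) for d in perturbations]
    candidates = [register(x0) for x0 in starts]    # local optimization from each start
    return max(candidates, key=similarity_of)       # pose with the best similarity
```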


Subjects
Algorithms , Bronchi/anatomy & histology , Bronchoscopy/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Movement , Subtraction Technique , Artificial Intelligence , Bronchography/methods , Humans , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
9.
Med Phys ; 32(9): 2870-80, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16266101

ABSTRACT

Computation of digitally reconstructed radiograph (DRR) images is the rate-limiting step in most current intensity-based algorithms for the registration of three-dimensional (3D) images to two-dimensional (2D) projection images. This paper introduces and evaluates the progressive attenuation field (PAF), which is a new method to speed up DRR computation. A PAF is closely related to an attenuation field (AF). A major difference is that a PAF is constructed on the fly as the registration proceeds; it does not require any precomputation time, nor does it make any prior assumptions of the patient pose or limit the permissible range of patient motion. A PAF effectively acts as a cache memory for projection values once they are computed, rather than as a lookup table for precomputed projections like standard AFs. We use a cylindrical attenuation field parametrization, which is better suited for many medical applications of 2D-3D registration than the usual two-plane parametrization. The computed attenuation values are stored in a hash table for time-efficient storage and access. Using clinical gold-standard spine image data sets from five patients, we demonstrate consistent speedups of intensity-based 2D-3D image registration using PAF DRRs by a factor of 10 over conventional ray casting DRRs with no decrease of registration accuracy or robustness.
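
A minimal sketch of the "cache memory for projection values" idea described above, assuming a quantized ray-parameter key and a placeholder ray-casting callable; the key layout and quantization step are assumptions, not the paper's cylindrical parametrization.

```python
class ProgressiveAttenuationField:
    """Ray attenuation values are stored in a hash table the first time they are
    computed and reused on later DRR evaluations, so no precomputation is needed."""
    def __init__(self, cast_ray, step=0.5):
        self.cast_ray = cast_ray          # expensive ray cast through the CT volume
        self.step = step                  # quantization of the ray parameters
        self.cache = {}                   # hash table: ray key -> attenuation value

    def attenuation(self, ray_params):
        key = tuple(round(p / self.step) for p in ray_params)
        if key not in self.cache:         # compute on the fly, once
            self.cache[key] = self.cast_ray(ray_params)
        return self.cache[key]
```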


Subjects
Imaging, Three-Dimensional , Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Humans , Spine/diagnostic imaging
10.
IEEE Trans Med Imaging ; 24(11): 1441-54, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16279081

ABSTRACT

Generation of digitally reconstructed radiographs (DRRs) is computationally expensive and is typically the rate-limiting step in the execution time of intensity-based two-dimensional to three-dimensional (2D-3D) registration algorithms. We address this computational issue by extending the technique of light field rendering from the computer graphics community. The extension of light fields, which we call attenuation fields (AFs), allows most of the DRR computation to be performed in a preprocessing step; after this precomputation step, DRRs can be generated substantially faster than with conventional ray casting. We derive expressions for the physical sizes of the two planes of an AF necessary to generate DRRs for a given X-ray camera geometry and all possible object motion within a specified range. Because an AF is a ray-based data structure, it eliminates the redundancy of replicated rays and is therefore substantially more memory efficient than a huge table of precomputed DRRs. Nonetheless, an AF can require substantial memory, which we address by compressing it using vector quantization. We compare DRRs generated using AFs (AF-DRRs) to those generated using ray casting (RC-DRRs) for a typical C-arm geometry and computed tomography images of several anatomic regions. They are quantitatively very similar: the median peak signal-to-noise ratio of AF-DRRs versus RC-DRRs is greater than 43 dB in all cases. We perform intensity-based 2D-3D registration using AF-DRRs and RC-DRRs and evaluate registration accuracy using gold-standard clinical spine image data from four patients. The registration accuracy and robustness of the two methods are virtually identical, whereas the execution speed using AF-DRRs is an order of magnitude faster.


Assuntos
Algoritmos , Imageamento Tridimensional/métodos , Intensificação de Imagem Radiográfica/métodos , Interpretação de Imagem Radiográfica Assistida por Computador/métodos , Coluna Vertebral/diagnóstico por imagem , Técnica de Subtração , Cirurgia Assistida por Computador/métodos , Sistemas Computacionais , Humanos , Reprodutibilidade dos Testes , Espalhamento de Radiação , Sensibilidade e Especificidade , Processamento de Sinais Assistido por Computador , Coluna Vertebral/cirurgia
11.
IEEE Trans Med Imaging ; 24(11): 1455-68, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16279082

ABSTRACT

Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate: 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate but required 5.4 s per frame on average. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
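
A minimal sketch of combining backprojected 2-D motion estimates into a 3-D estimate, restricted for illustration to a pure translation; perspective scaling and rotational components are ignored, and the stacking/least-squares form is an assumption rather than the paper's exact formulation.

```python
import numpy as np

def combine_2d_motions(plane_bases, displacements_2d):
    """Each projection view i measures an in-plane 2-D displacement d_i, which
    constrains the 3-D target translation t via d_i ~ B_i t, where B_i (2x3)
    holds that view's image-plane axes. Stacking all views gives a
    least-squares estimate of t."""
    A = np.vstack(plane_bases)          # shape (2 * n_views, 3)
    b = np.hstack(displacements_2d)     # shape (2 * n_views,)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t                            # estimated 3-D translation
```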


Subjects
Algorithms , Imaging, Three-Dimensional/methods , Movement , Neuronavigation/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Radiosurgery/methods , Subtraction Technique , Artifacts , Artificial Intelligence , Computer Systems , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
12.
J Biomed Opt ; 10(2): 024018, 2005.
Article in English | MEDLINE | ID: mdl-15910092

ABSTRACT

Confocal microscopy (CM) is a powerful image acquisition technique that is well established in many biological applications. It provides 3-D acquisition with high spatial resolution and can acquire several different channels of complementary image information. Due to the specimen extraction and preparation process, however, the shapes of imaged objects may differ considerably from their in vivo appearance. Magnetic resonance microscopy (MRM) is an evolving variant of magnetic resonance imaging, which achieves microscopic resolutions using a high magnetic field and strong magnetic gradients. Compared to CM imaging, MRM allows for in situ imaging and is virtually free of geometrical distortions. We propose to combine the advantages of both methods by unwarping CM images using an MRM reference image. Our method incorporates a sequence of image processing operators applied to the MRM image, followed by a two-stage intensity-based registration to compute a nonrigid coordinate transformation between the CM images and the MRM image. We present results obtained using CM images from the brains of 20 honey bees and an MRM image of an in situ bee brain.


Subjects
Brain/anatomy & histology , Image Processing, Computer-Assisted , Magnetic Resonance Imaging/methods , Microscopy, Confocal , Animals , Bees
13.
Acad Radiol ; 12(1): 37-50, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15691724

ABSTRACT

RATIONALE AND OBJECTIVES: Two-dimensional (2D) to three-dimensional (3D) registration of a computed tomography image to one or more X-ray projection images has a number of image-guided therapy applications. In general, fiducial marker-based methods are fast, accurate, and robust, but marker implantation is not always possible, is often considered too invasive to be clinically acceptable, and entails risk. There is also the unresolved issue of whether it is acceptable to leave markers permanently implanted. Intensity-based registration methods do not require the use of markers and can be automated because such geometric features as points and surfaces do not need to be segmented from the images. However, for spine images, intensity-based methods are susceptible to local optima in the cost function and thus need initial transformations that are close to the correct transformation. MATERIALS AND METHODS: In this report, we propose a hybrid similarity measure for 2D-3D registration that is a weighted combination of an intensity-based similarity measure (mutual information) and a point-based measure using one fiducial marker. We evaluate its registration accuracy and robustness by using gold-standard clinical spine image data from four patients. RESULTS: Mean registration errors for successful registrations for the four patients were 1.3 and 1.1 mm for the intensity-based and hybrid similarity measures, respectively. Whereas the percentage of successful intensity-based registrations (registration error < 2.5 mm) decreased rapidly as the initial transformation moved farther from the correct transformation, the incorporation of a single marker produced successful registrations more than 99% of the time independent of the initial transformation. CONCLUSION: The use of one fiducial marker reduces 2D-3D spine image registration error slightly and improves robustness substantially. The findings are potentially relevant for image-guided therapy. If one marker is sufficient to obtain clinically acceptable registration accuracy and robustness, as the preliminary results using the proposed hybrid similarity measure suggest, the marker can be placed on a spinous process, which could be accomplished without penetrating muscle or using fluoroscopic guidance, and such a marker could be removed relatively easily.
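
A minimal sketch of a weighted intensity-plus-point similarity of the kind described above, assuming the mutual-information value is computed elsewhere and passed in; the weighting and distance terms are illustrative, not the paper's exact measure.

```python
import numpy as np

def hybrid_similarity(mi_value, marker_3d, marker_2d, project, weight=0.5):
    """Weighted combination of an intensity term (mutual information between DRR
    and projection image, `mi_value`) and a single-marker point term (negated
    reprojection distance of the implanted fiducial)."""
    point_term = -np.linalg.norm(np.asarray(project(marker_3d)) - np.asarray(marker_2d))
    return weight * mi_value + (1.0 - weight) * point_term
```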


Assuntos
Processamento de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Cirurgia Assistida por Computador/métodos , Tomografia Computadorizada por Raios X/métodos , Algoritmos , Calibragem , Vértebras Cervicais/diagnóstico por imagem , Desenho de Equipamento , Humanos , Processamento de Imagem Assistida por Computador/instrumentação , Radiocirurgia/instrumentação , Radiocirurgia/métodos , Doenças da Coluna Vertebral/cirurgia , Coluna Vertebral/diagnóstico por imagem , Cirurgia Assistida por Computador/instrumentação , Vértebras Torácicas/diagnóstico por imagem
14.
Article in English | MEDLINE | ID: mdl-16686002

ABSTRACT

In this paper, we propose a hybrid method for tracking a bronchoscope that uses a combination of magnetic sensor tracking and image registration. The position of a magnetic sensor placed in the working channel of the bronchoscope is provided by a magnetic tracking system. Because of respiratory motion, the magnetic sensor provides only the approximate position and orientation of the bronchoscope in the coordinate system of a CT image acquired before the examination. The sensor position and orientation are used as the starting point for an intensity-based registration between real bronchoscopic video images and virtual bronchoscopic images generated from the CT image. The output transformation of the image registration process is the position and orientation of the bronchoscope in the CT image. We tested the proposed method using a bronchial phantom model. Virtual breathing motion was generated to simulate respiratory motion. The proposed hybrid method successfully tracked the bronchoscope at a rate of approximately 1 Hz.


Subjects
Artificial Intelligence , Bronchoscopy/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetics , Subtraction Technique , User-Computer Interface , Algorithms , Artifacts , Humans , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Respiratory Mechanics , Sensitivity and Specificity , Systems Integration , Video Recording/methods
15.
Otolaryngol Head Neck Surg ; 131(5): 666-72, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15523446

ABSTRACT

OBJECTIVE: The objective of this study was to assess registration error due to fiducial configuration for the ENT headsets of the CBYON Suite (CBYON, Mountain View, CA) and InstaTrak (GEMS Navigation and Visualization, Waukesha, WI). STUDY DESIGN: Axial CT scans (1-mm slice thickness) were obtained for 24 cadaveric heads using the CBYON headset and for 23 cadaveric heads using the GEMS headset. The CBYON and GEMS NAV software were used to calculate the fiducial registration error (FRE). Fiducial localization error (FLE) was estimated from FRE. Theoretical target registration error (TRE) was calculated at 11 targets. RESULTS: The FRE values for CBYON and GEMS NAV were 0.69 mm and 0.27 mm, respectively. The theoretical TRE values for CBYON and GEMS NAV were 0.41 mm and 0.30 mm, respectively. The theoretical TRE was greater at targets posterior in the sinus cavities. CONCLUSION: Theoretical TRE values for both ENT headsets are less than clinically observed TRE. Clinically observed TRE is likely due to repositioning accuracy. EBM RATING: B-2.
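
The FLE estimate and theoretical TRE values above are the kind of quantities given by point-based registration error theory (Fitzpatrick and colleagues); a commonly cited form of those approximations, stated here for reference with N fiducials, d_k the target's distance from principal axis k of the fiducial configuration, and f_k the RMS fiducial distance from that axis, is:

```latex
\langle \mathrm{FLE}^2\rangle \;\approx\; \frac{N}{N-2}\,\langle \mathrm{FRE}^2\rangle ,
\qquad
\langle \mathrm{TRE}^2(r)\rangle \;\approx\;
\frac{\langle \mathrm{FLE}^2\rangle}{N}
\left(1 + \frac{1}{3}\sum_{k=1}^{3}\frac{d_k^{2}}{f_k^{2}}\right).
```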


Assuntos
Procedimentos Cirúrgicos Otorrinolaringológicos/instrumentação , Seios Paranasais/cirurgia , Técnicas Estereotáxicas/instrumentação , Cirurgia Assistida por Computador/instrumentação , Cadáver , Humanos , Modelos Anatômicos , Modelos Teóricos
16.
IEEE Trans Med Imaging ; 23(8): 983-94, 2004 Aug.
Article in English | MEDLINE | ID: mdl-15338732

ABSTRACT

It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. We have previously shown this to be true for atlas-based segmentation of biomedical images. The conventional method for combining individual classifiers weights each classifier equally (vote or sum rule fusion). In this paper, we propose two methods that estimate the performances of the individual classifiers and combine the individual classifiers by weighting them according to their estimated performance. The two methods are multiclass extensions of an expectation-maximization (EM) algorithm for ground truth estimation of binary classification based on decisions of multiple experts (Warfield et al., 2004). The first method performs parameter estimation independently for each class with a subsequent integration step. The second method considers all classes simultaneously. We demonstrate the efficacy of these performance-based fusion methods by applying them to atlas-based segmentations of three-dimensional confocal microscopy images of bee brains. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. In a second evaluation study, multiple actual atlas-based segmentations are combined and their accuracies computed by comparing them to a manual segmentation. We demonstrate in both evaluation studies that segmentations produced by combining multiple individual registration-based segmentations are more accurate for the two classifier fusion methods we propose, which weight the individual classifiers according to their EM-based performance estimates, than for simple sum rule fusion, which weights each classifier equally.
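
A minimal sketch of performance-weighted label fusion in the spirit described above; the EM estimation of the weights is omitted here, and with equal weights the function reduces to vote-rule fusion.

```python
import numpy as np

def weighted_label_fusion(segmentations, weights, labels):
    """Each atlas-based segmentation votes for a label at every voxel, with the
    vote scaled by an estimated performance weight (in the paper these weights
    come from an EM ground-truth estimation step)."""
    votes = np.zeros((len(labels),) + segmentations[0].shape)
    for seg, w in zip(segmentations, weights):
        for i, lab in enumerate(labels):
            votes[i] += w * (seg == lab)
    return np.array(labels)[np.argmax(votes, axis=0)]
```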


Subjects
Algorithms , Brain/cytology , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated , Subtraction Technique , Anatomy, Artistic , Animals , Bees , Cluster Analysis , Computer Simulation , Image Enhancement/methods , Information Storage and Retrieval/methods , Likelihood Functions , Medical Illustration , Microscopy, Confocal/methods , Models, Biological , Models, Statistical , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted
17.
Am J Rhinol ; 18(3): 173-8, 2004.
Article in English | MEDLINE | ID: mdl-15283492

ABSTRACT

BACKGROUND: This study describes a novel computer-generated anatomic symmetry plane as a framework for the quantitative description of sphenoid sinus anatomy. The aim of this study was to (1) determine relationships and distances between a midline sphenoid reference point (called the central sphenoid point [CSP]) and lateral sphenoid wall structures and (2) assess the incidence of anterior clinoid process (ACP) pneumatization and pterygoid recess (PR) pneumatization. METHODS: Axial computed tomography (CT) scans (1-mm slice thickness) were obtained on a VolumeZoom CT scanner (Siemens Medical, Erlangen, Germany). Mathematically derived anatomic symmetry planes were created using custom postprocessing software. A standardized review of each CT scan using surgical planning software (CBYON Suite version 2.6; CBYON, Mountain View, CA) was performed. The CSP was defined as a reference point in the midline sagittal plane at the intersection of the vertical sellar face and the horizontal sellar floor. RESULTS: A total of 128 sides in 64 cadaveric specimens were available for review. The incidences of ACP pneumatization and PR pneumatization were 23.4% and 37.5%, respectively. The mean distances from the CSP to the left optic canal midpoint, the left ACP entrance point, and the left PR lateral wall were 17.2, 15.6, and 27.6 mm, respectively. The corresponding distances from the CSP on the right side were 17.3, 15.8, and 28.0 mm, respectively. Measurements from the maxillary spine to the optic canal midpoint, ACP entrance point, and PR lateral wall on each side were also performed. CONCLUSION: This approach provides both quantitative and qualitative understanding of sphenoid osteology and may be coupled with intraoperative surgical navigation to reduce the risks of sphenoid surgery. Both PR and ACP pneumatization are surprisingly common. Because the CSP-derived relationships may be referenced during endoscopic surgical navigation, they may provide greater clinical utility than traditional alternatives. This paradigm may facilitate a greater understanding of sphenoid anatomy and enhance surgical safety and precision.


Assuntos
Diagnóstico por Computador , Seio Esfenoidal/anatomia & histologia , Seio Esfenoidal/diagnóstico por imagem , Tomografia Computadorizada por Raios X , Adulto , Idoso , Idoso de 80 Anos ou mais , Feminino , Humanos , Masculino , Pessoa de Meia-Idade
18.
IEEE Trans Med Imaging ; 23(5): 533-45, 2004 May.
Article in English | MEDLINE | ID: mdl-15147007

ABSTRACT

Most image-guided surgery (IGS) systems track the positions of surgical instruments in the physical space occupied by the patient. This task is commonly performed using an optical tracking system that determines the positions of fiducial markers such as infrared-emitting diodes or retroreflective spheres that are attached to the instrument. Instrument tracking error is an important component of the overall IGS system error. This paper is concerned with the effect of fiducial marker configuration (number and spatial distribution) on tip position tracking error. Statistically expected tip position tracking error is calculated by applying results from the point-based registration error theory developed by Fitzpatrick et al. Tracking error depends not only on the error in localizing the fiducials, which is the error value generally provided by manufacturers of optical tracking systems, but also on the number and spatial distribution of the tracking fiducials and the position of the instrument tip relative to the fiducials. The theory is extended in two ways. First, a formula is derived for the special case in which the fiducials and the tip are collinear. Second, the theory is extended for the case in which there is a composition of transformations, as is the situation for tracking an instrument relative to a coordinate reference frame (i.e., a set of fiducials attached to the patient). The derivation reveals that the previous theory may be applied independently to the two transformations; the resulting independent components of tracking error add in quadrature to give the overall tracking error. The theoretical results are verified with numerical simulations and experimental measurements. The results in this paper may be useful for the design of optically tracked instruments for image-guided surgery; this is illustrated with several examples.
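
The quadrature statement above can be written compactly; for a tip position p, with independently estimated expected squared tip errors from the instrument-tracking and reference-frame transformations:

```latex
\langle \mathrm{TRE}^2_{\mathrm{total}}(p) \rangle \;\approx\;
\langle \mathrm{TRE}^2_{\mathrm{instrument}}(p) \rangle \;+\;
\langle \mathrm{TRE}^2_{\mathrm{reference}}(p) \rangle .
```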


Assuntos
Desenho Assistido por Computador , Desenho de Equipamento/métodos , Análise de Falha de Equipamento/métodos , Interpretação de Imagem Assistida por Computador/instrumentação , Imageamento Tridimensional/instrumentação , Óptica e Fotônica/instrumentação , Cirurgia Assistida por Computador/instrumentação , Instrumentos Cirúrgicos , Interpretação de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Imagens de Fantasmas , Reprodutibilidade dos Testes , Sensibilidade e Especificidade , Cirurgia Assistida por Computador/métodos
19.
Med Phys ; 31(3): 427-32, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15070239

ABSTRACT

We present a technique for modeling liver motion during the respiratory cycle using intensity-based nonrigid registration of gated magnetic resonance (MR) images. Three-dimensional MR images of the abdomens of four volunteers were acquired at end-inspiration, end-expiration, and eight time points in between using respiratory gating. The deformation fields between the images were computed using intensity-based rigid and nonrigid registration algorithms. Global motion is modeled by a rigid transformation while local motion is modeled by a free-form deformation based on B-splines. Much of the liver motion was cranial-caudal translation, which was captured by the rigid transformation. However, there was still substantial residual deformation (approximately 10 mm averaged over the entire liver in four volunteers, and 34 mm at one place in the liver of one volunteer). The computed organ motion model can potentially be used to determine an appropriate respiratory-gated radiotherapy window during which the position of the target is known within a specified excursion.
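
A minimal 1-D sketch of the B-spline free-form deformation used for the local motion model, assuming a uniform control-point grid; real implementations work per axis in 3-D and compose the result with the rigid (global) transformation.

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline basis functions B0..B3 at local coordinate u in [0, 1)."""
    return np.array([(1 - u) ** 3 / 6.0,
                     (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                     (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                     u ** 3 / 6.0])

def ffd_displacement(x, grid, spacing):
    """Local displacement at 1-D position x as a cubic B-spline combination of
    the four nearest control-point displacements in `grid`. Assumes x lies well
    inside the control grid so all four indices are valid."""
    i = int(np.floor(x / spacing)) - 1          # first supporting control point
    u = x / spacing - np.floor(x / spacing)     # local coordinate within the cell
    b = bspline_basis(u)
    return sum(b[k] * grid[i + k] for k in range(4))
```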


Subjects
Image Processing, Computer-Assisted/methods , Liver/pathology , Magnetic Resonance Imaging/methods , Adult , Algorithms , Humans , Male , Movement , Respiration , Time Factors
20.
Neuroimage ; 21(4): 1428-42, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15050568

ABSTRACT

This paper evaluates strategies for atlas selection in atlas-based segmentation of three-dimensional biomedical images. Segmentation by intensity-based nonrigid registration to atlas images is applied to confocal microscopy images acquired from the brains of 20 bees. This paper evaluates and compares four different approaches for atlas image selection: registration to an individual atlas image (IND), registration to an average-shape atlas image (AVG), registration to the most similar image from a database of individual atlas images (SIM), and registration to all images from a database of individual atlas images with subsequent multi-classifier decision fusion (MUL). The MUL strategy is a novel application of multi-classifier techniques, which are common in pattern recognition, to atlas-based segmentation. For each atlas selection strategy, the segmentation performance of the algorithm was quantified by the similarity index (SI) between the automatic segmentation result and a manually generated gold standard. The best segmentation accuracy was achieved using the MUL paradigm, which resulted in a mean similarity index value between manual and automatic segmentation of 0.86 (AVG, 0.84; SIM, 0.82; IND, 0.81). The superiority of the MUL strategy over the other three methods is statistically significant (two-sided paired t test, P < 0.001). Both the MUL and AVG strategies performed better than the best possible SIM and IND strategies with optimal a posteriori atlas selection (mean similarity index for optimal SIM, 0.83; for optimal IND, 0.81). Our findings show that atlas selection is an important issue in atlas-based segmentation and that, in particular, multi-classifier techniques can substantially increase the segmentation accuracy.
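
For reference, the similarity index used to score the strategies, assuming the standard definition SI = 2|A∩B|/(|A|+|B|) for binary masks, can be computed as:

```python
import numpy as np

def similarity_index(auto_seg, manual_seg):
    """Similarity index between an automatic segmentation and a manually
    generated gold standard; both inputs are boolean masks of the structure."""
    auto_seg = np.asarray(auto_seg, dtype=bool)
    manual_seg = np.asarray(manual_seg, dtype=bool)
    intersection = np.logical_and(auto_seg, manual_seg).sum()
    return 2.0 * intersection / (auto_seg.sum() + manual_seg.sum())
```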


Subjects
Bees/anatomy & histology , Brain Mapping , Brain/anatomy & histology , Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Microscopy, Confocal , Algorithms , Animals , Databases as Topic , Dominance, Cerebral/physiology , Reference Values , Reproducibility of Results