Results 1 - 16 of 16
1.
J Clin Med ; 12(2)2023 Jan 08.
Article in English | MEDLINE | ID: mdl-36675438

ABSTRACT

Understanding cochlear anatomy is crucial for developing less traumatic electrode arrays and insertion guidance for cochlear implantation. The human cochlea shows considerable variability in size and morphology. This study analyses 1000+ clinical temporal bone CT images using a web-based image analysis tool. Cochlear size and shape parameters were obtained to determine population statistics and perform regression and correlation analysis. The analysis revealed that cochlear morphology follows a Gaussian distribution, while cochlear dimensions A and B are not well correlated with each other. Additionally, dimension B is more strongly correlated with duct lengths, the wrapping factor and volume than dimension A. The scala tympani size varies considerably across the population, with the size generally decreasing along the insertion depth, with dimensional jumps through the trajectory. The mean scala tympani radius was 0.32 mm near the 720° insertion angle. Inter-individual variability was four times the intra-individual variation. On average, the dimensions of both ears are similar; however, statistically significant differences in clinical dimensions were observed between the two ears of the same patient, suggesting that their size and shape are not identical. By harnessing deep learning-based automated image analysis tools, our results yielded important insights into cochlear morphology and implant development, helping to reduce insertion trauma and preserve residual hearing.
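The population statistics and correlation analysis described above can be reproduced on any table of cochlear dimensions. A minimal sketch in Python with purely illustrative values (not the study's data, which came from 1000+ clinical CT images); `dim_a` and `dim_b` stand in for the clinical A and B dimensions:

```python
import numpy as np

# Illustrative cochlear dimensions A and B in mm (hypothetical sample,
# NOT the study's measurements).
dim_a = np.array([9.0, 9.3, 8.8, 9.5, 9.1, 8.7, 9.4, 9.2])
dim_b = np.array([6.8, 6.5, 7.0, 6.6, 6.9, 6.4, 6.7, 7.1])

# Population statistics: mean and sample standard deviation.
mean_a, sd_a = dim_a.mean(), dim_a.std(ddof=1)

# Pearson correlation between dimensions A and B, as in the
# reported correlation analysis.
r = np.corrcoef(dim_a, dim_b)[0, 1]
print(f"A = {mean_a:.2f} ± {sd_a:.2f} mm, corr(A, B) = {r:.2f}")
```

The same two calls (`mean`/`std` and `np.corrcoef`) extend directly to the other reported parameters (duct lengths, wrapping factor, volume).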

2.
J Clin Med ; 12(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36675460

ABSTRACT

Facial nerve stimulation (FNS) is a potential complication that may affect the auditory performance of children with cochlear implants (CIs). We carried out an exploratory prospective observational study to investigate the effects of the electrical stimulation pattern on FNS reduction in young children with CIs. Ten ears of seven prelingually deafened children aged up to 6 years who had undergone unilateral or bilateral CI surgery were included in this study. Electromyographic (EMG) action potentials from the orbicularis oculi muscle were recorded using monopolar biphasic stimulation (ST1) and multi-mode monophasic stimulation with capacitive discharge (ST2). The presence of EMG responses, facial nerve stimulation thresholds (T-FNS) and EMG amplitudes were compared between ST1 and ST2. Intra-cochlear electrode placement and the cochlea-nerve and electrode-nerve distances were also estimated to investigate their effects on EMG responses. The use of ST2 significantly reduced the presence of intraoperative EMG responses compared to ST1. Higher stimulation levels were required to elicit FNS with ST2, and the resulting amplitudes were smaller, compared to ST1. No correlation was observed between the cochlea-nerve distance and EMG responses, and only a weak correlation between the electrode-nerve distance and EMG responses. ST2 may reduce FNS in young children with CIs. Unlike the electrical stimulation pattern, the cochlea-nerve and electrode-nerve distances seem to have limited effects on FNS in this population.

3.
Cochlear Implants Int ; 24(2): 55-64, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36583989

ABSTRACT

Objectives: To investigate the outcomes of cochlear re-implantation using multi-mode grounding stimulation associated with anodic monophasic pulses to manage abnormal facial nerve stimulation (AFNS) in cochlear implant (CI) recipients. Methods: Retrospective case report. An adult CI recipient with severe AFNS and decreasing auditory performance was re-implanted with a new CI device to change the pulse shape and stimulation mode. The patient's speech perception scores and AFNS were compared before and after cochlear re-implantation, using monopolar stimulation associated with cathodic biphasic pulses and multi-mode grounding stimulation associated with anodic monophasic pulses, respectively. The insertion depth angle and the electrode-nerve distances were also investigated before and after cochlear re-implantation. Results: AFNS was resolved, and the speech recognition scores increased rapidly in the first year after cochlear re-implantation and remained stable thereafter. After cochlear re-implantation, the e15 and e20 electrodes showed shorter electrode-nerve distances than their corresponding e4 and e7 electrodes, which had induced AFNS after the first implantation. Conclusions: Cochlear re-implantation with multi-mode grounding stimulation associated with anodic monophasic pulses was an effective strategy for managing AFNS. The patient's speech perception scores rapidly improved, and AFNS was not detected four years after cochlear re-implantation.


Subjects
Cochlear Implantation, Cochlear Implants, Adult, Humans, Facial Nerve/surgery, Retrospective Studies, Cochlea/surgery, Electric Stimulation, Cochlear Nerve
4.
J Clin Med ; 11(22)2022 Nov 09.
Article in English | MEDLINE | ID: mdl-36431117

ABSTRACT

The robust delineation of the cochlea and its inner structures, combined with the detection of a cochlear implant electrode within these structures, is essential for envisaging a safer, more individualized, routine image-guided cochlear implant therapy. We present Nautilus, a web-based research platform for automated pre- and post-implantation cochlear analysis. Nautilus delineates cochlear structures from pre-operative clinical CT images by combining deep learning and Bayesian inference approaches. It enables the extraction of electrode locations from a post-operative CT image using convolutional neural networks and geometrical inference. By fusing pre- and post-operative images, Nautilus is able to provide a set of personalized pre- and post-operative metrics that can serve the exploration of clinically relevant questions in cochlear implantation therapy. In addition, Nautilus embeds a self-assessment module providing a confidence rating on the outputs of its pipeline. We present detailed accuracy and robustness analyses of the tool on a carefully designed dataset. The results of these analyses provide legitimate grounds for envisaging the integration of image-guided cochlear implant practices into routine clinical workflows.

5.
Med Image Anal ; 79: 102428, 2022 07.
Article in English | MEDLINE | ID: mdl-35500498

ABSTRACT

A key factor for assessing the state of the heart after myocardial infarction (MI) is whether the myocardium segment is viable after reperfusion or revascularization therapy. Delayed enhancement MRI (DE-MRI), which is performed 10 min after injection of the contrast agent, provides high contrast between viable and nonviable myocardium and is therefore a method of choice to evaluate the extent of MI. This paper presents the results of the EMIDEC challenge, which focused on the automatic assessment of myocardial status. The challenge's main objectives were twofold: first, to evaluate whether deep learning methods can distinguish between non-infarcted and pathological exams, i.e. exams without or with a hyperenhanced area; second, to automatically calculate the extent of myocardial infarction. The publicly available database consists of 150 exams, divided into 50 cases without any hyperenhanced area after injection of a contrast agent and 100 cases with myocardial infarction (and thus with a hyperenhanced area on DE-MRI), regardless of how they were admitted to the cardiac emergency department. Along with the MRI, clinical characteristics are also provided. The results obtained across several submitted works show that the automatic classification of an exam is a reachable task (the best method achieving an accuracy of 0.92), and that the automatic segmentation of the myocardium is possible. However, the segmentation of the diseased area needs to be improved, mainly due to the small size of these areas and their lack of contrast with the surrounding structures.


Subjects
Deep Learning, Myocardial Infarction, Contrast Media, Humans, Magnetic Resonance Imaging/methods, Myocardial Infarction/diagnostic imaging, Myocardium/pathology
6.
Comput Med Imaging Graph ; 97: 102049, 2022 04.
Article in English | MEDLINE | ID: mdl-35334316

ABSTRACT

Cardiovascular disease is a major cause of death worldwide. Computed Tomography Coronary Angiography (CTCA) is a non-invasive method used to evaluate coronary artery disease, as well as to evaluate and reconstruct heart and coronary vessel structures. Reconstructed models have a wide array of educational, training and research applications, such as the study of diseased and non-diseased coronary anatomy, machine learning-based disease risk prediction, and in-silico and in-vitro testing of medical devices. However, coronary arteries are difficult to image due to their small size, location, and movement, causing poor resolution and artefacts. Segmentation of coronary arteries has traditionally relied on semi-automatic methods in which a human expert guides the algorithm and corrects errors, which severely limits large-scale applications and integration within clinical systems. International challenges aiming to overcome this barrier have focused on specific tasks such as centreline extraction, stenosis quantification, and segmentation of specific artery segments only. Here we present the results of the first challenge to develop fully automatic segmentation methods for full coronary artery trees, and establish the first large standardized dataset of normal and diseased arteries. This forms a new automated segmentation benchmark allowing the automated processing of CTCAs, directly relevant for large-scale and personalized clinical applications.


Subjects
Coronary Artery Disease, Coronary Vessels, Algorithms, Computed Tomography Angiography, Coronary Angiography/methods, Coronary Artery Disease/diagnostic imaging, Coronary Vessels/diagnostic imaging, Humans, X-Ray Computed Tomography/methods
7.
Otol Neurotol ; 43(2): 190-198, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34855687

ABSTRACT

HYPOTHESIS: Transmodiolar auditory implantation via the middle ear cavity could be possible using an augmented reality system (ARS). BACKGROUND: There is no clear landmark to indicate the cochlear apex or the modiolar axis. The ARS seems to be a promising tool for transmodiolar implantation by combining information from the preprocedure computed tomography scan (CT-scan) images to the real-time video of the surgical field. METHODS: Eight human temporal bone resin models were included (five adults and three children). The procedure started with the identification of the modiolar axis on the preprocedure CT-scan followed by a 3D reconstruction of the images. Information on modiolar location and navigational guidance was supplemented to the reconstructed model, which was then registered with the surgical video using a point-based approach. Relative movements between the phantom and the microscope were tracked using image feature-based motion tracking. Based on the information provided via the ARS, the surgeon implanted the electrode-array inside the modiolus after drilling the helicothrema. Postprocedure CT-scan images were acquired to evaluate the registration error and the implantation accuracy. RESULTS: The implantation could be conducted in all cases with a 2D registration error of 0.4 ± 0.24 mm. The mean entry point error was 0.6 ± 1.00 mm and the implant angular error 13.5 ± 8.93 degrees (n = 8), compatible with the procedure requirements. CONCLUSION: We developed an image-based ARS to identify the extremities and the axis of the cochlear modiolus on intraprocedure videos. The system yielded submillimetric accuracy for implantation and remained stable throughout the experimental study.
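The point-based registration step described above is commonly solved in closed form with the Kabsch (SVD) algorithm, after which the registration error is the RMS residual over the fiducials. The sketch below is that generic textbook formulation, not the authors' actual implementation:

```python
import numpy as np

def register_points(src, dst):
    """Least-squares rigid registration (Kabsch/SVD): find R, t such
    that dst ≈ src @ R.T + t for paired 3D point sets of shape (N, 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def registration_error(src, dst, R, t):
    """RMS distance between registered fiducials and their targets."""
    residual = dst - (src @ R.T + t)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

With noise-free correspondences the residual is numerically zero; in practice the reported sub-millimetric errors come from fiducial localization noise.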


Subjects
Augmented Reality, Cochlear Implantation, Cochlear Implants, Adult, Child, Cochlea/diagnostic imaging, Cochlea/surgery, Cochlear Implantation/methods, Middle Ear/surgery, Humans, Temporal Bone/diagnostic imaging, Temporal Bone/surgery, Videotape Recording
8.
Otol Neurotol ; 43(3): 385-394, 2022 03 01.
Article in English | MEDLINE | ID: mdl-34889824

ABSTRACT

HYPOTHESIS: Augmented reality (AR) solely based on image features is achievable in operating room conditions and its precision is compatible with otological surgery. BACKGROUND: The objective of this work was to evaluate the performance of a vision-based AR system for middle ear surgery in operating room conditions. METHODS: Nine adult patients undergoing ossicular procedures were included in this prospective study. AR was obtained by combining real-time video from the operating microscope with the virtual image obtained from the preoperative computed tomography (CT)-scan. Initial registration between the video and the virtual CT image was achieved using manual selection of six points on the tympanic sulcus. Patient-microscope movements during the procedure were tracked using an image-feature matching algorithm. The microscope was randomly moved at an approximate speed of 5 mm/s in the three axes of space and rotation for 180 seconds. The accuracy of the system was assessed by calculating the distance between each fiducial point selected on the video image and its corresponding point on the scanner. RESULTS: AR could be obtained for at least 3 minutes in seven out of nine patients. The overlay fiducial and target registration errors were 0.38 ± 0.23 mm (n = 7) and 0.36 ± 0.15 mm (n = 5) respectively, with a drift error of 1.2 ± 0.5 µm/s. The system was stable throughout the procedure and achieved a refresh rate of 12 fps. Moderate bleeding and introduction of surgical instruments did not compromise the performance of the system. CONCLUSION: The AR system yielded sub-millimetric accuracy and remained stable throughout the experimental study despite patient-microscope movements and field of view obtrusions.


Subjects
Augmented Reality, Computer-Assisted Surgery, Adult, Middle Ear/diagnostic imaging, Middle Ear/surgery, Humans, Three-Dimensional Imaging/methods, Operating Rooms, Prospective Studies, Computer-Assisted Surgery/methods
9.
Sci Rep ; 11(1): 4406, 2021 02 23.
Article in English | MEDLINE | ID: mdl-33623074

ABSTRACT

A temporal bone CT-scan is a prerequisite for most surgical procedures concerning the ear, such as cochlear implantation. 3D vision of inner ear structures is crucial for diagnostic and surgical preplanning purposes. Since clinical CT-scans are acquired at relatively low resolutions, improved performance can be achieved by registering patient-specific CT images to a high-resolution inner ear model built from accurate 3D segmentations based on micro-CT of human temporal bone specimens. This paper presents a convolutional neural network-based framework for human inner ear segmentation from micro-CT images, which can be used to build such a model from an extensive database. The proposed approach employs an auto-context-based cascaded 2D U-net architecture with 3D connected component refinement to segment the cochlear scalae, semicircular canals, and vestibule. The system was evaluated on a dataset composed of 17 micro-CT scans from the public Hear-EU dataset. A Dice coefficient of 0.90 and a Hausdorff distance of 0.74 mm were obtained. The system yields precise and fast automatic inner ear segmentations.
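The reported Dice coefficient and Hausdorff distance can be computed directly from binary segmentation masks. A minimal brute-force sketch (adequate for small volumes; production code would use a distance transform for the Hausdorff term):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between the foreground voxel sets
    of two masks, in physical units (brute-force pairwise distances)."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The `spacing` argument converts voxel indices to millimetres, which is how sub-voxel figures such as 0.74 mm arise from anisotropic scans.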


Subjects
Neural Networks (Computer), X-Ray Computed Tomography/methods, Inner Ear/diagnostic imaging, Humans, Temporal Bone/diagnostic imaging
10.
Otol Neurotol ; 41(10): e1207-e1213, 2020 12.
Article in English | MEDLINE | ID: mdl-32976342

ABSTRACT

OBJECTIVE: To evaluate the useful length and the diameter of the cochlear lumen (CL) using routine imaging before cochlear implantation, in order to study inter-individual variability and its impact on the insertion depth of the electrode carrier (EC). STUDY DESIGN: Prospective cross-sectional study. SETTING: Tertiary referral center. PATIENTS: Thirty-one preoperative and postimplantation temporal bone CT scans were analyzed by two investigators. INTERVENTION: Images were analyzed via orthogonal multiplanar reconstruction (Osirix) to measure the lengths of the entire CL and the basal turn. By means of curvilinear reconstruction, the CL was unfolded and the diameters of the CL and of the EC were measured every 2 mm from the round window (RW) to the apex. RESULTS: Very high inter-individual variability was found for the length of the basal turn (RSD > 1000%), the entire CL length (RSD > 800%), and the CL diameter at the RW (RSD > 600%). The CL diameter was not correlated with the CL length. The inserted EC/total visible CL length ratio was 1.0 ± 0.12. Reliability of the measures was acceptable for the CL length and the diameter at 16 mm from the RW (Cronbach's alpha > 0.7, n = 31). CONCLUSION: CL length and diameter can be measured directly and reliably with commercially available tools. These parameters potentially influence EC insertion and should be assessed before cochlear implant surgery.


Subjects
Cochlear Implantation, Cochlear Implants, Cochlea/diagnostic imaging, Cochlea/surgery, Cross-Sectional Studies, Humans, Prospective Studies, Reproducibility of Results, Temporal Bone/diagnostic imaging, Temporal Bone/surgery
11.
Int J Comput Assist Radiol Surg ; 15(10): 1703-1711, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32737858

ABSTRACT

PURPOSE: Direct visualization of the cochlea is impossible due to the delicate and intricate anatomy of the ear. Augmented reality may be used to perform auditory nerve implantation via a transmodiolar approach in patients with profound hearing loss. METHODS: We present an augmented reality system for the visualization of the cochlear axis in surgical videos. The system starts with automatic anatomical landmark detection in preoperative computed tomography images based on deep reinforcement learning. These landmarks are used to register the preoperative geometry with the real-time microscopic video captured inside the auditory canal. The three-dimensional pose of the cochlear axis is determined using the registration projection matrices. In addition, patient-microscope movements are tracked using an image feature-based tracking process. RESULTS: The landmark detection stage yielded an average localization error of [Formula: see text] mm ([Formula: see text]). The target registration error was [Formula: see text] mm for the cochlear apex and [Formula: see text] for the cochlear axis. CONCLUSION: We developed an augmented reality system to visualize the cochlear axis in intraoperative videos. The system yielded millimetric accuracy and remained stable throughout the experimental study despite camera movements during the procedure in experimental conditions.
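Determining the on-screen pose of the cochlear axis amounts to projecting its 3D endpoints through a 3x4 projection matrix. A schematic pinhole-projection sketch; the matrix and points below are arbitrary illustrative values, not from the study:

```python
import numpy as np

def project(P, X):
    """Project 3D points X (N, 3) into the image plane using a 3x4
    projection matrix P, returning 2D coordinates (N, 2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]                 # perspective division

# Illustrative camera: identity rotation, translation of 5 along z.
P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
# Two hypothetical points on the cochlear axis (apex and base).
axis_3d = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 5.0]])
print(project(P, axis_3d))   # 2D endpoints of the overlaid axis
```

Drawing a line between the two projected endpoints yields the axis overlay on each video frame.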


Subjects
Augmented Reality, Cochlea/surgery, Computer-Assisted Surgery/methods, Humans, Microscopy/methods, X-Ray Computed Tomography/methods
12.
Int J Comput Assist Radiol Surg ; 15(9): 1467-1476, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32691302

ABSTRACT

PURPOSE: This paper addresses the detection of the clinical target volume (CTV) in intraoperative transrectal ultrasound (TRUS) images for image-guided permanent prostate brachytherapy. Developing a robust and automatic method to detect the CTV on intraoperative TRUS images is clinically important to achieve faster and more reproducible interventions, which can benefit both the clinical workflow and patient health. METHODS: We present a multi-task deep learning method for automatic prostate CTV boundary detection in intraoperative TRUS images that leverages both low-level and high-level (prior shape) information. Our method includes a channel-wise feature calibration strategy for low-level feature extraction and learning-based prior knowledge modeling for prostate CTV shape reconstruction. It employs CTV shape reconstruction from automatically sampled boundary surface coordinates (pseudo-landmarks) to detect the low-contrast and noisy regions across the prostate boundary, while being less biased by shadowing, inherent speckle, and artifact signals from the needle and implanted radioactive seeds. RESULTS: The proposed method was evaluated on a clinical database of 145 patients who underwent permanent prostate brachytherapy under TRUS guidance. Our method achieved a mean accuracy of [Formula: see text] and a mean surface distance error of [Formula: see text]. Extensive ablation and comparison studies show that our method outperformed previous deep learning-based methods by more than 7% in Dice similarity coefficient and reduced the 3D Hausdorff distance error by 6.9 mm. CONCLUSION: Our study demonstrates the potential of shape model-based deep learning methods for efficient and accurate CTV segmentation in ultrasound-guided interventions. Moreover, learning both low-level features and prior shape knowledge with channel-wise feature calibration can significantly improve the performance of deep learning methods in medical image segmentation.
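The pseudo-landmark idea reduces the boundary to a small set of sampled coordinates from which a dense contour is rebuilt. The paper uses a learned shape model for that reconstruction; the sketch below substitutes the simplest geometric stand-in, arc-length resampling of a closed polygon, purely to illustrate the landmark-to-contour step:

```python
import numpy as np

def densify_contour(landmarks, n=256):
    """Rebuild a dense closed 2D contour from sparse pseudo-landmarks
    (K, 2) by periodic linear interpolation along normalized arc length."""
    pts = np.vstack([landmarks, landmarks[:1]])            # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)     # edge lengths
    s = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    t = np.linspace(0.0, 1.0, n, endpoint=False)           # sample positions
    return np.stack([np.interp(t, s, pts[:, 0]),
                     np.interp(t, s, pts[:, 1])], axis=1)
```

A learned shape model additionally constrains the result to plausible prostate shapes, which is what makes the method robust to shadowing and speckle; this purely geometric version has no such prior.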


Subjects
Brachytherapy, Deep Learning, Computer-Assisted Diagnosis/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Ultrasonography, Algorithms, Artifacts, Humans, Male, Statistical Models, Prostate/diagnostic imaging, Reproducibility of Results, Workflow
13.
Int J Comput Assist Radiol Surg ; 15(9): 1437-1444, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32653985

ABSTRACT

PURPOSE: Accurate image segmentation is the first critical step in medical image analysis and interventions, and deep neural networks are a promising approach, provided sufficiently large and diverse annotated datasets from experts are available. However, annotated datasets are often limited, because annotation is prone to variations in acquisition parameters, requires high-level expert knowledge, and manually labeling targets by tracing their contours is laborious. Developing fast, interactive, and weakly supervised deep learning methods is thus highly desirable. METHODS: We propose a new, efficient deep learning method to accurately segment targets from images while generating an annotated dataset for deep learning methods. It involves a generative neural network-based prior-knowledge prediction from pseudo-contour landmarks. The predicted prior knowledge (i.e., a contour proposal) is then refined using a convolutional neural network that leverages information from both the predicted prior knowledge and the raw input image. Our method was evaluated on a clinical database of 145 intraoperative ultrasound and 78 postoperative CT images from image-guided prostate brachytherapy. It was also evaluated on cardiac multi-structure segmentation from 450 2D echocardiographic images. RESULTS: Experimental results show that our model can segment the prostate clinical target volume in 0.499 s (i.e., 7.79 milliseconds per image) with average Dice coefficients of 96.9 ± 0.9% and 95.4 ± 0.9%, 3D Hausdorff distances of 4.25 ± 4.58 and 5.17 ± 1.41 mm, and volumetric overlap ratios of 93.9 ± 1.8% and 91.3 ± 1.7% on TRUS and CT images, respectively. It also yielded an average Dice coefficient of 96.3 ± 1.3% on echocardiographic images. CONCLUSIONS: We proposed and evaluated a fast, interactive deep learning method for accurate medical image segmentation. Moreover, our approach has the potential to solve the bottleneck of deep learning methods in adapting to inter-clinical variations and to speed up annotation processes.


Subjects
Deep Learning, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Brachytherapy, Factual Databases, Computer-Assisted Diagnosis/methods, Echocardiography, Humans, Male, Observer Variation, Automated Pattern Recognition, Reproducibility of Results, X-Ray Computed Tomography, Ultrasonography
14.
Sci Rep ; 10(1): 6767, 2020 04 21.
Article in English | MEDLINE | ID: mdl-32317726

ABSTRACT

The aim of the study was to develop and assess the performance of a video-based augmented reality system, combining preoperative computed tomography (CT) and real-time microscopic video, as the first crucial step towards keyhole middle ear procedures through a tympanic membrane puncture. Six different artificial human temporal bones were included in this prospective study. Six stainless steel fiducial markers were glued on the periphery of the eardrum, and a high-resolution CT-scan of the temporal bone was obtained. Virtual endoscopy of the middle ear based on this CT-scan was conducted with the Osirix software. The virtual endoscopy image was registered to the microscope-based video of the intact tympanic membrane based on the fiducial markers, and a homography transformation was applied during microscope movements. These movements were tracked using the Speeded-Up Robust Features (SURF) method. Simultaneously, a micro-surgical instrument was identified and tracked using a Kalman filter. The 3D position of the instrument was extracted by solving a three-point perspective framework. For evaluation, the instrument was introduced through the tympanic membrane and ink droplets were injected on three middle ear structures. An average initial registration accuracy of 0.21 ± 0.10 mm (n = 3) was achieved, with slow error propagation during tracking (0.04 ± 0.07 mm). The estimated surgical instrument tip position error was 0.33 ± 0.22 mm. The target structures' localization accuracy was 0.52 ± 0.15 mm. The submillimetric accuracy of our system, without any external tracker, is compatible with ear surgery.
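Instrument tracking with a Kalman filter, as mentioned above, typically assumes a simple motion model. Below is a generic constant-velocity filter for a 2D tip position, in textbook form with illustrative noise parameters, not the study's actual configuration:

```python
import numpy as np

class Kalman2D:
    """Constant-velocity Kalman filter tracking a 2D instrument tip."""

    def __init__(self, dt=1.0, q=1e-3, r=1e-2):
        self.x = np.zeros(4)                       # state [px, py, vx, vy]
        self.P = np.eye(4)                         # state covariance
        self.F = np.eye(4)                         # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                      # we observe position only
        self.Q = q * np.eye(4)                     # process noise
        self.R = r * np.eye(2)                     # measurement noise

    def step(self, z):
        """Predict one frame ahead, then fuse the measured tip position z."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # filtered position
```

The filter smooths per-frame detections of the instrument tip and bridges brief detection dropouts via the prediction step.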


Subjects
Middle Ear/surgery, Computer-Assisted Surgery/methods, X-Ray Computed Tomography, Video-Assisted Surgery/methods, Augmented Reality, Middle Ear/diagnostic imaging, Middle Ear/pathology, Humans, Three-Dimensional Imaging, Microscopy, Middle Aged, Phantoms (Imaging)
15.
IEEE J Biomed Health Inform ; 24(7): 2093-2106, 2020 07.
Article in English | MEDLINE | ID: mdl-31751255

ABSTRACT

Cranial base procedures involve the manipulation of small, delicate and complex structures in the fields of otology, rhinology, neurosurgery and maxillofacial surgery, with critical nerves and blood vessels in close proximity to these structures. Augmented reality is an emerging technology that can revolutionize cranial base procedures by providing supplementary anatomical and navigational information unified on a single display. However, awareness and acceptance of the possibilities of augmented reality systems in the cranial base domain are fairly low. This article aims to evaluate the usefulness of augmented reality systems in cranial base surgeries and highlights the challenges that the current technology faces, along with their potential solutions. A technical perspective on the different strategies employed in the development of an augmented reality system is also presented. The current trend suggests an increasing interest in augmented reality systems that may lead to safer and more cost-effective procedures. However, several issues need to be addressed before the technology can be widely integrated into routine practice.


Subjects
Augmented Reality, Neurosurgical Procedures/methods, Skull Base/surgery, Computer-Assisted Surgery/methods, Humans
16.
Otol Neurotol ; 39(8): 931-939, 2018 09.
Article in English | MEDLINE | ID: mdl-30113553

ABSTRACT

HYPOTHESIS: Augmented reality (AR) may enhance otologic procedures by providing sub-millimetric accuracy and allowing the unification of information on a single screen. BACKGROUND: Several issues related to otologic procedures can be addressed through an AR system providing sub-millimetric precision, supplying a global view of the middle ear cleft, and advantageously unifying the information on a single screen. The AR system is obtained by combining otoendoscopy with temporal bone computed tomography (CT). METHODS: Four human temporal bone specimens were explored by high-resolution CT-scan and dynamic otoendoscopy with video recordings. The initialization of the system consisted of a semi-automatic registration between the otoendoscopic video and the 3D CT-scan reconstruction of the middle ear. Endoscope movements were estimated by several computer vision techniques (feature detectors/descriptors and optical flow) and used to warp the CT-scan to maintain correspondence with the otoendoscopic video. RESULTS: The system maintained synchronization between the CT-scan image and the otoendoscopic video in all experiments, during both slow and rapid (5-10 mm/s) endoscope movements. Among the tested algorithms, two feature-based methods, scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), provided sub-millimetric mean tracking errors (0.38 ± 0.53 mm and 0.20 ± 0.16 mm, respectively) and an adequate image refresh rate (11 and 17 frames per second, respectively) after 2 minutes of procedure with continuous endoscope movements. CONCLUSION: Precise augmented reality combining video and 3D CT-scan data can be applied to otoendoscopy without conventional neuronavigation tracking, thanks to computer vision algorithms.
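Warping the CT reconstruction to follow endoscope motion can be modeled, for approximately planar views, by a homography estimated from matched features. A minimal direct linear transform (DLT) sketch in plain NumPy; a real system would use a library routine with RANSAC outlier rejection rather than this exact-correspondence version:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from N >= 4
    matched 2D points, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = flattened H
    return H / H[2, 2]             # fix the arbitrary scale

def warp_point(H, p):
    """Apply homography H to a 2D point p."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Re-estimating H from frame-to-frame feature matches (e.g. SIFT or SURF correspondences) and composing it with the initial registration keeps the CT overlay aligned as the endoscope moves.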


Subjects
Middle Ear/diagnostic imaging, Three-Dimensional Imaging/methods, Temporal Bone/diagnostic imaging, X-Ray Computed Tomography/methods, Endoscopy/methods, Humans, Video Recording