Results 1 - 8 of 8

1.
IEEE Trans Med Imaging ; 38(1): 79-89, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30010552

ABSTRACT

Contemporary endoscopic simultaneous localization and mapping (SLAM) methods accurately compute endoscope poses; however, they provide only a sparse 3-D reconstruction that poorly describes the surgical scene. We propose a novel dense SLAM method that is: 1) monocular, requiring only the RGB images of a handheld monocular endoscope; 2) fast, providing endoscope positional tracking and 3-D scene reconstruction in parallel threads; 3) dense, yielding an accurate dense reconstruction; 4) robust to the severe illumination changes, poor texture, and small deformations that are typical in endoscopy; and 5) self-contained, needing no fiducials or external tracking devices, so it can be smoothly integrated into the surgical workflow. It works as follows. The system segments the video into clusters of frames according to parallax criteria, and accurate cluster frame poses are estimated from the sparse SLAM feature matches. Next, dense matches between cluster frames are computed in parallel by a variational approach that combines zero-mean normalized cross correlation and a gradient Huber-norm regularizer. This combination copes with challenging lighting and textures at an affordable time budget on a modern GPU. It can outperform pure stereo reconstructions because the frame cluster can provide larger parallax from the endoscope's motion. We provide an extensive experimental validation on real sequences of the porcine abdominal cavity, both in vivo and ex vivo, as well as a qualitative evaluation on a human liver. In addition, a comparison with other dense SLAM methods shows gains in accuracy, density, and computation time.
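The data term of the variational matching step is built around zero-mean normalized cross correlation (ZNCC), which is insensitive to the gain and offset changes caused by endoscopic lighting. The following sketch illustrates a patchwise ZNCC score in Python; it is not the authors' GPU implementation, and the function name and toy patches are assumptions for demonstration only.

```python
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray, eps: float = 1e-8) -> float:
    """Zero-mean normalized cross-correlation between two equally sized patches.

    Returns a value in [-1, 1]; values near 1 indicate a good photometric
    match even under affine illumination changes, which is why ZNCC is
    attractive for endoscopic imagery with strong, varying lighting.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + eps
    return float(np.dot(a, b) / denom)

# Toy usage: a patch compared against a brightness/contrast-shifted copy of itself
rng = np.random.default_rng(0)
ref = rng.random((7, 7))
print(zncc(ref, 1.8 * ref + 0.3))     # ~1.0: invariant to gain and offset
print(zncc(ref, rng.random((7, 7))))  # typically near 0 for an unrelated patch
```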


Subjects
Augmented Reality, Endoscopy/methods, Three-Dimensional Imaging/methods, Abdominal Cavity/diagnostic imaging, Algorithms, Animals, Humans, Liver/diagnostic imaging, Swine
2.
Int J Med Robot ; 13(4), 2017 Dec.
Article in English | MEDLINE | ID: mdl-28387448

ABSTRACT

BACKGROUND: Flexible bendable instruments are key tools for performing surgical endoscopy. Being able to measure the 3D position of such instruments is useful for various tasks, such as automatically controlling robotized instruments and analyzing motions. METHODS: An automatic method is proposed to infer the 3D pose of a single-bending-section instrument, using only the images provided by a monocular camera embedded at the tip of the endoscope. The proposed method relies on colored markers attached to the bending section. The image of the instrument is segmented using a graph-based method, and the corners of the markers are extracted by detecting the color transitions along Bézier curves fitted to edge points. These features are accurately located and then used to estimate the 3D pose of the instrument with an adaptive model that takes into account the mechanical play between the instrument and its housing channel. RESULTS: The feature extraction method provides good localization of marker corners in images of the in vivo environment despite sensor saturation due to strong lighting. The RMS error in the estimated tip position of the instrument in laboratory experiments was 2.1, 1.96, and 3.18 mm in the x, y, and z directions, respectively. Qualitative analysis of in vivo images shows the ability to correctly estimate the 3D position of the instrument tip during real motions. CONCLUSIONS: The proposed method provides an automatic and accurate estimation of the 3D position of the tip of a bendable instrument in realistic conditions where standard approaches fail.
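To make the corner-extraction idea concrete, here is a minimal sketch of sampling an image along a cubic Bézier curve and flagging abrupt color transitions as candidate marker corners. The function names, the hue-channel input, and the jump threshold are illustrative assumptions; the authors' actual segmentation and fitting pipeline is more elaborate.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bézier curve for parameter t in [0, 1]."""
    t = np.asarray(t)[..., None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def color_transitions_along_curve(hue_image, ctrl_pts, n_samples=200, jump=0.2):
    """Sample a hue image (values in [0, 1]) along a Bézier curve and return
    the parameters t where the hue changes abruptly, i.e. candidate marker
    corners along the instrument contour."""
    t = np.linspace(0.0, 1.0, n_samples)
    pts = cubic_bezier(*ctrl_pts, t)                  # (n_samples, 2) as (row, col)
    rows = np.clip(pts[:, 0].round().astype(int), 0, hue_image.shape[0] - 1)
    cols = np.clip(pts[:, 1].round().astype(int), 0, hue_image.shape[1] - 1)
    profile = hue_image[rows, cols].astype(float)
    transitions = np.flatnonzero(np.abs(np.diff(profile)) > jump)
    return t[transitions], pts[transitions]
```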


Subjects
Endoscopes, Endoscopy/instrumentation, Three-Dimensional Imaging/methods, Robotic Surgical Procedures/instrumentation, Computer-Assisted Surgery/instrumentation, Algorithms, Automation, Endoscopy/methods, Equipment Design, Humans, Motion (Physics), Imaging Phantoms, Reproducibility of Results, Robotic Surgical Procedures/methods, Robotics, Computer-Assisted Surgery/methods
3.
Med Image Anal ; 37: 66-90, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28160692

ABSTRACT

This article presents a comprehensive review of the methods proposed in the literature for augmented reality in intra-abdominal minimally invasive surgery (also known as laparoscopic surgery). A solid background on surgical augmented reality is first provided to support the survey. The various methods of laparoscopic augmented reality and their key tasks are then categorized in order to better grasp the current landscape of the field. Finally, the issues gathered from the reviewed approaches are organized to outline the remaining challenges of augmented reality in laparoscopic surgery.


Subjects
Laparoscopy/methods, Laparoscopy/trends, Algorithms, Animals, Humans, Reproducibility of Results
4.
Int J Comput Assist Radiol Surg ; 12(1): 1-11, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27376720

ABSTRACT

PURPOSE: An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware requirement is a commercial tablet PC equipped with a camera; no external tracking device or artificial landmarks on the patient are required. METHODS: We resort to visual SLAM to provide markerless, real-time localization of the tablet PC camera with respect to the patient. The preoperative model is registered to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet PC screen at the beginning of the procedure. RESULTS: Accurate, real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when the anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs, and a phantom. CONCLUSIONS: The proposed system can be smoothly integrated into the surgical workflow because it (1) operates in real time, (2) requires minimal additional hardware (only a tablet PC with a camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.
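Registering a preoperative model to a SLAM map from a handful of point correspondences is commonly done with a closed-form rigid alignment. The sketch below shows a Kabsch/Umeyama-style least-squares fit and the resulting fiducial registration error; it is a generic illustration under that assumption, not the registration procedure of the paper.

```python
import numpy as np

def rigid_register(src_pts, dst_pts):
    """Least-squares rigid transform (R, t) mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 3) arrays of corresponding anchor points (N >= 3).
    Kabsch/Umeyama-style closed-form solution via SVD, without scale.
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def fre(src_pts, dst_pts, R, t):
    """Fiducial registration error: RMS residual over the anchor points."""
    res = (np.asarray(src_pts) @ R.T + t) - np.asarray(dst_pts)
    return float(np.sqrt((res ** 2).sum(1).mean()))
```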


Subjects
Three-Dimensional Imaging/methods, Imaging Phantoms, Computer-Assisted Surgery/methods, Anatomic Landmarks, Animals, Handheld Computers, Humans, Anatomic Models, Swine
5.
Med Image Anal ; 30: 130-143, 2016 May.
Article in English | MEDLINE | ID: mdl-26925804

ABSTRACT

The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming increasingly common, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus, such as an optical tracking system, to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Second, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results demonstrating the feasibility and the clinical potential of our approach.
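As a rough illustration of locating a rigid endoscope in CT volume data, the sketch below thresholds high-intensity voxels and fits a 3-D line by principal component analysis. The threshold value, function name, and tip heuristic are assumptions for illustration only; the paper's calibrated axis-localization procedure is not reproduced here.

```python
import numpy as np

def estimate_endoscope_axis(ct_volume, hu_threshold=2000.0):
    """Rough localization of a metallic endoscope in a CT volume.

    Hypothetical illustration: voxels above a high HU threshold are assumed
    to belong to the instrument; a 3-D line (point + direction) is fitted to
    them by principal component analysis. The tip is taken as the extreme
    voxel along the fitted direction.
    """
    idx = np.argwhere(ct_volume > hu_threshold).astype(float)  # (N, 3) voxel coords
    if len(idx) < 10:
        raise ValueError("no bright structure found above the threshold")
    centroid = idx.mean(axis=0)
    _, _, vt = np.linalg.svd(idx - centroid, full_matrices=False)
    direction = vt[0]                         # dominant axis of the voxel cloud
    proj = (idx - centroid) @ direction
    tip = idx[np.argmax(proj)]                # furthest voxel along the axis
    return centroid, direction, tip
```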


Subjects
Endoscopes, Laparoscopy/instrumentation, Laparoscopy/methods, Computer-Assisted Surgery/instrumentation, Computer-Assisted Surgery/methods, X-Ray Computed Tomography/methods, Equipment Design, Equipment Failure Analysis, Humans, Intraoperative Care/instrumentation, Intraoperative Care/methods, Imaging Phantoms, Reproducibility of Results, Sensitivity and Specificity, User-Computer Interface
6.
Article in English | MEDLINE | ID: mdl-23367240

ABSTRACT

In this paper we present a new 3-D laparoscopic device based on structured light for minimally invasive surgery. Real-time reconstruction of internal organ surfaces is very challenging, as numerous geometric and photometric variabilities and disturbances (bloody parts, specularities, smoke, ...) often occur during the surgical operation, sometimes with manipulations by several assistants. We therefore conceived a structured-light vision system that projects a coded pattern by means of either an external video projector or miniaturized diffractive optical elements with a laser source. Among structured-light techniques, the spatial-neighbourhood scheme is the most relevant class of approaches for moving and deformable surfaces, since it captures the depth map with only one shot. Each neighbourhood (a 3 × 3 window) represents a codeword of length 9 and is unique in the whole pattern, even if there is a lack of information. To this end, a monochromatic subperfect-map-based pattern is computed, driven by a desired minimal Hamming distance, H(min), between any pair of codewords. This provides patterns with high correction capabilities (H(min) > 1). For practical reasons, each numerical codeword symbol is associated with a unique visual feature embedding the local orientation of the pattern, which is helpful for neighbourhood retrieval during the decoding process. Together with the endoscopic device, in vivo real-time reconstructions (in minimally invasive surgical conditions) are presented to assess the efficiency of the proposed pattern design, the decoding process, and the 3-D laparoscope setup realized in the lab.
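The uniqueness and error-correction properties hinge on the minimum Hamming distance between the 3 × 3 codewords of the pattern. The brute-force check below only illustrates that property on a toy symbol grid; it is not the paper's pattern-generation algorithm, and the alphabet and pattern values are made up.

```python
import numpy as np
from itertools import combinations

def min_codeword_hamming(pattern: np.ndarray) -> int:
    """Minimum Hamming distance between all 3x3 codewords of a coded pattern.

    pattern: 2-D integer array of symbols. Every 3x3 window is read as a
    length-9 codeword; the returned distance must exceed 1 for the pattern
    to offer the error-correction capability described above (H(min) > 1).
    Illustrative brute-force check only.
    """
    h, w = pattern.shape
    codewords = [pattern[r:r + 3, c:c + 3].ravel()
                 for r in range(h - 2) for c in range(w - 2)]
    return min(int((a != b).sum()) for a, b in combinations(codewords, 2))

# Tiny toy pattern of symbols drawn from an alphabet {0, 1, 2}
toy = np.array([[0, 1, 2, 0],
                [2, 0, 1, 2],
                [1, 2, 0, 1]])
print(min_codeword_hamming(toy))  # codeword uniqueness requires a result >= 1
```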


Subjects
Equipment Design, Laparoscopes, Minimally Invasive Surgical Procedures, Humans
7.
IEEE Trans Biomed Eng ; 55(10): 2417-25, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18838367

ABSTRACT

In this paper, we present a novel robotic assistant dedicated to medical interventions under computed tomography (CT) guidance. This compact and lightweight patient-mounted robot is designed to fulfill the requirements of most interventional radiology procedures. It is built around an original 5-DOF parallel structure with a semispherical workspace, which is particularly well suited to CT-guided interventional procedures. The specifications, the design, and the choice of compatible technological solutions are detailed. A preclinical evaluation is presented, including registration of the robot in the CT scan.


Subjects
Interventional Radiography/instrumentation, Robotics/instrumentation, Computer-Assisted Surgery/instrumentation, X-Ray Computed Tomography/methods, Biomechanical Phenomena, Needle Biopsy/instrumentation, Equipment Design, Humans, Imaging Phantoms, Stereotaxic Techniques, Surgical Equipment, Radiologic Technology/instrumentation
8.
Article in English | MEDLINE | ID: mdl-17354931

ABSTRACT

In robot-assisted laparoscopic surgery, an endoscopic camera is used to control the motion of surgical instruments. With this minimally invasive surgical (MIS) technique, every instrument has to pass through an insertion point in the abdominal wall and is mounted on the end-effector of a surgical robot that can be controlled by visual feedback. To achieve accurate vision-based positioning of laparoscopic instruments, we introduce the MIS motion constraint, which is based on the location of the out-of-field-of-view insertion points. Knowledge of the (image of the) insertion point location is helpful for real-time image segmentation, particularly to initiate the search for region seeds corresponding to the instruments. Moreover, with this "eye-to-hand" robot vision system, visual servoing is a very convenient technique to automatically guide an instrument, but it requires the velocity screw to be expressed in the appropriate frame. The location of the insertion point is then seen as the main part of the larger problem of determining the overall transformation between the camera and the robot end-effector frame. This is achieved thanks to a novel algorithm for the pose determination of cylindrical instruments. With the proposed method, the location of insertion points can be recovered online, without markers, without any knowledge of the robot kinematics, and without an external measurement device.
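Expressing a velocity screw in a different frame is a standard operation: a twist given in the camera frame is mapped to the end-effector frame with the 6 × 6 adjoint of the camera-to-end-effector transform. The sketch below shows that generic formula with the twist ordered as [v; w]; the example hand-eye rotation and translation are placeholder values, not results from the paper.

```python
import numpy as np

def adjoint(R: np.ndarray, p: np.ndarray) -> np.ndarray:
    """6x6 adjoint of the rigid transform (R, p) from frame B to frame A.

    Maps a velocity screw (twist) [v; w] expressed in frame B to the same
    twist expressed in frame A -- the step needed before sending a visual
    servoing command to the robot controller.
    """
    px = np.array([[0, -p[2], p[1]],
                   [p[2], 0, -p[0]],
                   [-p[1], p[0], 0]])         # skew-symmetric matrix of p
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = px @ R
    Ad[3:, 3:] = R
    return Ad

# Example: twist [v; w] given in the camera frame, re-expressed in the
# end-effector frame using a (hypothetical) hand-eye transform (R, p).
twist_cam = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.1])   # m/s and rad/s
R = np.eye(3)
p = np.array([0.0, 0.05, 0.10])                          # metres
twist_ee = adjoint(R, p) @ twist_cam
```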


Subjects
Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Laparoscopy/methods, Robotics/methods, Computer-Assisted Surgery/methods, Surgical Instruments