1.
Biomed Opt Express; 15(4): 2543-2560, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38633079

ABSTRACT

Anastomosis is a common and critical part of reconstructive procedures within gastrointestinal, urologic, and gynecologic surgery. The use of autonomous surgical robots such as the smart tissue autonomous robot (STAR) system demonstrates improved efficiency and consistency in laparoscopic small-bowel anastomosis compared with the current da Vinci surgical system. However, the STAR workflow requires auxiliary manual monitoring during the suturing procedure to avoid missed or wrong stitches. To relieve operators of this monitoring task, we integrated an optical coherence tomography (OCT) fiber sensor with the suture tool and developed an automatic tissue classification algorithm for detecting missed or wrong stitches in real time. The classification results were updated and sent to the control loop of the STAR robot in real time. The suture tool was guided toward the target by a dual-camera system; if the tissue inside the tool jaw was inconsistent with the desired suture pattern, a warning message was generated. The proposed hybrid multilayer perceptron dual-channel convolutional neural network (MLP-DC-CNN) classification platform can automatically classify eight abdominal tissue types that require different suture strategies for anastomosis. The MLP used ∼1,955 handcrafted features, including optical properties and morphological features of one-dimensional (1D) OCT A-line signals; the DC-CNN fully exploited intensity-based features and depth-resolved tissue attenuation coefficients. A decision fusion technique was applied to leverage the information from both classifiers and further increase accuracy. Evaluated on 69,773 test A-lines, the model classified 1D OCT signals of small bowel in real time with an accuracy of 90.06%, a precision of 88.34%, and a sensitivity of 87.29%. The refresh rate of the displayed A-line signals was set to 300 Hz, the maximum sensing depth of the fiber was 3.6 mm, and the running time of the image processing algorithm was ∼1.56 s for 1,024 A-lines. The proposed fully automated tissue sensing model outperformed single CNN, MLP, and SVM classifiers with optimized architectures, showing the complementarity of different feature sets and network architectures in classifying intestinal OCT A-line signals. It can potentially reduce manual involvement in robotic laparoscopic surgery, a crucial step toward a fully autonomous STAR system.
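
The decision-fusion step described in this abstract can be sketched in a few lines of Python. The fragment below is a minimal, hypothetical illustration of probability-level fusion between two classifiers over eight tissue classes; the names (fuse_decisions, mlp_probs, cnn_probs) and the equal weighting are assumptions for illustration, not details taken from the MLP-DC-CNN implementation.

    import numpy as np

    N_CLASSES = 8  # tissue types requiring different suture strategies

    def fuse_decisions(mlp_probs, cnn_probs, w_mlp=0.5, w_cnn=0.5):
        """Weighted-sum fusion of two per-class probability arrays.

        mlp_probs, cnn_probs: arrays of shape (n_alines, N_CLASSES).
        Returns the fused class index for each A-line.
        Weights are illustrative; the paper does not specify the rule.
        """
        fused = w_mlp * mlp_probs + w_cnn * cnn_probs
        return np.argmax(fused, axis=1)

    # Example: fuse stand-in outputs for a batch of 4 A-lines.
    rng = np.random.default_rng(0)
    mlp = rng.dirichlet(np.ones(N_CLASSES), size=4)  # stand-in MLP scores
    cnn = rng.dirichlet(np.ones(N_CLASSES), size=4)  # stand-in DC-CNN scores
    print(fuse_decisions(mlp, cnn))

The appeal of fusing at the probability level is that each classifier can specialize (handcrafted features in the MLP, intensity and attenuation channels in the CNN) while disagreements are resolved by their combined confidence.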

2.
J Biomed Opt; 27(6), 2022 Jun.
Article in English | MEDLINE | ID: mdl-35751143

ABSTRACT

SIGNIFICANCE: Optical coherence tomography (OCT) allows high-resolution volumetric three-dimensional (3D) imaging of biological tissues in vivo. However, 3D image acquisition can be time-consuming and often suffers from motion artifacts caused by involuntary and physiological movements of the tissue, limiting the reproducibility of quantitative measurements.

AIM: To achieve real-time, high-accuracy 3D motion compensation for corneal tissue.

APPROACH: We propose an OCT system for volumetric imaging of the cornea, capable of compensating for both axial and lateral motion with micron-scale accuracy and millisecond-scale processing time based on higher-order regression. Specifically, the system first scans three reference B-mode images along the C-axis before acquiring a standard C-mode image. The reference and volumetric images are then compared using a surface-detection algorithm and higher-order polynomials to deduce 3D motion and remove motion-related artifacts.

RESULTS: System parameters were optimized, and performance was evaluated using both phantom and ex vivo corneal samples. An overall motion-artifact error of <4.61 microns and a processing time of about 3.40 ms per B-scan were achieved.

CONCLUSIONS: Higher-order regression achieved effective, real-time compensation of 3D motion artifacts during corneal imaging. The approach can be extended to 3D imaging of other ocular tissues. Implementing such motion-compensation strategies has the potential to improve the reliability of the objective, quantitative information extracted from volumetric OCT measurements.
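
As a rough sketch of the higher-order regression idea in the APPROACH section: the Python fragment below fits a polynomial to the per-A-line surface displacement between a reference B-scan and the current B-scan, then rolls each A-line back into registration. All names (axial_shift_polynomial, compensate_bscan) and the choice of a third-order fit are illustrative assumptions; the actual algorithm also compensates lateral motion and uses three reference B-mode images, which this axial-only sketch omits.

    import numpy as np

    def axial_shift_polynomial(ref_surface, cur_surface, order=3):
        """Fit a polynomial to surface displacement across the B-scan.

        ref_surface, cur_surface: 1D arrays of detected surface depth
        (in pixels) per A-line, e.g. from a surface-detection step.
        Returns a smoothed axial shift estimate for each A-line.
        """
        x = np.arange(ref_surface.size)
        displacement = cur_surface - ref_surface     # raw per-A-line shift
        coeffs = np.polyfit(x, displacement, order)  # higher-order regression
        return np.polyval(coeffs, x)                 # smooth motion estimate

    def compensate_bscan(bscan, shift):
        """Shift each A-line (column) axially by the estimated motion."""
        out = np.empty_like(bscan)
        for i, s in enumerate(np.round(shift).astype(int)):
            out[:, i] = np.roll(bscan[:, i], -s)
        return out

Fitting a smooth polynomial rather than applying the raw per-A-line displacement keeps single-A-line surface-detection errors from injecting spurious jumps, which is one plausible reason regression-based compensation can stay both fast and accurate.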


Subject(s)
Artifacts; Tomography, Optical Coherence; Cornea/diagnostic imaging; Imaging, Three-Dimensional/methods; Motion; Reproducibility of Results; Tomography, Optical Coherence/methods