1.
Med Image Anal; 80: 102488, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35667327

ABSTRACT

Semantic image segmentation is an important prerequisite for context-awareness and autonomous robotics in surgery. The state of the art has focused on conventional RGB video data acquired during minimally invasive surgery, but full-scene semantic segmentation based on spectral imaging data obtained during open surgery has received almost no attention to date. To address this gap in the literature, we are investigating the following research questions based on hyperspectral imaging (HSI) data of pigs acquired in an open surgery setting: (1) What is an adequate representation of HSI data for neural network-based fully automated organ segmentation, especially with respect to the spatial granularity of the data (pixels vs. superpixels vs. patches vs. full images)? (2) Is there a benefit of using HSI data compared to other modalities, namely RGB data and processed HSI data (e.g. tissue parameters like oxygenation), when performing semantic organ segmentation? According to a comprehensive validation study based on 506 HSI images from 20 pigs, annotated with a total of 19 classes, deep learning-based segmentation performance increases - consistently across modalities - with the spatial context of the input data. Unprocessed HSI data offers an advantage over RGB data or processed data from the camera provider, with the advantage increasing with decreasing size of the input to the neural network. Maximum performance (HSI applied to whole images) yielded a mean DSC of 0.90 (standard deviation (SD) 0.04), which is in the range of the inter-rater variability (DSC of 0.89 (SD 0.07)). We conclude that HSI could become a powerful image modality for fully-automatic surgical scene understanding with many advantages over traditional imaging, including the ability to recover additional functional tissue information. Our code and pre-trained models are available at https://github.com/IMSY-DKFZ/htc.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Animals; Neural Networks, Computer; Semantics; Swine
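The segmentation quality reported above is measured with the Dice similarity coefficient (DSC). A minimal sketch of how a per-class DSC is computed on binary masks (the function name and toy masks are illustrative, not taken from the cited htc codebase):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    DSC = 2 * |pred ∩ target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: two 4x4 masks of 8 pixels each, overlapping in one row (4 pixels)
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True
print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
```

For a multi-class segmentation such as the 19-class setting above, this is evaluated per class and averaged to give the reported mean DSC.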
2.
Sci Rep; 12(1): 11028, 2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35773276

ABSTRACT

Visual discrimination of tissue during surgery can be challenging since different tissues appear similar to the human eye. Hyperspectral imaging (HSI) removes this limitation by associating each pixel with high-dimensional spectral information. While previous work has shown its general potential to discriminate tissue, clinical translation has been limited due to the method's current lack of robustness and generalizability. Specifically, the scientific community is lacking a comprehensive spectral tissue atlas, and it is unknown whether variability in spectral reflectance is primarily explained by tissue type rather than the recorded individual or specific acquisition conditions. The contribution of this work is threefold: (1) Based on an annotated medical HSI data set (9059 images from 46 pigs), we present a tissue atlas featuring spectral fingerprints of 20 different porcine organs and tissue types. (2) Using the principle of mixed model analysis, we show that the greatest source of variability related to HSI images is the organ under observation. (3) We show that HSI-based fully-automatic tissue differentiation of 20 organ classes with deep neural networks is possible with high accuracy (> 95%). We conclude from our study that automatic tissue discrimination based on HSI data is feasible and could thus aid in intraoperative decision-making and pave the way for context-aware computer-assisted surgery systems and autonomous robotics.


Subject(s)
Hyperspectral Imaging; Machine Learning; Animals; Neural Networks, Computer; Swine
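A "spectral fingerprint" of an organ, as described in the abstract, is essentially an aggregate reflectance spectrum over all annotated pixels of that class. A minimal sketch under assumed data shapes (the function name, cube layout `(H, W, C)`, and normalization choice are illustrative assumptions, not the published atlas pipeline):

```python
import numpy as np

def spectral_fingerprint(hsi_cube: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Median reflectance spectrum over annotated pixels of one tissue class.

    hsi_cube: (H, W, C) hyperspectral image with C spectral channels.
    mask:     (H, W) boolean annotation mask for the organ of interest.
    Returns an L1-normalized spectrum of shape (C,), so fingerprints from
    different acquisitions are comparable independent of overall brightness.
    """
    spectra = hsi_cube[mask]               # (N, C): one spectrum per annotated pixel
    spectrum = np.median(spectra, axis=0)  # median is robust to outlier pixels
    return spectrum / spectrum.sum()

# Usage with synthetic data: an 8x8 cube with 100 channels, half annotated
cube = np.random.default_rng(0).random((8, 8, 100))
mask = np.zeros((8, 8), dtype=bool); mask[:4, :] = True
fp = spectral_fingerprint(cube, mask)      # shape (100,), sums to 1
```

Comparing such per-organ fingerprints across animals and acquisition sessions is one way to ask, as the paper does via mixed model analysis, whether tissue type dominates the spectral variability.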