1.
Sci Rep ; 14(1): 5683, 2024 03 07.
Article in English | MEDLINE | ID: mdl-38454099

ABSTRACT

Artificially created human faces play an increasingly important role in our digital world. However, the so-called uncanny valley effect may cause people to perceive highly, yet not perfectly, human-like faces as eerie, posing challenges for interaction with virtual agents. At the same time, the neurocognitive underpinnings of the uncanny valley effect remain elusive. Here, we utilized an electroencephalography (EEG) dataset of steady-state visual evoked potentials (SSVEP) in which participants were presented with human face images of different stylization levels, ranging from simplistic cartoons to actual photographs. Assessing neuronal responses in both the frequency and time domains, we found a non-linear relationship between SSVEP amplitudes and stylization level; that is, the most stylized cartoon images and the real photographs evoked stronger responses than images with medium stylization. Moreover, the realness of even highly similar stylization levels could be decoded from the EEG data with task-related component analysis (TRCA). Importantly, we also account for confounding factors, such as the size of the stimulus face's eyes, which previously had not been adequately addressed. Together, this study provides a basis for future research and neuronal benchmarking of real-time detection of face realness regarding three aspects: SSVEP-based neural markers, efficient classification methods, and low-level stimulus confounders.
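As a rough illustration of the frequency-domain analysis described above, the following Python sketch estimates the SSVEP amplitude of a single EEG channel at an assumed stimulation frequency via an FFT. The sampling rate, stimulation frequency, and all variable names are illustrative assumptions, not values taken from the study.

    import numpy as np

    def ssvep_amplitude(eeg, fs, stim_freq):
        """Estimate the SSVEP amplitude of one EEG channel at the
        stimulation frequency from a simple windowed FFT spectrum.

        eeg       : 1-D array of samples from a single channel
        fs        : sampling rate in Hz (assumed, e.g. 1000 Hz)
        stim_freq : stimulation frequency in Hz (assumed)
        """
        n = len(eeg)
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        # amplitude at the bin closest to the stimulation frequency
        return spectrum[np.argmin(np.abs(freqs - stim_freq))]

    # illustrative usage with synthetic data (not study data)
    fs, stim_freq = 1000.0, 6.0
    t = np.arange(0, 4, 1 / fs)
    eeg = 0.5 * np.sin(2 * np.pi * stim_freq * t) + np.random.randn(t.size)
    print(ssvep_amplitude(eeg, fs, stim_freq))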


Subject(s)
Brain-Computer Interfaces , Evoked Potentials, Visual , Humans , Electroencephalography/methods , Eye , Neurologic Examination , Photic Stimulation
2.
IEEE Trans Vis Comput Graph ; 30(5): 2644-2650, 2024 May.
Article in English | MEDLINE | ID: mdl-38466595

ABSTRACT

We propose a novel representation of virtual humans for highly realistic real-time animation and rendering in 3D applications. We learn pose-dependent appearance and geometry from highly accurate dynamic mesh sequences obtained from state-of-the-art multi-view video reconstruction. Learning pose-dependent appearance and geometry from mesh sequences poses significant challenges, as it requires the network to learn the intricate shape and articulated motion of a human body. However, statistical body models such as SMPL provide valuable a priori knowledge, which we leverage to constrain the dimensionality of the search space, enabling more efficient and targeted learning, and to define pose dependency. Instead of directly learning absolute pose-dependent geometry, we learn the difference between the observed geometry and the fitted SMPL model. This allows us to encode both pose-dependent appearance and geometry in the consistent UV space of the SMPL model. This approach not only ensures a high level of realism but also facilitates streamlined processing and rendering of virtual humans in real-time scenarios.
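A minimal Python sketch of the residual-geometry idea described above: instead of learning absolute vertex positions, the learning target is the per-vertex difference between the observed mesh and the fitted SMPL mesh. The UV-space encoding and the network itself are omitted, and the array shapes and names are assumptions for illustration only.

    import numpy as np

    def residual_geometry(observed_vertices, smpl_vertices):
        """Per-vertex offsets between an observed mesh and the fitted SMPL
        mesh (both assumed to share SMPL topology, e.g. shape (6890, 3)).
        These offsets, rather than absolute positions, would be the learning
        target and could then be encoded in SMPL's UV space."""
        assert observed_vertices.shape == smpl_vertices.shape
        return observed_vertices - smpl_vertices

    def reconstruct(smpl_vertices, predicted_offsets):
        """Recover the full geometry from the SMPL fit plus predicted offsets."""
        return smpl_vertices + predicted_offsets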


Subject(s)
Computer Graphics , Humans
3.
Sensors (Basel) ; 23(13)2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37447986

ABSTRACT

We investigate an edge-computing scenario for robot control in which two similar neural networks run on one computational node. We test the feasibility of using a single object-detection model (YOLOv5), which has the benefit of reduced computational resource usage, against potentially more accurate independent, specialized models. Our results show that using a single convolutional neural network for both object detection and hand-gesture classification, instead of two separate ones, can reduce resource usage by almost 50%. For many classes, we observed an increase in accuracy when using the model trained with more labels. For small datasets (a few hundred instances per label), we found it advisable to add labels with many instances from another dataset to increase detection accuracy.
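The Python sketch below illustrates the single-model idea using the public YOLOv5 hub API: one network produces detections whose classes are split downstream into object and hand-gesture groups. The class names and grouping are hypothetical, and training the merged-label model is not shown.

    import torch

    # one shared detector instead of two specialized ones
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    # hypothetical split of a merged label set into the two tasks
    OBJECT_CLASSES = {'bottle', 'cup', 'scissors'}
    GESTURE_CLASSES = {'open_hand', 'fist', 'point'}  # would require custom training

    def detect(image):
        """Run the single model once and route detections to the two tasks."""
        results = model(image).pandas().xyxy[0]        # one forward pass
        objects = results[results['name'].isin(OBJECT_CLASSES)]
        gestures = results[results['name'].isin(GESTURE_CLASSES)]
        return objects, gestures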


Subject(s)
Gestures , Running , Hand , Neural Networks, Computer , Upper Extremity
4.
Sensors (Basel) ; 23(11)2023 May 24.
Article in English | MEDLINE | ID: mdl-37299770

ABSTRACT

Multimodal user interfaces promise natural and intuitive human-machine interactions. However, is the extra effort for the development of a complex multisensor system justified, or can users also be satisfied with only one input modality? This study investigates interactions at an industrial weld inspection workstation. Three unimodal interfaces, including spatial interaction with buttons augmented on a workpiece or a worktable, and speech commands, were tested individually and in a multimodal combination. Within the unimodal conditions, users preferred the augmented worktable; overall, however, the interindividual use of all input technologies in the multimodal condition was ranked best. Our findings indicate that implementing and using multiple input modalities is valuable and that it is difficult to predict the usability of individual input modalities for complex systems.


Subject(s)
Technology , User-Computer Interface , Humans , Speech
5.
J Telemed Telecare ; : 1357633X231166226, 2023 Apr 24.
Article in English | MEDLINE | ID: mdl-37093788

ABSTRACT

Existing challenges in surgical education ("see one, do one, teach one") as well as the COVID-19 pandemic make it necessary to develop new approaches to surgical training. This work therefore describes the implementation of a scalable remote solution called "TeleSTAR", which uses immersive, interactive, and augmented reality elements to enhance surgical training in the operating room. The system uses a fully digital surgical microscope in the context of ear, nose, and throat (ENT) surgery. The microscope is equipped with a modular software augmented reality interface consisting of an interactive annotation mode to mark anatomical landmarks using a touch device and an experimental intraoperative image-based stereo-spectral algorithm unit to measure anatomical details and highlight tissue characteristics. The new educational tool was evaluated and tested during the broadcast of three live XR-based three-dimensional cochlear implant surgeries. The system was able to scale to five different remote locations in parallel with low latency while also offering a separate two-dimensional YouTube stream with higher latency. In total, more than 150 people were trained, including healthcare professionals, biomedical engineers, and medical students.

6.
Sci Rep ; 13(1): 1532, 2023 01 27.
Article in English | MEDLINE | ID: mdl-36707664

ABSTRACT

Flap loss due to limited perfusion remains a major complication in reconstructive surgery. Continuous monitoring of perfusion will facilitate early detection of insufficient perfusion. Remote or imaging photoplethysmography (rPPG/iPPG), as a non-contact, non-ionizing, and non-invasive monitoring technique, provides objective and reproducible information on physiological parameters. The aim of this study is to establish rPPG for intra- and postoperative monitoring of flap perfusion in patients undergoing reconstruction with free fasciocutaneous flaps (FFCF). We developed a monitoring algorithm for flap perfusion, which was evaluated in 15 patients. For 14 patients, ischemia of the FFCF in the forearm and successful reperfusion of the implanted FFCF were quantified based on the local signal. One FFCF showed no perfusion after reperfusion and subsequently devitalized. Intraoperative monitoring of perfusion with rPPG provides objective and reproducible results. Therefore, rPPG is a promising technology for standard flap perfusion monitoring at low cost and without the need for additional monitoring devices.
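A common way to obtain a remote-PPG signal of the kind used for perfusion monitoring above is to average the green channel over a region of interest per frame and band-pass filter the resulting time series around plausible pulse rates. The Python sketch below shows this generic approach, not the authors' specific algorithm; frame rate and cutoff frequencies are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def rppg_signal(frames, fps=30.0, low=0.7, high=3.0):
        """Generic rPPG extraction: mean green intensity of an ROI per frame,
        band-pass filtered to a physiological pulse range (~42-180 bpm).

        frames : array of shape (n_frames, h, w, 3), RGB, ROI already cropped
        """
        raw = frames[..., 1].mean(axis=(1, 2))     # green channel mean per frame
        raw = raw - raw.mean()                     # remove DC component
        b, a = butter(3, [low, high], btype='bandpass', fs=fps)
        return filtfilt(b, a, raw)                 # pulsatile component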


Subject(s)
Free Tissue Flaps , Photoplethysmography , Humans , Free Tissue Flaps/blood supply , Perfusion , Monitoring, Intraoperative , Monitoring, Physiologic/methods
7.
HNO ; 70(Suppl 1): 1-7, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34633475

ABSTRACT

BACKGROUND: Nasal septum perforations (NSP) cause many uncomfortable symptoms for the patient and have a highly negative impact on quality of life. NSPs are closed using patient-specific implants or surgery. Implants are created either under anesthesia using silicone impressions or using 3D models from CT data. Disadvantages for patient safety are the increased risk of morbidity and the radiation exposure. MATERIALS AND METHODS: In the context of otorhinolaryngologic surgery, we present a gentle approach to treating NSP with a new image-based, contactless, and radiation-free measurement method using a 3D endoscope. The method relies on image information only and makes use of real-time-capable computer vision algorithms to compute 3D information. This endoscopic method can be repeated as often as desired in the clinical course and has already proven its accuracy and robustness for robotic-assisted surgery (RAS) and surgical microscopy. We extend our method to nasal surgery, which poses additional spatial and stereo-perspective challenges. RESULTS: After measuring 3 relevant parameters (axial and coronal NSP extension, and NSP circumference) in 6 patients and comparing the results of 2 stereoendoscopes with CT data, it was shown that the image-based measurements can achieve accuracies comparable to CT data. One patient could be only partially evaluated because the NSP was larger than the endoscopic field of view. CONCLUSION: Based on these very good measurements, we outline a therapeutic procedure that should enable the production of patient-specific NSP implants based on endoscopic data only.


Subject(s)
Nasal Septal Perforation , Robotic Surgical Procedures , Endoscopy , Humans , Nasal Septal Perforation/diagnostic imaging , Nasal Septal Perforation/surgery , Nasal Septum/diagnostic imaging , Nasal Septum/surgery , Quality of Life
8.
HNO ; 70(3): 206-213, 2022 Mar.
Article in German | MEDLINE | ID: mdl-34477908

ABSTRACT

BACKGROUND: Nasal septum perforations (NSP) cause many uncomfortable symptoms for the patient and have a highly negative impact on quality of life. NSPs are closed using patient-specific implants or surgery. Implants are created either under anesthesia using silicone impressions or using 3D models from CT data. Disadvantages for patient safety are the increased risk of morbidity and the radiation exposure. MATERIALS AND METHODS: In the context of otorhinolaryngologic surgery, we present a gentle approach to treating NSP with a new image-based, contactless, and radiation-free measurement method using a 3D endoscope. The method relies on image information only and makes use of real-time-capable computer vision algorithms to compute 3D information. This endoscopic method can be repeated as often as desired in the clinical course and has already proven its accuracy and robustness for robotic-assisted surgery (RAS) and surgical microscopy. We extend our method to nasal surgery, which poses additional spatial and stereo-perspective challenges. RESULTS: After measuring 3 relevant parameters (axial and coronal NSP extension, and NSP circumference) in 6 patients and comparing the results of 2 stereoendoscopes with CT data, it was shown that the image-based measurements can achieve accuracies comparable to CT data. One patient could be only partially evaluated because the NSP was larger than the endoscopic field of view. CONCLUSION: Based on these very good measurements, we outline a therapeutic procedure that should enable the production of patient-specific NSP implants based on endoscopic data only.


Subject(s)
Nasal Septal Perforation , Robotic Surgical Procedures , Endoscopy/methods , Humans , Nasal Septal Perforation/diagnostic imaging , Nasal Septal Perforation/surgery , Nasal Septum/diagnostic imaging , Nasal Septum/surgery , Quality of Life
9.
Datenbank Spektrum ; 21(3): 255-260, 2021.
Article in English | MEDLINE | ID: mdl-34786019

ABSTRACT

Today's scientific data analysis very often requires complex Data Analysis Workflows (DAWs) executed over distributed computational infrastructures, e.g., clusters. Much research effort is devoted to the tuning and performance optimization of specific workflows for specific clusters. However, an arguably even more important problem for accelerating research is the reduction of development, adaptation, and maintenance times of DAWs. We describe the design and setup of the Collaborative Research Center (CRC) 1404 "FONDA -- Foundations of Workflows for Large-Scale Scientific Data Analysis", in which roughly 50 researchers jointly investigate new technologies, algorithms, and models to increase the portability, adaptability, and dependability of DAWs executed over distributed infrastructures. We describe the motivation behind our project, explain its underlying core concepts, introduce FONDA's internal structure, and sketch our vision for the future of workflow-based scientific data analysis. We also describe some lessons learned during the "making of" a CRC in computer science with strong interdisciplinary components, with the aim of fostering similar endeavors.

10.
J Biomed Opt ; 26(7)2021 07.
Article in English | MEDLINE | ID: mdl-34304399

ABSTRACT

SIGNIFICANCE: Hyperspectral and multispectral imaging (HMSI) in medical applications provides information about the physiology, morphology, and composition of tissues and organs. The use of these technologies enables the evaluation of biological objects and can potentially serve as an objective assessment tool for medical professionals. AIM: Our study investigates HMSI systems for their usability in medical applications. APPROACH: Four HMSI systems (one hyperspectral pushbroom camera and three multispectral snapshot cameras) were examined, and a spectrometer, initially validated with a standardized color chart, was used as the reference system. The spectral accuracy with which the cameras reproduce chemical properties of different biological objects (porcine blood, physiological porcine tissue, and pathological porcine tissue) was analyzed using the Pearson correlation coefficient. RESULTS: All the HMSI cameras examined were able to reproduce the characteristic spectral properties of blood and tissues. The pushbroom camera and two snapshot systems achieve Pearson coefficients of at least 0.97 compared to the ground truth, indicating a very high positive correlation. Only one snapshot camera shows merely a moderate to high positive correlation (0.59 to 0.85). CONCLUSION: Knowing the suitability of HMSI cameras for accurate measurement of the chemical properties of biological objects provides a good basis for selecting the optimal imaging tool for specific medical applications, such as organ transplantation.
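The spectral-accuracy comparison described above can, in principle, be reproduced by resampling a camera-derived spectrum onto the spectrometer's wavelength grid and computing the Pearson coefficient. The Python sketch below assumes both spectra are available as 1-D arrays and is not the authors' exact pipeline.

    import numpy as np
    from scipy.stats import pearsonr

    def spectral_agreement(cam_wl, cam_spec, ref_wl, ref_spec):
        """Pearson correlation between a camera spectrum and the spectrometer
        reference after interpolating the camera data onto the reference grid.

        cam_wl, cam_spec : wavelengths (nm) and values measured by the camera
        ref_wl, ref_spec : wavelengths (nm) and values from the spectrometer
        """
        cam_on_ref = np.interp(ref_wl, cam_wl, cam_spec)
        r, _ = pearsonr(cam_on_ref, ref_spec)
        return r   # e.g. >= 0.97 would indicate a very high positive correlation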


Subject(s)
Diagnostic Imaging , Organ Transplantation , Animals , Swine
11.
IEEE Comput Graph Appl ; 41(4): 52-63, 2021.
Article in English | MEDLINE | ID: mdl-33755560

ABSTRACT

This article presents a hybrid animation approach that combines example-based and neural animation methods to create a simple, yet powerful animation regime for human faces. Example-based methods usually employ a database of prerecorded sequences that are concatenated or looped in order to synthesize novel animations. In contrast to this traditional example-based approach, we introduce a lightweight autoregressive network to transform our animation database into a parametric model. During training, our network learns the dynamics of facial expressions, which enables the replay of annotated sequences from our animation database as well as their seamless concatenation in a new order. This representation is especially useful for the synthesis of visual speech, where coarticulation creates interdependencies between adjacent visemes that affect their appearance. Instead of creating an exhaustive database that contains all viseme variants, we use our animation network to predict the correct appearance. This allows realistic synthesis of novel facial animation sequences, such as visual speech, but also of general facial expressions in an example-based manner.
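As a loose sketch of the lightweight autoregressive idea, the Python model below predicts the next frame of facial animation parameters from preceding frames with a small GRU. The parameter dimensionality, architecture, and rollout are assumptions for illustration and not the authors' network.

    import torch
    import torch.nn as nn

    class FaceAutoregressor(nn.Module):
        """Predict the next frame of facial parameters from a history of frames."""
        def __init__(self, param_dim=64, hidden_dim=128):
            super().__init__()
            self.rnn = nn.GRU(param_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, param_dim)

        def forward(self, history):
            # history: (batch, n_frames, param_dim)
            out, _ = self.rnn(history)
            return self.head(out[:, -1])        # next-frame parameters

    # illustrative rollout: feed predictions back in to extend a sequence
    model = FaceAutoregressor()
    seq = torch.zeros(1, 10, 64)                # seed frames (placeholder values)
    for _ in range(5):
        nxt = model(seq).unsqueeze(1)
        seq = torch.cat([seq, nxt], dim=1)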


Subject(s)
User-Computer Interface , Virtual Reality , Facial Expression , Humans , Neural Networks, Computer , Speech
12.
J Med Imaging (Bellingham) ; 7(6): 065001, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33241074

ABSTRACT

Purpose: Hyperspectral imaging (HSI) is a non-contact optical imaging technique with the potential to serve as an intraoperative computer-aided diagnostic tool. Our work analyzes the optical properties of visible structures in the surgical field for automatic tissue categorization. Approach: Building an HSI-based computer-aided tissue analysis system requires accurate ground truth and validation of optical soft tissue properties, as these show large variability. We introduce and validate two different hyperspectral intraoperative imaging setups and their use for the analysis of optical tissue properties. First, we present an improved multispectral filter-wheel setup integrated into a fully digital microscope. Second, we present a novel setup of two hyperspectral snapshot cameras for intraoperative usage. Both setups operate in the spectral range of 400 to 975 nm. They are calibrated and validated using the same database and calibration set. Results: For validation, a color chart with 18 well-defined color spectra in the visible range is analyzed. Thus, the results acquired with both setups become transferable and comparable to each other as well as between different interventions. On patient data from two different otorhinolaryngology procedures, we analyze the optical behavior of different soft tissues and visualize the resulting spectral information. Conclusion: The introduced calibration pipeline for different HSI setups allows comparison between all acquired spectral information. Clinical in vivo data underline the potential of HSI as an intraoperative diagnostic tool and the clinical usability of both introduced setups. Thereby, we demonstrate their feasibility for the in vivo analysis and categorization of different human soft tissues.
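A standard first step in the kind of calibration pipeline described above is converting raw hyperspectral counts to reflectance using white and dark reference recordings. The Python sketch below shows this generic flat-field correction with assumed array shapes; it is not the authors' full calibration pipeline.

    import numpy as np

    def to_reflectance(raw, white_ref, dark_ref, eps=1e-6):
        """Flat-field correction of a hyperspectral cube.

        raw, white_ref, dark_ref : arrays of shape (h, w, n_bands), where the
        references are recordings of a reflectance standard and of the closed
        shutter (dark current), respectively.
        """
        reflectance = (raw - dark_ref) / np.maximum(white_ref - dark_ref, eps)
        return np.clip(reflectance, 0.0, 1.0)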

13.
Sensors (Basel) ; 20(18)2020 Sep 17.
Article in English | MEDLINE | ID: mdl-32957675

ABSTRACT

We develop a stereo-multispectral endoscopic prototype in which a filter wheel is used for surgical guidance to remove cholesteatoma tissue in the middle ear. Cholesteatoma is a destructive, proliferating tissue, and the only treatment for this disease is surgery. Removal is a very demanding task, even for experienced surgeons, as it is very difficult to distinguish between bone and cholesteatoma. In addition, the disease can recur if not all tissue particles of the cholesteatoma are removed, which leads to undesirable follow-up operations. Therefore, we propose an image-based method that combines multispectral tissue classification and 3D reconstruction to identify all parts of the removed tissue and determine their metric dimensions intraoperatively. The designed multispectral filter-wheel 3D-endoscope prototype can switch between narrow-band spectral and broad-band white illumination and is technically evaluated in terms of optical system properties. Further, it is tested and evaluated on three patients. The wavelengths 400 nm and 420 nm are identified as most suitable for the differentiation task. The stereoscopic image acquisition allows accurate 3D surface reconstruction of the enhanced image information. The first results are promising, as the cholesteatoma can be easily highlighted, correctly identified, and visualized as a true-to-scale 3D model showing the patient-specific anatomy.


Subject(s)
Cholesteatoma , Cholesteatoma/surgery , Endoscopes , Endoscopy , Humans
14.
Biomed Opt Express ; 11(3): 1489-1500, 2020 Mar 01.
Article in English | MEDLINE | ID: mdl-32206424

ABSTRACT

Cholesteatoma of the ear can lead to life-threatening complications, and its only treatment is surgery. Even the smallest remnants of cholesteatoma can lead to recurrence of the disease. Therefore, the optical properties of this tissue are of high importance for identifying and removing all cholesteatoma during therapy. In this paper, we determine the absorption coefficient µa and scattering coefficient µs' of cholesteatoma and bone samples, obtained during five surgeries, in the wavelength range of 250 nm to 800 nm. These values are determined by high-precision integrating sphere measurements in combination with an optimized inverse Monte Carlo simulation (iMCS). To conserve the optical behavior of living tissues, the optical spectroscopy measurements are performed immediately after tissue removal and preparation. It is shown that clear differences exist between cholesteatoma and bone tissue in the near-UV and visible spectrum. While µa decreases homogeneously for cholesteatoma, it remains at a high level for bone in the region of 350 nm to 580 nm. Further, the results of the cholesteatoma measurements correspond to published data for healthy epidermis. These differences in the optical parameters reveal the future possibility of detecting and identifying cholesteatoma tissue, automatically or semi-automatically, to support active treatment decisions during image-guided surgery, leading to a better surgical outcome.

15.
J Biomed Opt ; 24(12): 1-7, 2019 12.
Article in English | MEDLINE | ID: mdl-31797647

ABSTRACT

The optical properties of human tissues are an important parameter in medical diagnostics and therapy. Knowledge of these parameters can encourage the development of automated, computer-driven optical tissue analysis methods. We determine the absorption coefficient µa and scattering coefficient µs' of different tissue types obtained during parotidectomy in the wavelength range of 250 to 800 nm. These values are determined by high-precision integrating sphere measurements in combination with an optimized inverse Monte Carlo simulation. To conserve the optical behavior of living tissues, the optical spectroscopy measurements are performed immediately after tissue removal. Our study includes fresh samples from the ear, nose, and throat (ENT) region, namely muscle tissue, nervous tissue, white adipose tissue, stromal tissue, parotid gland, and tumorous tissue from five patients. The measured behavior of adipose tissue corresponds well with the literature, which supports the applied method. It is shown that muscle is well supplied with blood, as it features the characteristic blood absorption peaks at 430 and 555 nm in the absorption curve. The parameter µs' decreases for all tissue types above 570 nm. The accuracy is adequate for the purpose of providing µa and µs' of different human tissue types, such as muscle, fat, nerve, or gland tissue, which are embedded in large, complex structures such as those in the ENT area. It becomes possible for the first time to present reasonable results for the optical behavior of human soft tissue located in the ENT area in the near-UV, visible, and near-infrared ranges.
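To illustrate the kind of spectral feature mentioned above, the Python sketch below locates local maxima in a measured µa spectrum, such as the characteristic blood-related absorption peaks near 430 and 555 nm. The wavelength grid, prominence threshold, and input data are placeholders; this is not the integrating-sphere or inverse Monte Carlo procedure itself.

    import numpy as np
    from scipy.signal import find_peaks

    def absorption_peaks(wavelengths_nm, mu_a, min_prominence=0.05):
        """Return the wavelengths of local maxima in an absorption spectrum.

        wavelengths_nm : 1-D array, e.g. a 250-800 nm grid
        mu_a           : absorption coefficient values at those wavelengths
        """
        idx, _ = find_peaks(mu_a, prominence=min_prominence * mu_a.max())
        # well-perfused tissue would be expected to show peaks near 430 and 555 nm
        return wavelengths_nm[idx]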


Subject(s)
Image Interpretation, Computer-Assisted/methods , Optical Imaging/methods , Parotid Gland , Parotid Neoplasms , Adipose Tissue/diagnostic imaging , Aged , Aged, 80 and over , Humans , Middle Aged , Monte Carlo Method , Nerve Tissue/diagnostic imaging , Parotid Gland/diagnostic imaging , Parotid Gland/surgery , Parotid Neoplasms/diagnostic imaging , Parotid Neoplasms/surgery , Scattering, Radiation
16.
IEEE Trans Vis Comput Graph ; 25(11): 3105-3113, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31403419

ABSTRACT

Shader lamp systems augment the real environment by projecting new textures onto known target geometries. In dynamic scenes, object tracking maintains the illusion if the physical and virtual objects are well aligned. However, traditional trackers based on texture or contour information are often distracted by the projected content and tend to fail. In this paper, we present a model-based tracking strategy that directly takes advantage of the projected content for pose estimation in a projector-camera system. An iterative pose estimation algorithm captures and exploits visible distortions caused by object movements. In a closed loop, the corrected pose allows the projection to be updated for the subsequent frame. Synthetic frames simulating the projection on the model are rendered, and an optical-flow-based method minimizes the difference between edges of the rendered image and the camera image. Since the thresholds automatically adapt to the synthetic image, a complicated radiometric calibration can be avoided. The pixel-wise linear optimization is designed to be easily implemented on the GPU. Our approach can be combined with a regular contour-based tracker and is transferable to other problems, such as the estimation of the extrinsic pose between projector and camera. We evaluate our procedure with real and synthetic images and obtain very precise registration results.

17.
IEEE Comput Graph Appl ; 38(5): 119-132, 2018.
Article in English | MEDLINE | ID: mdl-30273132

ABSTRACT

Visual computing technologies have an important role in manufacturing and production, particularly in new Industry 4.0 scenarios with intelligent machines, human-robot collaboration and learning factories. In this article, we explore challenges and examples on how the fusion of graphics, vision and media technologies can enhance the role of operators in this new context.

18.
J Biomed Opt ; 23(9): 1-8, 2018 05.
Article in English | MEDLINE | ID: mdl-29745130

ABSTRACT

We address the automatic differentiation of human tissue using multispectral imaging, which shows promising potential for automatic visualization during surgery. Currently, tissue types have to be continuously differentiated based on the surgeon's knowledge only. Furthermore, automatic methods based on optical in vivo properties of human tissue do not yet exist, as these properties have not been sufficiently examined. To overcome this, we developed a hyperspectral camera setup to monitor the different optical behavior of tissue types in vivo. The aim of this work is to collect and analyze these behaviors to open up optical opportunities during surgery. Our setup uses a digital camera and several bandpass filters in front of the light source to illuminate different tissue types with 16 specific wavelength ranges. We analyzed the different intensities of eight healthy tissue types over the visible spectrum (400 to 700 nm). Using our setup and sophisticated postprocessing to handle motion during capture, we are able to find tissue characteristics that are not visible to the human eye and to differentiate tissue types in the 16-dimensional wavelength domain. Our analysis shows that this approach has the potential to support the surgeon's decisions during treatment.


Subject(s)
Microscopy , Spectrum Analysis , Surgery, Computer-Assisted/instrumentation , Blood Vessels/diagnostic imaging , Connective Tissue/diagnostic imaging , Equipment Design , Humans , Microscopy/instrumentation , Microscopy/methods , Spectrum Analysis/instrumentation , Spectrum Analysis/methods
20.
Crit Care Nurs Q ; 33(2): 190-9, 2010.
Article in English | MEDLINE | ID: mdl-20234208

ABSTRACT

Emergency department (ED) nurses care for victims of trauma almost daily. Although preservation of evidence is crucial, the ED is chaotic when a trauma patient arrives, and staff members must do everything possible to save the patient's life. However, an integral responsibility of the staff nurse is the collection and preservation of forensic evidence. This article provides insight into the process undertaken by a multidisciplinary team to develop a set of evidence-based guidelines for forensic evidence collection. The team compiled evidence from more than 20 articles and consultations with law enforcement officials and forensic experts. This information was used to develop a set of guidelines for forensic evidence collection in the ED or operating room. Staff educational needs presented some challenges. Training was designed to specifically address the roles of three major groups of staff: patient representatives, emergency nurses, and trauma nurses. Educational topics included evidence recognition, handling of clothing, gross/trace evidence, documentation, packaging of evidence, and use of the "chain-of-evidence" form. Practice modifications included development of a new "chain-of-evidence" form, a forensic cart in the operating room, and use of a collapsible plastic box for collection of clothing in the ED.


Subject(s)
Emergency Nursing , Emergency Service, Hospital , Forensic Medicine , Documentation , Guidelines as Topic , Humans , Nurse's Role