Results 1 - 5 of 5
1.
IEEE Trans Vis Comput Graph ; 26(10): 2994-3007, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32870780

ABSTRACT

State-of-the-art methods for diminished reality propagate pixel information from a keyframe to subsequent frames for real-time inpainting. However, these approaches produce artifacts if the scene geometry is not sufficiently planar. In this article, we present InpaintFusion, a new real-time method that extends inpainting to non-planar scenes by considering both color and depth information in the inpainting process. We use an RGB-D sensor for simultaneous localization and mapping, both to track the camera and to obtain a surfel map in addition to RGB images. We use the RGB-D information in a cost function for both color and geometric appearance to derive a global optimization for simultaneous inpainting of color and depth. The inpainted depth is merged into a global map by depth fusion. For the final rendering, we project the map model into image space, where it can be used for effects such as relighting and stereo rendering of otherwise hidden structures. We demonstrate the capabilities of our method by comparing its results to those of inpainting methods that use planar geometric proxies.
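Below is a minimal, illustrative sketch of a joint color-and-depth patch cost in the spirit of this abstract. The patch size, the depth weight `lambda_d`, and the brute-force nearest-patch search are assumptions made for clarity; the paper itself solves a global optimization over a surfel map, not a greedy per-pixel fill.

```python
import numpy as np

def inpaint_rgbd(rgb, depth, mask, patch=7, lambda_d=0.5):
    """Greedy patch fill minimizing a joint color + depth cost.
    Assumes the hole lies at least patch//2 pixels from the image border."""
    r = patch // 2
    h, w = mask.shape
    rgb = rgb.astype(np.float64).copy()
    depth = depth.astype(np.float64).copy()
    # source patch centers: fully known patches inside the image
    src = [(y, x) for y in range(r, h - r) for x in range(r, w - r)
           if not mask[y - r:y + r + 1, x - r:x + r + 1].any()]
    for y, x in np.argwhere(mask):
        tgt_rgb = rgb[y - r:y + r + 1, x - r:x + r + 1]
        tgt_d = depth[y - r:y + r + 1, x - r:x + r + 1]
        known = ~mask[y - r:y + r + 1, x - r:x + r + 1]
        best, best_cost = None, np.inf
        for sy, sx in src:
            # joint cost: color SSD plus lambda_d-weighted depth SSD,
            # evaluated only over the originally known pixels of the patch
            dc = ((tgt_rgb - rgb[sy - r:sy + r + 1, sx - r:sx + r + 1]) ** 2).sum(axis=-1)
            dd = (tgt_d - depth[sy - r:sy + r + 1, sx - r:sx + r + 1]) ** 2
            cost = (dc + lambda_d * dd)[known].sum()
            if cost < best_cost:
                best_cost, best = cost, (sy, sx)
        sy, sx = best
        rgb[y, x], depth[y, x] = rgb[sy, sx], depth[sy, sx]
    return rgb, depth
```

The inpainted depth returned here is what a fusion step (as in the paper) would merge into the global map.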

2.
IEEE Trans Vis Comput Graph ; 25(11): 3063-3072, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31403421

ABSTRACT

We propose an algorithm for generating an unstructured lumigraph in real time from an image stream. This problem has important applications in mixed reality, such as telepresence, interior design, or as-built documentation. Unlike conventional texture optimization in structure from motion, our method must choose views from the input stream in a strictly incremental manner, since only a small number of views can be stored or transmitted. This requires formulating an online variant of the well-known view-planning problem, which must take into account which parts of the scene have already been seen and how the lumigraph sample distribution could improve in the future. We address this highly unconstrained problem by regularizing the scene structure with a regular grid. On this grid, we define a coverage metric describing how well the lumigraph samples cover the grid in terms of spatial and angular resolution, and we greedily keep incoming views if they improve the coverage. We evaluate the performance of our algorithm quantitatively and qualitatively on a variety of synthetic and real scenes, and demonstrate visually appealing results obtained at real-time frame rates (3 Hz to 100 Hz per incoming image, depending on configuration).
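A minimal sketch of such a greedy coverage rule follows. The grid resolution, the azimuth-only angular binning, and the `min_gain` acceptance threshold are illustrative assumptions; the paper's coverage metric also accounts for the spatial resolution of the samples.

```python
import numpy as np

class CoverageGrid:
    """Regular grid of (cell, angle-bin) samples for online view selection."""

    def __init__(self, bounds_min, bounds_max, cells=16, angle_bins=8):
        self.mn = np.asarray(bounds_min, float)
        self.mx = np.asarray(bounds_max, float)
        self.cells, self.angle_bins = cells, angle_bins
        # covered[ix, iy, iz, angle_bin] -> has a sample been kept here?
        self.covered = np.zeros((cells,) * 3 + (angle_bins,), dtype=bool)

    def _bins(self, points, cam_pos):
        # map 3-D points to grid cells and viewing directions to azimuth bins
        ijk = (points - self.mn) / (self.mx - self.mn) * self.cells
        ijk = np.clip(ijk.astype(int), 0, self.cells - 1)
        d = points - cam_pos
        az = np.arctan2(d[:, 1], d[:, 0])  # azimuth only, for brevity
        a = ((az + np.pi) / (2 * np.pi) * self.angle_bins).astype(int) % self.angle_bins
        return ijk, a

    def gain(self, points, cam_pos):
        """Count points of this view that land in still-uncovered bins."""
        ijk, a = self._bins(points, cam_pos)
        return np.count_nonzero(~self.covered[ijk[:, 0], ijk[:, 1], ijk[:, 2], a])

    def keep_if_useful(self, points, cam_pos, min_gain=50):
        """Greedily accept the incoming view only if it improves coverage."""
        if self.gain(points, cam_pos) < min_gain:
            return False
        ijk, a = self._bins(points, cam_pos)
        self.covered[ijk[:, 0], ijk[:, 1], ijk[:, 2], a] = True
        return True
```

In an incremental setting, each incoming frame's reconstructed points and camera position would be passed to `keep_if_useful`, and rejected frames are simply discarded.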

3.
IEEE Trans Vis Comput Graph ; 24(4): 1437-1446, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29543162

ABSTRACT

Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone's flight control. The user can concentrate on the exploration of the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.
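As a rough illustration of synthesizing an exocentric user view from a reconstruction, the sketch below projects a colored point cloud into a freely chosen virtual camera with a z-buffer. The look-at construction and the pinhole intrinsics `K` are generic assumptions; the paper uses full image-based rendering, not simple point splatting.

```python
import numpy as np

def look_at(eye, target, up=(0, 0, 1)):
    """World-to-camera rotation R and translation t for a virtual viewpoint."""
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    R = np.stack([r, -u, f])           # camera axes: x right, y down, z forward
    return R, -R @ np.asarray(eye, float)

def render_points(points, colors, eye, target, K, size=(480, 640)):
    """Project colored 3-D points into an exocentric camera; nearest point wins."""
    R, t = look_at(eye, target)
    pc = points @ R.T + t              # world -> camera coordinates
    front = pc[:, 2] > 1e-6            # keep points in front of the camera
    pc, colors = pc[front], colors[front]
    uvw = pc @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    img = np.zeros(size + (3,), np.uint8)
    zbuf = np.full(size, np.inf)
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < size[1]) &
              (uv[:, 1] >= 0) & (uv[:, 1] < size[0]))
    for (u, v), z, c in zip(uv[inside], pc[inside, 2], colors[inside]):
        if z < zbuf[v, u]:             # z-buffer: closest point wins
            zbuf[v, u], img[v, u] = z, c
    return img
```

Moving `eye` and `target` independently of the drone's pose is what gives the user the exocentric, "X-ray" perspective while the autopilot handles flight.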


Subject(s)
Aircraft , Computer Graphics , Data Display , Imaging, Three-Dimensional/methods , Man-Machine Systems , Adult , Humans , Male , Video Recording , Virtual Reality , X-Rays , Young Adult
4.
PLoS One ; 8(6): e67329, 2013.
Article in English | MEDLINE | ID: mdl-23825652

ABSTRACT

Brain tissue changes in autism spectrum disorders seem to be subtle and widespread rather than anatomically distinct. Therefore, a multimodal, whole-brain imaging approach appears appropriate for investigating whether alterations in white and gray matter integrity relate to consistent changes in functional resting-state connectivity in individuals with high-functioning autism (HFA). We applied diffusion tensor imaging (DTI), voxel-based morphometry (VBM), and resting-state functional connectivity magnetic resonance imaging (fcMRI) to assess differences in brain structure and function between 12 individuals with HFA (mean age 35.5, SD 11.4, 9 male) and 12 healthy controls (mean age 33.3, SD 9.0, 8 male). Psychological measures of empathy and emotionality were obtained and correlated with the most significant DTI, VBM, and fcMRI findings. We found three regions of convergent structural and functional differences between HFA participants and controls. The right temporo-parietal junction area and the left frontal lobe showed decreased fractional anisotropy (FA) values along with decreased functional connectivity and a trend towards decreased gray matter volume. The bilateral superior temporal gyrus displayed significantly decreased functional connectivity accompanied by the strongest trend towards gray matter volume decrease in the temporal lobe of HFA individuals. The FA decrease in the right temporo-parietal region correlated with psychological measurements of decreased emotionality. In conclusion, our results indicate common sites of structural and functional alterations in higher-order association cortex areas and may therefore provide multimodal imaging support for the long-standing hypothesis of autism as a disorder of impaired higher-order multisensory integration.
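For illustration only, the sketch below reproduces the style of analysis reported here (a group comparison of ROI fractional anisotropy and a brain-behavior correlation) on synthetic stand-in data. All variable names and values are assumptions; the study analyzed whole-brain DTI, VBM, and fcMRI maps, not single ROI summaries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic mean FA in one ROI (e.g., right temporo-parietal junction)
fa_hfa = rng.normal(0.42, 0.03, 12)   # 12 HFA participants
fa_ctl = rng.normal(0.46, 0.03, 12)   # 12 matched controls

# two-sample t-test: is ROI FA lower in the HFA group?
t, p = stats.ttest_ind(fa_hfa, fa_ctl)
print(f"group difference: t = {t:.2f}, p = {p:.4f}")

# correlate FA with a (synthetic) psychological measure of emotionality
emotionality = 40 + 90 * (fa_hfa - fa_hfa.mean()) + rng.normal(0, 1.5, 12)
r, p = stats.pearsonr(fa_hfa, emotionality)
print(f"FA vs. emotionality: r = {r:.2f}, p = {p:.4f}")
```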


Subject(s)
Autistic Disorder/physiopathology , Brain/physiopathology , Adult , Autistic Disorder/diagnostic imaging , Brain/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Young Adult
5.
Med Image Comput Comput Assist Interv ; 15(Pt 2): 609-616, 2012.
Article in English | MEDLINE | ID: mdl-23286099

ABSTRACT

The alignment of the lower limb in high tibial osteotomy (HTO) or total knee arthroplasty (TKA) must be determined intraoperatively. One way to do so is to determine the mechanical axis deviation (MAD), for which a tolerance of 10 mm is widely accepted. Many techniques have been proposed in clinical practice, such as visual inspection, the cable method, a grid with lead-impregnated reference lines, or, more recently, navigation systems. Each has its disadvantages, including unreliable MAD measurement, excess radiation, prolonged operation time, complicated setup, and high cost. To alleviate these shortcomings, we propose a novel clinical protocol that allows quick and accurate intraoperative calculation of the MAD. This is achieved by an X-ray stitching method that requires only three X-ray images, placed into a panoramic image frame, during the entire procedure. The method has been systematically analyzed in a simulation framework to investigate its accuracy and robustness. Furthermore, we validated our protocol in a preclinical study comprising 19 human cadaver legs. Four surgeons determined MAD measurements using our X-ray panorama and compared these values to a gold-standard CT-based technique. The maximum average MAD error was 3.5 mm, which shows great potential for the technique.
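Once the three landmarks are located in the stitched panorama, the MAD itself reduces to a simple geometric computation: the perpendicular distance from the knee center to the hip-ankle line. The sketch below assumes landmark coordinates in millimetres are already available; landmark detection and the stitching method are not shown.

```python
import numpy as np

def mechanical_axis_deviation(hip, knee, ankle):
    """Signed perpendicular distance (mm) from the knee center to the
    hip-ankle line (the mechanical axis); the sign indicates the side
    of the deviation under a chosen convention."""
    hip, knee, ankle = (np.asarray(p, float) for p in (hip, knee, ankle))
    axis = ankle - hip
    d = knee - hip
    # 2-D cross product gives signed area, divided by axis length -> distance
    return (axis[0] * d[1] - axis[1] * d[0]) / np.linalg.norm(axis)

# example: knee center 12 mm off the hip-ankle line -> beyond the 10 mm tolerance
mad = mechanical_axis_deviation(hip=(0, 0), knee=(12.0, 430.0), ankle=(0, 860.0))
print(f"|MAD| = {abs(mad):.1f} mm")
```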


Subject(s)
Arthroplasty, Replacement/methods , Imaging, Three-Dimensional/methods , Leg/diagnostic imaging , Leg/surgery , Pattern Recognition, Automated/methods , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Cadaver , Humans , Radiographic Image Interpretation, Computer-Assisted/methods , Reproducibility of Results , Sensitivity and Specificity