1.
IEEE Trans Vis Comput Graph ; 29(11): 4339-4349, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782603

ABSTRACT

We introduce a high-resolution, spatially adaptive light source, or projector, into a neural reflectance field, which allows us both to calibrate the projector and to perform photorealistic light editing. The projected texture is fully differentiable with respect to all scene parameters and can be optimized to yield a desired appearance suitable for applications in augmented reality and projection mapping. Our neural field consists of three neural networks, estimating geometry, material, and transmittance. Using an analytical BRDF model and carefully selected projection patterns, our acquisition process is simple and intuitive, featuring a fixed, uncalibrated projector and a handheld camera with a co-located light source. As we demonstrate, the virtual projector incorporated into the pipeline improves scene understanding and enables various projection mapping applications, alleviating the need for the time-consuming calibration steps performed per view or projector location in a traditional setting. In addition to enabling novel viewpoint synthesis, we demonstrate state-of-the-art performance in projector compensation for novel viewpoints, improvement over the baselines in material and scene reconstruction, and three simply implemented scenarios where projection image optimization is performed, including the use of a 2D generative model to consistently dictate scene appearance from multiple viewpoints. We believe that neural projection mapping opens the door to novel and exciting downstream tasks, through the joint optimization of the scene and projection images.
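
As a rough illustration of the projection-image optimization the abstract describes, the sketch below optimizes a projector texture through a differentiable renderer so that rendered views match target appearances. Here `render_scene`, the view list, and the target images are hypothetical placeholders standing in for the paper's neural reflectance field and data, not its actual interface.

```python
# Hedged sketch: optimize a projection image through an assumed differentiable
# renderer so the scene's rendered appearance matches per-view targets.
import torch

def optimize_projection(render_scene, views, target_images, steps=500, lr=1e-2):
    """render_scene(texture, view) -> rendered image (assumed differentiable)."""
    texture = torch.full((1, 3, 512, 512), 0.5, requires_grad=True)  # projector image
    opt = torch.optim.Adam([texture], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for view, target in zip(views, target_images):
            rendered = render_scene(texture.clamp(0.0, 1.0), view)
            loss = loss + torch.nn.functional.mse_loss(rendered, target)
        loss.backward()
        opt.step()
    return texture.detach().clamp(0.0, 1.0)
```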

2.
J Med Imaging (Bellingham) ; 9(4): 044503, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36061214

ABSTRACT

Purpose: Cerebrovascular vessel segmentation is a key step in the detection of vessel pathology. Brain time-of-flight magnetic resonance angiography (TOF-MRA) is a primary clinical method for imaging blood vessels with magnetic resonance imaging. This method is mainly used to detect narrowing or blockage of the arteries and aneurysms. Despite its importance, TOF-MRA interpretation relies mostly on visual, subjective assessment performed by a neuroradiologist and is largely based on maximum intensity projection reconstructions of the three-dimensional (3D) scan, thus reducing the acquired spatial resolution. Works tackling the central problem of automatically segmenting brain blood vessels typically suffer from memory- and imbalance-related issues. To address these issues, the spatial context considered by the neural networks during segmentation is typically restricted (e.g., by reducing resolution or by analyzing lower-dimensional neighborhoods). Although efficient, such solutions hinder the ability of the neural networks to understand the complex 3D structures typical of the cerebrovascular system and to leverage this understanding for decision making. Approach: We propose a brain-vessels generative-adversarial-network (BV-GAN) segmentation model that better considers connectivity and structural integrity, using prior-based attention and adversarial learning techniques. Results: For evaluation, fivefold cross-validation experiments were performed on two datasets. BV-GAN demonstrates a consistent improvement of up to 10% in vessel Dice score as each designed component is added to the baseline state-of-the-art models. Conclusions: Potentially, this automated 3D approach could shorten analysis time, allow quantitative characterization of vascular structures, and reduce the need to decrease resolution, overall improving the diagnosis of cerebrovascular vessel disorders.
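
Since the results above are reported as vessel Dice scores and the model adds an adversarial term, the following is a minimal sketch of a soft Dice score together with a combined generator loss; the discriminator logits and the adversarial weight are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: soft Dice score for vessel segmentation plus a generator loss
# that adds an adversarial term, in the spirit of a GAN-based segmenter.
import torch

def dice_score(pred, target, eps=1e-6):
    """Soft Dice between a predicted probability map and a binary vessel mask."""
    pred, target = pred.flatten(), target.flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def generator_loss(pred, target, disc_fake_logits, adv_weight=0.1):
    """Segmentation term (1 - Dice) plus an assumed adversarial term."""
    seg = 1.0 - dice_score(pred, target)
    adv = torch.nn.functional.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return seg + adv_weight * adv
```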

3.
Infancy ; 27(4): 765-779, 2022 07.
Article in English | MEDLINE | ID: mdl-35416378

ABSTRACT

Infants' looking behaviors are often used for measuring attention, real-time processing, and learning, often using low-resolution videos. Despite the ubiquity of gaze-related methods in developmental science, current analysis techniques usually involve laborious post hoc coding, imprecise real-time coding, or expensive eye trackers that may increase data loss and require a calibration phase. As an alternative, we propose using computer vision methods to perform automatic gaze estimation from low-resolution videos. At the core of our approach is a neural network that classifies gaze directions in real time. We compared our method, called iCatcher, to manually annotated videos from a prior study in which infants looked at one of two pictures on a screen. We demonstrated that the accuracy of iCatcher approximates that of human annotators and that it replicates the prior study's results. Our method is publicly available as an open-source repository at https://github.com/yoterel/iCatcher.
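
A minimal sketch of the kind of per-frame gaze-classification loop described above; `classify_gaze` stands in for iCatcher's trained network (see the linked repository), and the label set is an assumption.

```python
# Hedged sketch: classify the gaze direction of every frame of a
# low-resolution video using an assumed per-frame classifier.
import cv2

LABELS = ["left", "right", "away"]  # assumed gaze classes

def annotate_video(path, classify_gaze):
    """Yield (frame_index, gaze_label) for each frame of the video at `path`."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        probs = classify_gaze(frame)  # per-class probabilities (assumed API)
        yield idx, LABELS[int(max(range(len(probs)), key=probs.__getitem__))]
        idx += 1
    cap.release()
```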


Subject(s)
Eye Movements , Neural Networks, Computer , Attention , Child , Child, Preschool , Humans , Infant , Learning
4.
Comput Vis Media (Beijing) ; 6(4): 385-400, 2020.
Article in English | MEDLINE | ID: mdl-33194253

ABSTRACT

Visualizing high-dimensional data on a 2D canvas is generally challenging. It becomes significantly more difficult when multiple time-steps are to be presented, as the visual clutter quickly increases; moreover, perceiving the significant temporal evolution becomes even harder. In this paper, we present a method to plot temporal high-dimensional data in a static scatterplot; it uses the established PCA technique to project data from multiple time-steps. The key idea is to extend each individual displacement prior to applying PCA, so as to skew the projection process and set a projection plane that balances the directions of temporal change and spatial variance. We present numerous examples and various visual cues to highlight the data trajectories, and demonstrate the effectiveness of the method for visualizing temporal data.
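
A minimal sketch of the key idea, under assumptions about the exact construction (displacements are measured from each point's initial position and scaled by a factor k before PCA is fit), so that the resulting 2D plane balances spatial variance against the directions of temporal change.

```python
# Hedged sketch: fit PCA on data whose per-point displacements have been
# "extended" (scaled), then project every time-step onto the resulting plane.
import numpy as np
from sklearn.decomposition import PCA

def temporal_pca(X, k=3.0):
    """X: array of shape (timesteps, samples, dims). Returns 2D coords per time-step."""
    base = X[0]
    # Exaggerate each point's displacement from its initial position (assumed variant).
    extended = np.concatenate(
        [base] + [base + k * (X[t] - base) for t in range(1, X.shape[0])], axis=0)
    pca = PCA(n_components=2).fit(extended)
    return np.stack([pca.transform(X[t]) for t in range(X.shape[0])])
```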

5.
Neurophotonics ; 7(3): 035001, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32704521

ABSTRACT

Significance: We propose a video-based, motion-resilient, and fast method for estimating the position of optodes on the scalp. Aim: Measuring the exact placement of probes (e.g., electrodes and optodes) on a participant's head is a notoriously difficult step in acquiring neuroimaging data from methods that rely on scalp recordings (e.g., electroencephalography and functional near-infrared spectroscopy), and is particularly difficult for clinical or developmental populations. Existing head-measurement methods require the participant to remain still for a lengthy period of time, are laborious, and require extensive training. Therefore, a fast and motion-resilient method is required for estimating the scalp location of probes. Approach: We propose an innovative video-based method for estimating the probes' positions relative to the participant's head, which is fast, motion-resilient, and automatic. Our method capitalizes on the advantages, and accounts for the limitations, of cutting-edge computer vision and machine learning tools. We validate our method on 10 adult subjects and provide proof of feasibility with infant subjects. Results: We show that our method is both reliable and valid compared to existing state-of-the-art methods, by estimating probe positions in a single measurement and by tracking their translation and consistency across sessions. Finally, we show that our automatic method is able to estimate the position of probes on an infant head without lengthy offline procedures, a task that has been considered challenging until now. Conclusions: Our proposed method allows, for the first time, the use of automated spatial co-registration methods on developmental and clinical populations, where lengthy, motion-sensitive measurement methods routinely fail.
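
Purely as an illustration of the general idea, not the paper's pipeline: align detected head landmarks to a canonical head model per frame with a least-squares similarity transform, map detected probe points into that head-fixed frame, and average across frames for motion resilience. All inputs are assumed to be precomputed per-frame detections.

```python
# Hedged sketch: express per-frame probe detections in a head-fixed coordinate
# frame and average across frames (reflection handling omitted for brevity).
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation) src -> dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)
    R = U @ Vt
    scale = S.sum() / (s ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t

def probe_positions(frames):
    """frames: iterable of (head_landmarks, probe_points, model_landmarks) arrays."""
    estimates = []
    for head, probes, model in frames:
        scale, R, t = similarity_transform(head, model)
        estimates.append(scale * probes @ R.T + t)  # probes in head-fixed coordinates
    return np.mean(estimates, axis=0)
```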
