Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38941194

ABSTRACT

Sleep quality is an essential parameter of a healthy human life, yet sleep disorders such as sleep apnea are widespread. In the investigation of sleep and its disorders, the gold standard is polysomnography, which uses an extensive range of variables for sleep stage classification. However, full polysomnography imposes a significant burden: it requires many sensors, which make the setup cumbersome and sleep uncomfortable. In this study, sleep stage classification was performed using the single channel of nasal pressure, dramatically decreasing the complexity of the process. In turn, this simplification could improve the much-needed clinical applicability. Specifically, we propose a deep learning architecture consisting of a multi-kernel convolutional neural network and a bidirectional long short-term memory network for sleep stage classification. Based on nasal pressure, the sleep stages of 25 healthy subjects were classified into three classes (wake, rapid eye movement (REM), and non-REM) and four classes (wake, REM, light sleep, and deep sleep). Under leave-one-subject-out cross-validation, the three-class model achieved an overall accuracy of 0.704, an F1-score of 0.490, and a kappa value of 0.283; the four-class model achieved an overall accuracy of 0.604, an F1-score of 0.349, and a kappa value of 0.217. These results, including the class-wise F1-scores, were higher than those of four comparison models. This demonstrates the feasibility of a sleep stage classification model that uses only easily applied, highly practical nasal pressure recordings, which could also be combined with interventions to help treat sleep-related diseases.
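
To make the architecture above concrete, here is a minimal PyTorch sketch of a multi-kernel CNN feeding a bidirectional LSTM. All hyperparameters (kernel widths, channel counts, and the 3000-sample epoch length) are illustrative assumptions, not the authors' configuration.

    # Minimal sketch: multi-kernel 1-D CNN + BiLSTM sleep stager.
    # Kernel widths, channel counts and epoch length are assumed values.
    import torch
    import torch.nn as nn

    class MultiKernelCnnBiLstm(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            # Parallel 1-D convolutions with different kernel widths
            # extract features at several temporal scales.
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=k, padding=k // 2),
                    nn.BatchNorm1d(16),
                    nn.ReLU(),
                    nn.MaxPool1d(8),
                )
                for k in (8, 32, 128)
            ])
            self.lstm = nn.LSTM(input_size=48, hidden_size=64,
                                batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * 64, n_classes)

        def forward(self, x):              # x: (batch, 1, samples)
            feats = torch.cat([b(x) for b in self.branches], dim=1)
            feats = feats.transpose(1, 2)  # (batch, time, channels)
            out, _ = self.lstm(feats)      # BiLSTM over pooled features
            return self.fc(out[:, -1])     # class logits per epoch

    model = MultiKernelCnnBiLstm(n_classes=3)
    logits = model(torch.randn(4, 1, 3000))  # 4 nasal pressure epochs

Each branch sees the same nasal pressure epoch at a different temporal scale, and the BiLSTM integrates the pooled features over time before classification; a leave-one-subject-out loop would train one such model per held-out subject.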

2.
IEEE Trans Vis Comput Graph ; 29(12): 5224-5234, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36112552

ABSTRACT

What happens if we put vision and touch into conflict? Which modality "wins"? Although several previous studies have addressed this topic, they have focused solely on the integration of vision and touch for low-level object properties (such as curvature, slant, or depth). In the present study, we introduce a multimodal mixed-reality setup based on real-time hand-tracking, which displays real-world haptic exploration of objects in a virtual environment through a head-mounted display (HMD). With this setup, we studied multimodal conflict for objects varying along higher-level, parametrically controlled global shape properties. Participants explored these objects in both unimodal and multimodal settings, the latter including congruent and incongruent conditions and differing instructions for weighting the input modalities. Results demonstrated surprisingly clear touch dominance throughout all experiments, which, moreover, was only marginally influenced by instructions to bias modality weighting. We also present an initial analysis of the hand-tracking patterns that illustrates the potential of our setup for investigating exploration behavior in more detail.
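
As background for the "which modality wins" question, results like these are often compared against the standard reliability-weighted (maximum-likelihood) cue-combination baseline, in which each modality's weight is inversely proportional to its variance. The sketch below illustrates that textbook baseline with made-up variances; it is not the authors' analysis.

    # Reliability-weighted (MLE) cue combination: the textbook baseline
    # for visuo-haptic integration. Variances here are made-up values.

    def mle_weights(var_vision, var_touch):
        """Optimal weights are inversely proportional to cue variance."""
        r_v, r_t = 1.0 / var_vision, 1.0 / var_touch
        return r_v / (r_v + r_t), r_t / (r_v + r_t)

    w_v, w_t = mle_weights(var_vision=4.0, var_touch=1.0)
    estimate = w_v * 10.0 + w_t * 12.0  # combined shape estimate
    print(w_v, w_t, estimate)           # -> 0.2 0.8 11.6

Clear touch dominance corresponds to a touch weight near 1 that, as reported here, barely moves with instructed weighting.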

3.
Brain Struct Funct ; 223(2): 619-633, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28905126

ABSTRACT

From birth, touch delivers a wealth of information, helping infants acquire knowledge about a variety of important object properties with their hands. Although we are as much touch experts as we are visual experts, surprisingly little is known about how our perceptual ability in touch is linked to either functional or structural aspects of the brain. The present study therefore investigates and identifies neuroanatomical correlates of haptic perceptual performance using a novel, multi-modal approach. Participants' performance in a difficult shape categorization task was first measured in the haptic domain. Using a multi-modal analysis pipeline combining functional magnetic resonance imaging and diffusion-weighted magnetic resonance imaging, functionally defined and anatomically constrained white-matter pathways were extracted, and their microstructural characteristics were correlated with individual variability in haptic categorization performance. Controlling for the effects of age, total intracranial volume, and head movements in the regression model, haptic performance was found to correlate significantly with higher axial diffusivity in the functionally defined superior longitudinal fasciculus (fSLF) linking frontal and parietal areas. These results were further localized to specific sub-parts of the fSLF. Using additional data from a second group of participants, who first learned the categories in the visual domain and then transferred to the haptic domain, haptic performance correlates were obtained in the functionally defined inferior longitudinal fasciculus (fILF). Our results implicate the SLF linking frontal and parietal areas as an important white-matter tract in processing touch-specific information during object processing, whereas the ILF relays visually learned information during haptic processing. Taken together, the present results chart for the first time potential neuroanatomical correlates and interactions of touch-related object processing.
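
The covariate-controlled correlation described above can be illustrated with a generic partial-correlation sketch: regress age, total intracranial volume, and head movement out of both the microstructural measure and the behavioral score, then correlate the residuals. The data are random placeholders, and this is a stand-in for, not a reproduction of, the study's pipeline.

    # Generic covariate-controlled (partial) correlation sketch.
    # All data are random placeholders, not values from the study.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30                                    # hypothetical participants
    ad = rng.normal(1.1, 0.05, n)             # axial diffusivity per subject
    perf = 0.5 * ad + rng.normal(0, 0.03, n)  # haptic categorization score
    # Covariates: age, total intracranial volume, head motion + intercept.
    covs = np.column_stack([rng.normal(size=(n, 3)), np.ones(n)])

    def residualize(y, X):
        """Remove the least-squares fit of X from y."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    r = np.corrcoef(residualize(perf, covs), residualize(ad, covs))[0, 1]
    print(f"partial correlation controlling for covariates: {r:.2f}")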


Subjects
Brain Mapping , Brain/diagnostic imaging , Brain/physiology , Functional Neuroimaging , Size Perception/physiology , Touch/physiology , Adolescent , Adult , Female , Humans , Image Processing, Computer-Assisted , Male , Neural Pathways/diagnostic imaging , Neuroanatomy , Young Adult