1.
Neural Netw ; 180: 106677, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39260008

ABSTRACT

Spiking Neural Networks (SNNs), renowned for their low power consumption, brain-inspired architecture, and spatio-temporal representation capabilities, have garnered considerable attention in recent years. As with Artificial Neural Networks (ANNs), high-quality benchmark datasets are of great importance to the advancement of SNNs. However, our analysis indicates that many prevalent neuromorphic datasets lack strong temporal correlation, preventing SNNs from fully exploiting their spatio-temporal representation capabilities. Meanwhile, the integration of event and frame modalities offers more comprehensive visual spatio-temporal information. Yet, SNN-based cross-modality fusion remains underexplored. In this work, we present a neuromorphic dataset called DVS-SLR that can better exploit the inherent spatio-temporal properties of SNNs. Compared to existing datasets, it offers advantages in terms of higher temporal correlation, larger scale, and more varied scenarios. In addition, our neuromorphic dataset contains corresponding frame data, which can be used for developing SNN-based fusion methods. By virtue of the dual-modal nature of the dataset, we propose a Cross-Modality Attention (CMA) based fusion method. The CMA model efficiently exploits the unique advantages of each modality, allowing SNNs to learn both temporal and spatial attention scores from the spatio-temporal features of the event and frame modalities, and then allocating these scores across modalities to enhance their synergy. Experimental results demonstrate that our method not only improves recognition accuracy but also ensures robustness across diverse scenarios.
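As a rough illustration of cross-modality attention fusion, the pure-Python sketch below reweights each modality's time steps by their affinity to the other modality's mean feature, then averages the two attended summaries. The scoring rule, dimensions, and averaging step are illustrative assumptions, not the paper's actual CMA architecture (which operates on spiking features):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_modality_fuse(event_feats, frame_feats):
    """Toy cross-modality attention: score each time step of one modality
    against the mean feature of the other modality, reweight, and combine.
    event_feats / frame_feats: lists of per-timestep feature vectors."""
    def mean_vec(feats):
        n = len(feats)
        return [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]

    def attend(query_mean, feats):
        # temporal attention score = dot product with the other modality's mean
        scores = softmax([sum(q * x for q, x in zip(query_mean, f)) for f in feats])
        # attention-weighted sum over time steps
        return [sum(w * f[i] for w, f in zip(scores, feats)) for i in range(len(feats[0]))]

    ev = attend(mean_vec(frame_feats), event_feats)   # frames guide event attention
    fr = attend(mean_vec(event_feats), frame_feats)   # events guide frame attention
    return [(a + b) / 2 for a, b in zip(ev, fr)]      # simple stand-in for synergy

event = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 event time steps, 2 features
frame = [[0.5, 0.5], [1.0, 0.0]]               # 2 frame time steps, 2 features
fused = cross_modality_fuse(event, frame)
print(fused)  # one fused vector of the shared feature dimensionality
```

The point of the sketch is only the cross-wiring: each modality's temporal weighting is driven by the other modality, so neither stream is attended in isolation.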

2.
Philos Trans R Soc Lond B Biol Sci ; 379(1913): 20230398, 2024 Nov 04.
Article in English | MEDLINE | ID: mdl-39278242

ABSTRACT

While many aspects of cognition have been shown to be shared between humans and non-human animals, there remains controversy regarding whether the capacity to mentally time travel is a uniquely human one. In this paper, we argue that there are four ways of representing when some event happened: four kinds of temporal representation. Distinguishing these four kinds of temporal representation has five benefits. First, it puts us in a position to determine the particular benefits these distinct temporal representations afford an organism. Second, it provides the conceptual resources to foster a discussion about which of these representations is necessary for an organism to count as having the capacity to mentally time travel. Third, it enables us to distinguish stricter from more liberal views of mental time travel that differ regarding which kind(s) of temporal representation is taken to be necessary for mental time travel. Fourth, it allows us to determine the benefits of taking a stricter or more liberal view of mental time travel. Finally, it ensures that disagreement about whether some species can mentally time travel is not merely the product of unrecognized disagreement about which temporal representation is necessary for mental time travel. We argue for a more liberal view, on the grounds that it allows us to view mental time travel as an evolutionarily continuous phenomenon and to recognize that differences in the ways that organisms mentally time travel might reflect different temporal representations, or combinations thereof, that they employ. Our ultimate aim, however, is to create a conceptual framework for further discussion regarding what sorts of temporal representations are required for mental time travel. This article is part of the theme issue 'Elements of episodic memory: lessons from 40 years of research'.


Subject(s)
Cognition , Time Perception , Animals , Cognitive Science/methods , Time Perception/physiology
3.
Cortex ; 179: 143-156, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39173580

ABSTRACT

Although the peripheral nervous system lacks a dedicated receptor, the brain processes temporal information through different sensory channels. A critical question is whether temporal information from different sensory modalities at different times forms modality-specific representations or is integrated into a common representation in a supramodal manner. Behavioral studies on temporal memory mixing and the central tendency effect have provided evidence for supramodal temporal representations. We aimed to provide electrophysiological evidence for this proposal by employing a cross-modality time discrimination task combined with electroencephalogram (EEG) recordings. The task maintained a fixed auditory standard duration, whereas the visual comparison duration was randomly selected from the short and long ranges, creating two different audio-visual temporal contexts. The behavioral results showed that the point of subjective equality (PSE) in the short context was significantly lower than that in the long context. The EEG results revealed that the amplitude of the contingent negative variation (CNV) in the short context was significantly higher (more negative) than in the long context in the early stage, while it was lower (more positive) in the later stage. These results suggest that the audiovisual temporal context is integrated with the auditory standard duration to generate a subjective time criterion. Compared with the long context, the subjective time criterion in the short context was shorter, resulting in earlier decision-making and a preceding decrease in CNV. Our study provides electrophysiological evidence that temporal information from different modalities inputted into the brain at different times can form a supramodal temporal representation.


Subject(s)
Acoustic Stimulation , Auditory Perception , Electroencephalography , Time Perception , Visual Perception , Humans , Electroencephalography/methods , Male , Female , Young Adult , Time Perception/physiology , Auditory Perception/physiology , Adult , Visual Perception/physiology , Photic Stimulation/methods , Discrimination, Psychological/physiology , Reaction Time/physiology , Brain/physiology , Contingent Negative Variation/physiology
4.
Cell Rep ; 42(11): 113271, 2023 11 28.
Article in English | MEDLINE | ID: mdl-37906591

ABSTRACT

Grid cells in the entorhinal cortex demonstrate spatially periodic firing, thought to provide a spatial map on behaviorally relevant length scales. Whether such periodicity exists for behaviorally relevant time scales in the human brain remains unclear. We investigate neuronal firing during a temporally continuous experience by presenting 14 neurosurgical patients with a video while recording neuronal activity from multiple brain regions. We report on neurons that modulate their activity in a periodic manner across different time scales-from seconds to many minutes, most prevalently in the entorhinal cortex. These neurons remap their dominant periodicity to shorter time scales during a subsequent recognition memory task. When the video is presented at two different speeds, a significant percentage of these temporally periodic cells (TPCs) maintain their time scales, suggesting a degree of invariance. The TPCs' temporal periodicity might complement the spatial periodicity of grid cells and together provide scalable spatiotemporal metrics for human experience.


Subject(s)
Entorhinal Cortex , Neurons , Humans , Entorhinal Cortex/physiology , Neurons/physiology , Periodicity , Recognition, Psychology , Neural Pathways
5.
J Biomed Inform ; 143: 104408, 2023 07.
Article in English | MEDLINE | ID: mdl-37295630

ABSTRACT

Predicting a patient's in-hospital mortality from historical Electronic Medical Records (EMRs) can help physicians make clinical decisions and allocate medical resources. In recent years, researchers have proposed many deep learning methods to predict in-hospital mortality by learning patient representations. However, most of these methods fail to comprehensively learn the temporal representations and do not sufficiently mine the contextual knowledge of demographic information. We propose a novel end-to-end approach based on Local and Global Temporal Representation Learning with Demographic Embedding (LGTRL-DE) to address these issues for in-hospital mortality prediction. LGTRL-DE is enabled by (1) a local temporal representation learning module that captures the temporal information and analyzes the health status from a local perspective through a recurrent neural network with demographic initialization and a local attention mechanism; (2) a Transformer-based global temporal representation learning module that extracts the interaction dependencies among clinical events; and (3) a multi-view representation fusion module that fuses temporal and static information and generates the final patient health representations. We evaluate our proposed LGTRL-DE on two public real-world clinical datasets (MIMIC-III and e-ICU). Experimental results show that LGTRL-DE achieves an area under the receiver operating characteristic curve (AUROC) of 0.8685 and 0.8733 on the MIMIC-III and e-ICU datasets, respectively, outperforming several state-of-the-art approaches.
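A toy, pure-Python rendition of the three-module flow may help fix ideas. The attention scoring rule, the mean-pooling stand-in for the Transformer module, and the weights below are all illustrative assumptions, not details of LGTRL-DE itself:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def predict_mortality(visits, demographics, w):
    """Local module: attention-pool the visit sequence, with scores seeded
    by the demographic vector (echoing the demographic initialization).
    Global module stand-in: a plain mean over visits (the paper uses a
    Transformer encoder here). Fusion: concatenate local + global + static
    views, then a linear layer and sigmoid."""
    d = len(visits[0])
    scores = softmax([sum(a * b for a, b in zip(demographics, v)) for v in visits])
    local = [sum(s * v[i] for s, v in zip(scores, visits)) for i in range(d)]
    glob = [sum(v[i] for v in visits) / len(visits) for i in range(d)]
    fused = local + glob + list(demographics)
    return sigmoid(sum(wi * fi for wi, fi in zip(w, fused)))

# two visits with two features each; demographics e.g. (age_scaled, sex)
p = predict_mortality([[0.2, 1.0], [0.6, 0.4]], [0.7, 1.0],
                      [0.5, -0.3, 0.2, 0.1, -0.4, 0.6])
print(p)  # a probability in (0, 1)
```

In the real model each module is learned end-to-end; the sketch only shows how the local, global, and static views come together before the final prediction.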


Subject(s)
Neural Networks, Computer , Humans , Hospital Mortality
6.
EBioMedicine ; 92: 104629, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37247495

ABSTRACT

BACKGROUND: Alzheimer's Disease (AD) is a complex clinical phenotype with unprecedented social and economic tolls on an ageing global population. Real-world data (RWD) from electronic health records (EHRs) offer opportunities to accelerate precision drug development and scale epidemiological research on AD. A precise characterization of AD cohorts is needed to address the noise abundant in RWD. METHODS: We conducted a retrospective cohort study to develop and test computational models for AD cohort identification using clinical data from 8 Massachusetts healthcare systems. We mined temporal representations from EHR data using the transitive sequential pattern mining algorithm (tSPM) to train and validate our models. We then tested our models against a held-out test set from a review of medical records to adjudicate the presence of AD. We trained two classes of Machine Learning models, using Gradient Boosting Machine (GBM), to compare the utility of AD diagnosis records versus the tSPM temporal representations (comprising sequences of diagnosis and medication observations) from electronic medical records for characterizing AD cohorts. FINDINGS: In a group of 4985 patients, we identified 219 tSPM temporal representations (i.e., transitive sequences) of medical records for constructing the best classification models. The models with sequential features improved AD classification by a magnitude of 3-16 percent over the use of AD diagnosis codes alone. The computed cohort included 663 patients, 35 of whom had no record of AD. Six groups of tSPM sequences were identified for characterizing the AD cohorts. INTERPRETATION: We present sequential patterns of diagnosis and medication codes from electronic medical records, as digital markers of Alzheimer's Disease. Classification algorithms developed on sequential patterns can replace standard features from EHRs to enrich phenotype modelling. 
FUNDING: National Institutes of Health: the National Institute on Aging (RF1AG074372) and the National Institute of Allergy and Infectious Diseases (R01AI165535).
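The transitive-sequence idea behind tSPM can be illustrated in a few lines: from a chronologically ordered record, emit every ordered pair of distinct codes as a candidate temporal feature. This is a simplified reading of the algorithm; the published tSPM also handles support counting and other details, and the codes below are made up for illustration:

```python
from itertools import combinations

def transitive_sequences(record):
    """record: chronologically ordered list of (time, code) observations.
    Emit every ordered pair code_a -> code_b where a strictly precedes b,
    i.e. the 'transitive' diagnosis/medication sequences mined as features."""
    seqs = set()
    for (t1, a), (t2, b) in combinations(record, 2):
        if t1 < t2 and a != b:
            seqs.add((a, b))
    return seqs

# hypothetical patient record: diagnosis, then medication, then diagnosis
rec = [(1, "ICD:G30.9"), (2, "RX:donepezil"), (3, "ICD:F03.90")]
print(sorted(transitive_sequences(rec)))
```

Each emitted pair becomes one binary feature for the downstream GBM classifier, which is how sequences of observations (rather than single codes) enter the model.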


Subject(s)
Alzheimer Disease , Humans , Alzheimer Disease/diagnosis , Retrospective Studies , Algorithms , Machine Learning , Electronic Health Records
7.
Front Neurosci ; 17: 1148191, 2023.
Article in English | MEDLINE | ID: mdl-37090797

ABSTRACT

Sign languages are visual languages used as the primary communication medium for the Deaf community. The signs comprise manual and non-manual articulators such as hand shapes, upper body movement, and facial expressions. Sign Language Recognition (SLR) aims to learn spatial and temporal representations from the videos of the signs. Most SLR studies focus on manual features, often extracted from the shape of the dominant hand or the entire frame. However, facial expressions combined with hand and body gestures may also play a significant role in discriminating the context represented in the sign videos. In this study, we propose an isolated SLR framework based on Spatial-Temporal Graph Convolutional Networks (ST-GCNs) and Multi-Cue Long Short-Term Memory networks (MC-LSTMs) to exploit multi-articulatory (e.g., body, hands, and face) information for recognizing sign glosses. We train an ST-GCN model for learning representations from the upper body and hands. Meanwhile, spatial embeddings of hand shape and facial expression cues are extracted from Convolutional Neural Networks (CNNs) pre-trained on large-scale hand and facial expression datasets. Thus, the proposed framework, coupling ST-GCNs with MC-LSTMs for multi-articulatory temporal modeling, can provide insights into the contribution of each visual Sign Language (SL) cue to recognition performance. To evaluate the proposed framework, we conducted extensive analyses on two Turkish SL benchmark datasets with different linguistic properties, BosphorusSign22k and AUTSL. While we obtained recognition performance comparable to the skeleton-based state-of-the-art, we observe that incorporating multiple visual SL cues improves the recognition performance, especially in certain sign classes where multi-cue information is vital. The code is available at: https://github.com/ogulcanozdemir/multicue-slr.

8.
J Assoc Res Otolaryngol ; 24(2): 197-215, 2023 04.
Article in English | MEDLINE | ID: mdl-36795196

ABSTRACT

Most accounts of single- and multi-unit responses in auditory cortex under anesthetized conditions have emphasized V-shaped frequency tuning curves and low-pass sensitivity to rates of repeated sounds. In contrast, single-unit recordings in awake marmosets also show I-shaped and O-shaped response areas having restricted tuning to frequency and (for O units) sound level. That preparation also demonstrates synchrony to moderate click rates and representation of higher click rates by spike rates of non-synchronized tonic responses, neither of which are commonly seen in anesthetized conditions. The spectral and temporal representation observed in the marmoset might reflect special adaptations of that species, might be due to single- rather than multi-unit recording, or might indicate characteristics of awake-versus-anesthetized recording conditions. We studied spectral and temporal representation in the primary auditory cortex of alert cats. We observed V-, I-, and O-shaped response areas like those demonstrated in awake marmosets. Neurons could synchronize to click trains at rates about an octave higher than is usually seen with anesthesia. Representations of click rates by rates of non-synchronized tonic responses exhibited dynamic ranges that covered the entire range of tested click rates. The observation of these spectral and temporal representations in cats demonstrates that they are not unique to primates and, indeed, might be widespread among mammalian species. Moreover, we observed no significant difference in stimulus representation between single- and multi-unit recordings. It appears that the principal factor that has hindered observations of high spectral and temporal acuity in the auditory cortex has been the use of general anesthesia.


Subject(s)
Auditory Cortex , Wakefulness , Cats , Animals , Acoustic Stimulation , Auditory Cortex/physiology , Callithrix , Neurons/physiology , Mammals
9.
Big Data ; 10(5): 440-452, 2022 10.
Article in English | MEDLINE | ID: mdl-35527683

ABSTRACT

Big data has been used successfully to solve social issues in several parts of the world. Social event prediction is related to social stability and sustainable development. However, current research rarely takes into account the dynamic connections between event actors or learns robust feature representations of social events. Inspired by graph neural networks, we propose a novel Siamese Spatial and Temporal Dynamic Network for predicting social events. Specifically, we use multimodal data containing news articles and global events to construct dynamic graphs based on word co-occurrences and interactions between event actors. Dynamic graphs can model the evolution of social events. By employing the fusion of spatial and temporal dynamic graph representations from heterogeneous historical data, our proposed model predicts the occurrence of future social events for the target country. Qualitative and quantitative analysis of experimental results on multiple real-world datasets shows that our proposed method is competitive against several approaches for social event prediction.
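The word co-occurrence graphs underlying the dynamic graphs can be sketched simply: one snapshot per time slice, with edge weights counting how often two words appear in the same article. The data below is invented for illustration; the paper's pipeline additionally builds actor-interaction graphs and feeds the snapshots to a graph neural network:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(articles):
    """Build one undirected word co-occurrence graph for a time slice.
    articles: list of word lists; returns {(word_a, word_b): count} with
    each pair stored in sorted order so the edge is direction-free."""
    g = defaultdict(int)
    for words in articles:
        for a, b in combinations(sorted(set(words)), 2):
            g[(a, b)] += 1
    return dict(g)

# one day's (hypothetical) news articles; a dynamic graph is a snapshot per day
day1 = [["protest", "strike"], ["strike", "union", "protest"]]
g = cooccurrence_graph(day1)
print(g[("protest", "strike")])  # 2
```

Stacking one such snapshot per day yields the evolving graph sequence whose spatial and temporal representations the model then fuses.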


Subject(s)
Neural Networks, Computer , Spatial Analysis
10.
Stud Health Technol Inform ; 294: 337-341, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612092

ABSTRACT

Representing temporal information is a recurrent problem for biomedical ontologies. We propose a foundational ontology that combines the so-called three-dimensional and four-dimensional approaches in order to be able to track changes in an individual and to trace his or her medical history. This requires, on the one hand, associating with any representation of an individual the representation of his or her life course and, on the other hand, distinguishing the properties that characterize this individual from those that characterize his or her life course.


Subject(s)
Biological Ontologies , Knowledge Management , Humans , Time Factors
11.
Cereb Cortex ; 32(18): 4080-4097, 2022 09 04.
Article in English | MEDLINE | ID: mdl-35029654

ABSTRACT

Temporal processing is crucial for auditory perception and cognition, especially for communication sounds. Previous studies have shown that the auditory cortex and the thalamus use temporal and rate representations to encode slowly and rapidly changing time-varying sounds. However, how the primate inferior colliculus (IC) encodes time-varying sounds at the millisecond scale remains unclear. In this study, we investigated the temporal processing by IC neurons in awake marmosets to Gaussian click trains with varying interclick intervals (2-100 ms). Strikingly, we found that 28% of IC neurons exhibited rate representation with nonsynchronized responses, which is in sharp contrast to the current view that the IC only uses a temporal representation to encode time-varying signals. Moreover, IC neurons with rate representation exhibited response properties distinct from those with temporal representation. We further demonstrated that reversible inactivation of the primary auditory cortex modulated 17% of the stimulus-synchronized responses and 21% of the nonsynchronized responses of IC neurons, revealing that cortico-colliculus projections play a role, but not a crucial one, in temporal processing in the IC. This study has significantly advanced our understanding of temporal processing in the IC of awake animals and provides new insights into temporal processing from the midbrain to the cortex.


Subject(s)
Auditory Cortex , Inferior Colliculi , Acoustic Stimulation , Animals , Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Callithrix , Inferior Colliculi/physiology , Wakefulness/physiology
12.
Anim Cogn ; 24(6): 1299-1304, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33983542

ABSTRACT

Trace conditioning pairs a neutral conditioned stimulus (CS) with a motivationally significant unconditioned stimulus (UCS) that follows after a short interval. Recently, trace conditioning has been proposed as a test for animal consciousness due to its correlation in humans with subjective report of the CS-UCS connection. We argue that the distractor task in the Clark and Squire (1998) study on trace conditioning has been overlooked. Attentional inhibition played a crucial role in disrupting trace conditioning and awareness of the CS-UCS contingency in the human participants of that study. These results may be understood within the framework of the Temporal Representation Theory, which asserts that consciousness serves the function of selecting information into a representation of the present moment. While neither sufficient nor necessary, attentional processes are the primary means to select stimuli for consciousness. Consciousness and attention are both needed by an animal capable of flexible behavioral response. Consciousness keeps track of the current situation; attention amplifies task-relevant stimuli and inhibits irrelevant stimuli. In light of these joint functions, we hypothesize that the failure to trace condition under distraction in an organism known to successfully trace condition otherwise can be one of several tests that indicates animal consciousness. Successful trace conditioning is widespread and by itself does not indicate consciousness.


Subject(s)
Conditioning, Classical , Consciousness , Animals , Attention
13.
Front Vet Sci ; 8: 785256, 2021.
Article in English | MEDLINE | ID: mdl-34977218

ABSTRACT

A challenge to developing a model for testing animal consciousness is the pull of opposite intuitions. On one extreme, the anthropocentric view holds that consciousness is a highly sophisticated capacity involving self-reflection and conceptual categorization that is almost certainly exclusive to humans. At the opposite extreme, an anthropomorphic view attributes consciousness broadly to any behavior that involves sensory responsiveness. Yet human experience and observation of diverse species suggest that the most plausible case is that consciousness functions between these poles. In exploring the middle ground, we discuss the pros and cons of "high level" approaches such as the dual systems approach. According to this model, System 1 can be thought of as unconscious; processing is fast, automatic, associative, heuristic, parallel, contextual, and likely to be conserved across species. Consciousness is associated with System 2 processing that is slow, effortful, rule-based, serial, abstract, and exclusively human. An advantage of this model is the clear contrast between heuristic and decision-based responses, but it fails to include contextual decision-making in novel conditions which falls in between these two categories. We also review a "low level" model involving trace conditioning, which is a trained response to the first of two paired stimuli separated by an interval. This model highlights the role of consciousness in maintaining a stimulus representation over a temporal span, though it overlooks the importance of attention in subserving and also disrupting trace conditioning in humans. Through a critical analysis of these two extremes, we will develop the case for flexible behavioral response to the stimulus environment as the best model for demonstrating animal consciousness. We discuss a methodology for gauging flexibility across a wide variety of species and offer a case study in spatial navigation to illustrate our proposal. 
Flexibility serves the evolutionary function of enabling the complex evaluation of changing conditions, where motivation is the basis for goal valuation, and attention selects task-relevant stimuli to aid decision-making processes. We situate this evolutionary function within the Temporal Representation Theory of consciousness, which proposes that consciousness represents the present moment in order to facilitate flexible action.

14.
Hum Brain Mapp ; 41(8): 2077-2091, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32048380

ABSTRACT

In the absence of vision, spatial representation may be altered. When asked to compare the relative distances between three sounds (i.e., auditory spatial bisection task), blind individuals demonstrate significant deficits and do not show an event-related potential response mimicking the visual C1 reported in sighted people. However, we have recently demonstrated that the spatial deficit disappears if coherent time and space cues are presented to blind people, suggesting that they may use time information to infer spatial maps. In this study, we examined whether the modification of temporal cues during space evaluation altered the recruitment of the visual and auditory cortices in blind individuals. We demonstrated that the early (50-90 ms) occipital response, mimicking the visual C1, is not elicited by the physical position of the sound, but by its virtual position suggested by its temporal delay. Even more impressively, in the same time window, the auditory cortex also showed this pattern and responded to temporal instead of spatial coordinates.


Subject(s)
Auditory Cortex/physiopathology , Auditory Perception/physiology , Blindness/physiopathology , Cues , Evoked Potentials/physiology , Space Perception/physiology , Time Perception/physiology , Visual Cortex/physiopathology , Adult , Electroencephalography , Humans
15.
Behav Brain Res ; 376: 112185, 2019 12 30.
Article in English | MEDLINE | ID: mdl-31472192

ABSTRACT

Vision is the most accurate sense for spatial representation, whereas audition is for temporal representation. However, how different sensory modalities shape the development of spatial and temporal representations is still unclear. Here, 45 children aged 11-13 years were tested to investigate their ability to evaluate spatial features of auditory stimuli during bisection tasks, while conflicting or non-conflicting spatial and temporal information was delivered. Since audition is fundamental for temporal representation, the hypothesis was that temporal information could influence the development of auditory spatial representation. Results show a strong interaction between the temporal and the spatial domain. Younger children are not able to build complex spatial representations when the temporal domain is uninformative about space. However, when the spatial information is coherent with the temporal information, children of all ages are able to decode complex spatial relationships. When spatial and temporal cues are conflicting, younger children are strongly attracted by the temporal rather than the spatial information, while older participants remain unaffected by the cross-domain conflict. These findings suggest that during development the temporal representation of events is used to infer spatial coordinates of the environment, offering important opportunities for new teaching and rehabilitation strategies.


Subject(s)
Auditory Perception/physiology , Spatial Processing/physiology , Time Perception/physiology , Adolescent , Adult , Child , Cues , Female , Humans , Male , Reaction Time/physiology , Space Perception/physiology , Time , Vision, Ocular , Visual Perception/physiology
16.
Sensors (Basel) ; 19(1)2018 Dec 23.
Article in English | MEDLINE | ID: mdl-30583609

ABSTRACT

Individual recognition based on skeletal sequence is a challenging computer vision task with multiple important applications, such as public security, human-computer interaction, and surveillance. However, much of the existing work fails to provide any explicit quantitative differences between individuals. In this paper, we propose a novel 3D spatio-temporal geometric feature representation of locomotion on a Riemannian manifold, which explicitly reveals the intrinsic differences between individuals. To this end, we construct a mean sequence by aligning related motion sequences on the Riemannian manifold. The differences with respect to this mean sequence are modeled as spatial state descriptors. Subsequently, a temporal hierarchy of covariances is imposed on the state descriptors, making it a higher-order statistical spatio-temporal feature representation that shows unique biometric characteristics for individuals. Finally, we introduce a kernel metric learning method to improve the classification accuracy. We evaluated our method on two public databases: the CMU Mocap database and the UPCV Gait database. Furthermore, we also constructed a new database for evaluating running and analyzing two major factors influencing walking. As a result, the proposed approach achieves promising results in all experiments.
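The covariance descriptors at the heart of the temporal hierarchy can be sketched as follows. This is a plain sample-covariance computation in pure Python; a real pipeline would use numpy and compare the resulting matrices under a Riemannian metric (e.g. log-Euclidean) rather than element-wise, as the manifold setting requires:

```python
def covariance_descriptor(window):
    """window: per-frame state-descriptor vectors for one temporal window.
    Returns the sample covariance matrix, i.e. the second-order statistic
    that the temporal hierarchy stacks at several window lengths."""
    n, d = len(window), len(window[0])
    mean = [sum(f[i] for f in window) / n for i in range(d)]
    return [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in window) / (n - 1)
             for j in range(d)] for i in range(d)]

# three frames of a 2-dimensional state descriptor (illustrative numbers)
cov = covariance_descriptor([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(cov)  # 2x2 symmetric matrix
```

Computing one such matrix per window, at several window lengths, yields the "temporal hierarchy of covariances" the abstract describes.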

17.
Front Psychol ; 9: 2609, 2018.
Article in English | MEDLINE | ID: mdl-30622495

ABSTRACT

Temporal and spatial representations are not independent of each other. Two conflicting theories provide alternative hypotheses concerning the specific interrelations between temporal and spatial representations. The asymmetry hypothesis (based on the conceptual metaphor theory, Lakoff and Johnson, 1980) predicts that temporal and spatial representations are asymmetrically interrelated such that spatial representations have a stronger impact on temporal representations than vice versa. In contrast, the symmetry hypothesis (based on a theory of magnitude, Walsh, 2003) predicts that temporal and spatial representations are symmetrically interrelated. Both theoretical approaches have received empirical support. From an embodied cognition perspective, we argue that taking sensorimotor processes into account may be a promising steppingstone to explain the contradictory findings. Notably, different modalities are differently sensitive to the processing of time and space. For instance, auditory information processing is more sensitive to temporal than spatial information, whereas visual information processing is more sensitive to spatial than temporal information. Consequently, we hypothesized that different sensorimotor tasks addressing different modalities may account for the contradictory findings. To test this, we critically reviewed relevant literature to examine which modalities were addressed in time-space mapping studies. Results indicate that the majority of the studies supporting the asymmetry hypothesis applied visual tasks for both temporal and spatial representations. Studies supporting the symmetry hypothesis applied mainly auditory tasks for the temporal domain, but visual tasks for the spatial domain. We conclude that the use of different tasks addressing different modalities may be the primary reason for (a)symmetric effects of space on time, instead of a genuine (a)symmetric mapping.

18.
Sensors (Basel) ; 17(12)2017 Nov 27.
Article in English | MEDLINE | ID: mdl-29186887

ABSTRACT

The widespread use of wearable sensors, such as those in smart watches, has provided continuous access to valuable user-generated data such as human motion that could be used to identify an individual based on his/her motion patterns such as gait. Several methods have been suggested to extract various heuristic and high-level features from gait motion data to identify discriminative gait signatures and distinguish the target individual from others. However, manual, hand-crafted feature extraction is error-prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature extraction module and the predictive learning models might limit the generalization capabilities. In this paper, we propose a novel approach for human gait identification using a time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. Then, we design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each human subject synchronously in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input level) and late (decision score level) multi-sensor fusion to improve the gait identification generalization performance. We specifically propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of individual DCNN scores at the decision level by minimizing the error rate on the training data in an iterative manner. Ten subjects participated in this study; hence, the problem is a 10-class identification task.
Based on our experimental results, 91% subject identification accuracy was achieved using the best individual IMU and 2DTF-DCNN. We then investigated our proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
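The decision-level fusion idea described in the abstract (learning linear weights over per-sensor score matrices by minimizing training error) can be sketched as follows. This is a minimal greedy illustration, not the authors' exact MESF update rule; the function name and step-size scheme are assumptions.

```python
import numpy as np

def mesf_fuse(score_list, labels, n_iter=50, step=0.05):
    """Greedily learn linear fusion weights over per-sensor class-score
    matrices (each of shape n_samples x n_classes) by reducing the
    training error rate -- a sketch of minimum-error score fusion."""
    n_sensors = len(score_list)
    w = np.full(n_sensors, 1.0 / n_sensors)

    def error_rate(weights):
        fused = sum(wi * s for wi, s in zip(weights, score_list))
        return np.mean(np.argmax(fused, axis=1) != labels)

    best = error_rate(w)
    for _ in range(n_iter):
        improved = False
        for i in range(n_sensors):
            for delta in (+step, -step):
                cand = w.copy()
                cand[i] = max(0.0, cand[i] + delta)
                if cand.sum() == 0:
                    continue
                cand /= cand.sum()  # keep weights normalized
                e = error_rate(cand)
                if e < best:
                    w, best, improved = cand, e, True
        if not improved:
            break
    return w, best
```

With two synthetic "sensors" (one reliable, one that always votes the same class), the learned weights shift toward the reliable sensor and the fused training error drops.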


Subject(s)
Gait , Female , Humans , Male , Neural Networks, Computer
19.
J Med Syst ; 41(1): 13, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27889874

ABSTRACT

The automatic interpretation of clinical recommendations is a difficult task, even more so when it involves the processing of complex temporal constraints. In order to address this issue, a web-based system is presented herein. Its underlying model provides a comprehensive representation of temporal constraints in Clinical Practice Guidelines. The expressiveness and range of the model are shown through a case study featuring a Clinical Practice Guideline for the diagnosis and management of colon cancer. The proposed model was sufficient to represent the temporal constraints in the guideline, especially those that defined periodic events and placed temporal constraints on the assessment of patient states. The web-based tool acts as a health care assistant to health care professionals, combining the roles of focusing attention and providing patient-specific advice.
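The periodic events and temporal constraints the abstract mentions can be illustrated with a small sketch. The class and field names below are hypothetical simplifications; the paper's actual Clinical Practice Guideline model is far richer.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PeriodicEvent:
    """A simplified periodic-event constraint: repeat an action every
    `period`, starting at `anchor`, for `repetitions` occurrences."""
    name: str
    anchor: date
    period: timedelta
    repetitions: int

    def occurrences(self):
        return [self.anchor + i * self.period for i in range(self.repetitions)]

    def is_due(self, today, tolerance=timedelta(days=7)):
        # Due if some scheduled occurrence falls within `tolerance` of today.
        return any(abs(today - occ) <= tolerance for occ in self.occurrences())

# Hypothetical example: a follow-up test every 90 days, four times,
# anchored at an initial visit.
followup = PeriodicEvent("follow-up test", date(2016, 1, 1),
                         timedelta(days=90), 4)
```

A guideline engine could evaluate such constraints against a patient record to focus attention on overdue actions, as the web-based assistant described above does.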


Subject(s)
Decision Making, Computer-Assisted , Internet , Practice Guidelines as Topic , Humans , Models, Theoretical , Time Factors
20.
Comput Methods Programs Biomed ; 128: 52-68, 2016 May.
Article in English | MEDLINE | ID: mdl-27040831

ABSTRACT

BACKGROUND AND OBJECTIVE: We live our lives by the calendar and the clock, but time is also an abstraction, even an illusion. The sense of time can be both domain-specific and complex, and is often left implicit, requiring significant domain knowledge to accurately recognize and harness. In the clinical domain, the momentum gained from recent advances in infrastructure and governance practices has enabled the collection of a tremendous amount of data at each moment in time. Electronic health records (EHRs) have paved the way to making these data available to practitioners and researchers. However, temporal data representation, normalization, extraction and reasoning are essential for mining such massive data and, therefore, for constructing the clinical timeline. The objective of this work is to provide an overview of the problem of constructing a timeline at the clinical point of care and to summarize the state of the art in processing temporal information in clinical narratives. METHODS: This review surveys the methods used in three important areas: modeling and representation of time, medical NLP methods for extracting time, and methods for temporal reasoning and processing. The review emphasizes the gap between present methods and Semantic Web technologies and highlights possible combinations of the two. RESULTS: The main findings of this review reveal the importance of time processing not only in constructing timelines and clinical decision support systems but also as a vital component of EHR data models and operations. CONCLUSIONS: Extracting temporal information from clinical narratives is a challenging task. The inclusion of ontologies and the Semantic Web will lead to better assessment of the annotation task and, together with medical NLP techniques, will help resolve granularity and co-reference resolution problems.
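The extraction-and-normalization step the review surveys can be illustrated with a toy sketch: find explicit date expressions in a narrative and normalize them to ISO 8601. Real systems surveyed in such reviews handle far richer expressions (relative times, durations, fuzzy references); the pattern and note text below are illustrative assumptions.

```python
import re
from datetime import datetime

# A toy pattern covering two explicit date formats.
DATE_PAT = re.compile(
    r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b"        # e.g. 03/14/2015
    r"|\b(January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+(\d{1,2}),\s*(\d{4})\b"
)

def extract_dates(text):
    """Return (matched span, ISO-normalized date) pairs for explicit dates."""
    results = []
    for m in DATE_PAT.finditer(text):
        if m.group(1):  # numeric form, assumed month/day/year
            dt = datetime(int(m.group(3)), int(m.group(1)), int(m.group(2)))
        else:           # written-month form
            dt = datetime.strptime(
                f"{m.group(4)} {m.group(5)} {m.group(6)}", "%B %d %Y")
        results.append((m.group(0), dt.date().isoformat()))
    return results

note = "Patient admitted on 03/14/2015; prior colonoscopy on June 2, 2010."
```

Normalized dates like these are the raw material for ordering events on a clinical timeline; the harder problems the review identifies (granularity, co-reference) begin after this step.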


Subject(s)
Decision Support Systems, Clinical , Natural Language Processing , Data Collection , Data Mining/methods , Electronic Health Records , Humans , Information Storage and Retrieval , Internet , Machine Learning , Models, Theoretical , Semantics , Time