1.
IEEE Trans Biomed Circuits Syst ; 17(4): 808-817, 2023 08.
Article in English | MEDLINE | ID: mdl-37318976

ABSTRACT

Sweat secreted by the human eccrine sweat glands can provide valuable biomarker information during exercise. Real-time non-invasive biomarker recordings are therefore useful for evaluating the physiological condition of an athlete, such as their hydration status, during endurance exercise. This work describes a wearable sweat biomonitoring patch that incorporates printed electrochemical sensors into a plastic microfluidic sweat collector, together with a data analysis showing that the real-time recorded sweat biomarkers can be used to predict a physiological biomarker. The system was placed on subjects carrying out an hour-long exercise session, and the results were compared to a wearable system using potentiometric robust silicon-based sensors and to commercially available HORIBA-LAQUAtwin devices. Both prototypes were applied to the real-time monitoring of sweat during cycling sessions and showed stable readings for around an hour. Analysis of the sweat biomarkers collected from the printed patch prototype shows that their real-time measurements correlate well (correlation coefficient ≥ 0.65) with other physiological biomarkers, such as heart rate and regional sweat rate, collected in the same session. We show, for the first time, that the real-time sweat sodium and potassium concentration measurements from the printed sensors can be used to predict core body temperature with a root mean square error (RMSE) of 0.02 °C, 71% lower than that obtained using only the physiological biomarkers. These results show that these wearable patch technologies are promising as real-time, portable analytical platforms for sweat monitoring, especially for athletes performing endurance exercise.
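
As a concrete illustration of the prediction step, the sketch below fits an ordinary least-squares model mapping sweat sodium, sweat potassium, and heart rate to core body temperature and reports the RMSE. All data here are synthetic stand-ins; the paper's actual features, preprocessing, and model are not specified in the abstract.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 60  # one reading per minute over an hour-long session (illustrative)
    sweat_na = rng.normal(40, 5, n)    # sweat sodium, mM (synthetic)
    sweat_k = rng.normal(6, 0.5, n)    # sweat potassium, mM (synthetic)
    hr = rng.normal(150, 10, n)        # heart rate, bpm (synthetic)
    # Toy target with a weak linear dependence plus measurement noise.
    core_t = 37.0 + 0.01 * (sweat_na - 40) + 0.002 * (hr - 150) \
             + rng.normal(0, 0.05, n)

    X = np.column_stack([sweat_na, sweat_k, hr, np.ones(n)])  # with intercept
    w, *_ = np.linalg.lstsq(X, core_t, rcond=None)            # least squares
    rmse = np.sqrt(np.mean((X @ w - core_t) ** 2))
    print(f"training RMSE: {rmse:.3f} °C")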


Subject(s)
Biosensing Techniques , Wearable Electronic Devices , Humans , Sweat/chemistry , Body Temperature , Electrolytes , Biomarkers/analysis
2.
Article in English | MEDLINE | ID: mdl-35687629

ABSTRACT

Long short-term memory (LSTM) recurrent networks are frequently used for tasks involving time-sequential data, such as speech recognition. Unlike previous LSTM accelerators that either exploit spatial weight sparsity or temporal activation sparsity, this article proposes a new accelerator called "Spartus" that exploits spatio-temporal sparsity to achieve ultralow latency inference. Spatial sparsity is induced using a new column-balanced targeted dropout (CBTD) structured pruning method, producing structured sparse weight matrices for a balanced workload. The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and Librispeech datasets. To induce temporal sparsity in LSTMs, we extend the previous DeltaGRU method to the DeltaLSTM method. Combining spatio-temporal sparsity with CBTD and DeltaLSTM saves on weight memory accesses and associated arithmetic operations. The Spartus architecture is scalable and supports real-time online speech recognition when implemented on small and large FPGAs. Spartus per-sample latency for a single DeltaLSTM layer of 1024 neurons averages 1 µs. Exploiting spatio-temporal sparsity on our test LSTM network using the TIMIT dataset leads to a 46× speedup of Spartus over its theoretical hardware performance, achieving 9.4 TOp/s effective batch-1 throughput and 1.1 TOp/s/W power efficiency.
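
A minimal sketch of the delta principle behind DeltaGRU/DeltaLSTM, assuming a simple threshold rule: an input state is retransmitted only when it has changed by more than a threshold since its last transmission, so the matrix-vector product touches only the changed columns. This is an illustrative NumPy model, not the Spartus hardware datapath.

    import numpy as np

    def delta_matvec(W, x, x_stored, y, theta=0.05):
        """One timestep of a delta-rule matrix-vector product. y tracks
        W @ x_stored exactly; columns whose input changed by less than
        theta are skipped entirely, which is the temporal-sparsity saving."""
        delta = x - x_stored
        active = np.abs(delta) > theta          # temporal sparsity mask
        y = y + W[:, active] @ delta[active]    # only changed columns cost work
        x_stored = x_stored.copy()
        x_stored[active] = x[active]            # update only transmitted states
        return y, x_stored

Initializing x_stored and y to zero and iterating over timesteps reproduces W @ x up to the threshold-induced approximation error; larger theta trades accuracy for fewer operations.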

3.
IEEE J Biomed Health Inform ; 26(9): 4725-4732, 2022 09.
Article in English | MEDLINE | ID: mdl-35749337

ABSTRACT

Improper hydration routines can reduce athletic performance. Recent studies show that data from noninvasive biomarker recordings can help to evaluate the hydration status of subjects during endurance exercise. Such studies are usually carried out on multiple subjects. In this work, we present the first study on predicting hydration status with machine learning models from single-subject experiments, comprising 32 exercise sessions of constant moderate intensity performed with and without fluid intake. During exercise, we measured four noninvasive physiological and sweat biomarkers: heart rate, core temperature, sweat sodium concentration, and whole-body sweat rate. Sweat sodium concentration was measured from six body regions using absorbent patches. We used three machine learning models to predict the percentage of body weight loss, an indicator of dehydration, from these biomarkers, and compared their prediction accuracy. The results on this single subject show that the models gave similar mean absolute errors, with the nonlinear models slightly outperforming the linear model in most experiments. The prediction accuracy when using the whole-body sweat rate or heart rate was higher than when using core temperature or sweat sodium concentration. In addition, the model trained on the sweat sodium concentration collected from the arms gave slightly better accuracy than those trained on the other five body regions. This exploratory work paves the way for the use of these machine learning models in personalized health monitoring together with emerging, noninvasive wearable sensor devices.
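
A hedged sketch of the model comparison using scikit-learn with synthetic stand-in data; the abstract does not name the three models, so one linear and one nonlinear regressor are shown as representatives, compared by mean absolute error.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 32 * 10  # stand-in for readings pooled from 32 sessions (synthetic)
    # Columns: heart rate, core temp, sweat [Na+], whole-body sweat rate.
    X = rng.normal(size=(n, 4))
    y = 0.8 * X[:, 3] + 0.3 * X[:, 0] + rng.normal(0, 0.1, n)  # toy %BWL

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(type(model).__name__, mean_absolute_error(y_te, pred))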


Subject(s)
Sweat , Sweating , Biomarkers , Humans , Machine Learning , Sodium
4.
Front Neurosci ; 15: 771480, 2021.
Article in English | MEDLINE | ID: mdl-34955722

ABSTRACT

Liquid analysis is key to track conformity with the strict process quality standards of sectors like food, beverage, and chemical manufacturing. In order to analyse product qualities online and at the very point of interest, automated monitoring systems must satisfy strong requirements in terms of miniaturization, energy autonomy, and real time operation. Toward this goal, we present the first implementation of artificial taste running on neuromorphic hardware for continuous edge monitoring applications. We used a solid-state electrochemical microsensor array to acquire multivariate, time-varying chemical measurements, employed temporal filtering to enhance sensor readout dynamics, and deployed a rate-based, deep convolutional spiking neural network to efficiently fuse the electrochemical sensor data. To evaluate performance we created MicroBeTa (Microsensor Beverage Tasting), a new dataset for beverage classification incorporating 7 h of temporal recordings performed over 3 days, including sensor drifts and sensor replacements. Our implementation of artificial taste is 15× more energy efficient on inference tasks than similar convolutional architectures running on other commercial, low power edge-AI inference devices, achieving over 178× lower latencies than the sampling period of the sensor readout, and high accuracy (97%) on a single Intel Loihi neuromorphic research processor included in a USB stick form factor.
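
The temporal-filtering step can be pictured as removing each sensor channel's slow baseline so transients stand out, then mapping the result to spike rates for the rate-based SNN. A minimal sketch under those assumptions; the paper's exact filters and encoding are not given in the abstract.

    import numpy as np

    def enhance_dynamics(x, alpha=0.99):
        """Subtract a first-order moving-average baseline from a channel,
        emphasizing transients in a slowly drifting electrochemical signal."""
        baseline = np.empty_like(x)
        acc = x[0]
        for i, v in enumerate(x):
            acc = alpha * acc + (1 - alpha) * v
            baseline[i] = acc
        return x - baseline

    def rate_encode(x, max_rate=100.0):
        """Map a filtered signal to non-negative spike rates in Hz
        (normalization and ceiling are illustrative choices)."""
        x = np.clip(x / (np.abs(x).max() + 1e-9), 0.0, 1.0)
        return x * max_rate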

5.
Neuroimage ; 223: 117282, 2020 12.
Article in English | MEDLINE | ID: mdl-32828921

ABSTRACT

Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of "neuro-steered" hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject's brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a perfect candidate for neuro-steered hearing-assistive devices.
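
The brain-decoding side can be sketched with the standard backward model from auditory attention decoding: ridge regression from time-lagged EEG to the attended-speech envelope. In BISS the decoded envelope is fed into the separation front-end itself rather than used only for post-hoc speaker selection, so the code below is a generic sketch of the decoder, not the paper's network.

    import numpy as np

    def lag_matrix(eeg, n_lags):
        """Stack time-lagged copies of multichannel EEG (samples x channels)."""
        n, c = eeg.shape
        X = np.zeros((n, c * n_lags))
        for L in range(n_lags):
            X[L:, L * c:(L + 1) * c] = eeg[:n - L]
        return X

    def train_decoder(eeg, envelope, n_lags=32, lam=1e3):
        """Ridge-regularized backward model mapping lagged EEG to the
        attended-speech envelope (lam and n_lags are illustrative)."""
        X = lag_matrix(eeg, n_lags)
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                               X.T @ envelope)

    # To decode attention: reconstruct an envelope from held-out EEG as
    # lag_matrix(eeg, n_lags) @ w, then favor the candidate speech stream
    # whose envelope correlates best with the reconstruction.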


Subject(s)
Brain/physiology , Electroencephalography , Signal Processing, Computer-Assisted , Speech Acoustics , Speech Perception/physiology , Acoustic Stimulation , Adult , Algorithms , Deep Learning , Hearing Loss/physiopathology , Humans , Middle Aged
6.
Neural Comput ; 32(1): 261-279, 2020 01.
Article in English | MEDLINE | ID: mdl-31703173

ABSTRACT

It is well known in machine learning that models trained on a training set generated by a probability distribution function perform far worse on test sets generated by a different probability distribution function. In the limit, it is feasible that a continuum of probability distribution functions might have generated the observed test set data; a desirable property of a learned model in that case is its ability to describe most of the probability distribution functions from the continuum equally well. This requirement naturally leads to sampling methods from the continuum of probability distribution functions that lead to the construction of optimal training sets. We study the sequential prediction of Ornstein-Uhlenbeck processes that form a parametric family. We find empirically that a simple deep network trained on optimally constructed training sets using the methods described in this letter can be robust to changes in the test set distribution.
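
Since the study centers on Ornstein-Uhlenbeck processes, a worked sketch of exact OU simulation is useful; a training set spanning the parametric family follows by drawing parameters per trajectory. The uniform sampling of theta below is an illustrative choice, not necessarily the letter's optimal construction.

    import numpy as np

    def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
        """Exact discretization of dX = theta*(mu - X) dt + sigma dW:
        X_{t+dt} = mu + (X_t - mu) e^{-theta dt} + sd * eps."""
        a = np.exp(-theta * dt)
        sd = sigma * np.sqrt((1 - a ** 2) / (2 * theta))
        x = np.empty(n_steps + 1)
        x[0] = x0
        for t in range(n_steps):
            x[t + 1] = mu + a * (x[t] - mu) + sd * rng.standard_normal()
        return x

    rng = np.random.default_rng(0)
    # One trajectory per sampled theta: a training set from the continuum.
    paths = [simulate_ou(th, 0.0, 1.0, 0.0, 0.1, 500, rng)
             for th in rng.uniform(0.5, 2.0, size=16)]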

8.
IEEE Trans Neural Netw Learn Syst ; 30(3): 644-656, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30047912

ABSTRACT

Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphics processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1×1 to 7×7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs, ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the multiply-accumulate units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm². As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.
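
The arithmetic behind the sparsity gain can be sketched by counting the multiply-accumulates a zero-skipping datapath issues versus a dense one. This is simplified accounting (one output channel, no stride or border effects), not NullHop's actual compression scheme.

    import numpy as np

    def sparse_conv_macs(activations, kernel_size=3):
        """Return (MACs issued when zeros are skipped, MACs issued densely)
        for a feature-map tensor of shape (H, W, C), per output channel."""
        k2 = kernel_size * kernel_size
        return np.count_nonzero(activations) * k2, activations.size * k2

    # ReLU-like synthetic feature map: roughly half the activations are zero.
    fmap = np.maximum(np.random.default_rng(0).normal(size=(56, 56, 64)), 0)
    sparse, dense = sparse_conv_macs(fmap)
    print(f"MACs kept after zero-skipping: {sparse / dense:.1%}")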

9.
Front Neurosci ; 12: 160, 2018.
Article in English | MEDLINE | ID: mdl-29643760

ABSTRACT

This paper presents a real-time, low-complexity neuromorphic speech recognition system using a spiking silicon cochlea, a feature extraction module, and a classifier IC based on a population-encoding Neural Engineering Framework (NEF)/Extreme Learning Machine (ELM) approach. Several feature extraction methods with varying memory and computational complexity are presented along with their corresponding classification accuracies. On the N-TIDIGITS18 dataset, we show that a fixed-bin-size feature extraction method that votes across both time and spike count features can achieve an accuracy of 95% in software, similar to previously reported methods that use a fixed number of bins per sample, while using ~3× less energy and ~25× less memory for feature extraction (~1.5× less overall). Hardware measurements for the same topology show a slightly reduced accuracy of 94%, which can be attributed to the extra correlations in the hardware random weights. The hardware accuracy can be increased by further increasing the number of hidden nodes in the ELM at the cost of memory and energy.
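
A sketch of fixed-bin-size spike-count feature extraction, assuming spikes arrive as (time, channel) pairs; a fixed bin width bounds memory regardless of utterance length, unlike fixing the number of bins per sample.

    import numpy as np

    def fixed_bin_features(spike_times, spike_channels, n_channels,
                           bin_width, n_bins):
        """One spike count per (channel, time bin); spikes past the last
        bin are dropped, keeping the feature size constant."""
        feats = np.zeros((n_channels, n_bins))
        for t, ch in zip(spike_times, spike_channels):
            b = int(t // bin_width)
            if b < n_bins:
                feats[ch, b] += 1
        return feats.ravel()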

10.
Front Neurosci ; 12: 23, 2018.
Article in English | MEDLINE | ID: mdl-29479300

ABSTRACT

Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-the-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. One reason is the lack of effective spiking networks to process the spike streams; another is that the pre-processing methods needed to convert the spike streams into the frame-based features required by deep networks still need further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning, in combination with a recurrent neural network, for solving a classification task on the N-TIDIGITS18 dataset. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results on the N-TIDIGITS18 dataset show that the exponential features perform better than the spike count features, with over 91% accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5% over the use of spike count features, establishing a new state of the art for this dataset.
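
A minimal sketch of the proposed exponential-kernel features, assuming spike (time, channel) pairs and a fixed frame grid; tau is an illustrative constant, and the O(frames × spikes) loop is written for clarity rather than efficiency.

    import numpy as np

    def exponential_features(spike_times, spike_channels, n_channels,
                             frame_times, tau=0.005):
        """At each frame time, a channel's feature is the sum of
        exp(-(t_frame - t_spike)/tau) over its past spikes, so fine
        inter-spike timing survives the conversion to frames."""
        feats = np.zeros((len(frame_times), n_channels))
        for i, tf in enumerate(frame_times):
            for t, ch in zip(spike_times, spike_channels):
                if t <= tf:
                    feats[i, ch] += np.exp(-(tf - t) / tau)
        return feats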

11.
Front Neurosci ; 11: 682, 2017.
Article in English | MEDLINE | ID: mdl-29375284

ABSTRACT

Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch normalization, and Inception modules. This paper presents spiking equivalents of these operations, thereby allowing the conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10, and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2× reductions in operations compared to the original CNNs. This highlights the potential of SNNs, particularly when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
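
The conversion rests on the correspondence between an integrate-and-fire neuron's firing rate and a ReLU activation. The sketch below demonstrates it for a constant input using reset-by-subtraction, one mechanism commonly used to keep converted SNNs accurate; parameter values are illustrative.

    def if_rate(a, v_th=1.0, n_steps=1000):
        """Integrate-and-fire unit driven by a constant input a; its firing
        rate approximates max(a, 0)/v_th, the ReLU correspondence that
        underlies CNN-to-SNN conversion."""
        v, spikes = 0.0, 0
        for _ in range(n_steps):
            v += a
            if v >= v_th:
                v -= v_th      # reset by subtraction keeps residual charge
                spikes += 1
        return spikes / n_steps

    for a in (-0.2, 0.1, 0.4, 0.73):
        print(a, if_rate(a), max(a, 0.0))  # rate tracks the ReLU output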

12.
Front Neurosci ; 9: 347, 2015.
Article in English | MEDLINE | ID: mdl-26528113

ABSTRACT

Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is particularly challenging for spikes from mixed-signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus time histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility, and further tested in a word recognition task. Under low signal-to-noise ratio (SNR) conditions (SNR < -5 dB), the reconstructed audio gives better classification performance in this word recognition task than the original noisy input.
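
The intermediate representation for the offline reconstruction, the peri-stimulus time histogram (PSTH), can be sketched as follows; bin width and trial structure are illustrative assumptions.

    import numpy as np

    def psth(trial_spike_times, duration_s, bin_s=0.005):
        """Average spike rate per bin across repeated stimulus
        presentations; trial_spike_times is a list of per-trial arrays of
        spike times in seconds. Returns rate in spikes/s per bin."""
        edges = np.arange(0.0, duration_s + bin_s, bin_s)
        counts = sum(np.histogram(ts, bins=edges)[0]
                     for ts in trial_spike_times)
        return counts / (len(trial_spike_times) * bin_s)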

13.
Neural Comput ; 27(10): 2231-59, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26313599

ABSTRACT

This letter addresses the problem of separating two speakers from a single microphone recording. Three linear methods are tested for source separation, all of which operate directly on sound spectrograms: (1) eigenmode analysis of covariance difference to identify spectro-temporal features associated with large variance for one source and small variance for the other source; (2) maximum likelihood demixing in which the mixture is modeled as the sum of two Gaussian signals and maximum likelihood is used to identify the most likely sources; and (3) suppression-regression, in which autoregressive models are trained to reproduce one source and suppress the other. These linear approaches are tested on the problem of separating a known male from a known female speaker. The performance of these algorithms is assessed in terms of the residual error of estimated source spectrograms, waveform signal-to-noise ratio, and perceptual evaluation of speech quality scores. This work shows that the algorithms compare favorably to nonlinear approaches such as nonnegative sparse coding in terms of simplicity, performance, and suitability for real-time implementations, and they provide benchmark solutions for monaural source separation tasks.
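
Method (1) can be sketched directly: eigenvectors of the covariance difference between the two speakers' spectrogram frames give spectro-temporal modes with high variance for one source and low variance for the other. Shapes and parameters below are synthetic stand-ins.

    import numpy as np

    def covariance_difference_modes(frames_a, frames_b, k=8):
        """frames_* are (n_frames x n_freq) spectrogram rows for each
        speaker. Returns (A-selective, B-selective) mode matrices: the
        eigenvectors of C_a - C_b with the most positive eigenvalues favor
        speaker A, the most negative favor speaker B."""
        c_a = np.cov(frames_a, rowvar=False)
        c_b = np.cov(frames_b, rowvar=False)
        evals, evecs = np.linalg.eigh(c_a - c_b)  # ascending eigenvalues
        return evecs[:, -k:], evecs[:, :k]

    # Projecting mixture frames onto each mode set yields features that
    # emphasize one speaker's energy over the other's.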

14.
Front Neurosci ; 9: 222, 2015.
Article in English | MEDLINE | ID: mdl-26217169

ABSTRACT

Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. Ongoing work on the design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations, are studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision, down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
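
A sketch of the precision constraint, assuming uniform symmetric quantization (the abstract does not specify the quantizer): applying the quantizer inside the training loop is what lets the network adapt to the target bit width.

    import numpy as np

    def quantize(w, n_bits, w_max=None):
        """Uniform symmetric quantization of a weight array to n_bits,
        the kind of constraint a spiking hardware platform imposes."""
        w_max = np.max(np.abs(w)) if w_max is None else w_max
        levels = 2 ** (n_bits - 1) - 1       # e.g. n_bits=2 -> {-w, 0, +w}
        return np.round(w / w_max * levels) / levels * w_max

    # Precision-aware training step (illustrative): quantize after each
    # update so learned weights survive the platform's bit width.
    # w = quantize(w - lr * grad, n_bits=4)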

15.
Front Neurosci ; 9: 206, 2015.
Article in English | MEDLINE | ID: mdl-26106288

ABSTRACT

Spike-based neuromorphic sensors such as retinas and cochleas change the way in which the world is sampled. Instead of producing data sampled at a constant rate, these sensors output spikes that are asynchronous and event driven. The event-based nature of neuromorphic sensors implies a complete paradigm shift in current perception algorithms toward those that emphasize the importance of precise timing. The spikes produced by these sensors usually have a time resolution on the order of microseconds. This high temporal resolution is a crucial factor in learning tasks. It is also widely exploited in biological neural networks: sound localization, for instance, relies on detecting time lags between the two ears and, in the barn owl, reaches a temporal resolution of 5 µs. Currently available neuromorphic computation platforms such as SpiNNaker often limit their users to a time resolution on the order of milliseconds that is not compatible with the asynchronous outputs of neuromorphic sensors. To overcome these limitations and allow for the exploration of new types of neuromorphic computing architectures, we introduce a novel software framework on the SpiNNaker platform. This framework allows for simulations of spiking networks and plasticity mechanisms using a completely asynchronous and event-based scheme running with a microsecond time resolution. Results on two example networks using this new implementation are presented.
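
A minimal sketch of microsecond-resolution event-driven processing with a priority queue: neuron state is updated only when an event arrives, with the leak applied analytically over the elapsed interval rather than by ticking a fixed clock. This illustrates the scheme's flavor, not the SpiNNaker framework's API.

    import heapq
    import math

    def run_events(events, weights, tau_us=20_000.0, v_th=1.0):
        """events: list of (time_us, neuron_id); weights: dict mapping
        neuron_id to its input weight. Returns output spikes as
        (time_us, neuron_id) with microsecond timestamps."""
        v, last_t, out = {}, {}, []
        heapq.heapify(events)
        while events:
            t, n = heapq.heappop(events)
            dt = t - last_t.get(n, t)
            # Analytic exponential decay over the elapsed interval.
            v[n] = v.get(n, 0.0) * math.exp(-dt / tau_us) + weights[n]
            last_t[n] = t
            if v[n] >= v_th:
                v[n] = 0.0
                out.append((t, n))
        return out

    print(run_events([(120, 0), (180, 0)], {0: 0.6}))  # -> [(180, 0)]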

16.
IEEE Trans Biomed Circuits Syst ; 9(2): 207-16, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25879969

ABSTRACT

Optical flow sensors have been a long-running theme in neuromorphic vision sensors, which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels with local gain control that adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using fewer than 5 k instruction cycles (12 instructions per pixel) per frame. At a 1 kHz sample rate, the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
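
To first order, image-interpolation-style motion estimation reduces to a two-parameter least-squares solve on image gradients; a minimal 2D variant is sketched below (the on-sensor DSP version uses fixed-point arithmetic and its own border handling, omitted here).

    import numpy as np

    def global_shift(prev, curr):
        """Estimate global translation (u, v) in pixels/frame such that
        curr(x, y) ≈ prev(x - u, y - v). First-order expansion gives
        curr - prev ≈ -u*Gx - v*Gy, solved here by least squares."""
        gx = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0
        gy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0
        A = np.column_stack([gx.ravel(), gy.ravel()])
        dt = (curr - prev).ravel()
        (a, b), *_ = np.linalg.lstsq(A, dt, rcond=None)
        return -a, -b   # multiply by the 1 kHz sample rate for pixels/s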


Subject(s)
Image Interpretation, Computer-Assisted , Motion , Silicon/chemistry , Algorithms , Biomimetics , Models, Neurological , Retina , Vision, Ocular
17.
Neural Comput ; 27(4): 845-97, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25734494

ABSTRACT

This letter presents a spike-based model that employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model uses sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems, and its performance is compared against that achieved using support vector machine and extreme learning machine techniques. Our proposed method attains comparable performance while using 10% to 50% fewer computational resources than the other reported techniques.
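
The two-stage dendritic computation can be sketched as follows, with a squaring subunit nonlinearity and random sparse binary connectivity as illustrative choices; the model's actual nonlinearity and the margin-based structural learning rule are beyond this sketch.

    import numpy as np

    def dendritic_neuron(x, branch_masks):
        """Each branch sees a sparse subset of the binary input, applies a
        nonlinearity (squaring here), and the soma sums the branch outputs;
        thresholding the sum gives the binary class decision."""
        return sum(float(x[m].sum()) ** 2 for m in branch_masks)

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=100)                    # binary input pattern
    masks = [rng.choice(100, size=8, replace=False) for _ in range(10)]
    print(dendritic_neuron(x, masks) > 100.0)           # toy decision threshold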


Subject(s)
Action Potentials/physiology , Dendrites/physiology , Models, Neurological , Neurons/cytology , Support Vector Machine , Synapses/physiology , Algorithms , Animals , Nonlinear Dynamics
18.
IEEE Trans Biomed Circuits Syst ; 8(4): 453-64, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24216772

ABSTRACT

This paper proposes an integrated event-based binaural silicon cochlea system aimed at efficient spatial audition and auditory scene analysis. The cochlea chip has a matched pair of digitally-calibrated 64-stage cascaded analog second-order filter banks with 512 pulse-frequency modulated (PFM) address-event representation (AER) outputs. The quality factors (Qs) of the channels are individually adjusted by local DACs. The 2P4M 0.35 µm CMOS chip consumes an average power of 14 mW including its integrated microphone preamplifiers and biasing circuits. Typical speech data rates are 10 k to 100 k events per second (eps) with peak output rates of 10 Meps. The event timing jitter is 2 µs for a 250 mVpp input. It is shown that the computational cost of an event-driven source localization application can be up to 40 times lower than that of a conventional cross-correlation approach.
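
The event-driven efficiency claim can be illustrated with a timestamp-based interaural time difference (ITD) estimator: instead of cross-correlating waveforms sampled at a fixed rate, it histograms time differences between nearby left/right spikes, so the work scales with the event rate (10 k to 100 k eps for speech). Lag window and bin width below are illustrative.

    import numpy as np

    def event_itd_us(left_ts, right_ts, max_lag_us=800.0, bin_us=20.0):
        """left_ts/right_ts: sorted spike timestamps in µs from each ear.
        Returns the ITD estimate (right minus left) at the histogram peak."""
        n_bins = int(2 * max_lag_us / bin_us)
        hist = np.zeros(n_bins)
        j = 0
        for t in left_ts:
            while j < len(right_ts) and right_ts[j] < t - max_lag_us:
                j += 1                       # skip right spikes too early
            k = j
            while k < len(right_ts) and right_ts[k] <= t + max_lag_us:
                d = right_ts[k] - t
                hist[min(int((d + max_lag_us) / bin_us), n_bins - 1)] += 1
                k += 1
        return (np.argmax(hist) + 0.5) * bin_us - max_lag_us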


Subject(s)
Equipment Design , Algorithms , Hearing , Signal Processing, Computer-Assisted , Silicon/chemistry
19.
Front Neurosci ; 8: 428, 2014.
Article in English | MEDLINE | ID: mdl-25653579

ABSTRACT

The field of neuromorphic silicon synapse circuits is revisited, and a parsimonious mathematical framework is proposed that describes the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used to determine the input-output transfer function of the fourth-order topology.
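
For readers unfamiliar with the namesake equation: per the BCF literature, a Bernoulli cell's capacitor-transistor pair yields dynamics of the Bernoulli ODE form, which is useful because that form linearizes under a power substitution. Only the textbook form is shown here; the paper's BC operators generalize which device configurations produce it.

    \dot{y}(t) + P(t)\,y(t) = Q(t)\,y^{n}(t), \qquad n \neq 1,

    v(t) = y(t)^{1-n} \;\Longrightarrow\; \dot{v}(t) + (1-n)\,P(t)\,v(t) = (1-n)\,Q(t).

The substitution turns the nonlinear state equation into a linear one, which is what makes systematic, externally linear log-domain analysis tractable.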

20.
Front Neurosci ; 7: 178, 2013.
Article in English | MEDLINE | ID: mdl-24115919

ABSTRACT

Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for integrate-and-fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits, with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation, and rotation, and that the degradation of recognition performance from using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both the silicon retina and cochlea outputs, we show that the system can be biased to select the correct digit from otherwise ambiguous input.
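
The offline-to-spiking mapping can be pictured as replacing each trained unit's analog activation with a spike rate; a minimal Poisson sketch is below. The Siegert approximation itself substitutes the integrate-and-fire rate function for the sigmoid during training, which this sketch does not reproduce, and the 300 Hz ceiling is an arbitrary choice.

    import numpy as np

    def poisson_spikes(rate_hz, duration_s, dt_s, rng):
        """Boolean spike train (one entry per dt_s) whose mean rate encodes
        a unit's analog activation, letting an offline-trained network run
        event-driven at inference time."""
        return rng.random(int(duration_s / dt_s)) < rate_hz * dt_s

    rng = np.random.default_rng(0)
    activation = 0.8                       # sigmoid output of a trained unit
    spikes = poisson_spikes(activation * 300.0, 0.1, 1e-3, rng)
    print(spikes.sum(), "spikes in 100 ms")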
