Results 1 - 20 of 21
1.
IEEE Trans Biomed Circuits Syst ; 18(2): 423-437, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37956014

ABSTRACT

Developing precise artificial retinas is crucial because they hold the potential to restore vision, improve visual prosthetics, and enhance computer vision systems. Emulating the luminance and contrast adaptation features of the retina is essential to improve visual perception and efficiency and to provide the user with a realistic representation of the environment. In this article, we introduce an artificial retina model that leverages its potent adaptation to luminance and contrast to enhance vision sensing and information processing. The model realizes both tonic and phasic cells in a simple manner. We have implemented the retina model using 0.18 µm process technology and validated the accuracy of the hardware implementation through circuit simulation that closely matches the software retina model. Additionally, we have characterized a single pixel fabricated using the same 0.18 µm process. This pixel demonstrates an 87.7% ratio of variance with the temporal software model and operates with a power consumption of 369 nW.


Subjects
Retina, Silicon, Ocular Vision, Artificial Intelligence, Computer Simulation
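The luminance adaptation that record 1 emulates in hardware can be illustrated with a toy divisive-normalisation step, in which each pixel's response is scaled by a local mean intensity so that the output codes contrast rather than absolute level. The function name and half-saturation constant below are illustrative assumptions, not the paper's circuit model.

```python
# Toy divisive luminance adaptation (illustrative, not the paper's model):
# responses are normalised by the local mean, so a 10x brighter scene
# still lands in a similar output range.

def adapt(pixels, half_saturation=1.0):
    mean = sum(pixels) / len(pixels)          # local luminance estimate
    return [p / (mean + half_saturation) for p in pixels]

dim = adapt([1.0, 2.0, 3.0])        # mean luminance 2.0
bright = adapt([10.0, 20.0, 30.0])  # mean luminance 20.0: 10x brighter
```

Despite the tenfold difference in input level, the two outputs span nearly the same range, which is the essence of the adaptation behaviour described in the abstract.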
2.
IEEE Trans Biomed Circuits Syst ; 17(2): 192-201, 2023 04.
Article in English | MEDLINE | ID: mdl-37022890

ABSTRACT

Healthcare technology is evolving from a conventional hub-based system to a personalized healthcare system, accelerated by rapid advancements in smart fitness trackers. Modern fitness trackers are mostly lightweight wearables that can monitor the user's health round the clock, supporting ubiquitous connectivity and real-time tracking. However, prolonged skin contact with wearable trackers can cause discomfort, and they are susceptible to false results and breaches of privacy due to the exchange of the user's personal data over the internet. We propose tinyRadar, a novel on-edge millimeter-wave (mmWave) radar-based fitness tracker that addresses these discomfort and privacy risks in a small form factor, making it an ideal choice for a smart home setting. This work uses the Texas Instruments IWR1843 mmWave radar board to recognize the exercise type and measure its repetition count, using signal processing and a Convolutional Neural Network (CNN) implemented on board. The radar board is interfaced with an ESP32 to transfer the results to the user's smartphone over Bluetooth Low Energy (BLE). Our dataset comprises eight exercises collected from fourteen human subjects. Data from ten subjects were used to train an 8-bit quantized CNN model. tinyRadar provides real-time repetition counts with 96% average accuracy and has an overall subject-independent classification accuracy of 97% when evaluated on the remaining four subjects. The CNN has a memory utilization of 11.36 KB, of which only 1.46 KB is for the model parameters (weights and biases) and the remainder is for output activations.


Subjects
Exercise, Fitness Trackers, Humans, Software, Signal Processing (Computer-Assisted), Neural Networks (Computer)
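The 8-bit quantized CNN in record 2 relies on mapping float weights to int8 so that parameters fit in a few kilobytes of on-board memory. A minimal sketch of symmetric per-tensor quantization follows; the scheme and numbers are a generic illustration, not tinyRadar's actual quantizer.

```python
# Symmetric int8 weight quantization (generic sketch): one scale factor
# maps floats into [-128, 127]; storage drops from 4 bytes to 1 per weight.

def quantize_int8(weights):
    """Map float weights to int8 with a single symmetric scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]     # toy weight values
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Inference then runs on the int8 values, trading a small rounding error for a 4x reduction in parameter memory.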
3.
Nat Nanotechnol ; 18(4): 380-389, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36690737

ABSTRACT

Neuromorphic cameras are a new class of dynamic-vision-inspired sensors that encode the rate of change of intensity as events. They can asynchronously record intensity changes as spikes, independently of the other pixels in the receptive field, resulting in sparse measurements. This sparse event recording makes them ideal for imaging dynamic processes, such as the stochastic emission of isolated single molecules. Here we show the application of neuromorphic detection to localize nanoscale fluorescent objects below the diffraction limit, with a precision below 20 nm. We demonstrate a combination of neuromorphic detection with segmentation and deep learning approaches to localize and track fluorescent particles below 50 nm with millisecond temporal resolution. Furthermore, we show that combining information from events resulting from the rate of change of intensities improves the classical limit of centroid estimation of single fluorescent objects by nearly a factor of two. Additionally, we validate that using post-processed data from the neuromorphic detector at defined windows of temporal integration allows a better evaluation of the fractalized diffusion of single-particle trajectories. Our observations and analysis are useful for event sensing by nonlinear neuromorphic devices to improve real-time particle localization approaches at the nanoscale.

4.
IEEE Sens J ; 22(19): 18437-18445, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36416744

ABSTRACT

The development of a cost-efficient device to rapidly detect pandemic viruses is paramount. Hence, an innovative and scalable synthesis of metal nanoparticles, followed by their use for rapid detection of severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), is reported in this work. The simple synthesis of metal nanoparticles utilizing tin as a solid-state reusable reducing agent is used for SARS-CoV-2 ribonucleic acid (RNA) detection. Moreover, the solid-state reduction process occurs faster and leads to the enhanced formation of silver and gold nanoparticles (AuNPs) with voltage. By adding tin as a solid-state reducing agent with the precursor, the nanoparticles are formed within 30 s. This synthesis method can be easily scaled up for a commercially viable process to obtain metal nanoparticles of different sizes. This is the first report of the use of tin as a reusable solid-state reducing agent for metal nanoparticle synthesis. An electronic device, consisting of AuNPs functionalized with a deoxyribonucleic acid (DNA)-based aptamer, can detect SARS-CoV-2 RNA in less than 5 min. With the increase in SARS-CoV-2 variants, such as Delta and Omicron, the detection device could be used to identify the nucleic acids of COVID-19 variants by modifying the aptamer sequence. The reported work overcomes the drawbacks of complex instrumentation, trained labor, and long turnaround times.

5.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2676-2685, 2022 06.
Article in English | MEDLINE | ID: mdl-34125686

ABSTRACT

The human brain has evolved to perform complex and computationally expensive cognitive tasks, such as audio-visual perception and object detection, with ease. For instance, the brain can recognize speech in different dialects and perform other cognitive tasks, such as attention, memory, and motor control, with just 20 W of power consumption. Taking inspiration from neural systems, we propose a low-power neuromorphic hardware architecture to perform classification on temporal data at the edge. The proposed architecture uses a neuromorphic cochlea model for feature extraction and a reservoir computing (RC) framework as a classifier. In the proposed hardware architecture, the RC framework is modified for on-the-fly generation of reservoir connectivity, along with binary feedforward and reservoir weights. Also, a large reservoir is split into multiple small reservoirs for efficient use of hardware resources. These modifications reduce the computational and memory resources required, thereby resulting in a lower power budget. The proposed classifier is validated for speech and human activity recognition (HAR) tasks. We have prototyped our hardware architecture using Intel's Cyclone 10 low-power series field-programmable gate array (FPGA), consuming only 4790 logic elements (LEs) and 34.9 kB of memory, making it a perfect candidate for edge computing applications. Moreover, we have implemented a complete system for speech recognition with the feature extraction block (cochlea model) and the proposed classifier, utilizing 15,532 LEs and 38.4 kB of memory. By using the proposed idea of multiple small reservoirs along with on-the-fly generation of binary reservoir weights, our architecture can reduce the power consumption and memory requirement by an order of magnitude compared to existing FPGA models for speech recognition tasks of similar complexity.


Subjects
Computers, Neural Networks (Computer), Brain
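The memory saving in record 5 comes from not storing the reservoir weight matrix at all: binary weights are regenerated on the fly from a fixed pseudo-random seed each timestep, so only the seed needs to be kept. The sketch below illustrates that idea in software; the reservoir size, seed, and ReLU-style update are assumptions for illustration, not the paper's RTL.

```python
# On-the-fly binary reservoir weights (illustrative sketch): the N*N
# {+1,-1} matrix is regenerated from one stored seed every step, so the
# "weight memory" is a single integer rather than N*N values.
import random

N = 8          # neurons in one small reservoir (toy size)
SEED = 42      # the only stored connectivity state

def binary_weights(seed, n):
    rng = random.Random(seed)
    return [[1 if rng.random() < 0.5 else -1 for _ in range(n)]
            for _ in range(n)]

def reservoir_step(state, inp, seed):
    W = binary_weights(seed, len(state))   # regenerated, never stored
    return [max(0.0, sum(w * s for w, s in zip(row, state)) + u)
            for row, u in zip(W, inp)]

state = [0.0] * N
x1 = reservoir_step(state, [1.0] * N, SEED)
x2 = reservoir_step(state, [1.0] * N, SEED)  # same seed, same dynamics
```

Because the same seed always yields the same matrix, the dynamics are reproducible even though no weights are stored, which is what makes the hardware trick valid.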
6.
Front Neurosci ; 15: 699003, 2021.
Article in English | MEDLINE | ID: mdl-34393712

ABSTRACT

Event-based cameras are bio-inspired novel sensors that asynchronously record changes in illumination in the form of events. This principle results in significant advantages over conventional cameras, such as low power utilization, high dynamic range, and no motion blur. Moreover, by design, such cameras encode only the relative motion between the scene and the sensor, and not the static background, yielding a very sparse data structure. In this paper, we leverage these advantages of an event camera for a critical vision application: video anomaly detection. We propose an anomaly detection solution in the event domain with a conditional Generative Adversarial Network (cGAN) made up of sparse submanifold convolution layers. Video analytics tasks such as anomaly detection depend on the motion history at each pixel. To enable this, we also put forward a generic unsupervised deep learning solution to learn a novel memory surface known as the Deep Learning (DL) memory surface. The DL memory surface encodes the temporal information readily available from these sensors while retaining the sparsity of event data. Since there is no existing dataset for anomaly detection in the event domain, we also provide an anomaly detection event dataset with a set of anomalies. We empirically validate our anomaly detection architecture, composed of sparse convolutional layers, on both the proposed dataset and an online dataset. Careful analysis of the anomaly detection network reveals that the presented method results in a massive reduction in computational complexity with good performance compared to previous state-of-the-art conventional frame-based anomaly detection networks.
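The "memory surface" idea above — a per-pixel summary of motion history built from asynchronous events — can be sketched with a fixed exponential decay. Record 6 learns this surface with a deep network; the toy below uses a hand-set decay constant and sensor size purely for illustration.

```python
# Toy event memory surface: each (x, y, t) event leaves an exponentially
# decaying trace at its pixel; recent events dominate, untouched pixels
# stay zero, preserving the sparsity of the event stream.
import math

W, H, TAU = 4, 4, 10.0   # assumed toy sensor size and decay constant

def memory_surface(events, t_now):
    """events: list of (x, y, t). Returns decayed recency per pixel."""
    surface = [[0.0] * W for _ in range(H)]
    for x, y, t in events:
        # the most recent event at a pixel dominates through the decay
        surface[y][x] = max(surface[y][x], math.exp(-(t_now - t) / TAU))
    return surface

events = [(0, 0, 0.0), (1, 2, 5.0), (0, 0, 9.0)]
s = memory_surface(events, t_now=10.0)
```

The surface is dense in time information but sparse in space: only pixels that saw events carry non-zero values, which is the property the paper's learned surface is designed to retain.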

7.
IEEE Trans Biomed Circuits Syst ; 15(3): 580-594, 2021 06.
Article in English | MEDLINE | ID: mdl-34133287

ABSTRACT

Computing and attending to salient regions of a visual scene is an innate and necessary preprocessing step for both biological and engineered systems performing high-level visual tasks including object detection, tracking, and classification. Computational bandwidth and speed are improved by preferentially devoting computational resources to salient regions of the visual field. The human brain computes saliency effortlessly, but modeling this task in engineered systems is challenging. We first present a neuromorphic dynamic saliency model, which is bottom-up, feed-forward, and based on the notion of proto-objects with neurophysiological spatio-temporal features requiring no training. Our neuromorphic model outperforms state-of-the-art dynamic visual saliency models in predicting human eye fixations (i.e., ground truth saliency). Secondly, we present a hybrid FPGA implementation of the model for real-time applications, capable of processing 112×84 resolution frames at 18.71 Hz running at a 100 MHz clock rate, a 23.77× speedup over the software implementation. Additionally, our fixed-point model of the FPGA implementation yields comparable results to the software implementation.


Subjects
Ocular Fixation, Software, Humans
8.
Neural Netw ; 139: 45-63, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33677378

ABSTRACT

The mammalian spatial navigation system is characterized by an initial divergence of internal representations, with disparate classes of neurons responding to distinct features including location, speed, borders, and head direction; an ensuing convergence finally enables navigation and path integration. Here, we report the algorithmic and hardware implementation of biomimetic neural structures encompassing a feed-forward trimodular, multi-layer architecture representing grid-cell, place-cell, and decoding modules for navigation. The grid-cell module comprised neurons that fired in a grid-like pattern and was built of distinct layers that constituted the dorsoventral span of the medial entorhinal cortex. Each layer was built as an independent continuous attractor network with distinct grid-field spatial scales. The place-cell module comprised neurons that fired at one or a few spatial locations, organized into different clusters based on convergent modular inputs from different grid-cell layers, replicating the gradient in place-field size along the hippocampal dorsoventral axis. The decoding module, a two-layer neural network that constitutes the convergence of the divergent representations in preceding modules, received inputs from the place-cell module and provided specific coordinates of the navigating object. After vital design optimizations involving all modules, we implemented the trimodular structure on a Zynq UltraScale+ field-programmable gate array silicon chip and demonstrated its capacity to precisely estimate the navigational trajectory with minimal overall resource consumption, involving a mere 2.92% look-up table utilization. Our implementation of a biomimetic, digital spatial navigation system is stable, reliable, reconfigurable, and real-time, with an execution time of about 32 s for 100k input samples (in contrast to 40 minutes on an Intel Core i7-7700 CPU with 8 cores clocking at 3.60 GHz), and can thus be deployed for autonomous robotic navigation without requiring additional sensors.


Subjects
Biomimetics/methods, Grid Cells/physiology, Neural Networks (Computer), Place Cells/physiology, Spatial Navigation/physiology, Animals, Entorhinal Cortex/cytology, Entorhinal Cortex/physiology, Hippocampus/cytology, Hippocampus/physiology, Models (Neurological), Neurons/physiology, Rats
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 3403-3406, 2020 07.
Article in English | MEDLINE | ID: mdl-33018734

ABSTRACT

Optical recording of genetically encoded calcium indicators (GECIs) allows neuroscientists to study the activity of genetically labeled neuron populations, but current tools lack resolution and stability and are often too invasive. Here we present the design concepts, prototypes, and preliminary measurement results of a super-miniaturized wireless image sensor built using a 32 nm Silicon-on-Insulator (SOI) process. The SOI process is well suited to wireless applications, and we can further thin the substrate to reduce the overall device thickness to ~25 µm and operate the pixels using back-side illumination. The proposed device is 300 µm × 300 µm. Our prototype is built on a 3 × 3 mm die.


Subjects
Brain, Silicon, Routine Diagnostic Tests, Lighting, Neurons
10.
IEEE Trans Circuits Syst I Regul Pap ; 67(6): 1803-1814, 2020 Jun.
Article in English | MEDLINE | ID: mdl-36845010

ABSTRACT

Digital cameras expose and read out all pixels in accordance with a global sample clock. This rigid global control of exposure and sampling is problematic for capturing scenes with large variance in brightness and motion, and may cause regions of motion blur and under- and overexposure. To address these issues, we developed a CMOS imaging system that automatically adjusts each pixel's exposure and sampling rate to fit local motion and brightness. This system consists of an image sensor with pixel-addressable exposure configurability in combination with a real-time, per-pixel exposure controller. It operates in a closed loop to sample, detect, and optimize each pixel's exposure and sampling rate for optimal acquisition. Per-pixel exposure control is implemented using all-integrated electronics without external optical modulation. This reduces system complexity and power consumption compared to existing solutions. Implemented using a standard 130 nm CMOS process, the chip has 256 × 256 pixels and consumes 7.31 mW. To evaluate performance, we used this system to capture scenes with complex lighting and motion conditions that would lead to loss of information for globally exposed cameras. These results demonstrate the advantage of pixel-wise adaptive imaging for a range of computer vision tasks such as segmentation, motion estimation, and object recognition.
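The closed-loop, per-pixel control described in record 10 can be sketched as a simple bang-bang controller: scale a pixel's exposure up when its sample reads too dark and down when it nears saturation. The thresholds, gain steps, and limits below are illustrative assumptions, not the chip's actual control law.

```python
# One-pixel exposure control step (illustrative sketch): halve the
# exposure near saturation, double it when underexposed, otherwise hold.

LOW, HIGH = 32, 224          # assumed 8-bit under/over-exposure thresholds

def update_exposure(exposure, sample):
    """One closed-loop control step for a single pixel."""
    if sample >= HIGH:               # nearly saturated: halve exposure
        return max(exposure / 2.0, 1.0)
    if sample <= LOW:                # underexposed: double exposure
        return min(exposure * 2.0, 1024.0)
    return exposure                  # well exposed: keep the setting

e_bright = update_exposure(64.0, 250)  # a bright pixel backs off
e_dark = update_exposure(64.0, 10)     # a dark pixel integrates longer
```

Running this per pixel, each with its own state, is what lets neighbouring pixels settle on very different exposures within the same frame.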

11.
Sci Rep ; 9(1): 15604, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31666557

ABSTRACT

Neuromorphic architectures have become essential building blocks for next-generation computational systems, where intelligence is embedded directly onto low-power, small-area, and computationally efficient hardware devices. In such devices, the realization of neural algorithms requires the storage of weights in digital memories, which is a bottleneck in terms of power and area. We propose a biologically inspired low-power, hybrid architectural framework for wake-up systems. This architecture utilizes our novel high-performance, ultra-low-power molybdenum disulphide (MoS2) based two-dimensional synaptic memtransistor as an analogue memory. Furthermore, it exploits random device mismatches to implement the population coding scheme. Power consumption per CMOS neuron block was found to be 3 nW in the 65 nm process technology, while the energy consumption per cycle was 0.3 pJ for potentiation and 20 pJ for depression cycles of the synaptic device. The proposed framework was demonstrated for classification and regression tasks, using both off-chip and simplified on-chip sign-based learning techniques.

12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 2740-2743, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946461

ABSTRACT

Recent advances in unsupervised and generative models in deep learning have shown promise for application in biomedical signal processing. In this work, we present a portable, resource-constrained ultrasound (US) system trained using a Variational Autoencoder (VAE) network, which performs compressive sensing on pre-beamformed RF signals. The encoder network compresses the RF data, which is then transmitted to the cloud. At the cloud, the decoder reconstructs the ultrasound image, which can be used for inferencing. The compression is done with undersampling ratios of 1/2, 1/3, 1/5, and 1/10 without significant loss of resolution. We also compared the model with a state-of-the-art compressive-sensing reconstruction algorithm, and it shows significant improvement in terms of PSNR and MSE. The innovation in this approach resides in training with binary weights at the encoder, which shows its feasibility for hardware implementation at the edge. In the future, we plan to include our field-programmable gate array (FPGA) based design directly interfaced with sensors for real-time analysis of ultrasound images during medical procedures.


Subjects
Data Compression, Deep Learning, Algorithms, Signal Processing (Computer-Assisted), Ultrasonography
13.
Front Neurosci ; 12: 891, 2018.
Article in English | MEDLINE | ID: mdl-30559644

ABSTRACT

Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two complementary goals: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications such as the porting of deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches used by them, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on the discussion of large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

14.
IEEE Trans Neural Syst Rehabil Eng ; 26(6): 1121-1130, 2018 06.
Article in English | MEDLINE | ID: mdl-29877836

ABSTRACT

We propose an unsupervised compressed sensing (CS)-based framework to compress, recover, and cluster neural action potentials. This framework can be easily integrated into high-density multi-electrode neural recording VLSI systems. Embedding spectral clustering and group structures in dictionary learning, we extend the proposed framework to unsupervised spike sorting without prior label information. Additionally, we incorporate group sparsity concepts in the dictionary learning to enable the framework for multi-channel neural recordings, as in tetrodes. To further improve spike sorting success rates in the CS framework, we embed template matching in sparse coding to jointly predict clusters of spikes. Our experimental results demonstrate that the proposed CS-based framework can achieve a high compression ratio (8:1 to 20:1) with high-quality reconstruction performance (>8 dB) and high spike sorting accuracy (>90%).


Subjects
Action Potentials/physiology, Algorithms, Neurons/physiology, Cluster Analysis, Data Compression, Electrodes, Humans, Machine Learning, Microcomputers
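The compression side of the CS framework in record 14 amounts to projecting an n-sample spike window onto m random measurement vectors with m much smaller than n. The sketch below shows only that projection step with a seeded ±1 sensing matrix at an 8:1 ratio; the dictionary learning, reconstruction, and sorting stages are omitted, and all names and sizes are illustrative.

```python
# Compressed-sensing measurement step (toy sketch): y = Phi * x with a
# seeded random +/-1 sensing matrix, shrinking 64 samples to 8.
import random

def sensing_matrix(m, n, seed=0):
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

def compress(x, phi):
    """Project the signal x onto each row of the sensing matrix."""
    return [sum(p * v for p, v in zip(row, x)) for row in phi]

n, m = 64, 8                              # 8:1 compression ratio
spike = [float(i % 5) for i in range(n)]  # stand-in spike waveform
phi = sensing_matrix(m, n)
y = compress(spike, phi)                  # 8 measurements to transmit
```

Because the matrix is seeded, the recording front end and the reconstruction side can share the same Phi without ever transmitting it.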
15.
Front Neurosci ; 12: 198, 2018.
Article in English | MEDLINE | ID: mdl-29692700

ABSTRACT

This paper presents a digital implementation of the Cascade of Asymmetric Resonators with Fast-Acting Compression (CAR-FAC) cochlear model. The CAR part simulates the basilar membrane's (BM) response to sound. The FAC part models the outer hair cell (OHC), the inner hair cell (IHC), and the medial olivocochlear efferent system functions. The FAC feeds back to the CAR by moving the poles and zeros of the CAR resonators automatically. We have implemented a 70-section, 44.1 kHz sampling rate CAR-FAC system on an Altera Cyclone V Field Programmable Gate Array (FPGA) with 18% ALM utilization by using time-multiplexing and pipeline parallelizing techniques and present measurement results here. The fully digital reconfigurable CAR-FAC system is stable, scalable, easy to use, and provides an excellent input stage to more complex machine hearing tasks such as sound localization, sound segregation, speech recognition, and so on.
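The "cascade of resonators" structure in record 15 passes sound through a chain of second-order filters, each section tapping one place along the simulated basilar membrane. The loose sketch below shows that cascade topology with arbitrary toy coefficients; it is not CAR-FAC's actual filter design, which additionally moves poles and zeros under feedback.

```python
# Cascade of second-order recursive filters (toy coefficients, not
# CAR-FAC's): each section's output feeds the next, as along the cochlea.

def resonator(samples, a1, a2, gain):
    """One cascade section: a direct-form second-order recursive filter."""
    y1 = y2 = 0.0
    out = []
    for x in samples:
        y = gain * x - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def cascade(samples, sections):
    for a1, a2, gain in sections:
        samples = resonator(samples, a1, a2, gain)  # chain the sections
    return samples

impulse = [1.0] + [0.0] * 7
sections = [(-1.2, 0.81, 0.5), (-0.9, 0.64, 0.5)]  # assumed coefficients
response = cascade(impulse, sections)
```

In the hardware version, one physical filter is time-multiplexed across all 70 sections, which is how the design fits in 18% of the FPGA's ALMs.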

16.
Front Neurosci ; 12: 213, 2018.
Article in English | MEDLINE | ID: mdl-29692702

ABSTRACT

This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 µW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
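The neuron model the simulator in record 16 runs at scale, the leaky integrate-and-fire (LIF) neuron, has a very small per-step update, which is what makes billions of neurons tractable. A minimal sketch follows; the leak factor, threshold, and input current are arbitrary toy values, not the paper's parameters.

```python
# Minimal leaky integrate-and-fire update: leak toward rest, integrate
# input, spike and reset on crossing threshold. Constants are toy values.

V_REST, V_TH, LEAK = 0.0, 1.0, 0.9   # assumed model constants

def lif_step(v, i_in):
    """One timestep for one neuron; returns (new_voltage, spiked)."""
    v = V_REST + LEAK * (v - V_REST) + i_in
    if v >= V_TH:
        return V_REST, 1             # spike, then reset to rest
    return v, 0

v, spikes = 0.0, 0
for _ in range(5):                   # constant drive for five timesteps
    v, s = lif_step(v, 0.3)
    spikes += s
```

With a constant drive of 0.3 the membrane charges over four steps, fires once, and begins charging again, illustrating the integrate-and-fire cycle each simulated neuron repeats in real time.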

18.
IEEE Trans Biomed Circuits Syst ; 11(3): 574-584, 2017 06.
Article in English | MEDLINE | ID: mdl-28436888

ABSTRACT

We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a resource-efficient means of performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.


Subjects
Neural Networks (Computer), Automated Pattern Recognition, Factual Databases, Neurons
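The time-multiplexing scheme in record 18 (64 logical neurons served by one physical neuron) can be sketched as a single update function swept over a small state memory each hardware cycle. The update rule and sizes below are illustrative stand-ins, not the NEF core's actual arithmetic.

```python
# Time-multiplexed neural core (illustrative sketch): one shared update
# unit is applied in turn to 64 logical neuron states held in memory.

N_LOGICAL = 64

def neuron_update(state, inp):
    """The single 'physical' neuron: a toy leaky accumulator."""
    return 0.5 * state + inp

def tick(states, inputs):
    # one hardware cycle sweeps the state memory through the shared unit
    return [neuron_update(s, u) for s, u in zip(states, inputs)]

states = [0.0] * N_LOGICAL
states = tick(states, [1.0] * N_LOGICAL)
states = tick(states, [1.0] * N_LOGICAL)
```

The trade is silicon area for cycles: one arithmetic unit serves all 64 neurons, so scaling up means adding identical cores rather than wider logic.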
19.
Front Neurosci ; 10: 104, 2016.
Article in English | MEDLINE | ID: mdl-27047326

ABSTRACT

In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
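The Bayesian recursive equation that BEAST solves online alternates a predict step through the transition model with an update step weighting by the observation likelihood. The sketch below runs that recursion on a discrete 1-D grid; the transition and observation models are toy stand-ins, not the learned models in the paper.

```python
# Recursive Bayesian tracking on a 5-state 1-D grid (toy models):
# predict through a stay/step transition, then reweight by a noisy
# observation likelihood and renormalise.

STATES = 5

def predict(belief):
    """Transition: stay with p=0.6, step to each neighbour with p=0.2."""
    out = [0.0] * STATES
    for i, b in enumerate(belief):
        out[i] += 0.6 * b
        out[max(i - 1, 0)] += 0.2 * b              # reflect at the edges
        out[min(i + 1, STATES - 1)] += 0.2 * b
    return out

def update(belief, obs, p_correct=0.8):
    """Observation: sensor reports the true state with p=0.8."""
    post = [b * (p_correct if i == obs else (1 - p_correct) / (STATES - 1))
            for i, b in enumerate(belief)]
    z = sum(post)
    return [p / z for p in post]

belief = [1.0 / STATES] * STATES                   # uniform prior
for obs in (2, 2, 3):
    belief = update(predict(belief), obs)
estimate = max(range(STATES), key=lambda i: belief[i])
```

After two observations at position 2 and one at 3, the posterior mass follows the target to state 3, which is the online tracking behaviour the hardware realises with stochastic bit streams.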

20.
Front Neurosci ; 9: 309, 2015.
Article in English | MEDLINE | ID: mdl-26388721

ABSTRACT

The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). 
This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition.
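The temporal-coherence rule at the heart of the segregation algorithm above assigns channels whose envelopes correlate strongly and positively with an attended channel to the foreground stream, and the rest to the background. The toy below applies that rule directly; the channel names, envelopes, and 0.5 threshold are illustrative assumptions, not values from the paper.

```python
# Temporal-coherence stream assignment (toy): keep channels whose
# envelope correlates positively with the attended channel's envelope.

def correlation(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

envelopes = {
    "ch0": [0.0, 1.0, 0.0, 1.0],   # coherent with the attended channel
    "ch1": [1.0, 0.0, 1.0, 0.0],   # anti-correlated: a different stream
    "ch2": [0.1, 0.9, 0.2, 1.0],   # noisy but positively correlated
}
attended = [0.0, 1.0, 0.0, 1.0]
mask = {ch: correlation(env, attended) > 0.5    # assumed threshold
        for ch, env in envelopes.items()}
```

The resulting binary mask plays the role of the mask-generation stage in the FPGA pipeline, gating which channels are passed on to reconstruct the target sound.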
