Results 1 - 4 of 4
1.
J Acoust Soc Am; 140(2): 1130, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27586743

ABSTRACT

Magnuson [J. Acoust. Soc. Am. 137, 1481-1492 (2015)] makes claims for Interactive Activation (IA) models and against Adaptive Resonance Theory (ART) models of speech perception. Magnuson also presents simulations that claim to show that the TRACE model can simulate phonemic restoration, which was an explanatory target of the cARTWORD ART model. The theoretical analysis and review herein show that these claims are incorrect. More generally, the TRACE and cARTWORD models illustrate two diametrically opposed types of neural models of speech and language. The TRACE model embodies core assumptions with no analog in known brain processes. The cARTWORD model defines a hierarchy of cortical processing regions whose networks embody cells in laminar cortical circuits as part of the paradigm of laminar computing. cARTWORD further develops ART speech and language models that were introduced in the 1970s. It builds upon Item-Order-Rank working memories, which activate learned list chunks that unitize sequences to represent phonemes, syllables, and words. Psychophysical and neurophysiological data support Item-Order-Rank mechanisms and contradict TRACE representations of time, temporal order, silence, and top-down processing that exhibit many anomalous properties, including hallucinations of non-occurring future phonemes. Computer simulations of the TRACE model are presented that demonstrate these failures.
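
To make the Item-Order-Rank idea concrete, here is a minimal Python sketch: it stores a list containing item repeats, such as ABADBD, in rank-specific cells whose activities fall off with list position, and recovers the order by competitive queuing. The exponential primacy gradient, the function names, and the winner-take-all readout are simplifying assumptions for illustration, not the published cARTWORD equations.

    # Toy Item-Order-Rank working memory: each (item, rank) pair gets its own
    # cell, so a repeated item (A at positions 1 and 3) does not overwrite itself.
    def store_item_order_rank(sequence, decay=0.8):
        memory, counts, activity = {}, {}, 1.0
        for item in sequence:
            counts[item] = counts.get(item, 0) + 1
            memory[(item, counts[item])] = activity   # rank-specific cell
            activity *= decay                         # primacy gradient: later items weaker
        return memory

    # Competitive-queuing readout: repeatedly pick the most active cell,
    # report its item, and suppress it; this reproduces the stored order.
    def recall(memory):
        cells, out = dict(memory), []
        while cells:
            winner = max(cells, key=cells.get)
            out.append(winner[0])
            del cells[winner]
        return out

    print(recall(store_item_order_rank("ABADBD")))    # ['A', 'B', 'A', 'D', 'B', 'D']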


Subject(s)
Models, Neurological , Speech Perception/physiology , Speech/physiology , Brain/physiology , Humans , Language , Memory
2.
Front Psychol; 5: 1053, 2014.
Article in English | MEDLINE | ID: mdl-25339918

ABSTRACT

How are sequences of events that are temporarily stored in a cognitive working memory unitized, or chunked, through learning? Such sequential learning is needed by the brain in order to enable language, spatial understanding, and motor skills to develop. In particular, how does the brain learn categories, or list chunks, that become selectively tuned to different temporal sequences of items in lists of variable length as they are stored in working memory, and how does this learning process occur in real time? The present article introduces a neural model that simulates learning of such list chunks. In this model, sequences of items are temporarily stored in an Item-and-Order, or competitive queuing, working memory before learning categorizes them using a categorization network, called a Masking Field, which is a self-similar, multiple-scale, recurrent on-center off-surround network that can weigh the evidence for variable-length sequences of items as they are stored in the working memory through time. A Masking Field hereby activates the learned list chunks that represent the most predictive item groupings at any time, while suppressing less predictive chunks. In a network with a given number of input items, all possible ordered sets of these item sequences, up to a fixed length, can be learned with unsupervised or supervised learning. The self-similar multiple-scale properties of Masking Fields interacting with an Item-and-Order working memory provide a natural explanation of George Miller's Magical Number Seven and Nelson Cowan's Magical Number Four. The article explains why linguistic, spatial, and action event sequences may all be stored by Item-and-Order working memories that obey similar design principles, and thus how the current results may apply across modalities. Item-and-Order properties may readily be extended to Item-Order-Rank working memories in which the same item can be stored in multiple list positions, or ranks, as in the list ABADBD. Comparisons with other models, including TRACE, MERGE, and TISK, are made.
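
As a rough illustration of how a Masking Field might weigh evidence for chunks of different lengths, the sketch below scores each candidate list chunk by the working-memory activities of its items and gives longer, fully supported chunks a size bonus so that they can mask their sub-chunks. This weighting and the winner-take-all step are assumed stand-ins, not the recurrent shunting on-center off-surround dynamics of the published model.

    # Toy list-chunk competition in the spirit of a Masking Field
    # (hypothetical weighting; not the published network equations).
    def primacy_gradient(sequence, decay=0.8):
        return {item: decay ** i for i, item in enumerate(sequence)}

    def chunk_support(chunk, wm):
        # A chunk gets bottom-up support only if all of its items are stored;
        # the length factor lets longer chunks outcompete (mask) their sub-chunks.
        if not all(item in wm for item in chunk):
            return 0.0
        return sum(wm[item] for item in chunk) * len(chunk)

    def winning_chunk(chunks, wm):
        # Winner-take-all stand-in for the recurrent on-center off-surround
        # competition among list-chunk cells.
        scores = {c: chunk_support(c, wm) for c in chunks}
        return max(scores, key=scores.get)

    wm = primacy_gradient("DOG")                           # items D, O, G in order
    print(winning_chunk(["D", "DO", "DOG", "DOGS"], wm))   # -> 'DOG'

With D, O, and G stored, the fully supported three-item chunk wins over its embedded sub-chunks, while the longer candidate DOGS receives no support because S is absent from working memory.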

3.
Front Neurorobot; 7: 9, 2013.
Article in English | MEDLINE | ID: mdl-23755011

ABSTRACT

Curiosity Driven Modular Incremental Slow Feature Analysis (CD-MISFA) is a recently introduced model of intrinsically motivated invariance learning. Artificial curiosity enables the orderly formation of multiple stable sensory representations that simplify the agent's complex sensory input. We discuss computational properties of the CD-MISFA model itself as well as neurophysiological analogs that fulfill similar functional roles. CD-MISFA combines (1) unsupervised representation learning through the slowness principle, (2) generation of an intrinsic reward signal from the learning progress of the developing features, and (3) balancing of exploration and exploitation to maximize learning progress and quickly learn multiple feature sets for perceptual simplification. Experimental results on synthetic observations and on the iCub robot show that the intrinsic value system is essential for representation learning. Representations are typically explored and learned in order from least to most costly, as predicted by the theory of curiosity.
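
For readers unfamiliar with the slowness principle, the following sketch shows batch linear Slow Feature Analysis together with a learning-progress reward of the kind the abstract lists. It is a deliberately simplified assumption (batch rather than incremental, linear features only, illustrative function names) and not the CD-MISFA algorithm itself.

    import numpy as np

    def linear_sfa(x, n_features=1):
        # x: (T, D) time series. Whiten the data, then find unit-variance
        # projections whose temporal derivative has the smallest variance,
        # i.e. the slowest-varying output features.
        x = x - x.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
        evals = np.maximum(evals, 1e-12)                # guard against tiny eigenvalues
        whiten = evecs / np.sqrt(evals)                 # whitening transform
        z = x @ whiten
        dz = np.diff(z, axis=0)                         # temporal differences
        slow_evals, slow_evecs = np.linalg.eigh(np.cov(dz, rowvar=False))
        w = whiten @ slow_evecs[:, :n_features]         # smallest eigenvalues = slowest features
        return x @ w, slow_evals[:n_features]

    def intrinsic_reward(previous_slowness, current_slowness):
        # Curiosity-style reward: learning progress, i.e. how much the
        # slowness error dropped after further training on this input.
        return previous_slowness - current_slowness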

4.
J Acoust Soc Am; 130(1): 440-60, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21786911

ABSTRACT

How are laminar circuits of neocortex organized to generate conscious speech and language percepts? How does the brain restore information that is occluded by noise, or absent from an acoustic signal, by integrating contextual information over many milliseconds to disambiguate noise-occluded acoustical signals? How are speech and language heard in the correct temporal order, despite the influence of contexts that may occur many milliseconds before or after each perceived word? A neural model describes key mechanisms in forming conscious speech percepts, and quantitatively simulates a critical example of contextual disambiguation of speech and language; namely, phonemic restoration. Here, a phoneme deleted from a speech stream is perceptually restored when it is replaced by broadband noise, even when the disambiguating context occurs after the phoneme was presented. The model describes how the laminar circuits within a hierarchy of cortical processing stages may interact to generate a conscious speech percept that is embodied by a resonant wave of activation that occurs between acoustic features, acoustic item chunks, and list chunks. Chunk-mediated gating allows speech to be heard in the correct temporal order, even when what is heard depends upon future context.
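
The different effects of broadband noise and silence can be illustrated with a toy ART-style matching rule in Python: the candidate percept is the feature-wise minimum of the bottom-up input and the top-down expectation read out of a list chunk, and resonance (hence restoration) occurs only when enough bottom-up energy survives the match. The spectra, vigilance value, and matching rule below are simplifying assumptions, not the laminar cARTWORD circuit simulated in the article.

    import numpy as np

    def perceive(bottom_up, expectation, vigilance=0.5):
        # ART-style matching: the candidate percept is the feature-wise
        # minimum of the input and the top-down expectation; resonance
        # requires enough matched energy, otherwise the raw input is heard.
        matched = np.minimum(bottom_up, expectation)
        if matched.sum() >= vigilance * expectation.sum():
            return matched                   # resonance: phoneme restored
        return bottom_up                     # mismatch: no restoration

    expectation = np.array([0.0, 1.0, 0.0, 1.0])   # hypothetical phoneme spectrum
    print(perceive(np.ones(4), expectation))       # broadband noise -> restored phoneme
    print(perceive(np.zeros(4), expectation))      # silence -> nothing restored

Broadband noise supplies energy that the top-down expectation can select and reshape, whereas silence leaves nothing to match, which is why restoration occurs only in the former case.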


Subject(s)
Auditory Cortex/physiology , Consciousness , Language , Models, Neurological , Neocortex/physiology , Noise/adverse effects , Perceptual Masking , Speech Perception , Acoustic Stimulation , Computer Simulation , Humans , Memory , Time Factors