1.
Article in English | MEDLINE | ID: mdl-23653593

ABSTRACT

The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (µECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point.
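
As a purely hypothetical illustration of the TCP/IP integration mentioned above (the abstract does not document MANTA's actual message format, so the host, port, and JSON fields below are invented and are not MANTA's API), a stimulus-control script might hand trial metadata to a networked acquisition process like this:

```python
# Hypothetical sketch only: the abstract states that MANTA communicates over
# TCP/IP but does not document its protocol. The address and JSON fields below
# are invented for illustration.
import json
import socket

DAQ_HOST, DAQ_PORT = "192.168.0.10", 7000   # assumed address of the acquisition PC

def send_trial_info(stimulus_name: str, trial_index: int) -> None:
    """Send one trial descriptor to the acquisition machine as a JSON line."""
    message = json.dumps({"stimulus": stimulus_name, "trial": trial_index}) + "\n"
    with socket.create_connection((DAQ_HOST, DAQ_PORT), timeout=5.0) as conn:
        conn.sendall(message.encode("utf-8"))

# Example (requires a listener on the acquisition PC):
# send_trial_info("ripple_w4Hz_omega0p8", trial_index=1)
```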


Subject(s)
Analog-Digital Conversion , Electrophysiological Phenomena/physiology , Neurons/physiology , Signal Processing, Computer-Assisted , Software/trends , Animals , Auditory Cortex/physiology , Electroencephalography/methods , Electroencephalography/trends , Ferrets , Magnetoencephalography/methods , Magnetoencephalography/trends
2.
J Neurophysiol ; 85(3): 1220-34, 2001 Mar.
Article in English | MEDLINE | ID: mdl-11247991

ABSTRACT

To understand the neural representation of broadband, dynamic sounds in primary auditory cortex (AI), we characterize responses using the spectro-temporal response field (STRF). The STRF describes, predicts, and fully characterizes the linear dynamics of neurons in response to sounds with rich spectro-temporal envelopes. It is computed from the responses to elementary "ripples," a family of sounds with drifting sinusoidal spectral envelopes. The collection of responses to all elementary ripples is the spectro-temporal transfer function. The complex spectro-temporal envelope of any broadband, dynamic sound can be expressed as the linear sum of individual ripples. Previous experiments using ripples with downward drifting spectra suggested that the transfer function is separable, i.e., it is reducible into a product of purely temporal and purely spectral functions. Here we measure the responses to upward and downward drifting ripples, assuming separability within each direction, to determine whether the total bidirectional transfer function is fully separable. In general, the combined transfer function for the two directions is not symmetric, and hence units in AI are not, in general, fully separable. Consequently, many AI units have complex response properties such as sensitivity to direction of motion, though most inseparable units are not strongly directionally selective. We show that for most neurons, the lack of full separability stems from differences between the upward and downward spectral cross-sections but not from the temporal cross-sections; this places strong constraints on the neural inputs of these AI units.
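
Full separability, as defined above, means the measured STRF is an outer product of one temporal and one spectral function, i.e. a rank-1 matrix. Below is a minimal sketch of one common numerical check, an SVD-based separability index; the synthetic STRF and its parameters are assumptions for illustration only.

```python
# Sketch: testing separability of a spectro-temporal receptive field.
# A fully separable STRF is an outer product of a temporal and a spectral
# function (a rank-1 matrix); the SVD spectrum quantifies how close a
# measured STRF comes to that. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.25, 64)            # time lag (s)
x = np.linspace(0, 4, 32)               # tonotopic axis (octaves)

temporal = np.exp(-t / 0.05) * np.sin(2 * np.pi * 12 * t)   # temporal cross-section
spectral = np.exp(-((x - 2.0) ** 2) / 0.3)                   # spectral cross-section
strf = np.outer(spectral, temporal) + 0.02 * rng.standard_normal((x.size, t.size))

s = np.linalg.svd(strf, compute_uv=False)
separability_index = s[0] ** 2 / np.sum(s ** 2)   # 1.0 for a perfectly separable STRF
print(f"fraction of power in first singular component: {separability_index:.3f}")
```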


Subject(s)
Auditory Cortex/physiology , Ferrets/physiology , Pitch Perception/physiology , Reaction Time/physiology , Acoustic Stimulation/methods , Action Potentials/physiology , Animals , Auditory Cortex/cytology , Auditory Threshold/physiology , Models, Neurological , Neurons/physiology , Reproducibility of Results
3.
J Comput Neurosci ; 9(1): 85-111, 2000.
Article in English | MEDLINE | ID: mdl-10946994

ABSTRACT

The spectrotemporal receptive field (STRF) is a functional descriptor of the linear processing of time-varying acoustic spectra by the auditory system. By cross-correlating sustained neuronal activity with the dynamic spectrum of a spectrotemporally rich stimulus ensemble, one obtains an estimate of the STRF. In this article, the relationship between the spectrotemporal structure of any given stimulus and the quality of the STRF estimate is explored and exploited. Invoking the Fourier theorem, arbitrary dynamic spectra are described as sums of basic sinusoidal components--that is, moving ripples. Accurate estimation is found to be especially reliant on the prominence of components whose spectral and temporal characteristics are of relevance to the auditory locus under study and is sensitive to the phase relationships between components with identical temporal signatures. These and other observations have guided the development and use of stimuli with deterministic dynamic spectra composed of the superposition of many temporally orthogonal moving ripples having a restricted, relevant range of spectral scales and temporal rates. The method, termed sum-of-ripples, is similar in spirit to the white-noise approach but enjoys the same practical advantages--which equate to faster and more accurate estimation--attributable to the time-domain sum-of-sinusoids method previously employed in vision research. Application of the method is exemplified with both modeled data and experimental data from ferret primary auditory cortex (AI).
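
A minimal sketch of the reverse-correlation idea described above, on synthetic data: the STRF estimate is the spike-weighted average of the stimulus spectrogram segments preceding each spike. The toy linear-nonlinear neuron and all parameters are assumptions, not the paper's stimuli or recordings.

```python
# Sketch: reverse-correlation estimate of an STRF from a dynamic spectrum and a
# spike train. The estimate is the spike-triggered average of the stimulus
# spectrogram over a window of preceding time bins. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_time, n_lags = 32, 5000, 40        # spectral channels, time bins, STRF lags
spectrogram = rng.standard_normal((n_freq, n_time))

true_strf = np.zeros((n_freq, n_lags))
true_strf[12:16, 5:15] = 1.0                  # toy excitatory subfield

# Linear-nonlinear toy neuron: filter the spectrogram with the STRF, then rectify.
drive = np.array([
    np.sum(true_strf * spectrogram[:, t - n_lags:t]) for t in range(n_lags, n_time)
])
rate = np.maximum(drive, 0.0)
spikes = rng.poisson(0.05 * rate)             # spike counts per bin

# Spike-weighted average of the preceding stimulus segments = STRF estimate.
est = np.zeros_like(true_strf)
for i, count in enumerate(spikes):
    if count:
        est += count * spectrogram[:, i:i + n_lags]
est /= max(spikes.sum(), 1)
print("correlation with true STRF:", np.corrcoef(est.ravel(), true_strf.ravel())[0, 1])
```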


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Models, Neurological , Neurons/physiology , Acoustic Stimulation/methods , Action Potentials/physiology , Algorithms , Animals , Auditory Cortex/cytology , Ferrets/anatomy & histology , Ferrets/physiology , Fourier Analysis , Neural Inhibition/physiology , Neurons/cytology , Nonlinear Dynamics , Reaction Time/physiology , Signal Transduction/physiology , Time Factors
4.
J Acoust Soc Am ; 103(5 Pt 1): 2502-14, 1998 May.
Article in English | MEDLINE | ID: mdl-9604344

ABSTRACT

Responses to various steady-state vowels were recorded from single units in the primary auditory cortex (AI) of the barbiturate-anaesthetized ferret. Six vowels were presented (/a/, /epsilon/, 2 different /i/'s, and 2 different /u/'s) in a natural voiced and a synthetic unvoiced mode. In addition, the responses to broadband stimuli with a sinusoidally shaped spectral envelope (called ripple stimuli) were recorded in each cell, and the response field (RF), which consists of both excitatory and inhibitory regions, was derived from the ripple transfer function. We examined whether the vowel responses could be predicted using a linear ripple analysis method [Shamma et al., Auditory Neurosci. 1, 233-254 (1995)], i.e., by cross-correlating the RF of the single unit with the smoothed spectral envelope of the vowel. We found that for most AI cells (71%) the relative responses to natural vowels could be predicted on the basis of this method. Responses and prediction results for unvoiced and voiced vowels were very similar, suggesting that the spectral fine structure may not play a significant role in the neuron's response to the vowels. Predictions based on the entire RF were significantly better than predictions based solely on best frequency (BF) (or "place"). These findings confirm that the ripple analysis method is a valid way to characterize AI responses to broadband sounds, as we proposed in a previous paper using synthesized spectra [Shamma and Versnel, Auditory Neurosci. 1, 255-270 (1995)].
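
A minimal sketch of the linear prediction step described above: the predicted response to each vowel is the inner product of the unit's response field with the vowel's smoothed spectral envelope. The response field and vowel envelopes below are made-up shapes, used only to show the computation.

```python
# Sketch of the linear ripple-analysis prediction, on made-up numbers: the
# predicted response to a vowel is the inner product of the response field
# (excitatory minus inhibitory weights along the tonotopic axis) with the
# vowel's smoothed spectral envelope.
import numpy as np

freq_axis = np.linspace(0, 4, 64)                        # toy tonotopic axis (octaves)
response_field = (
    np.exp(-((freq_axis - 2.0) ** 2) / 0.1)              # excitatory lobe
    - 0.5 * np.exp(-((freq_axis - 2.6) ** 2) / 0.1)      # inhibitory sideband
)

def predicted_response(envelope: np.ndarray) -> float:
    """Linear prediction: correlate the RF with the smoothed spectral envelope."""
    return float(response_field @ envelope)

# Toy smoothed envelopes for two vowels with different formant placements.
vowel_a = np.exp(-((freq_axis - 2.0) ** 2) / 0.2)
vowel_i = np.exp(-((freq_axis - 3.0) ** 2) / 0.2)
print(predicted_response(vowel_a), predicted_response(vowel_i))
```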


Subject(s)
Auditory Cortex/physiology , Speech Perception/physiology , Animals , Evoked Potentials, Auditory , Ferrets/physiology , Models, Biological , Phonetics
5.
J Neurophysiol ; 76(5): 3524-34, 1996 Nov.
Article in English | MEDLINE | ID: mdl-8930290

ABSTRACT

1. Responses of single units and multiunit clusters were recorded in the ferret primary auditory cortex (AI) with the use of broadband complex dynamic spectra. Previous work has demonstrated that simpler spectra consisting of single moving ripples (i.e., sinusoidally modulated spectral profiles that travel at a constant velocity along the logarithmic frequency axis) could be used effectively to characterize the response fields and transfer functions of AI cells. 2. A complex dynamic spectral profile can be thought of as being the sum of moving ripple spectra. Such a decomposition can be computed from a two-dimensional spectrotemporal Fourier transform of the dynamic spectral profile with moving ripples as the basis function. 3. Therefore, if AI units were essentially linear, satisfying the superposition principle, then their responses to arbitrary dynamic spectra could be predicted from the responses to single moving ripples, i.e., from the units' response fields and transfer functions (spectral and temporal impulse response functions, respectively). 4. This conjecture was tested and confirmed with data from 293 combinations of moving ripples, involving complex spectra composed of up to 15 moving ripples of different ripple frequencies and velocities. For each case, response predictions based on the unit transfer functions were compared with measured responses. The correlation between predicted and measured responses was found to be consistently high (84% with rho > 0.6). 5. The distribution of response parameters suggests that AI cells may encode the profile of a dynamic spectrum by performing a multiscale spectrotemporal decomposition of the dynamic spectral profile in a largely linear manner.
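
A minimal sketch of the superposition test (point 4): if the unit is linear, its response to a combination of moving ripples is the sum of the single-ripple responses, each scaled and phase-shifted by the measured transfer function, and the prediction is compared with the measured response by correlation. The ripple rates, gains, and phases below are invented values.

```python
# Sketch of the superposition test: predict the response to a sum of moving
# ripples from the gains and phases measured with single ripples. Toy values.
import numpy as np

t = np.linspace(0, 1.0, 1000)                    # time (s)
ripple_rates = np.array([4.0, 8.0, 12.0])        # ripple velocities (Hz)
gain = np.array([1.0, 0.7, 0.3])                 # |T(w)| from single-ripple measurements
phase = np.array([0.0, -0.8, -1.6])              # arg T(w) (radians)

# Predicted response to the combined stimulus = sum of single-ripple responses.
predicted = sum(
    g * np.cos(2 * np.pi * w * t + p) for w, g, p in zip(ripple_rates, gain, phase)
)

# A measured response would be compared with the prediction via correlation.
measured = predicted + 0.2 * np.random.default_rng(2).standard_normal(t.size)
rho = np.corrcoef(predicted, measured)[0, 1]
print(f"prediction correlation rho = {rho:.2f}")
```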


Subject(s)
Acoustic Stimulation , Auditory Cortex/physiology , Membrane Potentials/physiology , Reaction Time/physiology , Animals , Ferrets
6.
J Neurophysiol ; 76(5): 3503-23, 1996 Nov.
Article in English | MEDLINE | ID: mdl-8930289

ABSTRACT

1. Auditory stimuli referred to as moving ripples are used to characterize the responses of both single and multiple units in the ferret primary auditory cortex. Moving ripples are broadband complex sounds with a sinusoidal spectral profile that drift along the logarithmic frequency axis at a constant velocity. 2. Neuronal responses to moving ripples are locked to the phase of the ripple, i.e., they exhibit the same periodicity as that of the moving ripple profile. Neural responses are characterized as a function of ripple velocity (temporal property) and ripple frequency (spectral property). Transfer functions describing the response to these temporal and spectral modulations are constructed. Temporal transfer functions are inverse Fourier transformed to obtain impulse response functions that reflect the cell's temporal characteristics. Ripple transfer functions are inverse Fourier transformed to obtain the response field, a measure analogous to the cell's response area. These operations assume linearity in the cell's response to moving ripples. 3. Transfer functions and other response functions are shown to be fairly independent of the overall level or depth of modulation of the ripple stimuli. Only downward moving ripples were used in this study. 4. The temporal and ripple transfer functions are found to be separable, in that their shapes remain unchanged for different test parameters. Thus ripple transfer functions and response fields remain statistically similar in shape (to within an overall scale factor) regardless of the ripple velocity or whether stationary or moving ripples are used in the measurement. The same stability in shape holds for the temporal transfer functions and the impulse response functions measured with different ripple frequencies. Separability implies that the combined spectrotemporal transfer function of a cell can be written as the product of a purely ripple and a purely temporal transfer function, and thus that the neuron can be computationally modeled as processing spectral and temporal information in two separate and successive stages. 5. The ripple parameters that characterize cortical cells are distributed somewhat evenly, with the characteristic ripple frequencies ranging from 0.2 to > 2 cycles/octave and the characteristic angular frequency typically ranging from 2 to 20 Hz. 6. Many responses exhibit periodicities that are not present in the spectral envelope of the stimulus. These periodicities are of two types. Slow rebounds, not found in the spectral envelope, and with a period of approximately 150 ms, appear with various strengths in approximately 30% of the cells. Fast regular firings with interspike intervals of approximately 10 ms are much less common and appear to correspond to interactions between the component tones that make up a ripple.
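
A minimal sketch of the inverse-Fourier-transform step (point 2), using invented transfer-function values: the temporal transfer function measured at a set of ripple velocities is inverse transformed to obtain an impulse response function on a uniform time grid.

```python
# Sketch: recovering a temporal impulse response from a measured temporal
# transfer function by inverse Fourier transform. The transfer-function values
# here are invented for illustration.
import numpy as np

rates = np.arange(0, 33, 4.0)                     # ripple velocities tested (Hz)
magnitude = np.exp(-rates / 12.0)                 # |T(w)|: low-pass-like gain
phase = -2 * np.pi * rates * 0.015                # linear phase ~ 15 ms latency
transfer = magnitude * np.exp(1j * phase)

# Inverse (real) FFT gives the temporal impulse response on a uniform time grid.
impulse_response = np.fft.irfft(transfer)
dt = 1.0 / (2 * rates[-1])                        # sample spacing implied by max rate
time = np.arange(impulse_response.size) * dt
print(time[np.argmax(impulse_response)], "s to peak of impulse response")
```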


Subject(s)
Acoustic Stimulation , Auditory Cortex/physiology , Behavior, Animal/physiology , Reaction Time/physiology , Animals , Ferrets
7.
J Neurosci Methods ; 58(1-2): 209-20, 1995 May.
Article in English | MEDLINE | ID: mdl-7475229

ABSTRACT

Using silicon-integrated circuit technology, we have fabricated a flexible multi-electrode array and used it for measuring evoked potentials at the surface of the ferret primary auditory cortex (AI). Traditionally, maps of cortical activity are recorded from numerous sequential penetrations with a single electrode. A common problem with this approach is that the state of the cortex (defined in part by level of anesthesia and number of active cells) changes during the time required to generate these maps. The multi-electrode array reduces this problem by allowing recording from 24 locations simultaneously. The specific array described in this report is designed to record cortical activity over a 1 mm² area. It comprises 24 gold electrodes (40 x 40 microns each), spaced 210 microns apart. These electrodes are connected to contact pads via gold leads (5 cm in length). The electrodes, leads, and contact pads are sandwiched between two layers of polyimide. The polyimide passivates the device and makes it flexible enough to conform to the shape of the cortex. The fabrication procedures described here allow various other layouts and areas to be readily implemented. Measurements of the electrical properties of the electrodes, together with details of the multichannel amplification, acquisition, and display of the data, are also discussed. Finally, results of AI mapping experiments with these arrays are illustrated.


Subject(s)
Auditory Cortex/physiology , Electrophysiology/instrumentation , Evoked Potentials, Auditory/physiology , Microelectrodes , Acoustic Stimulation , Amplifiers, Electronic , Animals , Electric Stimulation , Electrocardiography , Ferrets , Platinum , Transistors, Electronic
8.
J Neurophysiol ; 73(4): 1513-23, 1995 Apr.
Article in English | MEDLINE | ID: mdl-7643163

ABSTRACT

1. Characteristics of an anterior auditory field (AAF) in the ferret auditory cortex are described in terms of its electrophysiological responses to tonal stimuli and compared with those of primary auditory cortex (AI). Ferrets were barbiturate-anesthetized and tungsten microelectrodes were used to record single-unit responses from both AI and AAF fields. Units in both areas were presented with the same stimulus paradigms and their responses analyzed in the same manner so that a direct comparison of responses was possible. 2. The AAF is located dorsal and rostral to AI on the ectosylvian gyrus and extends into the suprasylvian sulcus rostral to AI. The tonotopicity is organized with high frequencies at the top of the sulcus bordering the high-frequency area of AI, then reversing with lower BFs extending down into the sulcus. AAF contained single units that responded to a frequency range of 0.3-30 kHz. 3. Stimuli consisted of single-tone bursts, two-tone bursts and frequency-modulated (FM) stimuli swept in both directions at various rates. Best frequency (BF) range, rate-level functions at BF, FM directional sensitivity, and variation in asymmetries of response areas were all comparable characteristics between AAF and AI. Responses in both areas were primarily phasic. 4. The characteristics that were different between the two cortical areas were: latency to tone onset, excitatory bandwidth 20 dB above threshold (BW20), and preferred FM rate as parameterized with the centroid (a weighted average of spike counts). The mean latency of AAF units was shorter than in AI (AAF: 16.8 ms, AI: 19.4 ms). BW20 measurements in AAF were typically twice as large as those found in AI (AAF: 2.5 octaves, AI: 1.3 octaves). The AI centroid population had a significantly larger standard deviation than the AAF centroid population. 5. We examined the relationship between centroid and BW20 to see whether wider bandwidths were a factor in a unit's ability to detect fast sweeps. There was a significant (P < 0.05) linear correlation in AAF but not in AI. In both fields the variance of the centroid population decreased with increasing BW20. BW20 decreased as BF increased for units in both auditory fields.
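
A minimal sketch of the centroid measure (point 4), a spike-count-weighted average over the FM sweep rates tested; the rates and counts below are toy values.

```python
# Sketch of the centroid: a spike-count-weighted average over the FM sweep
# rates tested. Rates and counts are invented for illustration.
import numpy as np

fm_rates = np.array([8.0, 16.0, 32.0, 64.0, 128.0])   # sweep rates (octaves/s), toy values
spike_counts = np.array([4, 10, 22, 15, 6])            # spikes evoked at each rate

centroid = np.sum(fm_rates * spike_counts) / np.sum(spike_counts)
print(f"preferred FM rate (centroid): {centroid:.1f} octaves/s")
```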


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Ferrets/physiology , Acoustic Stimulation , Animals , Evoked Potentials, Auditory/physiology , Sound Localization/physiology
9.
J Neurophysiol ; 69(2): 367-83, 1993 Feb.
Article in English | MEDLINE | ID: mdl-8459273

ABSTRACT

1. We studied the topographic organization of the response areas obtained from single- and multiunit recordings along the isofrequency planes of the primary auditory cortex in the barbiturate-anesthetized ferret. 2. Using a two-tone stimulus, we determined the excitatory and inhibitory portions of the response areas and then parameterized them in terms of an asymmetry index. The index measures the balance of excitatory and inhibitory influences around the best frequency (BF). 3. The sensitivity of responses to the direction of a frequency-modulated (FM) tone was tested and found to correlate strongly with the asymmetry index of the response areas. Specifically, cells with strong inhibition from frequencies above the BF preferred upward sweeps, and those from frequencies below the BF preferred downward sweeps. 4. Responses to spectrally shaped noise were also consistent with the asymmetry of the response areas. For instance, cells that were strongly inhibited by frequencies higher than the BF responded best to stimuli that contained least spectral energy above the BF, i.e., stimuli with the opposite asymmetry. 5. Columnar organization of the response area types was demonstrated in 66 single units from 16 penetrations. Consistent with this finding, it was also shown that response area asymmetry measured from recordings of a cluster of cells corresponded closely with those measured from its single-unit constituents. Thus, in a local region, most cells exhibited similar response area types and other response features, e.g., FM directional sensitivity. 6. The distribution of the asymmetry index values along the isofrequency planes revealed systematic changes in the symmetry of the response areas. At the center, response areas with narrow and symmetric inhibitory sidebands predominated. These gave way to asymmetric inhibition, with high-frequency inhibition (relative to the BF) becoming more effective caudally and low-frequency inhibition more effective rostrally. These response types tended to cluster along repeated bands that paralleled the tonotopic axis. 7. Response features that correlated with the response area types were also mapped along the isofrequency planes. Thus, in four animals, a map of FM directional sensitivity was shown to be superimposed on the response area map. Similarly, it was demonstrated in six animals that the spectral gradient of the most effective noise stimulus varied systematically along the isofrequency planes. 8. One functional implication of the response area organization is that cortical responses encode the locally averaged gradient of the acoustic spectrum by their differential distribution along the isofrequency planes. This enhances the representation of such features as the symmetry of spectral peaks and edges and the spectral envelope.(ABSTRACT TRUNCATED AT 400 WORDS)
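
The abstract does not give the exact formula for the asymmetry index, so the normalized difference below is an assumption in the spirit of point 2: the balance of inhibitory influence above versus below the best frequency, computed on toy sideband values.

```python
# Sketch of an asymmetry index (assumed form, not the paper's exact formula):
# the normalized difference of inhibitory strength above versus below BF.
import numpy as np

freqs = np.linspace(-2, 2, 41)                       # octaves relative to BF
inhibition = np.where(freqs > 0, 0.8, 0.3) * np.exp(-np.abs(freqs))  # toy sidebands

above = inhibition[freqs > 0].sum()
below = inhibition[freqs < 0].sum()
asymmetry_index = (above - below) / (above + below)  # >0: stronger inhibition above BF
print(f"asymmetry index = {asymmetry_index:+.2f}")
```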


Subject(s)
Auditory Cortex/physiology , Acoustic Stimulation , Action Potentials/physiology , Animals , Auditory Cortex/anatomy & histology , Brain Mapping , Ferrets , Microelectrodes , Neurons/physiology , Noise , Sound Localization/physiology
10.
Biol Cybern ; 65(3): 171-9, 1991.
Article in English | MEDLINE | ID: mdl-1912010

ABSTRACT

A minimum mean square error (MMSE) estimation scheme is employed to identify the synaptic connectivity in neural networks. This new approach can substantially reduce the amount of data and the computational cost involved in the conventional correlation methods, and is suitable for both nonstationary and stationary neuronal firings. Two algorithms are proposed to estimate the synaptic connectivities recursively, one for nonlinear filtering, the other for linear filtering. In addition, the lower and upper bounds for the MMSE estimator are determined. It is shown that the estimators are consistent in quadratic mean. We also demonstrate that the conventional cross-interval histogram is an asymptotic linear MMSE estimator with an inappropriate initial value. Finally, simulations of both the nonlinear and the linear (Kalman filter) estimators demonstrate that the true connectivity values are approached asymptotically.
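
A minimal sketch in the spirit of the linear (Kalman filter) estimator mentioned above: a scalar recursive MMSE update of a single connection weight from binned spike counts. The generative model (postsynaptic count = weight x presynaptic count + noise) is a deliberate simplification for illustration, not the paper's formulation.

```python
# Sketch: scalar recursive linear MMSE (Kalman-style) estimate of one synaptic
# weight from binned spike counts, under a deliberately simplified linear model.
import numpy as np

rng = np.random.default_rng(3)
true_w, noise_var = 0.6, 1.0
pre = rng.poisson(5.0, size=2000).astype(float)              # presynaptic counts per bin
post = true_w * pre + rng.normal(0.0, np.sqrt(noise_var), pre.size)

w_hat, p = 0.0, 10.0                                          # initial estimate and variance
for x, y in zip(pre, post):
    k = p * x / (p * x * x + noise_var)                       # Kalman gain
    w_hat += k * (y - w_hat * x)                              # innovation update
    p *= (1.0 - k * x)                                        # posterior variance update
print(f"estimated weight {w_hat:.3f} (true {true_w})")
```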


Subject(s)
Models, Neurological , Nerve Net/physiology , Neurons/physiology , Synapses/physiology , Algorithms , Animals , Mathematics
11.
Biophys J ; 57(5): 987-99, 1990 May.
Article in English | MEDLINE | ID: mdl-2340346

ABSTRACT

Analytical and experimental methods are provided for estimating synaptic connectivities from simultaneous recordings of multiple neurons. The results are based on detailed, yet flexible neuron models in which spike trains are modeled as general doubly stochastic point processes. The expressions derived can be used with nonstationary or stationary records, and can be readily extended from pairwise to multineuron estimates. Furthermore, we show analytically how the estimates are improved as more neurons are sampled, and derive the appropriate normalizations to eliminate stimulus-related correlations. Finally, we illustrate the use and interpretation of the analytical expressions on simulated spike trains and neural networks, and give explicit confidence measures on the estimates.
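
One standard way to eliminate stimulus-related correlations, in the spirit of the normalization mentioned above, is to subtract a shift predictor: the cross-correlogram recomputed with trials shuffled between the two neurons. The sketch below is a generic illustration on simulated Poisson spike counts, not the paper's exact estimator.

```python
# Sketch: shift-predictor correction of a pairwise cross-correlogram. The two
# simulated neurons share only stimulus-locked modulation, so the corrected
# correlogram should be near zero. Generic illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_bins = 50, 200
stimulus_rate = 2.0 + 2.0 * np.sin(np.linspace(0, 4 * np.pi, n_bins))   # shared drive
a = rng.poisson(stimulus_rate * 0.05, size=(n_trials, n_bins))
b = rng.poisson(stimulus_rate * 0.05, size=(n_trials, n_bins))

def correlogram(x, y, max_lag=20):
    """Trial-averaged (circular) cross-correlogram over a range of lags."""
    lags = np.arange(-max_lag, max_lag + 1)
    values = np.array([
        np.mean([np.dot(xt, np.roll(yt, lag)) for xt, yt in zip(x, y)]) for lag in lags
    ])
    return lags, values

lags, raw = correlogram(a, b)
_, shift = correlogram(a, np.roll(b, 1, axis=0))   # pair each trial of a with the next trial of b
corrected = raw - shift                             # stimulus-locked component removed
print("peak of corrected correlogram:", corrected.max())
```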


Subject(s)
Models, Neurological , Neural Conduction , Neurons/physiology , Animals , Mathematics
12.
J Acoust Soc Am ; 86(3): 989-1006, 1989 Sep.
Article in English | MEDLINE | ID: mdl-2794252

ABSTRACT

A neural network model is proposed for the binaural processing of interaural-time and level cues. The two-dimensional network measures interaural differences by detecting the spatial disparities between the instantaneous outputs of the two ears. The network requires no neural delay lines to generate such attributes of binaural hearing as the lateralization of all frequencies, and the detection and enhancement of noisy signals. It achieves this by comparing systematically, at various horizontal shifts, the spatiotemporal responses of the tonotopically ordered array of auditory-nerve fibers. An alternative view of the network operation is that it computes approximately the cross correlation between the responses of the two cochleas by combining an ipsilateral input at a given characteristic frequency (CF) with contralateral inputs from locally off-CF locations. Thus the network utilizes the delays already present in the traveling waves of the basilar membrane to extract the correlation function. Simulations of the network operation with various signals are presented as are comparisons to computational schemes suggested for stereopsis in vision. Physiological arguments in support of this scheme are also discussed.
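
A minimal sketch of the central idea: the two ears' instantaneous response patterns are compared at a range of tonotopic (channel) shifts rather than time delays, and the best-matching shift indicates the interaural disparity. The toy cochlear patterns below are assumptions.

```python
# Sketch: estimating an interaural disparity by comparing two cochlear response
# patterns at a range of spatial (channel) shifts instead of time delays.
import numpy as np

rng = np.random.default_rng(5)
n_channels, n_time = 64, 400
left = rng.standard_normal((n_channels, n_time))
right = np.roll(left, 3, axis=0) + 0.1 * rng.standard_normal((n_channels, n_time))  # shifted copy

shifts = np.arange(-10, 11)
match = [np.sum(left * np.roll(right, -s, axis=0)) for s in shifts]   # match at each shift
best = shifts[int(np.argmax(match))]
print(f"best spatial shift: {best} channels")
```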


Subject(s)
Auditory Perception/physiology , Computer Simulation , Functional Laterality/physiology , Models, Neurological , Reaction Time/physiology , Artificial Intelligence , Nerve Net/physiology
14.
J Acoust Soc Am ; 81(5): 1486-98, 1987 May.
Article in English | MEDLINE | ID: mdl-3584687

ABSTRACT

A minimal biophysical model of the cochlea is used to investigate the validity of the hypothesis that a single compressive nonlinearity at the hair cell level can explain some of the suppression phenomena in cochlear responses to complex stimuli. The dependencies of the model responses on the amplitudes and frequencies of two-tone stimuli resemble in many respects the behavior of the experimental data, and can be traced to explicit biophysical parameters in the model. Most discrepancies between theory and experiment stem from simplifications in parameters of the minimal model that play no direct role in the hypothesis. The analysis and simulations predict further results which, pending experimental verification, may provide a more direct test of the influence of the compressive nonlinearity on the relative amplitudes of the synchronous response components, and hence of its role in synchrony suppression. For instance, regardless of the overall absolute levels of a two-tone stimulus applied to this type of model, the ratio of the amplitudes at the input and the ratio of the corresponding responses at the output remain approximately constant and equal (the output ratio changes by at most 6 dB in favor of the stronger tone). Other nonlinear responses to multitonal stimuli can also be reproduced, such as "spectral edge enhancement" [Horst et al., Peripheral Auditory Mechanisms (Springer, Berlin, 1985)] and some aspects of three-tone suppression [Javel et al., Mechanisms of Hearing (Monash U.P., Australia, 1983)]. In contrast to the complex behavior of suppression with increasing sound intensity and the drastic influence of the compressive nonlinearity on the absolute response measures on the auditory nerve (e.g., average rate and synchrony profiles), the percepts of complex sounds are relatively stable. This suggests that the invariant relative response measures are more likely used in the encoding and CNS extraction of the spectrum of complex stimuli such as speech.
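
A minimal sketch of the kind of comparison described above, with a tanh nonlinearity standing in for the compressive hair-cell stage (a toy substitute, not the paper's biophysical model): apply the nonlinearity to a two-tone signal and compare the component amplitude ratio at input and output.

```python
# Sketch: effect of a compressive (saturating) nonlinearity on the relative
# amplitudes of two tones. The tanh stage is a toy stand-in for hair-cell
# transduction; levels and frequencies are illustrative.
import numpy as np

fs = 16000.0
t = np.arange(0, 0.5, 1.0 / fs)
f1, f2 = 500.0, 700.0
a1, a2 = 1.0, 0.25                                  # 12 dB input level difference

x = a1 * np.cos(2 * np.pi * f1 * t) + a2 * np.cos(2 * np.pi * f2 * t)
y = np.tanh(2.0 * x)                                # compressive transduction (toy)

spectrum = np.abs(np.fft.rfft(y)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
out1 = spectrum[np.argmin(np.abs(freqs - f1))]
out2 = spectrum[np.argmin(np.abs(freqs - f2))]
print(f"input ratio {20*np.log10(a1/a2):.1f} dB, output ratio {20*np.log10(out1/out2):.1f} dB")
```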


Subject(s)
Cochlea/physiology , Models, Biological , Acoustic Stimulation , Animals , Hair Cells, Auditory, Inner/physiology , Hearing , Humans , Sound
15.
J Acoust Soc Am ; 80(1): 133-45, 1986 Jul.
Article in English | MEDLINE | ID: mdl-3745659

ABSTRACT

A mathematical model of cochlear processing is developed to account for the nonlinear dependence of frequency selectivity on intensity in inner hair cell and auditory nerve fiber responses. The model describes the transformation from acoustic stimulus to intracellular hair cell potentials in the cochlea. It incorporates a linear formulation of basilar membrane mechanics and subtectorial fluid-cilia displacement coupling, and a simplified description of the inner hair cell nonlinear transduction process. The analysis at this stage is restricted to low-frequency single tones. The computed responses to single tone inputs exhibit the experimentally observed nonlinear effects of increasing intensity such as the increase in the bandwidth of frequency selectivity and the downward shift of the best frequency. In the model, the first effect is primarily due to the saturating effect of the hair cell nonlinearity. The second results from the combined effects of both the nonlinearity and of the inner hair cell low-pass transfer function. In contrast to these shifts along the frequency axis, the model does not exhibit intensity dependent shifts of the spatial location along the cochlea of the peak response for a given single tone. The observed shifts therefore do not contradict an intensity invariant tonotopic code.
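
A minimal sketch of why a saturating transduction stage broadens frequency tuning with intensity, one of the effects described above: a toy Gaussian basilar-membrane gain profile followed by a saturating nonlinearity yields an iso-output bandwidth that grows with input level. All parameters are illustrative, not the model's.

```python
# Sketch: bandwidth broadening with intensity from a saturating transduction
# stage after a linear tuning stage. Toy parameters only.
import numpy as np

freqs = np.linspace(0.25, 4.0, 400)                    # kHz, toy axis
gain = np.exp(-((np.log2(freqs) - 0.0) ** 2) / 0.05)   # BM tuning centered at 1 kHz

def output_bandwidth(level: float) -> float:
    """Half-maximum bandwidth (octaves) of the saturated response at one level."""
    response = np.tanh(level * gain)                    # saturating hair-cell stage (toy)
    above = freqs[response >= 0.5 * response.max()]
    return float(np.log2(above.max() / above.min()))

for level in (0.5, 2.0, 8.0):
    print(f"level {level:4.1f}: bandwidth {output_bandwidth(level):.2f} octaves")
```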


Subject(s)
Cochlea/physiology , Models, Biological , Acoustic Stimulation , Basilar Membrane/physiology , Cilia/physiology , Hair Cells, Auditory, Inner/physiology , Hearing , Humans , Mathematics , Models, Neurological , Nerve Fibers/physiology , Tectorial Membrane/physiology , Vestibulocochlear Nerve/physiology
16.
J Acoust Soc Am ; 78(5): 1622-32, 1985 Nov.
Article in English | MEDLINE | ID: mdl-3840813

ABSTRACT

A biologically realistic model of a uniform lateral inhibitory network (LIN) is shown capable of extracting from the complex spatio-temporal firing patterns of the cat's auditory nerve the formants and low-order harmonics of synthetic voiced speech stimuli. The model provides a realistic mechanism to utilize the temporal aspects of the firing and thus supports the hypothesis that the neural coding of complex sounds in terms of average rates can be supplemented by the information coded in the synchronous firing. At low levels of intensity the LIN can sharpen the average rate profiles. At moderate and high levels the LIN uses the cues available in the distribution of phases of the synchronous activity which exhibit rapid relative phase shifts at specific characteristic frequency (CF) locations (corresponding to the frequencies of the low-order harmonics in the stimulus). These temporal phase shifts manifest themselves at the input of the LIN as steep and localized spatial discontinuities in the instantaneous pattern of activity across the fiber array. The LIN enhances its output from these spatially steep input regions while suppressing its output from spatially smooth input regions (where little phase shifts occur). In this manner the LIN recreates from the response patterns a representation of the stimulus spectrum using the temporal cues as spatial markers of the stimulus components rather than as absolute measures of their frequencies. Similar results are obtained with various lateral inhibitory topologies, e.g., recurrent versus nonrecurrent, single versus double layer, and linear versus nonlinear.
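
A minimal sketch of a non-recurrent lateral inhibitory stage on a one-dimensional toy input: each channel is excited by its own input and inhibited by a local neighborhood average, so a spatially steep discontinuity is enhanced while smooth regions are suppressed. The kernel width and input profile are assumptions.

```python
# Sketch: non-recurrent lateral inhibition as rectified center-minus-surround
# along a one-dimensional array, enhancing a spatial discontinuity.
import numpy as np

n = 128
profile = np.ones(n)
profile[60:] += 1.0                                  # spatial discontinuity at channel 60

kernel = np.ones(9) / 9.0                            # local inhibitory surround (toy width)
padded = np.pad(profile, 4, mode="edge")             # pad to avoid edge artifacts
surround = np.convolve(padded, kernel, mode="same")[4:-4]
lin_output = np.maximum(profile - surround, 0.0)     # rectified center-minus-surround

print("LIN output peaks at channel", int(np.argmax(lin_output)))   # at the discontinuity
```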


Subject(s)
Speech Perception/physiology , Vestibulocochlear Nerve/physiology , Animals , Cats , Cochlear Implants , Evoked Potentials, Auditory , Models, Biological , Speech Acoustics
17.
J Acoust Soc Am ; 78(5): 1612-21, 1985 Nov.
Article in English | MEDLINE | ID: mdl-4067077

ABSTRACT

In a previous paper the speech evoked spatio-temporal response patterns recorded in large populations of auditory-nerve fibers in the cat were examined [M.I. Miller and M.B. Sachs, J. Acoust. Soc. Am. 74, 502-517 (1983)]. The distribution of the relative phases of synchronized activity emerges as an important response feature reflecting the stimulus spectral parameters. Specifically, each strong low-order harmonic of the stimulus (less than or equal to 1.5-2 kHz) dominates the synchrony of a relatively broad segment of fibers near its corresponding characteristic frequency (CF) location in a pattern which mirrors the underlying traveling wave component. Each such fiber segment can be roughly subdivided into two regions: (1) a region basal to the point of resonance of the harmonic where the fiber PST histograms accumulate only small delays (or phase shifts) relative to each other reflecting the fast speed of propagation of the traveling wave, and (2) a region at or very near the point of resonance where the responses exhibit drastic relative phase shifts owing to the sudden slow down of the traveling wave and the consequent rapid accumulation of phase shifts. These rapid phase shifts thus manifest themselves as steep and localized spatial discontinuities in an otherwise relatively uniform instantaneous pattern of activity across the fiber array, all occurring at the CF locations corresponding to the low-order harmonics of the stimulus.


Subject(s)
Speech Perception/physiology , Vestibulocochlear Nerve/physiology , Acoustic Stimulation , Animals , Cats , Evoked Potentials, Auditory , Noise , Speech Acoustics
18.
Hear Res ; 19(1): 1-13, 1985.
Article in English | MEDLINE | ID: mdl-4066511

ABSTRACT

Two-tone interactions are recorded in the responses of single units in the superior temporal gyrus to contralateral acoustic stimulation of the awake squirrel monkey. Four response types are distinguished based primarily on the nature of the inhibitory responses elicited by two-tone stimuli, and secondarily on such criteria as the patterns of response to single tones and noise stimuli, thresholds, and spontaneous activity levels. Type A units display strong lateral inhibitory influences which may extend up to 2 octaves on either side, or both sides, of the BF. They are sharply tuned at all intensities and exhibit sustained response to single tone stimuli at the BF. The units have nonmonotonic rate-level functions, and show little or no response to broad band noise. Type A units have low spontaneous rates (less than 3 spikes/s) and relatively high thresholds (greater than or equal to 30 dB SPL). Type B units are characterized by relatively high spontaneous rates of activity (greater than 20 spikes/s) and inhibitory responses to single tone stimuli. Broad band noise may evoke a strong excitatory response. Type C units summate the responses to the two-tone stimulus, and show little or no inhibitory influences. They have V-shaped tuning curves, monotonic rate-level functions, low thresholds (less than or equal to 30 dB SPL), moderate spontaneous rates (ca. 10 spikes/s), and a strong and sustained response to noise and single tone stimuli. Type D units show 'temporal inhibition' to two-tone stimuli, in that an excitatory response to the first tone suppresses (adapts or inhibits) the response to the second tone. These units generally have moderate to broad frequency tuning and phasic responses to single tone stimuli. Histological examination of electrode tracks suggests that Type A units are restricted to A1 (and possibly the rostral field) while other types are distributed over all auditory fields.


Subject(s)
Auditory Cortex/physiology , Neural Inhibition , Acoustic Stimulation , Action Potentials , Animals , Functional Laterality/physiology , Models, Neurological , Neurons/physiology , Saimiri , Wakefulness/physiology