A problem with auditory processing? (1/574)

Recent studies have found associations between auditory processing deficits and language disorders such as dyslexia, but whether the former cause the latter, or simply co-occur with them, is still an open question.

A possible neurophysiological basis of the octave enlargement effect. (2/574)

Although the physical octave is defined as a simple ratio of 2:1, listeners prefer slightly greater octave ratios. Ohgushi [J. Acoust. Soc. Am. 73, 1694-1700 (1983)] suggested that a temporal model for octave matching would predict this octave enlargement effect because, in response to pure tones, auditory-nerve interspike intervals are slightly larger than the stimulus period. To test Ohgushi's hypothesis, auditory-nerve single-unit responses to pure-tone stimuli were collected from Dial-anesthetized cats. It was found that although interspike interval distributions show clear phase-locking to the stimulus, intervals systematically deviate from integer multiples of the stimulus period. Due to refractory effects, intervals shorter than 5 msec are slightly larger than the stimulus period, and the deviation is greatest for the smallest intervals. In contrast, first-order intervals are smaller than the stimulus period for stimulus frequencies below 500 Hz; this deviation is shown to be the combined effect of phase-locking and multiple spikes within one stimulus period. A model for octave matching was implemented that compares frequency estimates of two tones based on their interspike interval distributions. The model quantitatively predicts the octave enlargement effect. These results are consistent with the idea that musical pitch is derived from auditory-nerve interspike interval distributions.
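For readers who want to see the shape of such an interval-based comparison, here is a minimal Python sketch. The spike generator (Bernoulli firing per cycle with Gaussian phase jitter and a fixed absolute refractory period), all parameter values, and the function names are illustrative assumptions rather than the model described in the study; the sketch only shows the comparison step, in which each tone's period is estimated from its interval distribution and the physical ratio that brings the two estimates into a 2:1 relation is taken as the subjective octave.

```python
"""Toy sketch of an interval-based octave-matching comparison (not the
published model): estimate each tone's period from its interspike-interval
distribution, then search for the physical ratio giving a 2:1 relation."""
import numpy as np

RNG = np.random.default_rng(0)

def spike_train(freq_hz, dur_s=2.0, p_fire=0.3, refrac_s=1e-3):
    """Phase-locked spike times: at most one spike per stimulus cycle,
    dropped if it falls inside the refractory period of the previous spike."""
    period = 1.0 / freq_hz
    times, t_last = [], -np.inf
    for cycle_start in np.arange(0.0, dur_s, period):
        if RNG.random() < p_fire:
            # spikes cluster around a preferred phase within the cycle
            t = cycle_start + RNG.normal(0.15 * period, 0.05 * period)
            if t - t_last >= refrac_s:
                times.append(t)
                t_last = t
    return np.asarray(times)

def estimated_period(freq_hz):
    """Mean first-order interval falling near one stimulus period."""
    isi = np.diff(spike_train(freq_hz))
    period = 1.0 / freq_hz
    near = isi[(isi > 0.5 * period) & (isi < 1.5 * period)]
    return near.mean() if near.size else np.nan

def matched_octave_ratio(f_low):
    """Ratio f_high / f_low at which the interval-based period estimates
    stand in an exact 2:1 relation (the sketch's 'subjective octave')."""
    p_low = estimated_period(f_low)
    ratios = np.linspace(1.90, 2.15, 101)
    errors = [abs(p_low / estimated_period(f_low * r) - 2.0) for r in ratios]
    return ratios[int(np.nanargmin(errors))]

if __name__ == "__main__":
    for f in (200.0, 300.0, 400.0):
        print(f"f_low = {f:4.0f} Hz -> matched ratio ~ {matched_octave_ratio(f):.3f}")
```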

Auditory cortical responses to the interactive effects of interaural intensity disparities and frequency. (3/574)

Under natural conditions, stimuli reaching the two ears contain multiple acoustic components; a stimulus containing only one component (e.g. a pure-tone burst) is rarely encountered outside the laboratory. In sound localization, for example, the simultaneous presence of multiple cues (spectral content, level, phase, etc.) provides the listener with more information and thereby helps to reduce errors in locating the sound source. The present study was designed to explore the relationship between two acoustic parameters: stimulus frequency and interaural intensity disparities (IIDs). By varying both stimulus frequency and IIDs for each cell, we hoped to gain insight into how multiple cues are processed. To this end, we examined the responses of neurons in cat primary auditory cortex (AI) to determine whether their sensitivity to IIDs changed as a function of stimulus frequency. IIDs ranging from +30 to -30 dB were presented at different frequencies (frequency was always the same in the two ears). We found that approximately half of the units examined exhibited responses to IIDs that varied as a function of stimulus frequency (i.e. displayed some form of IID x Freq dependency). The remaining units displayed IID responses that were not clearly related to stimulus frequency.
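As a rough illustration of what classifying a unit as "IID x Freq dependent" could look like, the sketch below builds a synthetic frequency-by-IID spike-count matrix and flags a unit whose normalized IID functions differ across frequencies. The data, the correlation criterion, and the threshold are assumptions made for illustration only; they are not the analysis used in the study.

```python
"""Illustrative (not the study's) test for frequency-dependent IID tuning:
compare normalized IID functions measured at several tone frequencies."""
import numpy as np

rng = np.random.default_rng(1)

iids_db = np.arange(-30, 31, 10)               # interaural intensity disparities (dB)
freqs_khz = np.array([5.0, 10.0, 15.0, 20.0])  # tone frequencies tested (kHz)

# Synthetic spike counts: rows = frequency, columns = IID.
counts = rng.poisson(lam=[[20, 18, 15, 10,  6,  3,  1],
                          [ 2,  5, 10, 15, 18, 20, 19],
                          [12, 12, 11, 12, 13, 12, 11],
                          [ 1,  3,  8, 14, 19, 21, 20]])

def iid_functions(count_matrix):
    """Each frequency's IID function, normalized to its own maximum."""
    m = count_matrix.astype(float)
    return m / m.max(axis=1, keepdims=True)

def frequency_dependent(count_matrix, r_threshold=0.8):
    """Flag the unit if any pair of frequencies yields IID functions that
    correlate below the threshold (an arbitrary illustrative criterion)."""
    funcs = iid_functions(count_matrix)
    n = funcs.shape[0]
    return any(np.corrcoef(funcs[i], funcs[j])[0, 1] < r_threshold
               for i in range(n) for j in range(i + 1, n))

for f, row in zip(freqs_khz, counts):
    best_iid = int(iids_db[np.argmax(row)])
    print(f"{f:4.1f} kHz: strongest response at IID = {best_iid:+d} dB")
print("IID x Freq dependent:", frequency_dependent(counts))
```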

Temporal coding of periodicity pitch in the auditory system: an overview. (4/574)

This paper outlines a taxonomy of neural pulse codes and reviews neurophysiological evidence for interspike interval-based representations of pitch and timbre in the auditory nerve and cochlear nucleus. Neural pulse codes can be divided into channel-based codes, temporal-pattern codes, and time-of-arrival codes. The timing of discharges in auditory nerve fibers reflects the time structure of acoustic waveforms, so that the interspike intervals produced precisely convey information about stimulus periodicities. Population-wide interspike interval distributions are constructed by summing intervals from the observed responses of many single Type I auditory nerve fibers. Features in such distributions correspond closely with the pitches heard by human listeners. The most common all-order interval present in the auditory nerve array almost invariably corresponds to the pitch frequency, whereas the relative fraction of pitch-related intervals among all others qualitatively corresponds to the strength of the pitch. Consequently, many diverse aspects of pitch perception are explained in terms of such temporal representations. Similar stimulus-driven temporal discharge patterns are observed in major neuronal populations of the cochlear nucleus. Population-interval distributions constitute an alternative, time-domain strategy for representing sensory information that complements spatially organized sensory maps. Similar autocorrelation-like representations are possible in other sensory systems in which neural discharges are time-locked to stimulus waveforms.
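A toy sketch of the pooled all-order interval representation described here, in Python. The synthetic fibers, the 160 Hz complex tone, and all parameter values are illustrative assumptions and are not taken from the reviewed studies; the point is only that the most common all-order interval in the pooled distribution falls at the pitch period.

```python
"""Toy sketch of a pooled all-order interspike-interval code for pitch.
Spike trains are synthetic stand-ins for auditory-nerve fibers phase-locked
to resolved harmonics of a 160 Hz complex tone."""
import numpy as np

rng = np.random.default_rng(0)
F0 = 160.0                 # fundamental frequency of the "stimulus" (Hz)
HARMONICS = (3, 4, 5)      # harmonics driving different fiber groups
DUR = 0.5                  # stimulus duration (s)
BIN = 1e-4                 # 0.1 ms interval-histogram bins

def fiber_spikes(freq_hz, p_fire=0.5, jitter=3e-5):
    """Spike times phase-locked to one harmonic, at most one per cycle."""
    cycles = np.arange(0.0, DUR, 1.0 / freq_hz)
    fired = cycles[rng.random(cycles.size) < p_fire]
    return np.sort(fired + rng.normal(0.0, jitter, fired.size))

def all_order_intervals(spikes, max_interval=0.01):
    """All positive spike-time differences within one fiber, up to 10 ms."""
    diffs = spikes[None, :] - spikes[:, None]
    return diffs[(diffs > 0) & (diffs <= max_interval)]

# Pool intervals across a small "population" of fibers.
pooled = np.concatenate([
    all_order_intervals(fiber_spikes(h * F0))
    for h in HARMONICS
    for _ in range(30)     # 30 fibers per harmonic
])

hist, edges = np.histogram(pooled, bins=np.arange(0.0, 0.0101, BIN))
centers = 0.5 * (edges[:-1] + edges[1:])
peak = centers[np.argmax(hist)]
print(f"most common all-order interval: {peak * 1e3:.2f} ms "
      f"-> pitch estimate {1.0 / peak:.0f} Hz (true F0 = {F0:.0f} Hz)")
```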

Neural responses to overlapping FM sounds in the inferior colliculus of echolocating bats. (5/574)

The big brown bat, Eptesicus fuscus, navigates and hunts prey with echolocation, a modality that uses the temporal and spectral differences between vocalizations and echoes from objects to build spatial images. Closely spaced surfaces ("glints") return overlapping echoes if two echoes arrive within the integration time of the cochlea (approximately 300-400 μs). The overlap results in spectral interference that provides information about target structure or texture. Previous studies have shown that two acoustic events separated in time by less than approximately 500 μs evoke only a single response from neural elements in the auditory brain stem. How does the auditory system encode multiple echoes in time when only a single response is available? We presented paired FM stimuli with delay separations from 0 to 24 μs to big brown bats and recorded local field potentials (LFPs) and single-unit responses from the inferior colliculus (IC). These stimuli have one or two interference notches positioned in their spectrum as a function of two-glint separation. For the majority of single units, response counts decreased for two-glint separations at which the resulting FM signal had a spectral notch positioned at the cell's best frequency (BF). The smallest two-glint separation that reliably evoked a decrease in spike count was 6 μs. In addition, first-spike latency increased for two-glint stimuli with notches positioned near BF. The N4 potential of averaged LFPs showed a decrease in amplitude for two-glint separations that placed a spectral notch near the BF of the recording site. Derived LFPs were computed by subtracting a common-mode signal from each LFP evoked by the two-glint FM stimuli. The derived LFP records show clear changes in both amplitude and latency as a function of two-glint separation. Taken together with the single-unit data, these observations suggest that both response amplitude and latency can carry information about two-glint separation in the auditory system of E. fuscus.
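The dependence of notch position on two-glint separation follows directly from adding a delayed copy of the echo: the echo-pair spectrum is the single-echo spectrum scaled by |1 + exp(-i*2*pi*f*dt)|, which has notches at f = (2k + 1) / (2*dt). The sketch below computes which notches fall inside an assumed 20-100 kHz FM band for a few separations; the band edges and the example best frequency are assumptions for illustration, not values from the paper.

```python
"""Where two-glint interference notches fall for a given echo delay dt:
the summed echo pair has spectral gain |1 + exp(-i*2*pi*f*dt)|, with
notches at f = (2k + 1) / (2*dt)."""
import numpy as np

def notch_frequencies(dt_s, f_lo=20e3, f_hi=100e3):
    """Interference-notch frequencies (Hz) inside an assumed FM band."""
    if dt_s <= 0:
        return np.array([])
    k = np.arange(0, 50)
    notches = (2 * k + 1) / (2.0 * dt_s)
    return notches[(notches >= f_lo) & (notches <= f_hi)]

def two_glint_gain(freq_hz, dt_s):
    """Magnitude of the summed echo pair relative to a single echo."""
    return np.abs(1.0 + np.exp(-2j * np.pi * freq_hz * dt_s))

if __name__ == "__main__":
    for dt_us in (6, 12, 24):
        notches_khz = notch_frequencies(dt_us * 1e-6) / 1e3
        print(f"dt = {dt_us:2d} us -> notches at "
              + ", ".join(f"{n:.0f} kHz" for n in notches_khz))
    # A 6-us separation puts its single in-band notch near 83 kHz, so a unit
    # with a best frequency near 83 kHz would see almost no energy there.
    bf = 83.3e3   # hypothetical best frequency near the 6-us notch
    print(f"relative gain at {bf / 1e3:.1f} kHz for dt = 6 us: "
          f"{two_glint_gain(bf, 6e-6):.2f}")
```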

Reorganization of the frequency map of the auditory cortex evoked by cortical electrical stimulation in the big brown bat. (6/574)

In the search phase of echolocation, big brown bats, Eptesicus fuscus, emit biosonar pulses at a rate of 10/s and listen to echoes. When a short acoustic stimulus was repetitively delivered at this rate, reorganization of the frequency map of the primary auditory cortex took place at and around the neurons tuned to the frequency of the acoustic stimulus. This reorganization became larger when the acoustic stimulus was paired with electrical stimulation of the cortical neurons tuned to the frequency of the acoustic stimulus. The reorganization was mainly due to a decrease in the best frequencies of neurons whose best frequencies were slightly higher than those of the electrically stimulated cortical neurons or the frequency of the acoustic stimulus. Neurons with best frequencies slightly lower than those of the acoustically and/or electrically stimulated neurons slightly increased their best frequencies. These changes resulted in an over-representation of the repetitively delivered acoustic stimulus. Because this over-representation resulted in under-representation of other frequencies, the changes increased the contrast of the neural representation of the acoustic stimulus. Best-frequency shifts for over-representation were associated with sharpening of the frequency-tuning curves in 25% of the neurons studied. Because of the increases in both the contrast of the neural representation and the sharpness of tuning, the over-representation of the acoustic stimulus is accompanied by an improvement in the analysis of the acoustic stimulus.

Interdependence of spatial and temporal coding in the auditory midbrain. (7/574)

To date, most physiological studies of binaural auditory processing have addressed the topic almost exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than sound localization alone. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on the filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in the strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as on synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and stimulation that was more intense at the ipsilateral ear (IID = -20, -30 dB). In 32% of neurons, the range of modulation frequencies to which the neurons responded changed considerably between monaural and binaural (IID = 0 dB) stimulation. Moreover, in approximately 50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = -20 dB) than with equal stimulation at both ears (IID = 0 dB). In approximately 10% of the neurons, synchronization differed when comparing different binaural cues. Blockade of the GABAergic or glycinergic inputs to the recorded cells revealed that inhibitory inputs were at least partially responsible for the observed changes in SFM filtering. In 25% of the neurons, drug application abolished those changes. Experiments using electronically introduced interaural time differences showed that the strength of ipsilaterally evoked inhibition increased with increasing modulation frequency in one third of the cells tested. Thus, glycinergic and GABAergic inhibition is at least one source of the observed interdependence between the temporal structure of a sound and spatial cues.
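The two filter criteria named here can be made concrete with a short sketch: a rate-based modulation transfer function whose upper 50% cutoff is found by linear interpolation on its high-frequency flank, and vector strength as a measure of synchronization to the sound envelope. The spike counts and spike times below are synthetic and the interpolation detail is an assumption; only the metric definitions are meant to match the text.

```python
"""Sketch of two SFM filter metrics: a 50% cutoff of a rate-based
modulation transfer function (rMTF) and vector strength (synchronization)."""
import numpy as np

mod_freqs = np.array([10., 20., 50., 100., 200., 400., 800.])  # Hz
# Synthetic mean spike counts per SFM presentation at each modulation frequency.
rates = np.array([4.0, 12.0, 30.0, 28.0, 15.0, 6.0, 2.0])

def rmtf_cutoff_50(freqs, rates):
    """Modulation frequency at which the rate falls to 50% of its peak,
    linearly interpolated on the high-frequency flank of the rMTF."""
    half = 0.5 * rates.max()
    i_peak = int(np.argmax(rates))
    for i in range(i_peak, len(rates) - 1):
        if rates[i] >= half > rates[i + 1]:
            frac = (rates[i] - half) / (rates[i] - rates[i + 1])
            return freqs[i] + frac * (freqs[i + 1] - freqs[i])
    return freqs[-1]

def vector_strength(spike_times_s, mod_freq_hz):
    """Synchronization of spikes to the modulation cycle (0 = none, 1 = perfect)."""
    phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

print(f"rMTF 50% cutoff: {rmtf_cutoff_50(mod_freqs, rates):.0f} Hz")

# Toy spike train locked to a 100 Hz modulation envelope (1 ms jitter).
rng = np.random.default_rng(2)
spikes = np.arange(0.0, 1.0, 0.01) + rng.normal(0.0, 1e-3, 100)
print(f"vector strength at 100 Hz: {vector_strength(spikes, 100.0):.2f}")
```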

Frequency and intensity response properties of single neurons in the auditory cortex of the behaving macaque monkey. (8/574)

Response properties of auditory cortical neurons measured in anesthetized preparations have provided important information on the physiological differences between neurons in different auditory cortical areas. Studies in the awake animal, however, have been much less common, and the physiological differences noted may reflect differences in the influence of anesthetics on neurons in different cortical areas. Because the behaving monkey is gaining popularity as an animal model in studies of auditory cortical function, it has become critical to define the physiological response properties of auditory cortical neurons in this preparation. This study documents the response properties of single cortical neurons in the primary and surrounding auditory cortical fields in monkeys performing an auditory discrimination task. We found that neurons with the shortest latencies were located in the primary auditory cortex (AI). Neurons in the rostral field had the longest latencies and the narrowest intensity and frequency tuning, neurons in the caudomedial field had the broadest frequency tuning, and neurons in the lateral field had the most monotonic rate/level functions of the four cortical areas studied. These trends were revealed by comparing response properties across the population of studied neurons, but within each cortical area there was considerable variability between neurons for every response parameter other than characteristic frequency (CF). Although neuronal CFs showed a systematic spatial organization across AI, no such systematic organization was apparent for any other response property in AI or the adjacent cortical areas. The results of this study indicate that there are physiological differences between auditory cortical fields in the behaving monkey that are consistent with previous studies in the anesthetized animal, and they provide insights into the functional role of these cortical areas in processing acoustic information.
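As a pointer to what "monotonic rate/level functions" means in practice, the sketch below computes a simple monotonicity index: the response at the highest sound level divided by the peak response. This index is a common convention rather than necessarily the metric used in this study, and the example rate/level functions are invented.

```python
"""Illustrative monotonicity index for rate/level functions
(1.0 = fully monotonic; values well below 1 = non-monotonic)."""
import numpy as np

def monotonicity_index(rates):
    """Firing rate at the highest level divided by the peak firing rate."""
    rates = np.asarray(rates, dtype=float)
    return rates[-1] / rates.max()

# Invented rate/level functions sampled at 0-80 dB SPL in 10 dB steps.
monotonic_unit    = [2, 5, 12, 25, 40, 52, 60, 63, 65]   # keeps growing
nonmonotonic_unit = [2, 8, 30, 55, 60, 45, 25, 12, 6]    # peaks, then falls

for name, rates in [("monotonic", monotonic_unit),
                    ("non-monotonic", nonmonotonic_unit)]:
    print(f"{name:>13} unit: index = {monotonicity_index(rates):.2f}")
```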