Coding of sound intensity in the chick cochlear nerve.
(65/910)
Tuning curves, spontaneous activity, and rate-intensity (RI) functions were obtained from units in the chick cochlear nerve. The characteristic frequency (CF) was determined from each tuning curve. The shape of each RI function was subjectively evaluated and assigned to one of four RI types. The breakpoint, discharge rate at the highest SPLs, and slopes of the primary and secondary segments were quantified for each function. The CF and RI type were then related to these variables. A new RI function was observed in which the discharge activity in the secondary segment diminished as stimulus level increased above the breakpoint. This function was called a "sloping-down" type. In 959 units, saturating, sloping-up, sloping-down, and straight RI types were identified in 39.2, 35.5, 12.6, and 12.7% of the sample, respectively. The slope of the primary segment was nearly the same in each of the four types and averaged 5.48 spikes·s⁻¹·dB⁻¹ across all units. The slopes of the secondary segments formed four groupings when segregated by RI type based on the subjective assignments and averaged 0.03, 1.22, -0.90, and 3.95 spikes·s⁻¹·dB⁻¹ in the saturating, sloping-up, sloping-down, and straight types, respectively. The data describing the secondary segments of all units were fit with a multi-compartment polynomial and showed a continuous distribution that segregated, with some overlap, into the different RI categories. The proportion of RI types, as well as the secondary and primary slopes, was approximately constant across CFs. In addition, the other parameters that define the four types appeared, for the most part, to be homogeneously distributed across the frequency axis of the chick inner ear. Finally, a comparison of RI functions having a common CF suggested that the compressive nonlinearity that determines RI type may be a phenomenon localized to individual hair cells in the bird ear. (+info)
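The four-way classification above can be sketched as a simple rule on the secondary-segment slope. This is an illustrative sketch only, not the authors' procedure: the study assigned types subjectively and then fit the slope distribution, and the category boundaries below are assumptions placed midway between the reported mean slopes.

```python
def classify_ri_type(secondary_slope):
    """Assign an RI type from the secondary-segment slope (spikes/s/dB).

    Reported mean slopes per type: saturating 0.03, sloping-up 1.22,
    sloping-down -0.90, straight 3.95 spikes/s/dB. The cutoffs below
    are hypothetical midpoints between those means, not study values.
    """
    if secondary_slope < -0.4:      # clearly negative -> sloping-down
        return "sloping-down"
    elif secondary_slope < 0.6:     # near zero -> saturating
        return "saturating"
    elif secondary_slope < 2.6:     # moderately positive -> sloping-up
        return "sloping-up"
    else:                           # steep, approaching the primary slope
        return "straight"

# Classify units at the reported mean slope of each type
for slope in (0.03, 1.22, -0.90, 3.95):
    print(slope, "->", classify_ri_type(slope))
```

Because the study found a continuous slope distribution with overlap between categories, any hard cutoff like this is necessarily approximate near the boundaries.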
Changes of AI receptive fields with sound density.
(66/910)
Primates engage in auditory behaviors under a broad range of signal-to-noise conditions. In this study, optimal linear receptive fields were measured in alert primate primary auditory cortex (A1) in response to stimuli that vary in spectrotemporal density. As density increased, A1 excitatory receptive fields systematically changed. Receptive field sensitivity, expressed as the expected change in firing rate after a tone pip onset, decreased by an order of magnitude. Spectral selectivity more than doubled. Inhibitory subfields, which were rarely recorded at low sound densities, emerged at higher sound densities. The ratio of excitatory to inhibitory population strength changed from 14.4:1 to 1.4:1. At low sound densities, the sound associated with the evocation of an action potential from an A1 neuron was broad in spectrum and time. At high sound densities, a spike-evoking sound was more likely to be a spectral or temporal edge and was narrower in time and frequency range. Receptive fields were used to predict responses to a novel high-noise-density stimulus. The predictions were highly correlated with the actual responses to the 2-s complex sound excerpt. The structure of prediction failures revealed that neurons with prominent inhibitory fields had relatively poor linear predictions. Further, the finding that stochastic variance is limiting in prediction even after averaging 150 repetitions means that high-fidelity representations of simple sounds in A1 must be distributed over at least hundreds of neurons. Auditory context alters A1 responses across multiple parameter spaces; this presents a challenge for reconstructing neural codes. (+info)
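The linear prediction described above, in which a receptive field is used to predict responses to a novel stimulus, can be sketched generically. This is an assumed, textbook-style linear spectrotemporal receptive field (STRF) model, not the study's actual analysis code; the array shapes and data are invented for illustration.

```python
import numpy as np

def predict_rate(strf, spectrogram):
    """Linear STRF prediction: the predicted rate at time t is the sum over
    frequency channels f and time lags of strf[f, lag] * spectrogram[f, t - lag],
    rectified at zero because firing rates cannot be negative."""
    n_freq, n_lag = strf.shape
    n_time = spectrogram.shape[1]
    rates = np.zeros(n_time)
    for t in range(n_time):
        for lag in range(min(n_lag, t + 1)):
            rates[t] += np.dot(strf[:, lag], spectrogram[:, t - lag])
    return np.maximum(rates, 0.0)

# Synthetic example: 8 frequency channels, 5 time lags, 100 time bins
rng = np.random.default_rng(0)
strf = rng.standard_normal((8, 5))
spec = rng.random((8, 100))
rates = predict_rate(strf, spec)
print(rates.shape)
```

A prediction of this kind is purely linear, which is consistent with the study's observation that neurons with prominent inhibitory subfields were predicted relatively poorly: their responses presumably depend on nonlinear interactions the model cannot capture.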
Auditory neuroscience: the salience of looming sounds.
(67/910)
Sounds that move towards us have a greater biological salience than those that move away. Recent studies in human and non-human primates have demonstrated a perceptual and behavioural priority for such looming sounds that is also reflected in an asymmetric pattern of cortical activation. (+info)
Neural resources for processing language and environmental sounds: evidence from aphasia.
(68/910)
Although aphasia is often characterized as a selective impairment in language function, left hemisphere lesions may cause impairments in semantic processing of auditory information, not only in verbal but also in nonverbal domains. We assessed the 'online' relationship between verbal and nonverbal auditory processing by examining the ability of 30 left hemisphere-damaged aphasic patients to match environmental sounds and linguistic phrases to corresponding pictures. The verbal and nonverbal task components were matched carefully through a norming study; 21 age-matched controls and five right hemisphere-damaged patients were also tested to provide further reference points. We found that, while the aphasic groups were impaired relative to normal controls, they were impaired to the same extent in both domains, with accuracy and reaction time for verbal and nonverbal trials revealing unusually high correlations (r = 0.74 for accuracy, r = 0.95 for reaction time). Severely aphasic patients tended to perform worse in both domains, but lesion size did not correlate with performance. Lesion overlay analysis indicated that damage to posterior regions in the left middle and superior temporal gyri and to the inferior parietal lobe was a predictor of deficits in processing for both speech and environmental sounds. The lesion mapping and further statistical assessments reliably revealed a posterior superior temporal region (Wernicke's area, traditionally considered a language-specific region) as being differentially more important for processing nonverbal sounds compared with verbal sounds. These results suggest that, in most cases, processing of meaningful verbal and nonverbal auditory information breaks down together in stroke and that subsequent recovery of function applies to both domains. This suggests that language shares neural resources with those used for processing information in other domains. (+info)
Turbulent blood flow in humans: its primary role in the production of ejection murmurs.
(69/910)
To clarify the postulate that turbulence may produce ejection murmurs, point velocity and sound were measured in the ascending aorta of 13 subjects: six with normal aortic valves, six with aortic valvular disease, and one with a Bjork-Shiley prosthetic aortic valve. Velocity was measured with a catheter-tip hot film anemometer probe, and sound was measured with a catheter-tip micromanometer. Ejection murmurs detected intra-arterially were always found to be associated with turbulent or highly disturbed flow. Conversely, in the absence of intra-arterial sound during ejection, only minor disturbances of flow were detected. A linear relation between the sound energy density and turbulent energy density was shown (r = 0.92) and a linear relation between the acoustic power output (sound intensity) and turbulent power supply (r = 0.87) also was shown. Studies in vitro of sound and point velocity distal to a porcine valve inserted within a cast of the aorta, which permitted precise centering of the transducers along the axis of flow, confirmed these observations. When the power generated by the turbulence exceeded 3 ergs/sec per cm2, the murmurs were audible at the chest wall. The clinical gradation of the intensity of the murmurs increased as the power of turbulence increased. In conclusion, in this study we have demonstrated a clear association between turbulent blood flow and systolic ejection murmurs. (+info)
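The linear relations reported above (r = 0.92 between sound energy density and turbulent energy density, r = 0.87 between acoustic power output and turbulent power supply) are ordinary correlation and regression analyses. As a minimal sketch of that kind of analysis, the following uses invented synthetic data, not the study's measurements:

```python
import numpy as np

# Hypothetical paired measurements in arbitrary units (NOT study data):
# a roughly linear relation between turbulent energy density and the
# sound energy density measured intra-arterially.
turbulent_energy = np.array([1.0, 2.5, 4.0, 6.0, 8.5, 10.0])
sound_energy = 0.8 * turbulent_energy + np.array(
    [0.1, -0.2, 0.15, -0.1, 0.05, 0.0])  # small measurement noise

# Pearson correlation coefficient between the two energy densities
r = np.corrcoef(turbulent_energy, sound_energy)[0, 1]

# Slope and intercept of the fitted linear relation
slope, intercept = np.polyfit(turbulent_energy, sound_energy, 1)

print(f"r = {r:.3f}, slope = {slope:.3f}")
```

On the study's real data, a correlation this strong is what supports the conclusion that turbulence, rather than some other flow disturbance, is the primary source of ejection murmurs.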
Vestibular activation by bone conducted sound.
(70/910)
OBJECTIVE: To examine the properties and potential clinical uses of myogenic potentials to bone conducted sound. METHODS: Myogenic potentials were recorded from normal volunteers, using bone conducted tone bursts of 7 ms duration and 250-2000 Hz frequencies delivered over the mastoid processes by a B-71 clinical bone vibrator. Biphasic positive-negative (p1n1) responses were recorded from both sternocleidomastoid (SCM) muscles using averaged unrectified EMG. The best location for stimulus delivery, optimum stimulus frequency, stimulus thresholds, and the effect of aging on evoked response amplitudes and thresholds were systematically examined. Subjects with specific lesions were studied. Vestibular evoked myogenic potentials (VEMP) to air conducted 0.1 ms clicks, 7 ms/250-2000 Hz tones, and forehead taps were measured for comparison. RESULTS: Bone conducted sound evoked short latency p1n1 responses in both SCM muscles. Ipsilateral responses occurred earlier and were usually larger. Mean (SD) p1 and n1 latencies were 13.6 (1.8) and 22.3 (1.2) ms ipsilaterally and 14.9 (2.1) and 23.7 (2.7) ms contralaterally. Stimuli of 250 Hz delivered over the mastoid process, posterosuperior to the external acoustic meatus, yielded the largest amplitude responses. Like VEMP in response to air conducted clicks and tones, p1n1 responses were absent ipsilaterally in subjects with selective vestibular neurectomy and preserved in those with severe sensorineural hearing loss. However, p1n1 responses were preserved in conductive hearing loss, whereas VEMP to air conducted sound were abolished or attenuated. Bone conducted response thresholds were 97.5 (3.9) dB SPL/30.5 dB HL, significantly lower than thresholds to air conducted clicks (131.7 (4.9) dB SPL/86.7 dB HL) and tones (114.0 (5.3) dB SPL/106 dB HL). CONCLUSIONS: Bone conducted sound evokes p1n1 responses (bone conducted VEMP) which are a useful measure of vestibular function, especially in the presence of conductive hearing loss. For a given perceptual intensity, bone conducted sound activates the vestibular apparatus more effectively than air conducted sound. (+info)
Temporomandibular disorders, occlusion and orthodontic treatment.
(71/910)
OBJECTIVES: To prospectively and longitudinally study symptoms and signs of temporomandibular disorders (TMD) and occlusal changes in girls with Class II malocclusion receiving orthodontic fixed appliance treatment, in comparison with untreated Class II malocclusion subjects and with normal occlusion subjects. DESIGN: Prospective observational cohort. SUBJECTS: Sixty-five girls with Class II malocclusion who received orthodontic treatment, 58 girls with no treatment, and 60 girls with normal occlusion. METHOD: The girls were examined for symptoms and signs of TMD and re-examined 2 years later. Additional records were taken in the orthodontic group during active treatment and 1 year after treatment. RESULTS: All three groups included subjects with more or less pronounced TMD, which fluctuated individually over the course of the study. In the orthodontic group, muscular signs of TMD were significantly less prevalent after treatment. Temporomandibular joint clicking increased in all three groups over the 2 years, but was less common in the normal occlusion group. The normal occlusion group also had a lower overall prevalence of TMD than the orthodontic and untreated Class II groups at both examinations. Functional occlusal interferences decreased in the orthodontic group, but remained the same in the other groups over the 2 years. CONCLUSIONS: (i) Orthodontic treatment, with or without extractions, did not increase the prevalence or worsen pre-treatment symptoms and signs of TMD. (ii) Individually, TMD fluctuated substantially over time with no predictable pattern; on a group basis, however, the type of occlusion may be a contributing factor in the development of TMD. (iii) The large fluctuation of TMD over time suggests a conservative approach when stomatognathic treatment is considered in children and adolescents. (+info)
Noise and the classical musician.
(72/910)
OBJECTIVES: To test the hypothesis that noise exposure may cause hearing loss in classical musicians. DESIGN: Comparison of hearing levels between two risk groups identified during the study by measuring sound levels. SETTING: Symphony orchestra and occupational health department in the West Midlands. MAIN OUTCOME MEASURES: Hearing level as measured by clinical pure tone audiometry. RESULTS: Trumpet and piccolo players received noise doses of 160% and 124%, respectively, over mean levels during part of the study. Comparison of the hearing levels of 18 woodwind and brass musicians with those of 18 string musicians matched for age and sex did not show a significant difference, the mean difference in hearing levels at the high audiometric frequencies (2, 4, and 8 kHz) being 1.02 dB (95% confidence interval -2.39 to 4.43). CONCLUSIONS: This study showed that there is a potential for occupational hearing loss in classical orchestral musicians. (+info)