Desynchronizing responses to correlated noise: A mechanism for binaural masking level differences at the inferior colliculus.

We examined the adequacy of decorrelation of the responses to dichotic noise as an explanation for the binaural masking level difference (BMLD). The responses of 48 low-frequency neurons in the inferior colliculus of anesthetized guinea pigs were recorded to binaurally presented noise with various degrees of interaural correlation and to interaurally correlated noise in the presence of 500-Hz tones in either zero or pi interaural phase. In response to fully correlated noise, neurons' responses were modulated with interaural delay, showing quasiperiodic noise delay functions (NDFs) with a central peak and side peaks, separated by intervals roughly equivalent to the period of the neuron's best frequency. For noise with zero interaural correlation (independent noises presented to each ear), neurons were insensitive to the interaural delay. Their NDFs were unmodulated, with the majority showing a level of activity approximately equal to the mean of the peaks and troughs of the NDF obtained with fully correlated noise. Partial decorrelation of the noise resulted in NDFs that were, in general, intermediate between those for fully correlated and fully decorrelated noise. Presenting 500-Hz tones simultaneously with fully correlated noise also had the effect of demodulating the NDFs. In the case of tones with zero interaural phase, this demodulation appeared to be a saturation process, raising the discharge at all noise delays to that at the largest peak in the NDF. In the majority of neurons, presenting the tones in pi phase had an effect on the NDFs similar to that of decorrelating the noise: the response was demodulated toward the mean of the peaks and troughs of the NDF. Thus the effect of added tones on the responses of delay-sensitive inferior colliculus neurons to noise could be accounted for by a desynchronizing effect. This result is entirely consistent with cross-correlation models of the BMLD.
However, in some neurons, the effects of an added tone on the NDF appeared more extreme than the effect of decorrelating the noise, suggesting the possibility of additional inhibitory influences.
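The cross-correlation account invoked above can be illustrated with a toy computation (a sketch only; the sampling rate, correlation value, and mixing construction are assumptions, not the authors' stimuli). Partially correlated binaural noise is built by mixing a shared and an independent noise source, and the resulting noise delay function is the normalized cross-correlation as a function of interaural delay:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 20000                      # sampling rate in Hz (assumed)
n = fs                          # 1 s of noise
rho = 0.5                       # target interaural correlation

# Partially correlated noise pair: right ear mixes shared and independent noise
shared = rng.standard_normal(n)
indep = rng.standard_normal(n)
left = shared
right = rho * shared + np.sqrt(1 - rho**2) * indep

def ndf(left, right, max_delay):
    """Normalized cross-correlation versus interaural delay (in samples)."""
    delays = list(range(-max_delay, max_delay + 1))
    out = []
    for d in delays:
        if d >= 0:
            a, b = left[d:], right[:n - d]
        else:
            a, b = left[:n + d], right[-d:]
        out.append(np.corrcoef(a, b)[0, 1])
    return delays, out

delays, corr = ndf(left, right, 40)
# The peak near zero delay approaches rho; off-peak values hover near zero
print(delays[40], round(corr[40], 2))
```

For broadband white noise this function has a single peak; the quasiperiodic side peaks seen in the recorded NDFs additionally reflect the narrowband filtering around each neuron's best frequency.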

A pilot study on the human body vibration induced by low frequency noise.

To understand the basic characteristics of human body vibration induced by low frequency noise, and to use such vibration to evaluate effects on health, we designed a measuring method based on a miniature accelerometer and carried out preliminary measurements. Vibration was measured on the chest and abdomen of 6 male subjects exposed to pure tones in the frequency range of 20 to 50 Hz; the method proved sensitive enough to detect vibration on the body surface. Both the level of the vibration and its rate of increase with frequency were higher on the chest than on the abdomen. This difference was considered to be due to the mechanical structure of the human body. The measured noise-induced vibration also correlated negatively with the subject's BMI (Body Mass Index), which suggested that the health effects of low frequency noise depend not only on the mechanical structure but also on the physical constitution of the human body.
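The reported negative correlation with BMI can be illustrated with a small, hypothetical calculation (all six subjects' values below are invented for the example; BMI = weight / height squared, and the correlation is the Pearson coefficient):

```python
import numpy as np

# Hypothetical data for 6 subjects (values invented for illustration)
weight_kg = np.array([58, 65, 72, 80, 88, 95], dtype=float)
height_m = np.array([1.70, 1.72, 1.75, 1.76, 1.78, 1.80])
vib_level_db = np.array([74, 72, 71, 69, 67, 66], dtype=float)  # body-surface vibration

bmi = weight_kg / height_m ** 2           # Body Mass Index = weight / height^2
r = np.corrcoef(bmi, vib_level_db)[0, 1]  # Pearson correlation coefficient
print(round(r, 2))                        # strongly negative for these data
```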

Inhalation exposure of animals.

Relative advantages and disadvantages and important design criteria for various exposure methods are presented. Five types of exposures are discussed: whole-body chambers, head-only exposures, nose- or mouth-only methods, lung-only exposures, and partial-lung exposures. Design considerations covered include: air cleaning and conditioning; construction materials; losses of exposure materials; evenness of exposure; sampling biases; animal observation and care; noise and vibration control; safe exhausts; chamber loading; reliability; pressure fluctuations; neck seals; masks; animal restraint methods; and animal comfort. Ethical considerations in the use of animals in inhalation experiments are also discussed.

Expiratory time determined by individual anxiety levels in humans.

We have previously found that individual anxiety levels influence respiratory rates under physical load and mental stress (Y. Masaoka and I. Homma. Int. J. Psychophysiol. 27: 153-159, 1997). Building on that study, we here investigated metabolic outputs during such tests and analyzed the respiratory timing relationship between inspiration and expiration, taking individual anxiety levels into account. Disregarding anxiety levels, there were correlations between O2 consumption (VO2) and minute ventilation (VE) and between VO2 and tidal volume in the physical load test, but no correlations were observed in the noxious audio stimulation test. Respiratory patterns under physical load showed a volume-based increase; however, VE increased not only to meet metabolic needs but also in response to individual mental factors, with anxiety contributing to this increase. In the high-anxiety group, the VE-to-VO2 ratio, an index of ventilatory efficiency, increased in both tests. In the high-anxiety group, increases in respiratory rate also contributed to the VE increase, and there were negative correlations between expiratory time and anxiety scores in both tests. In the awake state, higher neural structures may dominate the mechanism of respiratory rhythm generation. We focus on the relationship between expiratory time and anxiety and present diagrams of respiratory output that take individual personality into account.

Neural correlates of gap detection in three auditory cortical fields in the cat.

Minimum detectable gaps in noise in humans are independent of the position of the gap, whereas in cat primary auditory cortex (AI) they are position dependent. The position dependence in other cortical areas is not known and may resolve this contrast. This study presents minimum detectable gap-in-noise values for which single-unit (SU) and multiunit (MU) recordings and local field potentials (LFPs) show an onset response to the noise after the gap. The gap, which varied in duration between 5 and 70 ms, was preceded by a noise burst of either 5 ms (early gap) or 500 ms (late gap) duration. In 10 cats, simultaneous recordings were made with one electrode each in AI, the anterior auditory field (AAF), and secondary auditory cortex (AII). In nine additional cats, two electrodes were inserted in AI and one in AAF. Minimum detectable gaps based on SU, MU, or LFP data in each cortical area were the same. In addition, very similar minimum early-gap values were found in all three areas (means, 36.1-41.7 ms). The minimum late-gap values were also similar in AI and AII (means, 11.1 and 11.7 ms), whereas AAF showed significantly larger minimum late-gap durations (mean 21.5 ms). For intensities >35 dB SPL, distributions of minimum early-gap durations in AAF and AII had modal values at approximately 45 ms. In AI, the distribution was more uniform. Distributions for minimum late-gap duration were skewed toward low values (mode at 5 ms), but high values …
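The measure itself can be stated operationally. A minimal sketch (the detection criterion and all firing rates below are invented for illustration, not taken from the recordings): the minimum detectable gap is the shortest tested gap whose post-gap onset response exceeds a criterion above the ongoing rate.

```python
# Toy operational definition of "minimum detectable gap".
baseline = 10.0                # spikes/s, ongoing rate (invented)
criterion = 2.0 * baseline     # assumed detection criterion

# Post-gap onset rate (spikes/s) for each tested gap duration (ms), invented
onset_rate = {5: 12.0, 10: 14.0, 20: 19.0, 30: 26.0, 40: 35.0, 50: 41.0, 70: 44.0}

detectable = [g for g, r in sorted(onset_rate.items()) if r >= criterion]
min_gap = detectable[0] if detectable else None
print(min_gap)   # -> 30
```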

Comparison of four methods for assessing airway sealing pressure with the laryngeal mask airway in adult patients.

We have compared four tests for assessing airway sealing pressure with the laryngeal mask airway (LMA) to test the hypothesis that airway sealing pressure and inter-observer reliability differ between tests. We studied 80 paralysed, anaesthetized adult patients. Four different airway sealing pressure tests were performed in random order on each patient by two observers blinded to each other's measurements: test 1 involved detection of an audible noise; test 2 was detection of end-tidal carbon dioxide in the oral cavity; test 3 was observation of the aneroid manometer dial as the pressure increased to note the airway pressure at which the dial reached stability; and test 4 was detection of an audible noise by neck auscultation. Mean airway sealing pressure ranged from 19.5 to 21.3 cm H2O and intra-class correlation coefficient was 0.95-0.99. Inter-observer reliability of all tests was classed as excellent. The manometric stability test had a higher mean airway sealing pressure (P < 0.0001) and better inter-observer reliability (P < 0.0001) compared with the three other tests. We conclude that for clinical purposes all four tests are excellent, but that the manometric stability test may be more appropriate for researchers comparing airway sealing pressures.
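The inter-observer statistic used here, the intra-class correlation coefficient, can be computed directly from the paired readings. A sketch using the two-way random, single-measure, absolute-agreement form ICC(2,1); the eight paired sealing-pressure readings below are invented for illustration:

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random, single-measure, absolute-agreement ICC (ICC(2,1))."""
    ratings = np.asarray(ratings, dtype=float)   # shape (subjects, raters)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)             # per-subject means
    col_means = ratings.mean(axis=0)             # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols            # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired sealing-pressure readings (cm H2O) from two observers
obs = [[20.0, 20.5], [18.5, 19.0], [22.0, 21.5], [19.5, 19.5],
       [21.0, 21.5], [17.5, 18.0], [23.0, 22.5], [20.5, 20.0]]
print(round(icc2_1(obs), 3))
```

Values near 1, as in the 0.95-0.99 range reported above, indicate that nearly all variance lies between patients rather than between observers.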

The supporting-cell antigen: a receptor-like protein tyrosine phosphatase expressed in the sensory epithelia of the avian inner ear.

After noise- or drug-induced hair-cell loss, the sensory epithelia of the avian inner ear can regenerate new hair cells. Few molecular markers are available for the supporting-cell precursors of the hair cells that regenerate, and little is known about the signaling mechanisms underlying this regenerative response. Hybridoma methodology was used to obtain a monoclonal antibody (mAb) that stains the apical surface of supporting cells in the sensory epithelia of the inner ear. The mAb recognizes the supporting-cell antigen (SCA), a protein that is also found on the apical surfaces of retinal Müller cells, renal tubule cells, and intestinal brush border cells. Expression screening and molecular cloning reveal that the SCA is a novel receptor-like protein tyrosine phosphatase (RPTP), sharing similarity with human density-enhanced phosphatase, an RPTP thought to have a role in the density-dependent arrest of cell growth. In response to hair-cell damage induced by noise in vivo or hair-cell loss caused by ototoxic drug treatment in vitro, some supporting cells show a dramatic decrease in SCA expression levels on their apical surface. This decrease occurs before supporting cells are known to first enter S-phase after trauma, indicating that it may be a primary rather than a secondary response to injury. These results indicate that the SCA is a signaling molecule that may influence the potential of nonsensory supporting cells to either proliferate or differentiate into hair cells.

Influence of head position on the spatial representation of acoustic targets.

Sound localization in humans relies on binaural differences (azimuth cues) and monaural spectral shape information (elevation cues) and is therefore the result of a neural computational process. Despite the fact that these acoustic cues are referenced with respect to the head, accurate eye movements can be generated to sounds in complete darkness. This ability necessitates the use of eye position information. So far, however, sound localization has been investigated mainly with a fixed head position, usually straight ahead. Yet the auditory system may rely on head motor information to maintain a stable and spatially accurate representation of acoustic targets in the presence of head movements. We therefore studied the influence of changes in eye-head position on auditory-guided orienting behavior of human subjects. In the first experiment, we used a visual-auditory double-step paradigm. Subjects made saccadic gaze shifts in total darkness toward brief broadband sounds presented before an intervening eye-head movement that was evoked by an earlier visual target. The data show that the preceding displacements of both eye and head are fully accounted for, resulting in spatially accurate responses. This suggests that auditory target information may be transformed into a spatial (or body-centered) frame of reference. To further investigate this possibility, we exploited the unique property of the auditory system that sound elevation is extracted independently from pinna-related spectral cues. In the absence of such cues, accurate elevation detection is not possible, even when head movements are made. This is shown in a second experiment, where pure tones were localized at a fixed elevation that depended on the tone frequency rather than on the actual target elevation, under both head-fixed and head-free conditions.
To test, in a third experiment, whether the perceived elevation of tones relies on a head- or space-fixed target representation, eye movements were elicited toward pure tones while subjects kept their head in different vertical positions. It appeared that each tone was localized at a fixed, frequency-dependent elevation in space that shifted to a limited extent with changes in head elevation. Hence information about head position is used under static conditions too. Interestingly, the influence of head position also depended on the tone frequency. Thus tone-evoked ocular saccades typically showed a partial compensation for changes in static head position, whereas noise-evoked eye-head saccades fully compensated for intervening changes in eye-head position. We propose that the auditory localization system combines the acoustic input with head-position information to encode targets in a spatial (or body-centered) frame of reference. In this way, accurate orienting responses may be programmed despite intervening eye-head movements. A conceptual model, based on the tonotopic organization of the auditory system, is presented that may account for our findings.
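The contrast between full compensation for broadband noise and partial compensation for pure tones can be captured by a one-parameter gain model (a toy sketch; the gain value, angles, and function names are illustrative and not fit to the data): the perceived spatial elevation is the head-centered acoustic estimate plus a weighted head-position signal.

```python
# Toy 1-D (elevation) sketch of the reference-frame logic described above.
# All angles in degrees; all values are illustrative, not from the paper.

def spatial_target(head_centered_cue, head_position, gain=1.0):
    """Combine a head-centered acoustic estimate with head position.

    gain = 1.0 -> full compensation (space-fixed percept, as for noise)
    gain < 1.0 -> partial compensation (as observed for pure tones)
    """
    return head_centered_cue + gain * head_position

# Broadband noise: valid elevation cues, full compensation for head position
noise_cue = 10.0          # sound 10 deg up relative to the head
full = [spatial_target(noise_cue, h) for h in (-20.0, 0.0, 20.0)]

# Pure tone: perceived elevation set by frequency, partial head compensation
tone_elevation = 15.0     # frequency-dependent, not the true elevation
partial = [spatial_target(tone_elevation, h, gain=0.5) for h in (-20.0, 20.0)]
print(partial)            # -> [5.0, 25.0]: shifts half as far as the head
```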