The Nord-Trondelag Norway Audiometric Survey 1996-98: unscreened thresholds and prevalence of hearing impairment for adults > 20 years. (49/570)

As a supplement to a general health screening examination (HUNT-II), we conducted a pure-tone audiometry study in 1996-98 on adults (>20 years) in 17 of the 23 municipalities in Nord-Trondelag, Norway, including questionnaires on occupational and leisure noise exposure, medical history, and symptoms of hearing impairment. The study aims to contribute updated normative hearing thresholds by age and gender, while evaluating the effects of noise exposure, medical history, and familial or genetic influences on hearing. This paper presents the unscreened hearing threshold data and the prevalence of hearing impairment by age group and gender. Valid audiometric data were collected from 62% (n=50,723) of the 82,141 unscreened invited subjects (age range 20-101 years, mean=50.2 years, SD=17.0 years). Two ambulant audiometric teams each conducted 5 parallel self-administered pure-tone hearing threshold examinations at the standard test frequencies 0.25, 0.5, 1, 2, 3, 4, 6, and 8 kHz (with a manual procedure when needed). Tracking audiometers were used in dismountable booths with in-booth noise levels well within ISO criteria, except around 200 Hz, where levels were at the criterion. The data were transferred electronically to a personal computer. Test-retest correlations for 99 randomly drawn subjects examined twice were high. The mean thresholds recorded were elevated by several dB from "audiometric zero", even for the 20-24-year age group. As also found in other studies, this may indicate overly restrictive audiometric reference thresholds. Males had slightly better hearing at frequencies ≤0.5 kHz in all age groups. Mean thresholds were poorer in males ≥30 years at frequencies ≥2 kHz, with maximal gender differences of approximately 20 dB at 3-4 kHz for subjects aged 55-74 years. 
Weighted prevalence data, averaged over 0.5, 1, 2, and 4 kHz, showed hearing impairment >25 dB hearing threshold level in 18.8% (better ear) and 27.2% (worse ear) of the total population; for males 22.2% and 32.0%, and for females 15.9% and 23.0%, respectively. The mean hearing loss of ≥10 dB at 6 kHz, registered for both genders even in the 20-24-year age group, may be partly due to calibration artefacts but might also reflect noise-related socioacusis.  (+info)
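The impairment figures above rest on a pure-tone average (PTA) over 0.5, 1, 2, and 4 kHz, compared against a 25 dB hearing threshold level cutoff for each ear. A minimal sketch of that per-subject classification (function and field names are illustrative, not from the study):

```python
def pure_tone_average(thresholds_db, freqs_khz=(0.5, 1, 2, 4)):
    """Average hearing threshold (dB HL) over the given frequencies.

    thresholds_db: dict mapping frequency in kHz -> threshold in dB HL.
    """
    return sum(thresholds_db[f] for f in freqs_khz) / len(freqs_khz)


def classify_impairment(left_ear, right_ear, cutoff_db=25):
    """Classify hearing impairment (PTA > 25 dB HL) for the better and
    worse ear, as in the prevalence figures above."""
    better, worse = sorted([pure_tone_average(left_ear),
                            pure_tone_average(right_ear)])
    return {"better_ear_impaired": better > cutoff_db,
            "worse_ear_impaired": worse > cutoff_db}


# Hypothetical subject: high-frequency loss in the left ear only
left = {0.5: 15, 1: 20, 2: 30, 4: 45}   # PTA = 27.5 dB HL
right = {0.5: 10, 1: 10, 2: 15, 4: 25}  # PTA = 15.0 dB HL
print(classify_impairment(left, right))
# -> {'better_ear_impaired': False, 'worse_ear_impaired': True}
```

The published prevalence figures were additionally weighted across subjects; that weighting would be applied after this per-subject step.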

Electromyographic analysis of the orbicularis oris muscle in oralized deaf individuals. (50/570)

Electromyography has been used to evaluate the performance of the peribuccal musculature in mastication, swallowing, and speech, and is an important tool for the analysis of physiopathological changes affecting this musculature. Many investigations have been conducted in patients with auditory and speech deficiencies, but none has evaluated the musculature responsible for speech. This study compared electromyographic measurements of the superior and inferior fascicles of the orbicularis oris muscle in patients with profound bilateral sensorineural hearing loss (deafness) and in healthy volunteers. Electromyographic analysis was performed on recordings from 20 volunteers (mean age 18.5 years) matched for gender and age, assigned to two groups: 10 individuals with profound bilateral sensorineural hearing loss (deaf individuals) and 10 healthy individuals (hearers). Five clinical conditions were evaluated: suction, blowing, lip projection, lip compression, and production of the syllable "Pa". The deaf patients presented muscle hyperactivity in all clinical conditions, and the inferior fascicle of the orbicularis oris muscle showed higher electromyographic activity, suggesting the need for hearing and speech therapy with emphasis on oral motricity.  (+info)

Determinants of hearing loss in perforations of the tympanic membrane. (51/570)

BACKGROUND: Although tympanic membrane perforations are common, there have been few systematic studies of the structural features determining the magnitude of the resulting conductive hearing loss. Our recent experimental and modeling studies predicted that the conductive hearing loss will increase with increasing perforation size, be independent of perforation location (contrary to popular otologic belief), and increase with decreasing size of the middle-ear and mastoid air space (an idea new to otology). OBJECTIVE: To test our predictions regarding the determinants of conductive hearing loss in tympanic membrane perforations against clinical data gathered from patients. STUDY DESIGN: Prospective clinical study. SETTING: Tertiary referral center. INCLUSION CRITERIA: Patients with tympanic membrane perforations without other middle-ear disease. MAIN OUTCOME MEASURES: Size and location of perforation; air-bone gap at 250, 500, 1,000, 2,000, and 4,000 Hz; and tympanometric estimate of the volume of the middle-ear air spaces. RESULTS: Isolated tympanic membrane perforations in 62 ears from 56 patients met the inclusion criteria. Air-bone gaps were largest at the lower frequencies and decreased as frequency increased. Air-bone gaps increased with perforation size at each frequency. Ears with small middle-ear volumes (≤4.3 ml, n = 23) had significantly larger air-bone gaps than ears with large middle-ear volumes (>4.3 ml, n = 39), except at 2,000 Hz. The mean air-bone gaps in ears with small volumes were 10 to 20 dB larger than in ears with large volumes. Perforations in anterior versus posterior quadrants showed no significant differences in air-bone gaps at any frequency, although anterior perforations had, on average, air-bone gaps that were smaller by 1 to 8 dB at the lower frequencies. 
CONCLUSION: The conductive hearing loss resulting from a tympanic membrane perforation is frequency-dependent, with the largest losses occurring at the lowest sound frequencies; increases as size of the perforation increases; varies inversely with volume of the middle-ear and mastoid air space (losses are larger in ears with small volumes); and does not vary appreciably with location of the perforation. Effects of location, if any, are small.  (+info)
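The air-bone gap used as the outcome measure above is simply the air-conduction threshold minus the bone-conduction threshold at each test frequency, with positive values indicating a conductive component. A minimal sketch (all threshold values hypothetical):

```python
def air_bone_gap(air_db, bone_db):
    """Air-bone gap per frequency (dB): air-conduction minus
    bone-conduction threshold. Positive values indicate a
    conductive component."""
    return {f: air_db[f] - bone_db[f] for f in air_db}


# Hypothetical ear with a perforation: the gap is largest at the
# low frequencies, matching the pattern reported above.
air = {250: 45, 500: 40, 1000: 30, 2000: 20, 4000: 20}   # dB HL
bone = {250: 10, 500: 10, 1000: 10, 2000: 10, 4000: 15}  # dB HL
gaps = air_bone_gap(air, bone)
print(gaps)  # -> {250: 35, 500: 30, 1000: 20, 2000: 10, 4000: 5}
```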

Using a combination of click- and tone burst-evoked auditory brain stem response measurements to estimate pure-tone thresholds. (52/570)

DESIGN: A retrospective review of evoked potential and audiometric data from medical records was used to determine the accuracy with which click-evoked and tone burst-evoked auditory brain stem response (ABR) thresholds predict pure-tone audiometric thresholds. METHODS: The medical records were reviewed for a consecutive group of patients referred for ABR testing for audiometric purposes over a 4-year period. ABR thresholds were measured for clicks and for several tone bursts, including a single-cycle, Blackman-windowed, 250-Hz tone burst, which has a broad spectrum with little energy above 600 Hz. Typically, the ABR data were collected because the patients, due to developmental level, were unable to provide reliable estimates of hearing sensitivity based on behavioral test techniques. Data were included only if subsequently obtained behavioral audiometric data were available against which the ABR data could be compared. Almost invariably, the behavioral data were collected after the ABR results were obtained. Because of this, data were included only for those ears for which middle ear tests (tympanometry, otoscopic examination, pure-tone air- and bone-conduction thresholds) indicated that middle ear status was similar at the times of both tests. With these inclusion criteria, data were available on 140 ears of 77 subjects. RESULTS: The correlation was 0.94 between click-evoked ABR thresholds and the average pure-tone threshold at 2 and 4 kHz. Correlations exceeded 0.92 between ABR thresholds for the 250-Hz tone burst and low-frequency behavioral thresholds (250 Hz, 500 Hz, and the average of the pure-tone thresholds at 250 and 500 Hz). Similar or higher correlations were observed when ABR thresholds at other frequencies were compared with the pure-tone thresholds at corresponding frequencies. 
Differences between ABR and behavioral thresholds depended on the behavioral threshold, with ABR thresholds overestimating behavioral thresholds in cases of normal hearing and underestimating them in cases of hearing loss. CONCLUSIONS: These results suggest that ABR thresholds can be used to predict pure-tone behavioral thresholds across a wide range of frequencies. Although controversial, the data reviewed in this paper suggest that click-evoked ABR thresholds provide reasonable predictions of the average behavioral threshold at 2 and 4 kHz. However, there were cases in which click-evoked ABR thresholds underestimated hearing loss at these frequencies. There are several other reasons to make click-evoked ABR measurements: they (1) generally result in well-formed responses, (2) assist in determining whether auditory neuropathy exists, and (3) can be obtained in a relatively brief amount of time. Low-frequency thresholds were predicted well by ABR thresholds to a single-cycle, 250-Hz tone burst. In combination, click-evoked and low-frequency tone burst-evoked ABR threshold measurements can quickly provide important clinical information for both ends of the audiogram. These measurements could be supplemented by ABR threshold measurements at other frequencies, if time permits; however, it may be possible to plan initial intervention strategies based on data for these two stimuli.  (+info)
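The combination strategy described here, click-evoked ABR for the 2-4 kHz average and a 250-Hz tone-burst ABR for the low frequencies, amounts to assembling a rough two-point audiogram estimate. A sketch under assumed, purely illustrative correction offsets (the paper reports correlations between the measures, not fixed conversion values):

```python
def estimate_audiogram(click_abr_db, tone_burst_250_db,
                       high_offset_db=0, low_offset_db=0):
    """Rough two-point audiogram estimate from ABR thresholds.

    The click threshold stands in for the 2-4 kHz behavioral average,
    and the 250-Hz tone-burst threshold for the low frequencies.  The
    offsets are hypothetical ABR-to-behavioral corrections, not values
    from the study.
    """
    return {"low_freq_db": tone_burst_250_db - low_offset_db,
            "avg_2_4_khz_db": click_abr_db - high_offset_db}


est = estimate_audiogram(click_abr_db=40, tone_burst_250_db=35,
                         high_offset_db=5, low_offset_db=10)
print(est)  # -> {'low_freq_db': 25, 'avg_2_4_khz_db': 35}
```

In practice any such correction would itself depend on degree of loss, since the paper found ABR thresholds overestimate behavioral thresholds in normal hearing and underestimate them in hearing loss.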

Auditory screening in the elderly: comparison between self-report and audiometry. (53/570)

Despite its high prevalence in the aged, hearing loss has been poorly investigated. Audiometry is the gold standard for evaluating hearing loss, but large-scale use of the procedure involves operational difficulties; self-report may thus be an alternative. AIM: To determine whether a single global question is valid for use in epidemiologic research. STUDY DESIGN: Systematic review. MATERIAL AND METHOD: A search of the medical literature from 1990 to 2004 was performed using MEDLINE and LILACS, and the references of the articles identified in the electronic search were also reviewed. STUDY SELECTION AND DATA EXTRACTION: Articles comparing the results obtained by self-report with a single global question to those obtained by pure-tone audiometry were selected. Data on the prevalence of hearing loss, sensitivity, specificity, and predictive values were extracted. DATA SYNTHESIS: Ten longitudinal studies were included. A single global question appears to be an acceptable indicator of hearing loss, sensitive and reasonably specific, particularly when hearing loss is defined as a pure-tone average including frequencies up to 2 or 4 kHz, at a 40 dB HL level, in the better ear. CONCLUSION: A single global question performs well in identifying older persons with hearing loss and can be recommended for epidemiologic studies when audiometric measurements cannot be performed.  (+info)
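The validity measures extracted in this review come from the standard 2x2 comparison of the screening question against audiometry as the gold standard. A minimal sketch with a hypothetical table (the counts are made up for illustration):

```python
def screening_metrics(tp, fp, fn, tn):
    """Validity of a screening question against pure-tone audiometry
    (the gold standard), from the four counts of a 2x2 table."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),   # positive predictive value
            "npv": tn / (tn + fn)}   # negative predictive value


# Hypothetical counts: self-reported "hearing problem" vs. audiometric loss
m = screening_metrics(tp=80, fp=30, fn=20, tn=170)
print(m["sensitivity"], m["specificity"])  # -> 0.8 0.85
```

Note that the predictive values, unlike sensitivity and specificity, shift with the prevalence of hearing loss in the population screened.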

Influence of primary-level and primary-frequency ratios on human distortion product otoacoustic emissions. (54/570)

The combined influence of primary-level differences (L1-L2) and primary-frequency ratio (f2/f1) on distortion product otoacoustic emission (DPOAE) level was investigated in 20 normal-hearing subjects. DPOAEs were recorded with continuously varying stimulus levels [Neely et al., J. Acoust. Soc. Am. 117, 1248-1259 (2005)] for the following stimulus conditions: f2 = 1, 2, 4, and 8 kHz; f2/f1 = 1.05 to 1.4; and various L1-L2, including one individually optimized to produce the largest DPOAE. For broadly spaced primary frequencies at low L2 levels, the largest DPOAEs were recorded when L1 was much higher than L2, with L1 remaining relatively constant as L2 increased. As f2/f1 decreased, the largest DPOAEs were observed when L1 was closer to L2 and increased as L2 increased. Optimal values for L1-L2 and f2/f1 were derived from these data. In general, average DPOAE levels for the new L1-L2 and f2/f1 were equivalent to or larger than those observed for other stimulus combinations, including the L1-L2 described by Kummer et al. [J. Acoust. Soc. Am. 103, 3431-3444 (1998)] and those defined by Neely et al., in which L1-L2 was evaluated but f2/f1 was fixed at 1.2.  (+info)

Effects of early auditory experience on the spoken language of deaf children at 3 years of age. (55/570)

OBJECTIVE: By age 3, typically developing children have achieved extensive vocabulary and syntax skills that facilitate both cognitive and social development. Substantial delays in spoken language acquisition have been documented for children with severe to profound deafness, even those with auditory-oral training and early hearing aid use. This study documents the spoken language skills achieved by orally educated 3-yr-olds whose profound hearing loss was identified, and hearing aids fitted, between 1 and 30 mo of age, and who received a cochlear implant between 12 and 38 mo of age. The purpose of the analysis was to examine the effects of the age, duration, and type of early auditory experience on spoken language competence at age 3.5 yr. DESIGN: The spoken language skills of 76 children who had used a cochlear implant for at least 7 mo were evaluated via standardized 30-minute language sample analysis, a parent-completed vocabulary checklist, and a teacher language-rating scale. The children were recruited from, and enrolled in, oral education programs or therapy practices across the United States. Inclusion criteria were: presumed deafness since birth; English as the primary language of the home; no other known conditions that interfere with speech/language development; enrollment in programs using oral education methods; and no known problems with the cochlear implant lasting more than 30 days. RESULTS: Strong correlations were obtained among all language measures. Therefore, principal components analysis was used to derive a single Language Factor score for each child. 
A number of possible predictors of language outcome were examined, including age at identification and intervention with a hearing aid, duration of use of a hearing aid, pre-implant pure-tone average (PTA) threshold with a hearing aid, PTA threshold with a cochlear implant, and duration of use of a cochlear implant/age at implantation (the last two variables were practically identical because all children were tested between 40 and 44 mo of age). Examination of the independent influence of these predictors through multiple regression analysis revealed that pre-implant-aided PTA threshold and duration of cochlear implant use (i.e., age at implant) accounted for 58% of the variance in Language Factor scores. A significant negative coefficient associated with pre-implant-aided threshold indicated that children with poorer hearing before implantation exhibited poorer language skills at age 3.5 yr. Likewise, a strong positive coefficient associated with duration of implant use indicated that children who had used their implant for a longer period of time (i.e., who were implanted at an earlier age) exhibited better language at age 3.5 yr. Age at identification and amplification was unrelated to language outcome, as was aided threshold with the cochlear implant. A significant quadratic trend in the relation between duration of implant use and language score revealed a steady increase in language skill (at age 3.5 yr) for each additional month of use of a cochlear implant after the first 12 mo of implant use. The advantage to language of longer implant use became more pronounced over time. CONCLUSIONS: Longer use of a cochlear implant in infancy and very early childhood dramatically affects the amount of spoken language exhibited by 3-yr-old, profoundly deaf children. In this sample, the amount of pre-implant intervention with a hearing aid was not related to language outcome at 3.5 yr of age. 
Rather, it was cochlear implantation at a younger age that served to promote spoken language competence. The previously identified language-facilitating factors of early identification of hearing impairment and early educational intervention may not be sufficient for optimizing the spoken language of profoundly deaf children unless they lead to early cochlear implantation.  (+info)
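The 58%-of-variance result above comes from a multiple regression of language scores on pre-implant aided PTA threshold and duration of implant use. A sketch of that kind of two-predictor ordinary least squares fit; the coefficients and data here are synthetic and purely illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 76  # same sample size as the study; all data below are synthetic

pre_implant_pta = rng.normal(100, 10, n)  # aided pre-implant PTA, dB HL
implant_months = rng.uniform(7, 30, n)    # duration of implant use, months

# Synthetic outcome mimicking the reported signs: poorer pre-implant
# hearing -> lower language score; longer implant use -> higher score.
language = (-0.05 * pre_implant_pta + 0.06 * implant_months
            + rng.normal(0, 1, n))

# Two-predictor ordinary least squares fit with an intercept
X = np.column_stack([np.ones(n), pre_implant_pta, implant_months])
beta, *_ = np.linalg.lstsq(X, language, rcond=None)

pred = X @ beta
r2 = 1 - (np.sum((language - pred) ** 2)
          / np.sum((language - language.mean()) ** 2))
print(f"R^2 = {r2:.2f}")  # proportion of variance explained by the predictors
```

R² here plays the role of the reported "58% of the variance in Language Factor scores"; the signs of the fitted coefficients correspond to the negative pre-implant-threshold effect and positive duration-of-use effect described above.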

Results after revision stapedectomy with malleus grip prosthesis. (56/570)

Revision stapedectomy with a malleus grip prosthesis is a technically challenging otologic procedure. The prosthesis is usually longer and extends deeper into the vestibule than a conventional stapes prosthesis, creating the potential to affect the vestibular sense organs. The prosthesis also bypasses the ossicular joints, which are thought to play a role in protecting the inner ear from large changes in static pressure within the middle ear. The prosthesis lies in close proximity to the tympanic membrane, thus increasing the risk of extrusion. We reviewed our experience with revision stapedectomy with the Schuknecht Teflon-wire malleus grip prosthesis in 36 ears, with a mean follow-up of 23 months. The air-bone gap was closed to within 10 dB in 16 ears (44%) and to within 20 dB in 26 ears (72%). The incidence of postoperative sensorineural hearing loss was 8% (3 ears). There were no dead ears. Extrusion of the prosthesis occurred in 1 case (3%). Nearly 50% of patients reported various degrees of vertigo or disequilibrium during the first 3 weeks after surgery; these vestibular symptoms resolved by 6 weeks in all but 1 case. We did not find evidence of damage to the inner ear due to the length of the prosthesis or due to the potential for direct transmission of changes in static pressure within the middle ear to the labyrinth. Our results are similar to those published in the literature for malleus attachment stapedectomy and conventional revision incus stapedectomy.  (+info)