Some computational analyses of the PBK test: effects of frequency and lexical density on spoken word recognition. (1/147)

OBJECTIVE: The Phonetically Balanced Kindergarten (PBK) Test (Haskins, Reference Note 2) has been used for nearly 50 years to assess spoken word recognition performance in children with hearing impairments. The test originally consisted of four lists of 50 words, but only three of the lists (Lists 1, 3, and 4) were considered "equivalent" enough to be used clinically with children. Our goal was to determine whether the lexical properties of the lists could explain any differences between the three "equivalent" lists and the fourth list (List 2), which has not been used in clinical testing. DESIGN: Word frequency and lexical neighborhood frequency and density measures were obtained from a computerized database for all of the words on the four PBK lists as well as the words from a single PB-50 (Egan, 1948) word list. RESULTS: The words in the "easy" PBK list (List 2) were of higher frequency than the words in the three "equivalent" lists. Moreover, the lexical neighborhoods of the words on the "easy" list contained fewer phonetically similar words than the neighborhoods of the words on the other three lists. CONCLUSIONS: Researchers should consider word frequency and lexical neighborhood frequency and density when constructing word lists for testing speech perception. The results of this computational analysis of the PBK Test provide additional support for the proposal that spoken words are recognized "relationally," in the context of other phonetically similar words in the lexicon. Implications of using open-set word recognition tests with children with hearing impairments are discussed with regard to the specific vocabulary and information processing demands of the PBK Test.
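The neighborhood density measure referred to above is conventionally defined as the number of lexicon entries that differ from a target word by a single phoneme (substitution, deletion, or insertion). A minimal sketch of that computation, using a toy phonemic lexicon that is illustrative only and not the database used in the study:

```python
# Sketch: lexical neighborhood density as the number of words within one
# phoneme edit (substitution, deletion, or insertion) of a target word.
# The toy lexicon and transcriptions below are illustrative assumptions,
# not the computerized database analyzed in the study.

def edit_distance_is_one(a, b):
    """True if phoneme sequences a and b differ by exactly one edit."""
    if abs(len(a) - len(b)) > 1 or a == b:
        return False
    if len(a) == len(b):
        # Same length: exactly one substitution.
        return sum(x != y for x, y in zip(a, b)) == 1
    # Lengths differ by one: make a the shorter sequence.
    if len(a) > len(b):
        a, b = b, a
    # One deletion from b must yield a.
    return any(a == b[:i] + b[i + 1:] for i in range(len(b)))

def neighborhood_density(target, lexicon):
    """Count lexicon entries within one phoneme edit of the target."""
    return sum(edit_distance_is_one(target, w) for w in lexicon if w != target)

# Toy phonemic lexicon (tuples of phoneme symbols).
lexicon = [
    ("k", "ae", "t"),   # cat
    ("b", "ae", "t"),   # bat
    ("k", "ae", "p"),   # cap
    ("k", "ah", "t"),   # cut
    ("ae", "t"),        # at
    ("d", "ao", "g"),   # dog
]

print(neighborhood_density(("k", "ae", "t"), lexicon))  # → 4
```

On this definition, "easy" words live in sparse neighborhoods (few competitors of this kind), which is the sense in which List 2 differed from the other three lists.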

Restoration of hearing with an auditory brainstem implant in a patient with neurofibromatosis type 2--case report. (2/147)

A 25-year-old male with neurofibromatosis type 2 had hearing restored with an auditory brainstem implant (ABI) after removal of an acoustic schwannoma. The ABI allows the patient to discern many different environmental sounds and is a significant adjunct to lip-reading, enabling him to converse with clearly spoken partners without the need for writing.

Discrimination of non-native consonant contrasts varying in perceptual assimilation to the listener's native phonological system. (3/147)

Classic non-native speech perception findings suggested that adults have difficulty discriminating segmental distinctions that are not employed contrastively in their own language. However, recent reports indicate a gradient of performance across non-native contrasts, ranging from near-chance to near-ceiling. Current theoretical models argue that such variations reflect systematic effects of experience with phonetic properties of native speech. The present research addressed predictions from Best's perceptual assimilation model (PAM), which incorporates both contrastive phonological and noncontrastive phonetic influences from the native language in its predictions about discrimination levels for diverse types of non-native contrasts. We evaluated the PAM hypotheses that discrimination of a non-native contrast should be near-ceiling if perceived as phonologically equivalent to a native contrast, lower though still quite good if perceived as a phonetic distinction between good versus poor exemplars of a single native consonant, and much lower if both non-native segments are phonetically equivalent in goodness of fit to a single native consonant. Two experiments assessed native English speakers' perception of Zulu and Tigrinya contrasts expected to fit those criteria. Findings supported the PAM predictions, and provided evidence for some perceptual differentiation of phonological, phonetic, and nonlinguistic information in perception of non-native speech. Theoretical implications for non-native speech perception are discussed, and suggestions are made for further research.

Perceptual "vowel spaces" of cochlear implant users: implications for the study of auditory adaptation to spectral shift. (4/147)

Cochlear implant (CI) users differ in their ability to perceive and recognize speech sounds. Two possible reasons for such individual differences may lie in their ability to discriminate formant frequencies or to adapt to the spectrally shifted information presented by cochlear implants, a basalward shift related to the implant's depth of insertion in the cochlea. In the present study, we examined these two alternatives using a method-of-adjustment (MOA) procedure with 330 synthetic vowel stimuli varying in F1 and F2 that were arranged in a two-dimensional grid. Subjects were asked to label the synthetic stimuli that matched ten monophthongal vowels in visually presented words. Subjects then provided goodness ratings for the stimuli they had chosen. The subjects' responses to all ten vowels were used to construct individual perceptual "vowel spaces." If CI users fail to adapt completely to the basalward spectral shift, then the formant frequencies of their vowel categories should be shifted lower in both F1 and F2. However, with one exception, no systematic shifts were observed in the vowel spaces of CI users. Instead, the vowel spaces differed from one another in the relative size of their vowel categories. The results suggest that differences in formant frequency discrimination may account for the individual differences in vowel perception observed in cochlear implant users.

Mice and humans perceive multiharmonic communication sounds in the same way. (5/147)

Vowels and voiced consonants of human speech and most mammalian vocalizations consist of harmonically structured sounds. The frequency contours of formants in these sounds determine their spectral shape and timbre and carry, in human speech, important phonetic and prosodic information. Steady-state portions of vowels are discriminated and identified mainly on the basis of harmonics or formants that have been resolved by the critical-band filters of the auditory system and then grouped together. Speech-analog processing and perception of vowel-like communication sounds in mammalian vocal repertoires have not been demonstrated so far. Here, we synthesize 11 call models and a tape loop with natural wriggling calls of mouse pups and show that house mice perceive this communication call in the same way as we perceive speech vowels: they need the presence of a minimum number of formants (three formants, in this case at 3.8, 7.6, and 11.4 kHz); they resolve formants by the critical-band mechanism; they group formants together for call identification; they perceive the formant structure rather continuously; and they may detect the missing fundamental of a harmonic complex. All of this occurs in a natural communication situation without any training or behavioral constraints. Thus, wriggling-call perception in mice is comparable with unconditioned vowel discrimination and perception in prelinguistic human infants and points to evolutionarily old rules for handling speech sounds in the human auditory system up to the perceptual level.

Cortical activation during spoken-word segmentation in nonreading-impaired and dyslexic adults. (6/147)

We used magnetoencephalography to elucidate the cortical activation associated with the segmentation of spoken words in nonreading-impaired and dyslexic adults. The subjects listened to binaurally presented sentences in which the sentence-ending words were either semantically appropriate or inappropriate to the preceding sentence context. Half of the inappropriate final words shared two or three initial phonemes with the highly expected, semantically appropriate words. Two temporally and functionally distinct response patterns were detected in the superior temporal lobe. The first response peaked at approximately 100 msec in the supratemporal plane and showed no sensitivity to the semantic appropriateness of the final word. This presemantic N100m response was abnormally strong in the left hemisphere of dyslexic individuals. After the N100m response, the semantically inappropriate sentence-ending words evoked stronger activation than the expected endings in the superior temporal cortex in the vicinity of the auditory cortex. This N400m response was delayed for words starting with the same first two or three phonemes as the expected words, but only until the first evidence of acoustic-phonetic dissimilarity emerged. This subtle delay supports the notion that initial lexical access is based on phonemes or acoustic features. In dyslexic participants, this qualitative aspect of word processing appeared to be normal. However, for all words alike, the ascending slope of the semantic activation in the left hemisphere was delayed by approximately 50 msec compared with control subjects. The delay in the auditory N400m response in dyslexic subjects is likely to result from presemantic phonological deficits, possibly reflected in the abnormal N100m response.

Talker discrimination by prelingually deaf children with cochlear implants: preliminary results. (7/147)

Forty-four school-age children who had used a multichannel cochlear implant (CI) for at least 4 years were tested to assess their ability to discriminate differences between recorded pairs of female voices uttering sentences. Children were asked to respond "same voice" or "different voice" on each trial. Two conditions were examined. In one condition, the linguistic content of the sentence was held constant and only the talker's voice varied from trial to trial. In the other condition, the linguistic content of the utterance also varied, so that to correctly respond "same voice," the child needed to recognize that two different sentences were spoken by the same talker. Data from normal-hearing children were used to establish that these tasks were well within the capabilities of children without hearing impairment. For the children with CIs, in the fixed sentence condition the mean proportion correct was 68%, which, although significantly different from the 50% score expected by chance, suggests that the children with CIs found this discrimination task rather difficult. In the varied sentence condition, however, the mean proportion correct was only 57%, indicating that the children were essentially unable to recognize an unfamiliar talker's voice when the linguistic content of the paired sentences differed. Correlations with other speech and language outcome measures are also reported.
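The comparison of a mean score such as 68% against the 50% chance level can be illustrated with an exact binomial test. The trial count below is an assumption for illustration only; the abstract does not report per-child trial numbers, so this sketch does not reproduce the study's actual statistics.

```python
# Sketch: one-sided exact binomial test of a proportion correct against
# a 50% chance level. The 100-trial count is an illustrative assumption,
# not a figure reported in the study.
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """One-sided P(X >= successes) under Binomial(trials, p_chance)."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Example: 68 correct out of an assumed 100 trials at 50% chance.
p = binomial_p_value(68, 100)
print(f"p = {p:.5f}")  # well below the conventional 0.05 threshold
```

With enough trials, even the modest-looking 68% score is reliably above chance, which is consistent with the abstract's point that the score is both statistically significant and indicative of a difficult task.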

Sudden deafness and anterior inferior cerebellar artery infarction. (8/147)

BACKGROUND AND PURPOSE: Acute ischemic stroke in the distribution of the anterior inferior cerebellar artery (AICA) is known to be associated with vertigo, nystagmus, facial weakness, and gait ataxia. Few reports have carefully examined the deafness associated with AICA infarction, and previous neurological reports have not emphasized the inner ear as a localization of sudden deafness. The aim of this study was to investigate the incidence of deafness associated with AICA infarction and the sites predominantly involved in the deafness. METHODS: Over 2 years, we prospectively identified 12 consecutive patients with unilateral AICA infarction diagnosed by brain MRI. Pure-tone audiometry, speech discrimination testing, stapedial reflex testing, and auditory brainstem response testing were performed to localize the site of the lesion in the auditory pathways. Electronystagmography was also performed to evaluate the function of the vestibular system. RESULTS: The most commonly affected site on brain MRI was the middle cerebellar peduncle (n=11). Four patients had vertigo and/or acute auditory symptoms such as hearing loss or tinnitus as an isolated manifestation from 1 day to 2 months before infarction. Audiological testing confirmed sensorineural hearing loss on the affected side in 11 patients (92%): predominantly cochlear in 6 patients, retrocochlear in 1 patient, and combined cochlear and retrocochlear in 4 patients. Electronystagmography demonstrated no response to caloric stimulation in 10 patients (83%). CONCLUSIONS: In our series, sudden deafness was an important sign for the diagnosis of AICA infarction. The audiological examinations suggest that sudden deafness in AICA infarction usually results from dysfunction of the cochlea caused by ischemia of the inner ear.