Development of [ɹ] in young, midwestern, American children. (73/1097)

Beginning at the age of about 14 months, eight children who lived in a rhotic dialect region of the United States were recorded approximately every 2 months interacting with their parents. All were recorded until at least the age of 26 months, and some until the age of 31 months. Acoustic analyses of speech samples indicated that these young children acquired [ɹ] production ability at different ages for [ɹ]'s in different syllable positions. The children, as a group, had started to produce postvocalic and syllabic [ɹ] in an adult-like manner by the end of the recording sessions, but were not yet showing evidence of having acquired prevocalic [ɹ]. Articulatory limitations of young children are posited as a cause for the difference in development of [ɹ] according to syllable position. Specifically, it is speculated that adult-like prevocalic [ɹ] production requires two lingual constrictions: one in the mouth, and the other in the pharynx, while postvocalic and syllabic [ɹ] requires only one oral constriction. Two lingual constrictions could be difficult for young children to produce.
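
The acoustic hallmark of an adult-like American English [ɹ] is a markedly lowered third formant (F3), so acoustic analyses of this kind typically track F3 (or the F3-F2 gap) in the region of the rhotic. The sketch below illustrates such a measurement only in outline; the parselmouth (Praat) bindings, the file name, the measurement time, and the 2000 Hz reference value are assumptions for illustration, not the protocol used in the study.

```python
# Minimal F3 measurement sketch; file path, time point, and the 2000 Hz
# reference value are illustrative assumptions, not the study's protocol.
import parselmouth  # Python bindings to Praat

def f3_at(wav_path: str, time_s: float) -> float:
    """Return the third-formant frequency (Hz) at a given time point."""
    sound = parselmouth.Sound(wav_path)
    # For child speech, a higher formant ceiling than the adult default
    # would normally be set when configuring the formant tracker.
    formants = sound.to_formant_burg()
    return formants.get_value_at_time(3, time_s)

# A low F3 relative to the child's own non-rhotic productions suggests an
# adult-like [ɹ]; 2000 Hz is only a rough illustrative reference point.
f3 = f3_at("child_postvocalic_r.wav", 0.45)
print(f"F3 = {f3:.0f} Hz -> {'r-like' if f3 < 2000 else 'not yet r-like'}")
```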

Cherry pit primes Brad Pitt: Homophone priming effects on young and older adults' production of proper names. (74/1097)

This study investigated why proper names are difficult to retrieve, especially for older adults. On intermixed trials, young and older adults produced a word for a definition or a proper name for a picture of a famous person. Prior production of a homophone (e.g., pit) as the response on a definition trial increased correct naming and reduced tip-of-the-tongue experiences for a proper name (e.g., Pitt) on a picture-naming trial. Among participants with no awareness of the homophone manipulation, older but not young adults showed these homophone priming effects. With a procedure that reduced awareness effects (Experiment 2), prior production of a homophone improved correct naming only for older adults, but shortened naming latencies for both age groups. We suggest that representations of proper names are susceptible to weak connections that cause deficits in the transmission of excitation, impairing retrieval especially in older adults. We conclude that homophone production strengthens phonological connections, increasing the transmission of excitation.

Imitation of nonwords by hearing-impaired children with cochlear implants: segmental analyses. (75/1097)

The phonological processing skills of 24 pre-lingually deaf 8- and 9-year-old experienced cochlear implant users were measured using a nonword repetition task. The children heard recordings of 20 nonwords and were asked to repeat each pattern as accurately as possible. Detailed segmental analyses of the consonants in the children's imitation responses were carried out. Overall, 39% of the consonants were imitated correctly. Coronals were produced correctly more often than labials or dorsals. There was no difference in the proportion of correctly reproduced stops, fricatives, nasals, and liquids, or voiced and voiceless consonants. Although nonword repetition performance was not correlated with the children's demographic characteristics, the nonword repetition scores were strongly correlated with other measures of the component processes required for the immediate reproduction of a novel sound pattern: spoken word recognition, language comprehension, working memory, and speech production.

Functional MR imaging study of language-related differences in bilingual cerebellar activation. (76/1097)

BACKGROUND AND PURPOSE: Reports in the monolingual literature suggest that the cerebellum has an important role in language processing. The purpose of this study was to determine whether bilingual cerebellar functional MR imaging (fMRI) activation differs during the performance of comparable tasks in subjects' primary and secondary languages. METHODS: Eight bilingual, right-handed individuals underwent echo-planar fMRI at 1.5 T. They performed semantic (noun-verb association) and phonological (rhyming) tasks in Spanish (primary language) and English (secondary language). Individual and group functional datasets were analyzed using Statistical Parametric Mapping software (SPM99; P < .001 with a 10-voxel spatial extent threshold) and overlaid on T1-weighted anatomic images normalized to a standard (Montreal Neurological Institute) space. Analysis of variance was performed on laterality indices derived from voxel counts in cerebellar regions of interest (ROIs). Group-averaged normalized results for the combined Spanish tasks were also subtracted from those for the combined English tasks within SPM99 (P < .001 activation threshold). RESULTS: Significantly greater laterality indices were noted for the English tasks than for the Spanish tasks (mean Spanish LI, 0.3286; mean English LI, 0.5141 [P = .0143]). Overall, more robust activation was seen in the English tasks than in the Spanish tasks. Areas of significantly greater activation existed in the English tasks as compared with the Spanish tasks; these areas were more prominent in the left cerebellar hemisphere. CONCLUSION: Although both English and Spanish language tasks demonstrate left cerebellar dominance, English tasks demonstrate greater left hemispheric lateralization.
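
The laterality indices referred to above are conventionally computed from suprathreshold voxel counts as LI = (L - R) / (L + R), so that values near +1 indicate strongly left-lateralized activation and values near -1 strongly right-lateralized activation. A minimal sketch of that conventional calculation follows; the study's ROI definitions and thresholds are not reproduced here, and the counts are invented.

```python
# Conventional voxel-count laterality index; the counts are invented for
# illustration and are not data from the study.
def laterality_index(left_voxels: int, right_voxels: int) -> float:
    """LI = (L - R) / (L + R): +1 fully left-lateralized, -1 fully right."""
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no suprathreshold voxels in either ROI")
    return (left_voxels - right_voxels) / total

# Hypothetical cerebellar ROI counts for one subject and task condition:
print(laterality_index(left_voxels=380, right_voxels=120))  # 0.52
```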

The role of temporal and dynamic signal components in the perception of syllable-final stop voicing by children and adults. (77/1097)

Adults whose native languages permit syllable-final obstruents, and show a vocalic length distinction based on the voicing of those obstruents, consistently weight vocalic duration strongly in their perceptual decisions about the voicing of final stops, at least in laboratory studies using synthetic speech. Children, on the other hand, generally disregard such signal properties in their speech perception, favoring formant transitions instead. These age-related differences led to the prediction that children learning English as a native language would weight vocalic duration less than adults, but weight syllable-final transitions more, in decisions about final-consonant voicing. This study tested that prediction. In the first experiment, adults and children (eight- and six-year-olds) labeled synthetic and natural CVC words with voiced or voiceless stops in final C position. Predictions were strictly supported for synthetic stimuli only. With natural stimuli it appeared that adults and children alike weighted syllable-offset transitions strongly in their voicing decisions. The predicted age-related difference in the weighting of vocalic duration was seen for these natural stimuli almost exclusively when syllable-final transitions signaled a voiced final stop. A second experiment with adults and children (seven- and five-year-olds) replicated these results with four new sets of natural stimuli. It was concluded that acoustic properties other than vocalic duration might play more important roles in voicing decisions for final stops than commonly asserted, sometimes even taking precedence over vocalic duration.
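
Cue weights of the kind discussed here are commonly estimated by regressing listeners' "voiced" responses on the manipulated cue values, with the relative magnitude of the standardized coefficients indexing how strongly each cue is weighted. The sketch below shows that general approach with fabricated stimulus codes and responses; it is not the study's actual analysis.

```python
# Sketch of perceptual cue-weight estimation from labeling data; all
# stimulus values and responses below are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Each row: [vocalic duration in ms, syllable-offset transition coded
# 0 = voiceless-like, 1 = voiced-like]; y: 1 = stimulus labeled "voiced".
X = np.array([[150, 0], [150, 1], [200, 0], [200, 1],
              [250, 0], [250, 1], [300, 0], [300, 1]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1, 1, 1])

Xz = StandardScaler().fit_transform(X)             # put both cues on one scale
weights = LogisticRegression().fit(Xz, y).coef_[0]
print(dict(zip(["vocalic_duration", "offset_transition"], weights)))
# A larger absolute coefficient = stronger weighting of that cue.
```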

Nonword imitation by children with cochlear implants: consonant analyses. (78/1097)

OBJECTIVES: To complete detailed linguistic analyses of archived recordings of pediatric cochlear implant users' imitations of nonwords, and to gain insight into the children's developing phonological systems and the wide range of variability in nonword responses. DESIGN: Nonword repetition: repetition of 20 auditory-only English-sounding nonwords. SETTING: Central Institute for the Deaf "Education of the Deaf Child" research program, St Louis, Mo. PARTICIPANTS: Eighty-eight 8- to 10-year-old experienced pediatric cochlear implant users. MAIN OUTCOME MEASURES: Several consonant accuracy scores based on the linguistic structure (voicing, place, and manner of articulation) of the consonants being imitated, and an analysis of the errors produced for all consonants imitated incorrectly. RESULTS: Seventy-six children provided a response to at least 75% of the nonword stimuli. In these children's responses, 33% of the target consonants were imitated correctly, 25% were deleted, and substitutions were provided for the remaining 42%. The children tended to reproduce target consonants with coronal place of articulation (produced with a constriction in the middle of the vocal tract) correctly more often than other consonants. Poorer performers tended to produce more deletions than better performers, but their production errors tended to follow the same patterns as those of the better performers. CONCLUSIONS: Poorer performance on labial consonants suggests that scores were affected by the lack of visual cues such as lip closure. Oral communication users tended to perform better than total communication users, indicating that oral communication methods are beneficial to the development of pediatric cochlear implant users' phonological processing skills.
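
The consonant accuracy measures listed above amount to aligning each imitated consonant with its target and tallying, per linguistic class, how many were reproduced correctly, deleted, or replaced by another consonant. A toy version of that tally is sketched below; the transcription scheme, the place-of-articulation map, and the position-by-position alignment are simplifying assumptions, not the scoring procedure used in the study.

```python
# Toy tally of correct / deleted / substituted target consonants by place
# of articulation; transcriptions and the place map are illustrative only.
from collections import Counter

PLACE = {"p": "labial", "b": "labial", "m": "labial",
         "t": "coronal", "d": "coronal", "n": "coronal", "s": "coronal",
         "k": "dorsal", "g": "dorsal"}

def score(target: list, response: list) -> Counter:
    """Compare position-aligned target and response consonants (None = deleted)."""
    tally = Counter()
    for tgt, rsp in zip(target, response):
        place = PLACE[tgt]
        if rsp is None:
            tally[(place, "deleted")] += 1
        elif rsp == tgt:
            tally[(place, "correct")] += 1
        else:
            tally[(place, "substituted")] += 1
    return tally

# Target consonants of one nonword vs. a child's imitation:
print(score(["b", "t", "k"], ["b", "d", None]))
```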

Intracortical responses in human and monkey primary auditory cortex support a temporal processing mechanism for encoding of the voice onset time phonetic parameter. (79/1097)

This study tests the hypothesis that temporal response patterns in primary auditory cortex are potentially relevant for voice onset time (VOT) encoding in two related experiments. The first experiment investigates whether temporal responses reflecting VOT are modulated in a way that can account for boundary shifts that occur with changes in first formant (F1) frequency, and by extension, consonant place of articulation. Evoked potentials recorded from Heschl's gyrus in a patient undergoing epilepsy surgery evaluation are examined. Representation of VOT varies in a manner that reflects the spectral composition of the syllables and the underlying tonotopic organization. Activity patterns averaged across extended regions of Heschl's gyrus parallel changes in the subject's perceptual boundaries. The second experiment investigates whether the physiological boundary for detecting the sequence of two acoustic elements parallels the psychoacoustic result of approximately 20 ms. Population responses evoked by two-tone complexes with variable tone onset times (TOTs) in primary auditory cortex of the monkey are examined. Onset responses evoked by both the first and second tones are detected at a TOT separation as short as 20 ms. Overall, parallels between perceptual and physiological results support the relevance of a population-based temporal processing mechanism for VOT encoding.
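
The two-tone complexes used in the second experiment can be thought of as a pair of pure tones whose onsets are staggered by the tone onset time (TOT). The sketch below synthesizes such a stimulus in the simplest possible way; the frequencies, duration, amplitude scaling, and sample rate are arbitrary choices for illustration, not the study's stimulus parameters.

```python
# Two-tone complex with a variable tone onset time (TOT); all signal
# parameters here are illustrative, not those of the study.
import numpy as np

def two_tone_complex(tot_ms: float, f_lead: float = 500.0, f_lag: float = 1500.0,
                     dur_ms: float = 200.0, fs: int = 44100) -> np.ndarray:
    """One tone runs for dur_ms; the other starts tot_ms later and co-terminates."""
    n_total = int(fs * dur_ms / 1000.0)
    offset = int(fs * tot_ms / 1000.0)
    t = np.arange(n_total) / fs
    sig = np.sin(2 * np.pi * f_lead * t)                 # leading tone
    t_lag = np.arange(n_total - offset) / fs
    sig[offset:] += np.sin(2 * np.pi * f_lag * t_lag)    # delayed tone
    return 0.5 * sig / np.max(np.abs(sig))               # simple normalization

stimulus = two_tone_complex(tot_ms=20.0)   # near the ~20 ms perceptual boundary
print(stimulus.shape)                      # (8820,) at 44.1 kHz and 200 ms
```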

Integration of letters and speech sounds in the human brain. (80/1097)

Most people acquire literacy skills with remarkable ease, even though the human brain is not evolutionarily adapted to this relatively new cultural phenomenon. Associations between letters and speech sounds form the basis of reading in alphabetic scripts. We investigated the functional neuroanatomy of the integration of letters and speech sounds using functional magnetic resonance imaging (fMRI). Letters and speech sounds were presented unimodally and bimodally in congruent or incongruent combinations. Analysis of single-subject data and group data aligned on the basis of individual cortical anatomy revealed that letters and speech sounds are integrated in heteromodal superior temporal cortex. Interestingly, responses to speech sounds in a modality-specific region of the early auditory cortex were modified by simultaneously presented letters. These results suggest that efficient processing of culturally defined associations between letters and speech sounds relies on neural mechanisms similar to those naturally evolved for integrating audiovisual speech.
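
One common way to quantify integration effects of the kind reported here is to compare bimodal congruent with bimodal incongruent responses in a region, and to ask whether the bimodal response exceeds the strongest unimodal response. The sketch below illustrates those two simple criteria with invented region-of-interest averages; it is not the statistical analysis used in the study.

```python
# Illustrative audiovisual integration metrics from condition-mean responses
# in one region; the numbers are invented, not results from the study.
def congruency_effect(congruent: float, incongruent: float) -> float:
    """Bimodal congruent minus bimodal incongruent response."""
    return congruent - incongruent

def exceeds_max_criterion(bimodal: float, auditory: float, visual: float) -> bool:
    """True if the bimodal response exceeds the strongest unimodal response."""
    return bimodal > max(auditory, visual)

roi = {"letters": 0.4, "speech": 1.1, "congruent": 1.6, "incongruent": 1.2}
print(congruency_effect(roi["congruent"], roi["incongruent"]))                 # 0.4
print(exceeds_max_criterion(roi["congruent"], roi["speech"], roi["letters"]))  # True
```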