Imitation of nonwords by deaf children after cochlear implantation: preliminary findings.

Fourteen prelingually deafened pediatric users of the Nucleus-22 cochlear implant were asked to imitate auditorily presented nonwords. The children's utterances were recorded, digitized, and broadly transcribed. The target patterns and the children's imitations were then played back to normal-hearing adult listeners in order to obtain perceptual judgments of repetition accuracy. The results revealed wide variability in the children's ability to repeat the novel sound sequences. Individual differences in the component processes of encoding, memory, and speech production were strongly reflected in the nonword repetition scores. Duration of deafness before implantation also appeared to be a factor associated with imitation performance. Linguistic analyses of the initial consonants in the nonwords revealed that coronal stops were imitated best, followed by the coronal fricative /s/, and then the labial and velar stops. Labial fricatives were poorly imitated. The theoretical significance of the nonword repetition task as it has been used in past studies of working memory and vocabulary development in normal-hearing children is discussed.

Methods for characterizing participants' nonmainstream dialect use in child language research.

Three different approaches to the characterization of research participants' nonmainstream dialect use can be found in the literature. They include listener judgment ratings, type-based counts of nonmainstream pattern use, and token-based counts. In this paper, we examined these three approaches, as well as shortcuts to these methods, using language samples from 93 children previously described in J. Oetting and J. McDonald (2001). Nonmainstream dialects represented in the samples included rural Louisiana versions of Southern White English (SWE) and Southern African American English (SAAE). Depending on the method and shortcut used, correct dialect classifications (SWE or SAAE) were made for 88% to 97% of the participants; however, regression algorithms had to be applied to the type- and token-based results to achieve these outcomes. For characterizing the rate at which the participants produced the nonmainstream patterns, the token-based methods were found to be superior to the others, but estimates from all approaches were moderately to highly correlated with each other. When type- and/or token-based methods were used to characterize participants' dialect type and rate, the number of patterns included in the analyses could be substantially reduced without significantly affecting the validity of the outcomes. These findings have important implications for future child language studies that are done within the context of dialect diversity.
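
To make the distinction concrete: a type-based count asks how many distinct nonmainstream patterns appear in a sample, whereas a token-based count asks how often such patterns occur relative to the size of the sample. A minimal sketch follows (the pattern labels, sample coding, and denominator are hypothetical simplifications, not Oetting and McDonald's actual coding scheme):

def type_based_score(observed, inventory):
    # Type-based count: number of distinct nonmainstream patterns
    # from the scored inventory that occur at least once.
    return len(set(observed) & set(inventory))

def token_based_rate(observed, n_words):
    # Token-based rate: total nonmainstream tokens produced,
    # divided by sample size (here, words in the sample).
    return len(observed) / n_words

# Hypothetical coded sample: one entry per scored occurrence
observed = ["zero copula", "zero copula", "subject-verb disagreement"]
inventory = {"zero copula", "zero past tense", "subject-verb disagreement"}
print(type_based_score(observed, inventory))  # 2 distinct patterns (type measure)
print(token_based_rate(observed, 250))        # 3 tokens / 250 words = 0.012 (token measure)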

Speech segmentation by native and non-native speakers: the use of lexical, syntactic, and stress-pattern cues.

Varying degrees of plasticity in different subsystems of language have been demonstrated by studies showing that some aspects of language are processed similarly by native speakers and late-learners, whereas other aspects are processed differently by the two groups. The study of speech segmentation provides a means by which the ability to process different types of linguistic information can be measured within the same task, because lexical, syntactic, and stress-pattern information can all indicate where one word ends and the next begins in continuous speech. In this study, native Japanese and native Spanish late-learners of English (as well as near-monolingual Japanese and Spanish speakers) were asked to determine whether specific sounds fell at the beginning or in the middle of words in English sentences. Like native English speakers, late-learners employed lexical information to perform the segmentation task. However, non-native speakers did not use syntactic information to the same extent as native English speakers. Although both groups of late-learners of English used stress pattern as a segmentation cue, the extent to which this cue was relied upon depended on the stress-pattern characteristics of their native language. These findings support the hypothesis that learning a second language later in life has differential effects on subsystems within language.

Timing interference to speech in altered listening conditions.

A theory is outlined that explains the disruption that occurs when auditory feedback is altered. The key part of the theory is that the number of, and relationship between, inputs to a timekeeper, operative during speech control, affects speech performance. The effects of alteration to auditory feedback depend on the extra input provided to the timekeeper. Different disruption is predicted for auditory feedback that is out of synchrony with other speech activity (e.g., delayed auditory feedback, DAF) compared with synchronous forms of altered feedback (e.g., frequency-shifted feedback, FSF). Stimulus manipulations that can be made synchronously with speech are predicted to cause disruption equivalent to that of the synchronous form of altered feedback. Three experiments are reported. In all of them, subjects repeated a syllable at a fixed rate (Wing and Kristofferson, 1973). Overall timing variance was decomposed into the variance of a timekeeper (Cv) and the variance of a motor process (Mv). Experiment 1 validated Wing and Kristofferson's method for estimating Cv in a speech task by showing that only this variance component increased when subjects repeated syllables at different rates. Experiment 2 showed that DAF increased Cv compared with the condition in which no altered sound occurred (experiment 1) and compared with FSF. In experiment 3, sections of the subjects' output sequence were increased in amplitude. In one condition subjects merely heard this sound; in a second condition they made a duration decision about it. When no response was made, results were like those with FSF. When a response was made, Cv increased at longer repetition periods. The finding that the principal effects of DAF, of a duration decision, and of repetition period fall on Cv, whereas synchronous alterations that require no decision (amplitude-increased sections with no response, and FSF) leave Cv unaffected, supports the hypothesis that synchronized and asynchronized inputs affect the timekeeping process in different ways.
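
For readers unfamiliar with the decomposition, the Wing and Kristofferson (1973) estimates can be computed directly from a sequence of inter-response intervals: under the model, the lag-1 autocovariance of the intervals identifies the motor variance. A minimal sketch (the function name, the millisecond units, and the convention of clipping negative motor-variance estimates to zero are illustrative choices, not details of the study above):

import numpy as np

def wing_kristofferson(intervals):
    # Wing & Kristofferson (1973): each inter-response interval is
    #   I_n = C_n + M_(n+1) - M_n,
    # with timekeeper intervals C and motor delays M independent, so
    #   var(I) = Cv + 2 * Mv   and   lag-1 autocovariance of I = -Mv.
    I = np.asarray(intervals, dtype=float)
    d = I - I.mean()
    total_var = I.var(ddof=1)
    acov1 = np.dot(d[:-1], d[1:]) / (len(I) - 1)  # lag-1 autocovariance
    Mv = max(-acov1, 0.0)        # motor variance; negative estimates clipped to 0
    Cv = total_var - 2.0 * Mv    # timekeeper variance
    return Cv, Mv

# Illustration on simulated data that satisfy the model's assumptions
rng = np.random.default_rng(0)
C = rng.normal(500.0, 20.0, 200)   # timekeeper intervals, ms
M = rng.normal(50.0, 5.0, 201)     # motor delays, ms
I = C + M[1:] - M[:-1]             # observed inter-response intervals
print(wing_kristofferson(I))       # approximately (400.0, 25.0)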

The influence of phonological similarity neighborhoods on speech production.

The influence of phonological similarity neighborhoods on the speed and accuracy of speech production was investigated with speech-error elicitation and picture-naming tasks. The results from 2 speech-error elicitation techniques, the spoonerisms of laboratory-induced predisposition (SLIP) technique (B. J. Baars, 1992; B. J. Baars & M. T. Motley, 1974; M. T. Motley & B. J. Baars, 1976) and tongue twisters, showed that more errors were elicited for words with few similar-sounding words (i.e., a sparse neighborhood) than for words with many similar-sounding words (i.e., a dense neighborhood). The results from 3 picture-naming tasks showed that words with sparse neighborhoods were also named more slowly than words with dense neighborhoods. These findings demonstrate that multiple word forms are activated simultaneously and influence the speed and accuracy of speech production. The implications of these findings for current models of speech production are discussed.
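
By convention, a phonological neighbor is a word that differs from the target by exactly one phoneme (a substitution, addition, or deletion), and a word's neighborhood density is the number of such neighbors in the lexicon. A minimal sketch of that count (the toy lexicon and the phoneme coding are hypothetical):

def is_neighbor(a, b):
    # True if phoneme sequences a and b differ by exactly one
    # substitution, addition, or deletion.
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    short, long = (a, b) if len(a) < len(b) else (b, a)
    i = 0
    while i < len(short) and short[i] == long[i]:
        i += 1
    return short[i:] == long[i + 1:]  # rest must match after the one inserted phoneme

def neighborhood_density(word, lexicon):
    return sum(is_neighbor(word, w) for w in lexicon)

# Toy lexicon in a broad phonemic coding (hypothetical)
lexicon = [("b", "ae", "t"), ("k", "ae", "b"), ("k", "ae", "t", "s"), ("d", "ao", "g")]
print(neighborhood_density(("k", "ae", "t"), lexicon))  # 3 neighbors of "cat" here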

Non-word reading, lexical retrieval and stuttering: comments on Packman, Onslow, Coombes and Goodwin (2001).

A recent study by Packman, Onslow, Coombes and Goodwin (2001) employed a non-word-reading paradigm to test the contribution of the lexical retrieval process to stuttering. They consider that, with this material, the lexical retrieval process could not contribute to stuttering and that anxiety and/or the motor demands of reading are the governing factors. This paper discusses possible processes underlying non-word reading and argues that the conclusion arrived at by Packman et al. does not stand up to close scrutiny. In their introduction, the authors acknowledge that the lexicalization process involves both retrieval and encoding of words. In a non-word-reading task, the word retrieval component is eliminated. The possibility that the encoding component of the lexicalization process leads to stuttering is, however, completely ignored by the authors when they attribute stuttering to motor demands. Because theories put forward by Postma and Kolk (the Covert Repair Hypothesis, 1993) and Howell and Au-Yeung (the EXPLAN theory, 2002) argue strongly for a role of phonological encoding processes in stuttering, Packman et al.'s work does not evaluate such theories. Theoretical issues aside, Packman et al.'s argument about reading rate and stuttering rate, based on reading time, is also questionable.

Learning to perceive speech: how fricative perception changes, and how it stays the same.

Part of becoming a mature perceiver involves learning what signal properties provide relevant information about objects and events in the environment. Regarding speech perception, evidence supports the position that the allocation of attention to various signal properties changes as children gain experience with their native language, and so learn what information is relevant to recognizing phonetic structure in that language. However, one weakness in that work has been that data have largely come from experiments that all use similarly designed stimuli and show similar age-related differences in labeling. In this study, two perception experiments were conducted that used stimuli designed differently from those of past experiments, with different predictions. In experiment 1, adults and children (4, 6, and 8 years of age) labeled stimuli with natural /f/ and /θ/ noises and synthetic vocalic portions that had initial formant transitions varying in appropriateness for /f/ or /θ/. The prediction was that similar labeling patterns would be found for all listeners. In experiment 2, adults and children labeled stimuli with initial /s/-like and /ʃ/-like noises and synthetic vocalic portions that had initial formant transitions varying in appropriateness for /s/ or /ʃ/. The prediction was that, as found before, children would weight formant transitions more and fricative noises less than adults, but that this age-related difference would elicit different patterns of labeling from those found previously. Results largely matched predictions, and so further evidence was garnered for the position that children learn which properties of the speech signal provide relevant information about phonetic structure in their native language.

Common prefrontal regions coactivate with dissociable posterior regions during controlled semantic and phonological tasks.

One of the most ubiquitous findings in functional neuroimaging research is activation of left inferior prefrontal cortex (LIPC) during tasks requiring controlled semantic retrieval. Here we show that LIPC participates in the controlled retrieval of nonsemantic as well as semantic representations. Results also demonstrate that LIPC coactivates with dissociable posterior regions depending on the information retrieved: with left temporal cortex during the controlled retrieval of semantics, and with left posterior frontal and parietal cortex during the controlled retrieval of phonology. Correlation of performance with LIPC activation suggests a processing role associated with mapping relatively ambiguous stimulus-to-representation relationships during both semantic and phonological tasks. These findings suggest that LIPC participates in controlled processing across multiple information domains, collaborating with dissociable posterior regions depending upon the kind of information retrieved.