Language outcome following multiple subpial transection for Landau-Kleffner syndrome.
Landau-Kleffner syndrome is an acquired epileptic aphasia in which previously normal children lose acquired speech and language abilities. Although some children recover some of these abilities, many children with Landau-Kleffner syndrome have significant language impairments that persist. Multiple subpial transection is a surgical technique that has been proposed as an appropriate treatment for Landau-Kleffner syndrome: it is designed to eliminate the capacity of cortical tissue to generate seizures or subclinical epileptiform activity while preserving the cortical functions subserved by that tissue. We report on the speech and language outcome of 14 children who underwent multiple subpial transection for treatment of Landau-Kleffner syndrome. Eleven children demonstrated significant postoperative improvement on measures of receptive or expressive vocabulary. Results indicate that early diagnosis and treatment optimize outcome, and that gains in language function are most likely to be seen years, rather than months, after surgery. Because an appropriate control group was not available, and because the best predictor of postoperative improvement in language function was length of time since surgery, these data may best serve as a benchmark for comparison with other Landau-Kleffner syndrome outcome studies. We conclude that multiple subpial transection may be useful in allowing for a restoration of speech and language abilities in children diagnosed with Landau-Kleffner syndrome.
Infants' learning about words and sounds in relation to objects.
In acquiring language, babies learn not only that people can communicate about objects and events, but also that they typically use a particular kind of act as the communicative signal. The current studies asked whether 1-year-olds' learning of names during joint attention is guided by the expectation that names will be in the form of spoken words. In the first study, 13-month-olds were introduced to either a novel word or a novel sound-producing action (using a small noisemaker). Both the word and the sound were produced by a researcher as she showed the baby a new toy during a joint attention episode. The baby's memory for the link between the word or sound and the object was tested in a multiple choice procedure. Thirteen-month-olds learned both the word-object and sound-object correspondences, as evidenced by their choosing the target reliably in response to hearing the word or sound on test trials, but not on control trials when no word or sound was present. In the second study, 13-month-olds, but not 20-month-olds, learned a new sound-object correspondence. These results indicate that infants initially accept a broad range of signals in communicative contexts and narrow the range with development.
Exchange of stuttering from function words to content words with age.
Dysfluencies on function words in the speech of people who stutter mainly occur when function words precede, rather than follow, content words (Au-Yeung, Howell, & Pilgrim, 1998). It is hypothesized that such function word dysfluencies occur when the plan for the subsequent content word is not ready for execution. Repetition and hesitation on the function words buy time to complete the plan for the content word. Stuttering arises when speakers abandon this delaying strategy and carry on, attempting production of the subsequent, partly prepared content word. To test these hypotheses, the relationship between dysfluency on function and content words was investigated in the spontaneous speech of 51 people who stutter and 68 people who do not stutter. These participants were subdivided into the following age groups: 2-6-year-olds, 7-9-year-olds, 10-12-year-olds, teenagers (13-18 years), and adults (20-40 years). Very few dysfluencies occurred for either fluency group on function words that occupied a position after a content word. For both fluency groups, dysfluency within each phonological word occurred predominantly on either the function word preceding the content word or on the content word itself, but not both. Fluent speakers had a higher percentage of dysfluency on initial function words than on content words. Whether dysfluency occurred on initial function words or content words changed over age groups for speakers who stutter. For the 2-6-year-old speakers who stutter, there was a higher percentage of dysfluencies on initial function words than content words. In subsequent age groups, dysfluency decreased on function words and increased on content words. These data are interpreted as suggesting that fluent speakers use repetition of function words to delay production of the subsequent content words, whereas people who stutter carry on and attempt a content word on the basis of an incomplete plan.
Continuous speech recognition for clinicians.
The current generation of continuous speech recognition systems claims to offer high-accuracy (greater than 95 percent) speech recognition at natural speech rates (150 words per minute) on low-cost (under $2000) platforms. This paper presents a state-of-the-technology summary, along with insights the authors have gained through testing one such product extensively and other products superficially. The authors have identified a number of issues that are important in managing accuracy and usability. First, for efficient recognition users must start with a dictionary containing the phonetic spellings of all words they anticipate using. The authors dictated 50 discharge summaries using one inexpensive internal medicine dictionary ($30) and found that they needed to add 400 terms to reach recognition rates of 98 percent. However, if they used either of two more expensive and extensive commercial medical vocabularies ($349 and $695), they did not need to add terms to reach a 98 percent recognition rate. Second, users must speak clearly and continuously, distinctly pronouncing all syllables. Users must also correct errors as they occur, because correcting errors improves accuracy by at least 5 percent over two weeks. Users may find it difficult to train the system to recognize certain terms, regardless of the amount of training, and appropriate substitutions must be created. For example, the authors had to substitute "twice a day" for "bid" when using the less expensive dictionary, but not when using the other two dictionaries. From trials they conducted in settings ranging from an emergency room to hospital wards and clinicians' offices, they learned that ambient noise has minimal effect. Finally, they found that a minimal "usable" hardware configuration (which keeps up with dictation) comprises a 300-MHz Pentium processor with 128 MB of RAM and a "speech quality" sound card (e.g., SoundBlaster, $99).
Anything less powerful will result in the system lagging behind the speaking rate. The authors obtained 97 percent accuracy with just 30 minutes of training when using the latest edition of one of the speech recognition systems supplemented by a commercial medical dictionary. This technology has advanced considerably in recent years and is now a serious contender to replace some or all of the increasingly expensive alternative methods of dictation with human transcription.
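The recognition rates cited above are word-level accuracy figures. As an illustrative sketch (not the authors' evaluation code), such a figure can be computed as one minus the word error rate, using word-level edit distance between the reference transcript and the recognizer's output; the medical phrases below are invented for illustration:

```python
def word_accuracy(reference, hypothesis):
    """Word accuracy = 1 - word error rate, via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 1 - dp[-1][-1] / len(ref)

reference = "patient denies chest pain or shortness of breath"
hypothesis = "patient denies chest pain or shortness of breast"
print(word_accuracy(reference, hypothesis))  # one substitution in 8 words -> 0.875
```

A "98 percent recognition rate" on a discharge summary corresponds to a word error rate of 2 percent under this measure.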
Phonotactics, neighborhood activation, and lexical access for spoken words.
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed.
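Neighborhood density is commonly operationalized as the number of lexicon entries that differ from the target by a single phoneme substitution, addition, or deletion. A minimal sketch of that one-phoneme metric, using a toy lexicon of phoneme strings (one character per phoneme) assumed purely for illustration:

```python
def is_neighbor(a, b):
    """True if b differs from a by one phoneme substitution, addition, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):  # substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:  # addition or deletion
        shorter, longer = sorted((a, b), key=len)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

def neighborhood_density(word, lexicon):
    """Count of lexicon entries that are one-phoneme neighbors of word."""
    return sum(is_neighbor(word, w) for w in lexicon)

# Toy lexicon, for illustration only; the word itself is not its own neighbor.
lexicon = ["kat", "bat", "kab", "kats", "at", "dog"]
print(neighborhood_density("kat", lexicon))  # bat, kab, kats, at -> 4
```

On this measure a "dense" real word has many such competitors, which is what is hypothesized to slow lexical-level responses.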
Word recall correlates with sleep cycles in elderly subjects.
Morning recall of words presented before sleep was studied in relation to intervening night sleep measures in elderly subjects. Night sleep of 30 elderly subjects aged 61-75 years was recorded. Before sleep, subjects were presented with a list of pairs of unrelated words, and cued recall was tested immediately after morning awakening. Recall correlated positively with the average duration of NREM/REM cycles and with the proportion of total sleep time (TST) spent in cycles (TCT). No significant correlations were found with other sleep or wake measures. These results suggest the importance of sleep structure for sleep-related memory processes in elderly adults.
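The reported association is a standard product-moment correlation between per-subject sleep measures and recall scores. A sketch of that computation, with hypothetical per-subject values that are not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values, for illustration only:
cycle_minutes = [85, 92, 100, 104, 110, 118]  # mean NREM/REM cycle duration
words_recalled = [4, 5, 5, 7, 8, 9]           # morning cued recall score
r = pearson_r(cycle_minutes, words_recalled)
```

A positive r with these invented numbers mirrors the direction of the reported effect; the study's own analysis would additionally test r for significance across its 30 subjects.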
Recognition of spoken words by native and non-native listeners: talker-, listener-, and item-related factors.
In order to gain insight into the interplay between the talker-, listener-, and item-related factors that influence speech perception, a large multi-talker database of digitally recorded spoken words was developed, and was then submitted to intelligibility tests with multiple listeners. Ten talkers produced two lists of words at three speaking rates. One list contained lexically "easy" words (words with few phonetically similar sounding "neighbors" with which they could be confused), and the other list contained lexically "hard" words (words with many phonetically similar sounding "neighbors"). An analysis of the intelligibility data obtained with native speakers of English (experiment 1) showed a strong effect of lexical similarity. Easy words had higher intelligibility scores than hard words. A strong effect of speaking rate was also found whereby slow and medium rate words had higher intelligibility scores than fast rate words. Finally, a relationship was also observed between the various stimulus factors whereby the perceptual difficulties imposed by one factor, such as a hard word spoken at a fast rate, could be overcome by the advantage gained through the listener's experience and familiarity with the speech of a particular talker. In experiment 2, the investigation was extended to another listener population, namely, non-native listeners. Results showed that the ability to take advantage of surface phonetic information, such as a consistent talker across items, is a perceptual skill that transfers easily from first to second language perception. However, non-native listeners had particular difficulty with lexically hard words even when familiarity with the items was controlled, suggesting that non-native word recognition may be compromised when fine phonetic discrimination at the segmental level is required. 
Taken together, the results of this study provide insight into the signal-dependent and signal-independent factors that influence spoken language processing in native and non-native listeners.
Cognitive modularity and genetic disorders.
This study challenges the use of adult neuropsychological models for explaining developmental disorders of genetic origin. When uneven cognitive profiles are found in childhood or adulthood, it is assumed that such phenotypic outcomes characterize infant starting states, and it has been claimed that modules subserving these abilities start out either intact or impaired. Findings from two experiments with infants with Williams syndrome (a phenotype selected to bolster innate modularity claims) indicate a within-syndrome double dissociation: For numerosity judgments, they do well in infancy but poorly in adulthood, whereas for language, they perform poorly in infancy but well in adulthood. The theoretical and clinical implications of these results could lead to a shift in focus for studies of genetic disorders.