Cross-modal plasticity underpins language recovery after cochlear implantation. (1/44)

Postlingually deaf subjects learn the meaning of sounds after cochlear implantation by forming new associations between sounds and their sources. Implants generate coarse frequency responses, preventing place-coding fine enough to discriminate sounds with similar temporal characteristics, e.g., buck/duck. This limitation imposes a dependency on visual cues, e.g., lipreading. We hypothesized that cross-modal facilitation results from engagement of the visual cortex by purely auditory tasks. In four functional neuroimaging experiments, we show recruitment of early visual cortex (V1/V2) when cochlear implant users listen to sounds with eyes closed. Activity in visual cortex evolved in a stimulus-specific manner as a function of time from implantation, reflecting experience-dependent adaptations in the postimplant phase.

Do you see what I'm saying? Interactions between auditory and visual cortices in cochlear implant users. (2/44)

Primary sensory cortices are generally thought to be devoted to one sensory modality: vision, hearing, or touch, for example. Surprising interactions between these sensory modes have recently been reported. One example demonstrates that people with cochlear implants show increased activity in visual cortex when listening to speech; this may be related to enhanced lipreading ability.

Electrophysiology and brain imaging of biological motion. (3/44)

The movements of the faces and bodies of conspecifics provide stimuli of considerable interest to the social primate. Studies of single cells, field potential recordings and functional neuroimaging data indicate that specialized visual mechanisms exist in the superior temporal sulcus (STS) of both human and non-human primates that produce selective neural responses to moving natural images of faces and bodies. STS mechanisms also process simplified displays of biological motion involving point lights marking the limb articulations of animate bodies and geometrical shapes whose motion simulates purposeful behaviour. Facial movements such as deviations in eye gaze, important for gauging an individual's social attention, and mouth movements, indicative of potential utterances, generate particularly robust neural responses that differentiate between movement types. Collectively, such visual processing can enable the decoding of complex social signals and, through its outputs to limbic, frontal and parietal systems, the STS may play a part in enabling appropriate affective responses and social behaviour.

A functional-anatomical model for lipreading. (4/44)

Regional cerebral blood flow (rCBF) PET scans were used to study the physiological bases of lipreading, a natural skill of extracting language from mouth movements, which contributes to speech perception in everyday life. Viewing connected mouth movements that could not be lexically identified and that evoke perception of isolated speech sounds (nonlexical lipreading) was associated with bilateral activation of the auditory association cortex around Wernicke's area, of the left dorsal premotor cortex, and of the opercular-premotor division of the left inferior frontal gyrus (Broca's area). The supplementary motor area was active as well. These areas have all been implicated in phonological processing, speech and mouth motor planning, and execution. Nonlexical lipreading also differentially activated visual motion areas. Lexical access through lipreading was associated with a similar pattern of activation and with additional foci in ventral- and dorsolateral prefrontal cortex bilaterally and in left inferior parietal cortex. Linear regression analysis of cerebral blood flow against lexical lipreading proficiency further clarified the role of these areas in gaining access to language through lipreading. The results suggest cortical activation circuits for lipreading from action representations that may differentiate lexical access from nonlexical processes.
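To make the regression step described above concrete, the following minimal sketch fits a per-subject linear regression of rCBF in one region of interest against lipreading proficiency. It is an illustration only, not the authors' analysis pipeline; the variable names and numbers are hypothetical placeholders.

import numpy as np
from scipy import stats

# Hypothetical per-subject values: lexical lipreading proficiency scores
# and rCBF extracted from a single region of interest (arbitrary units).
proficiency = np.array([12.0, 18.5, 22.0, 9.5, 30.0, 25.5, 15.0, 27.0])
rcbf_roi = np.array([48.2, 51.0, 53.4, 47.1, 56.8, 54.9, 50.3, 55.2])

# Ordinary least-squares fit: rcbf_roi = slope * proficiency + intercept.
fit = stats.linregress(proficiency, rcbf_roi)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")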

DEAFNESS. (5/44)

Dr T E T Weston describes his research into the effect of noise on hearing acuity and into deafness in the aged. He found that presbyacusis is associated with a multiplicity of factors, e.g. smoking, circulatory disturbance, urban domicile, heredity and occupational acoustic trauma. Miss W Galbraith describes the social implications of various degrees of deafness and the ways in which they can be overcome by such measures as lipreading, hearing aids and rehabilitation. Sir Terence Cawthorne discusses otosclerosis, nearly 1% of the population being affected by this type of deafness. He describes the modern operation of insertion of an artificial piston through the stapes and states that 90% of cases submitted to this operation will show immediate improvement, whilst 85% should still have retained this improvement at the end of two years.

Cross-modal integration and plastic changes revealed by lip movement, random-dot motion and sign languages in the hearing and deaf. (6/44)

Sign language activates the auditory cortex of deaf subjects, which is evidence of cross-modal plasticity. Lip-reading (visual phonetics), which involves audio-visual integration, activates the auditory cortex of hearing subjects. To test whether audio-visual cross-modal plasticity occurs within areas involved in cross-modal integration, we used functional MRI to study seven prelingual deaf signers, 10 hearing non-signers and nine hearing signers. The visually presented tasks included mouth-movement matching, random-dot motion matching and sign-related motion matching. The mouth-movement tasks included conditions with or without visual phonetics, and the difference between these was used to measure the lip-reading effects. During the mouth-movement matching tasks, the deaf subjects showed more prominent activation of the left planum temporale (PT) than the hearing subjects. During dot-motion matching, the deaf subjects showed greater activation in the right PT. Sign-related motion, with or without a lexical component, activated the left PT in the deaf signers more than in the hearing signers. These areas showed lip-reading effects in hearing subjects. These findings suggest that cross-modal plasticity is induced by auditory deprivation independently of lexical processes or visual phonetics, and that this plasticity is mediated in part by the neural substrates of audio-visual cross-modal integration.

Phonological processing in deaf children: when lipreading and cues are incongruent. (7/44)

Deaf children exposed to Cued Speech (CS), either before age two (early) or later at school (late), were presented with pseudowords with and without CS. The main goal was to establish the way in which lipreading and CS combine to produce unitary percepts, similar to audiovisual integration in speech perception when participants are presented with synchronized but different lipreading and auditory information (the McGurk paradigm). In the present experiment, lips and cues were sometimes congruent and sometimes incongruent. It was expected that incongruent cues would force the perceptual system to adopt solutions according to the weight attributed to different sources of phonological information. With congruent cues, performance improved, with greater improvements in the early group than in the late group. With incongruent cues, performance decreased relative to lipreading only, indicating that cues were not ignored, and the effect of incongruent cues increased when the visibility of the target phoneme decreased. The results are compatible with the notion that the perceptual system integrates cues and lipreading according to principles similar to those evoked to explain audiovisual integration.

Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants. (8/44)

OBJECTIVE: The present study investigated the development of audiovisual comprehension skills in prelingually deaf children who received cochlear implants. DESIGN: We analyzed results obtained with the Common Phrases (Robbins et al., 1995) test of sentence comprehension from 80 prelingually deaf children with cochlear implants who were enrolled in a longitudinal study, from pre-implantation to 5 years after implantation. RESULTS: The results revealed that prelingually deaf children with cochlear implants performed better under audiovisual (AV) presentation than under auditory-alone (A-alone) or visual-alone (V-alone) conditions. AV sentence comprehension skills were found to be strongly correlated with several clinical outcome measures of speech perception, speech intelligibility, and language. Finally, pre-implantation V-alone performance on the Common Phrases test was strongly correlated with 3-year postimplantation performance on clinical outcome measures of speech perception, speech intelligibility, and language skills. CONCLUSIONS: The results suggest that lipreading skills and AV speech perception reflect a common source of variance associated with the development of phonological processing skills that is shared among a wide range of speech and language outcome measures.
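As an illustration of the correlational analysis described above, the following minimal sketch computes a Pearson correlation between audiovisual Common Phrases scores and one clinical outcome measure. All values and variable names are hypothetical placeholders, not the study's data or code.

import numpy as np
from scipy import stats

# Hypothetical per-child scores: percent correct on the Common Phrases test
# in the audiovisual (AV) condition and a clinical speech-perception score.
av_common_phrases = np.array([65, 80, 72, 90, 55, 78, 84, 60])
speech_perception = np.array([40, 62, 55, 75, 30, 58, 70, 38])

# Pearson correlation between AV comprehension and the outcome measure.
r, p = stats.pearsonr(av_common_phrases, speech_perception)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")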