A comparison of language achievement in children with cochlear implants and children using hearing aids. (1/432)

English language achievement of 29 prelingually deaf children with 3 or more years of cochlear implant (CI) experience was compared to the achievement levels of prelingually deaf children who did not have such CI experience. Language achievement was measured by the Rhode Island Test of Language Structure (RITLS), a measure of signed and spoken sentence comprehension, and the Index of Productive Syntax (IPSyn), a measure of expressive (signed and spoken) English grammar. When the CI users were compared with their deaf age mates who contributed to the norms of the RITLS, it was found that CI users achieved significantly better scores. Likewise, we found that CI users performed better than 29 deaf children who used hearing aids (HAs) with respect to English grammar achievement as indexed by the IPSyn. Additionally, we found that chronological age was highly correlated with IPSyn levels only among the non-CI users, whereas length of CI experience was significantly correlated with IPSyn scores for CI users. Finally, clear differences between those with and without CI experience emerged by 2 years of post-implant experience. These data provide evidence that children who receive CIs benefit in the form of improved English language comprehension and production.
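The correlational findings above lend themselves to a short illustration. The Python sketch below computes Pearson correlations between IPSyn scores and either chronological age (hearing-aid group) or length of implant experience (CI group); the variable names and data values are purely illustrative placeholders, not figures from the study.

```python
# Hypothetical sketch of the correlation analysis described above: Pearson
# correlations between IPSyn scores and either chronological age (HA group)
# or length of implant experience (CI group). All values are illustrative.
from scipy.stats import pearsonr

def report_correlation(label, predictor, ipsyn_scores):
    """Print the Pearson r and p-value for one group."""
    r, p = pearsonr(predictor, ipsyn_scores)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")

# Illustrative placeholder vectors (one value per child).
ha_age_months = [72, 80, 95, 101, 110, 118]
ha_ipsyn = [30, 34, 41, 45, 52, 55]
ci_experience_months = [24, 30, 36, 48, 54, 60]
ci_ipsyn = [38, 42, 47, 56, 60, 63]

report_correlation("HA users: age vs. IPSyn", ha_age_months, ha_ipsyn)
report_correlation("CI users: implant experience vs. IPSyn",
                   ci_experience_months, ci_ipsyn)
```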

Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging. (2/432)

BACKGROUND AND PURPOSE: Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. METHODS: A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. RESULTS: Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. CONCLUSION: Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. Combining an optimized MR sequence with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has previously been possible.
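As a point of reference for the algorithms compared above, the following minimal Python sketch shows one of them, a maximum intensity projection, applied to a synthetic volume; the array sizes and the spiral test object are assumptions for illustration, and a real pipeline would operate on the reconstructed T2 fast spin-echo volume and also implement ray casting and isosurface extraction for comparison.

```python
# Minimal sketch of one of the visualization algorithms compared above:
# a maximum intensity projection (MIP) along one axis of a 3D volume.
# The volume here is synthetic, standing in for high-signal fluid spaces.
import numpy as np

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3D volume to a 2D image by keeping the brightest voxel
    along each ray parallel to the chosen axis."""
    return volume.max(axis=axis)

# Synthetic 64^3 volume with a bright spiral as a stand-in structure.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, size=(64, 64, 64))
z = np.arange(64)
vol[z,
    (32 + 20 * np.cos(z / 6)).astype(int),
    (32 + 20 * np.sin(z / 6)).astype(int)] = 1.0

mip = max_intensity_projection(vol, axis=0)
print(mip.shape)  # (64, 64) projection image
```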

Cochlear implantations in Northern Ireland: an overview of the first five years. (3/432)

During the last few years, cochlear implantation (CI) has made remarkable progress, developing from a mere research tool into a viable clinical application. The Centre for CI in Northern Ireland was established in 1992 and has since been a provider of this new technology for the rehabilitation of profoundly deaf patients in the region. Although individual performance with a cochlear implant cannot be predicted accurately, the overall success of CI can no longer be denied. Seventy-one patients, 37 adults and 34 children, have received implants over the first five years of the Northern Ireland cochlear implant programme, which is located at the Belfast City Hospital. The complication rates and post-implantation outcomes of this centre compare favourably with those of other major centres which undertake the procedure. This paper aims to highlight the patient selection criteria, surgery, post-CI outcomes, clinical and research developments within our centre, and future prospects of this recent modality of treatment.

Prevalence of mitochondrial gene mutations among hearing impaired patients. (4/432)

The frequency of three mitochondrial point mutations, 1555A-->G, 3243A-->G, and 7445A-->G, known to be associated with hearing impairment, was examined using restriction fragment length polymorphism (RFLP) analysis in two Japanese groups: (1) 319 unrelated outpatients with sensorineural hearing loss (SNHL), including 21 with a history of aminoglycoside antibiotic injection, and (2) 140 cochlear implantation patients, including 22 with aminoglycoside-induced hearing loss. Approximately 3% of the outpatients and 10% of the cochlear implantation patients had the 1555A-->G mutation. The frequency was higher in the patients with a history of aminoglycoside injection (outpatient group 33%, cochlear implantation group 59%). One outpatient (0.314%) had the 3243A-->G mutation, but no outpatient had the 7445A-->G mutation, and neither of these mutations was found in the cochlear implantation group. The 1555A-->G mutation was thus the most prevalent mitochondrial mutation in this hearing-impaired Japanese population, and its significance is especially evident among subjects with specific backgrounds, such as aminoglycoside-induced hearing loss.
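For readers who want to reproduce the percentages above, the following sketch shows the frequency arithmetic; the carrier counts are approximations back-calculated from the published percentages (for example, roughly 3% of 319 outpatients), not figures taken from the paper.

```python
# Worked illustration of the mutation frequencies reported above. The carrier
# counts are back-calculated approximations from the published percentages,
# not data from the study itself.
def frequency(carriers: int, total: int) -> str:
    return f"{carriers}/{total} = {100 * carriers / total:.1f}%"

print("1555A->G, outpatients:       ", frequency(10, 319))  # ~3%
print("1555A->G, CI patients:       ", frequency(14, 140))  # ~10%
print("1555A->G, outpatients w/ AG: ", frequency(7, 21))    # 33%
print("1555A->G, CI patients w/ AG: ", frequency(13, 22))   # 59%
print("3243A->G, outpatients:       ", frequency(1, 319))   # ~0.3%
```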

Functional plasticity of language-related brain areas after cochlear implantation. (5/432)

Using PET, the cerebral network engaged by heard language processing in normal hearing subjects was compared with that in patients who received a cochlear implant after a period of profound deafness. The experimental conditions were words, syllables and environmental sounds, each controlled by a noise baseline. Four categories of effect were observed: (i) regions that were recruited by patients and controls under identical task conditions: the left and right superior temporal cortices and the left insula were activated in both groups in all conditions; (ii) new regions, which were recruited by patients only: the left dorsal occipital cortex showed systematic activation in all conditions versus noise baselines; (iii) regions that were recruited by both groups with a different functional specificity; e.g., Wernicke's area responded specifically to speech sounds in controls but was not specialized in patients; and (iv) regions that were activated in one group more than the other: the precuneus and parahippocampal gyrus (patients more than controls) and the left inferior frontal, left posterior inferior temporal and left and right temporoparietal junction regions (controls more than patients). These data provide evidence for altered functional specificity of the superior temporal cortex, flexible recruitment of brain regions located within and outside the classical language areas and automatic contribution of visual regions to sound recognition in implant patients.

Use of audiovisual information in speech perception by prelingually deaf children with cochlear implants: a first report. (6/432)

OBJECTIVE: Although there has been a great deal of recent empirical work and new theoretical interest in audiovisual speech perception in both normal-hearing and hearing-impaired adults, relatively little is known about the development of these abilities and skills in deaf children with cochlear implants. This study examined how prelingually deafened children combine visual information available in the talker's face with auditory speech cues provided by their cochlear implants to enhance spoken language comprehension. DESIGN: Twenty-seven hearing-impaired children who use cochlear implants identified spoken sentences presented under auditory-alone and audiovisual conditions. Five additional measures of spoken word recognition performance were used to assess auditory-alone speech perception skills. A measure of speech intelligibility was also obtained to assess the speech production abilities of these children. RESULTS: A measure of audiovisual gain, "Ra," was computed using sentence recognition scores in auditory-alone and audiovisual conditions. Another measure of audiovisual gain, "Rv," was computed using scores in visual-alone and audiovisual conditions. The results indicated that children who were better at recognizing isolated spoken words through listening alone were also better at combining the complementary sensory information about speech articulation available under audiovisual stimulation. In addition, we found that children who received more benefit from audiovisual presentation also produced more intelligible speech, suggesting a close link between speech perception and production and a common underlying linguistic basis for audiovisual enhancement effects. Finally, an examination of the distribution of children enrolled in Oral Communication (OC) and Total Communication (TC) programs indicated that OC children tended to score higher on measures of audiovisual gain, spoken word recognition, and speech intelligibility. CONCLUSIONS: The relationships observed between auditory-alone speech perception, audiovisual benefit, and speech intelligibility indicate that these abilities are not based on independent language skills, but instead reflect a common source of linguistic knowledge, used in both perception and production, that is based on the dynamic, articulatory motions of the vocal tract. The effects of communication mode demonstrate the important contribution of early sensory experience to perceptual development, specifically, language acquisition and the use of phonological processing skills. Intervention and treatment programs that aim to increase receptive and productive spoken language skills, therefore, may wish to emphasize the inherent cross-correlations that exist between auditory and visual sources of information in speech perception.
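The abstract does not spell out how "Ra" and "Rv" are defined; the sketch below assumes the conventional relative-gain formulas from the audiovisual speech perception literature (gain normalized by the room left for improvement), so the exact expressions should be read as an assumption rather than the study's own definitions.

```python
# Sketch of the two audiovisual-gain measures mentioned above, assuming the
# conventional relative-gain definitions (normalized by room for improvement).
# Scores are percent correct; example numbers are illustrative.
def ra(auditory_alone: float, audiovisual: float) -> float:
    """Relative gain of adding vision to hearing: (AV - A) / (100 - A)."""
    return (audiovisual - auditory_alone) / (100.0 - auditory_alone)

def rv(visual_alone: float, audiovisual: float) -> float:
    """Relative gain of adding hearing to lipreading: (AV - V) / (100 - V)."""
    return (audiovisual - visual_alone) / (100.0 - visual_alone)

# Illustrative child: 40% auditory-alone, 25% visual-alone, 70% audiovisual.
print(f"Ra = {ra(40, 70):.2f}")  # 0.50: half of the possible auditory gain realized
print(f"Rv = {rv(25, 70):.2f}")  # 0.60
```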

Cross-modal plasticity underpins language recovery after cochlear implantation. (7/432)

Postlingually deaf subjects learn the meaning of sounds after cochlear implantation by forming new associations between sounds and their sources. Implants generate coarse frequency responses, preventing place-coding fine enough to discriminate sounds with similar temporal characteristics, e.g., buck/duck. This limitation imposes a dependency on visual cues, e.g., lipreading. We hypothesized that cross-modal facilitation results from engagement of the visual cortex by purely auditory tasks. In four functional neuroimaging experiments, we show recruitment of early visual cortex (V1/V2) when cochlear implant users listen to sounds with eyes closed. Activity in visual cortex evolved in a stimulus-specific manner as a function of time from implantation, reflecting experience-dependent adaptations in the post-implant phase.

Some measures of verbal and spatial working memory in eight- and nine-year-old hearing-impaired children with cochlear implants. (8/432)

OBJECTIVE: The purpose of this study was to examine working memory for sequences of auditory and visual stimuli in prelingually deafened pediatric cochlear implant users with at least 4 yr of device experience. DESIGN: Two groups of 8- and 9-yr-old children, 45 normal-hearing and 45 hearing-impaired users of cochlear implants, completed a novel working memory task requiring memory for sequences of either visual-spatial cues or visual-spatial cues paired with auditory signals. In each sequence, colored response buttons were illuminated either with or without simultaneous auditory presentation of verbal labels (color-names or digit-names). The child was required to reproduce each sequence by pressing the appropriate buttons on the response box. Sequence length was varied, and a measure of memory span corresponding to the longest list length correctly reproduced under each set of presentation conditions was recorded. Additional children completed a modified task that eliminated the visual-spatial light cues but that still required reproduction of auditory color-name sequences using the same response box. Data from 37 pediatric cochlear implant users were collected using this modified task. RESULTS: The cochlear implant group obtained shorter span scores on average than the normal-hearing group, regardless of presentation format. The normal-hearing children also demonstrated a larger "redundancy gain" than children in the cochlear implant group; that is, the normal-hearing group displayed better memory for auditory-plus-lights sequences than for the lights-only sequences. Although the children with cochlear implants did not use the auditory signals as effectively as normal-hearing children when visual-spatial cues were also available, their performance on the modified memory task using only auditory cues showed that some of the children were capable of encoding auditory-only sequences at a level comparable to that of normal-hearing children. CONCLUSIONS: The finding of smaller redundancy gains from the addition of auditory cues to visual-spatial sequences in the cochlear implant group as compared with the normal-hearing group demonstrates differences in encoding or rehearsal strategies between these two groups of children. Differences in memory span between the two groups even on a visual-spatial memory task suggest that atypical working memory development, irrespective of input modality, may be present in this clinical population.
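The span-scoring rule described above (the longest list length reproduced correctly) can be summarized in a few lines; the trial data in this sketch are hypothetical, and the function name is introduced here purely for illustration.

```python
# Minimal sketch of the span-scoring rule described above: span is the longest
# sequence length at which the child reproduced the button sequence correctly.
# Trial data are hypothetical.
def memory_span(trials: list[tuple[list[int], list[int]]]) -> int:
    """Each trial is (presented_sequence, child_response); return the longest
    presented length that was reproduced exactly, or 0 if none were."""
    correct_lengths = [len(seq) for seq, resp in trials if seq == resp]
    return max(correct_lengths, default=0)

trials = [
    ([2, 5],       [2, 5]),        # length 2, correct
    ([1, 4, 3],    [1, 4, 3]),     # length 3, correct
    ([6, 2, 5, 1], [6, 5, 2, 1]),  # length 4, error
]
print(memory_span(trials))  # 3
```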