Language processing is strongly left lateralized in both sexes. Evidence from functional MRI.
Functional MRI (fMRI) was used to examine gender effects on brain activation during a language comprehension task. A large number of subjects (50 women and 50 men) were studied to maximize the statistical power to detect subtle differences between the sexes. To estimate the specificity of findings related to sex differences, parallel analyses were performed on two groups of randomly assigned subjects. Men and women showed very similar, strongly left lateralized activation patterns. Voxel-wise tests for group differences in overall activation patterns demonstrated no significant differences between women and men. In further analyses, group differences were examined by region of interest and by hemisphere. No differences were found between the sexes in lateralization of activity in any region of interest or in intrahemispheric cortical activation patterns. These data argue against substantive differences between men and women in the large-scale neural organization of language processes.
Effects of talker, rate, and amplitude variation on recognition memory for spoken words.
This study investigated the encoding of the surface form of spoken words using a continuous recognition memory task. The purpose was to compare and contrast three sources of stimulus variability (talker, speaking rate, and overall amplitude) to determine the extent to which each source of variability is retained in episodic memory. In Experiment 1, listeners judged whether each word in a list of spoken words was "old" (had occurred previously in the list) or "new." Listeners were more accurate at recognizing a word as old if it was repeated by the same talker and at the same speaking rate; however, there was no recognition advantage for words repeated at the same overall amplitude. In Experiment 2, listeners were first asked to judge whether each word was old or new, as before, and then they had to explicitly judge whether it was repeated by the same talker, at the same rate, or at the same amplitude. On the first task, listeners again showed an advantage in recognition memory for words repeated by the same talker and at the same speaking rate, but no advantage occurred for the amplitude condition. However, in all three conditions, listeners were able to explicitly detect whether an old word was repeated by the same talker, at the same rate, or at the same amplitude. These data suggest that although information about all three properties of spoken words is encoded and retained in memory, each source of stimulus variation differs in the extent to which it affects episodic memory for spoken words.
Infants' learning about words and sounds in relation to objects.
In acquiring language, babies learn not only that people can communicate about objects and events, but also that they typically use a particular kind of act as the communicative signal. The current studies asked whether 1-year-olds' learning of names during joint attention is guided by the expectation that names will be in the form of spoken words. In the first study, 13-month-olds were introduced to either a novel word or a novel sound-producing action (using a small noisemaker). Both the word and the sound were produced by a researcher as she showed the baby a new toy during a joint attention episode. The baby's memory for the link between the word or sound and the object was tested in a multiple choice procedure. Thirteen-month-olds learned both the word-object and sound-object correspondences, as evidenced by their choosing the target reliably in response to hearing the word or sound on test trials, but not on control trials when no word or sound was present. In the second study, 13-month-olds, but not 20-month-olds, learned a new sound-object correspondence. These results indicate that infants initially accept a broad range of signals in communicative contexts and narrow the range with development.
Isolating the contributions of familiarity and source information to item recognition: a time course analysis.
Recognition memory may be mediated by the retrieval of distinct types of information, notably, a general assessment of familiarity and the recovery of specific source information. A response-signal speed-accuracy trade-off variant of an exclusion procedure was used to isolate the retrieval time course for familiarity and source information. In 2 experiments, participants studied spoken and read lists (with various numbers of presentations) and then performed an exclusion task, judging an item as old only if it was in the heard list. Dual-process fits of the time course data indicated that familiarity information typically is retrieved before source information. The implications that these data have for models of recognition, including dual-process and global memory models, are discussed.
PET imaging of cochlear-implant and normal-hearing subjects listening to speech and nonspeech.
Functional neuroimaging with positron emission tomography (PET) was used to compare the brain activation patterns of normal-hearing (NH) subjects with those of postlingually deaf, cochlear-implant (CI) subjects listening to speech and nonspeech signals. The speech stimuli were derived from test batteries for assessing speech-perception performance of hearing-impaired subjects with different sensory aids. Subjects were scanned while passively listening to monaural (right ear) stimuli in five conditions: Silent Baseline, Word, Sentence, Time-reversed Sentence, and Multitalker Babble. Both groups showed bilateral activation in superior and middle temporal gyri to speech and backward speech. However, group differences were observed when the Sentence condition was compared with the Silent Baseline. CI subjects showed more activated foci in right temporal regions, where lateralized mechanisms for prosodic (pitch) processing have been well established; NH subjects showed a focus in the left inferior frontal gyrus (Brodmann's area 47), where semantic processing has been implicated. Multitalker Babble activated auditory temporal regions in the CI group only. Whereas NH listeners probably habituated to this multitalker babble, the CI listeners may be using a perceptual strategy that emphasizes 'coarse' coding to perceive this stimulus globally as speechlike. The group differences provide the first neuroimaging evidence suggesting that postlingually deaf CI and NH subjects may engage differing perceptual processing strategies under certain speech conditions.
Regulation of parkinsonian speech volume: the effect of interlocutor distance.
This study examined the automatic regulation of speech volume over distance in hypophonic patients with Parkinson's disease and age and sex matched controls. There were two speech settings: conversation, and the recitation of sequential material (for example, counting). The perception of interlocutor speech volume by patients with Parkinson's disease and controls over varying distances was also examined, and found to be slightly discrepant. For speech production, it was found that controls significantly increased overall speech volume for conversation relative to that for sequential material. Patients with Parkinson's disease were unable to achieve this overall increase for conversation, and consistently spoke at a softer volume than controls at all distances (intercept reduction). However, patients were still able to increase volume for greater distances in a similar way to controls for conversation and sequential material, thus showing a normal pattern of volume regulation (slope similarity). It is suggested that speech volume regulation is intact in Parkinson's disease but that its gain is reduced. These findings are reminiscent of skeletal motor control studies in Parkinson's disease, in which the amplitude of movement is diminished but the relation with another factor is preserved (stride length increases as cadence, that is, stepping rate, increases).
Specialization of left auditory cortex for speech perception in man depends on temporal coding.
Speech perception requires cortical mechanisms capable of analysing and encoding successive spectral (frequency) changes in the acoustic signal. To study temporal speech processing in the human auditory cortex, we recorded intracerebral evoked potentials to syllables in right and left human auditory cortices including Heschl's gyrus (HG), planum temporale (PT) and the posterior part of superior temporal gyrus (area 22). Natural voiced (/ba/, /da/, /ga/) and voiceless (/pa/, /ta/, /ka/) syllables, spoken by a native French speaker, were used to study the processing of a specific temporally based acoustico-phonetic feature, the voice onset time (VOT). This acoustic feature is present in nearly all languages, and it is the VOT that provides the basis for the perceptual distinction between voiced and voiceless consonants. The present results show a lateralized processing of acoustic elements of syllables. First, processing of voiced and voiceless syllables is distinct in the left, but not in the right, HG and PT. Second, only the evoked potentials in the left HG, and to a lesser extent in PT, reflect a sequential processing of the different components of the syllables. Third, we show that this acoustic temporal processing is not limited to speech sounds but applies also to non-verbal sounds mimicking the temporal structure of the syllable. Fourth, there was no difference between responses to voiced and voiceless syllables in either left or right area 22. Our data suggest that a single mechanism in the auditory cortex, involved in general (not only speech-specific) temporal processing, may underlie the further processing of verbal (and non-verbal) stimuli. This coding, bilaterally localized in auditory cortex in animals, takes place specifically in the left HG in man. A defect of this mechanism could account for hearing discrimination impairments associated with language disorders.
Cochlear implantations in Northern Ireland: an overview of the first five years.
During the last few years cochlear implantation (CI) has made remarkable progress, developing from a mere research tool to a viable clinical application. The Centre for CI in Northern Ireland was established in 1992 and has since been a provider of this new technology for the rehabilitation of profoundly deaf patients in the region. Although individual performance with a cochlear implant cannot be predicted accurately, the overall success of CI can no longer be denied. Seventy-one patients, 37 adults and 34 children, have received implants over the first five years of the Northern Ireland cochlear implant programme, which is located at the Belfast City Hospital. The complication rates and the post-implantation outcomes of this centre compare favourably with those of other major centres which undertake the procedure. This paper aims to highlight the patient selection criteria, surgery, post-CI outcome, clinical and research developments within our centre, and future prospects of this recent modality of treatment.