Cortical activation patterns of affective speech processing depend on concurrent demands on the subvocal rehearsal system. A DC-potential study.

In order to delineate brain regions specifically involved in the processing of affective components of spoken language (affective or emotive prosody), we conducted two event-related potential experiments. Cortical activation patterns were assessed by recordings of direct current components of the EEG signal from the scalp. Right-handed subjects discriminated pairs of declarative sentences with either happy, sad or neutral intonation. Each stimulus pair was derived from two identical original utterances that, owing to digital signal manipulations, differed slightly in fundamental frequency (F0) range or in duration of stressed syllables. In the first experiment, subjects were asked (i) to identify the original emotional category of each sentence pair and (ii) to decide which of the two items displayed stronger emotional expressiveness. Participants in the second experiment were asked to repeat the utterances using inner speech during stimulus presentation, in addition to the discrimination task. In the absence of inner speech, a predominant activation of right frontal regions was observed, irrespective of emotional category. In the second experiment, discrimination during additional performance of inner speech yielded bilateral activation with left frontal preponderance. Compared with the first experiment, a new pattern of acoustic signal processing arose: brain activity decreased during processing of F0 stimulus variants and increased during discrimination of duration-manipulated sentence pairs. Analysis of behavioural data revealed no significant differences in evaluation of expressiveness between the two experiments. We conclude that the topographical shift of cortical activity originates from left hemisphere (LH) mechanisms of speech processing that centre on the subvocal rehearsal system, the articulatory control component of the phonological loop. Subvocal articulatory activity such as inner speech initiates a strong coupling of the acoustic input and (planned) verbal output channels in the LH. These neural networks may interpret verbal acoustic signals in terms of motor programs and facilitate continuous control of speech output by comparing the signal produced with that intended. Most likely, information on motor aspects of suprasegmental signal characteristics contributes to the evaluation of affective components of spoken language. In consequence, the right hemisphere (RH) holds only a relative dominance, both for processing of F0 and for evaluation of the emotional significance of sensory input. Psychophysically, an important determinant of lateralization patterns appears to be the degree of communicative demand: solely perceptive (RH) versus both perceptive and verbal-expressive (RH and LH).
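A minimal sketch of the kind of F0-range manipulation described above, using the pyworld vocoder and soundfile for I/O (my choice of tools; the study's actual signal-processing chain is not specified). Compressing the voiced F0 contour around its mean narrows the F0 range while leaving segment durations untouched:

```python
import numpy as np
import soundfile as sf   # pip install soundfile
import pyworld as pw     # pip install pyworld

# Decompose the utterance into F0, spectral envelope and aperiodicity.
x, fs = sf.read("utterance.wav")            # hypothetical mono recording
x = np.ascontiguousarray(x, dtype=np.float64)
f0, t = pw.dio(x, fs)                        # raw F0 track
f0 = pw.stonemask(x, f0, t, fs)              # refined F0
sp = pw.cheaptrick(x, f0, t, fs)             # spectral envelope
ap = pw.d4c(x, f0, t, fs)                    # aperiodicity

# Compress the F0 range to 70% of the original around the voiced mean;
# durations are untouched, so only the pitch excursion changes.
voiced = f0 > 0
f0_new = f0.copy()
f0_new[voiced] = f0[voiced].mean() + 0.7 * (f0[voiced] - f0[voiced].mean())

y = pw.synthesize(f0_new, sp, ap, fs)
sf.write("utterance_f0_narrow.wav", y, fs)
```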

Contribution of a speech recognition system to a computerized pneumonia guideline in the emergency department.

OBJECTIVE: Evaluate the effect of a radiology speech recognition system on a real-time computerized guideline in the emergency department. METHODS: We collected all chest x-ray reports (n = 727) generated for patients in the emergency department during a six-week period. We divided the concurrently generated reports into those generated with speech recognition and those generated by traditional dictation. We compared the two sets of reports for availability during the patient's emergency department encounter and for readability. RESULTS: Reports generated by speech recognition were available seven times more often during the patients' encounters than reports generated by traditional dictation. Using speech recognition reduced the turnover time of reports from 12 hours 33 minutes to 2 hours 13 minutes. Readability scores were identical for both kinds of reports. CONCLUSION: Using speech recognition to generate chest x-ray reports reduces turnover time so reports are available while patients are in the emergency department.
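A hedged sketch of how such a turnaround comparison could be computed from a report log with pandas. The file name, column names and the discharge field are hypothetical; the study's actual data pipeline is not described:

```python
import pandas as pd

# Hypothetical log: one row per chest x-ray report, with a 'method'
# column of either 'speech_recognition' or 'dictation'.
reports = pd.read_csv(
    "cxr_reports.csv",
    parse_dates=["exam_time", "report_time", "ed_discharge_time"],
)

# Turnover: examination completion to report availability.
reports["turnover"] = reports["report_time"] - reports["exam_time"]
print(reports.groupby("method")["turnover"].median())

# Share of reports available before the patient left the ED.
reports["available_in_ed"] = reports["report_time"] <= reports["ed_discharge_time"]
print(reports.groupby("method")["available_in_ed"].mean())
```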

When seconds are counted: tools for mobile, high-resolution time-motion studies.

Time-motion (TM) studies are often considered the gold standard for measurements of the impact of computer systems on task flow and duration. However, in many clinical environments tasks occur too rapidly and have too short a duration to be captured with conventional paper-based TM methods. Observers may also wish to categorize caregiver activities along multiple axes simultaneously. This multi-axial characteristic of clinical activity has been modeled as multiple, parallel finite-state sets and implemented in three computerized data collection tools. Radiology reporting is a domain in which tasks can be characterized by multiple attributes, and a radiologist may switch among multiple tasks in a single minute. The use of these tools to measure the impact of an Automated Speech Recognition (ASR) system on radiology reporting is presented.
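A minimal sketch of the "multiple, parallel finite-state sets" idea, assuming a design in which each observation axis is an independent state machine with a timestamped transition log (the axis and state names are invented; the paper's tools are not described at this level of detail):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Axis:
    """One finite-state set, e.g. 'task' or 'location'."""
    name: str
    states: set
    current: str
    log: list = field(default_factory=list)  # (timestamp, state) transitions

    def transition(self, state: str) -> None:
        if state not in self.states:
            raise ValueError(f"{state!r} is not a state of axis {self.name!r}")
        self.log.append((time.time(), state))
        self.current = state

# Two parallel axes: the observer can flip either one independently,
# so a single moment is characterized along both at once.
task = Axis("task", {"dictating", "reviewing images", "interrupted"},
            "reviewing images")
location = Axis("location", {"reading room", "hallway"}, "reading room")

task.transition("dictating")     # radiologist starts dictating...
location.transition("hallway")   # ...while walking into the hallway
```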

Identification of a pathway for intelligible speech in the left temporal lobe.

It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we presented a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part responds only if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension.

Enhancement of declarative memory associated with emotional content in a Brazilian sample.

Several studies have documented that emotional arousal may enhance long-term memory. This is an adaptation of a paradigm previously used with North American and European samples in investigations of the influence of emotion on long-term retention. A sample of 46 healthy adults of high and low educational levels watched a slide presentation of stories. A randomly assigned group watched a story with arousing content and another group watched a neutral story. The stories were matched for structure and comprehensibility, and the set and order of the 11 slides were the same in both conditions. Immediately after viewing the slide presentation, the participants were asked to rate the emotionality of the narrative. The arousing narrative was rated as more emotional than the neutral narrative (t(44) = -3.6, P<0.001). Ten days later, subjects were asked to remember the story and answer a multiple-choice questionnaire about it. The subjects who watched the arousing story had higher scores on the free recall measure (t(44) = -2.59, P<0.01). There were no differences between groups on the multiple-choice test of recognition memory (t(44) = 0.26). These findings confirm that emotionally arousing content enhances long-term declarative memory and indicate the possibility of applying this instrument to clinical samples of various cultural backgrounds.
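The group comparisons above are independent-samples t-tests with df = n1 + n2 - 2 = 44, consistent with two groups of 23 subjects. A minimal sketch with scipy.stats; the scores below are placeholder data, not the study's:

```python
import numpy as np
from scipy import stats

# Placeholder free-recall scores; 23 subjects per group reproduces
# the reported degrees of freedom (23 + 23 - 2 = 44).
rng = np.random.default_rng(42)
neutral = rng.normal(loc=5.0, scale=1.5, size=23)
arousing = rng.normal(loc=6.2, scale=1.5, size=23)

t, p = stats.ttest_ind(neutral, arousing)   # two-sided, equal variances
print(f"t(44) = {t:.2f}, P = {p:.4f}")
```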

Computers in imaging and health care: now and in the future.

Early picture archiving and communication systems (PACS) were characterized by the use of very expensive hardware devices, cumbersome display stations, duplication of database content, lack of interfaces to other clinical information systems, and immaturity in their understanding of the folder manager concepts and workflow reengineering. They were implemented historically at large academic medical centers by biomedical engineers and imaging informaticists. PACS were nonstandard, home-grown projects with mixed clinical acceptance. However, they clearly showed the great potential for PACS and filmless medical imaging. Filmless radiology is a reality today. The advent of efficient softcopy display of images provides a means for dealing with the ever-increasing number of studies and number of images per study. Computer power has increased, and archival storage cost has decreased, to the extent that the economics of PACS is justifiable with respect to film. Network bandwidths have increased to allow large studies of many megabytes to arrive at display stations within seconds of examination completion. PACS vendors have recognized the need for efficient workflow and have built systems with intelligence in the management of patient data. Close integration with the hospital information system (HIS)-radiology information system (RIS) is critical for system functionality; successful implementation of PACS requires integration or interoperation with hospital and radiology information systems. Besides the economic advantages, secure rapid access to all clinical information on patients, including imaging studies, anytime and anywhere, enhances the quality of patient care, although that benefit is difficult to quantify. Medical image management systems are maturing, providing access outside of the radiology department to images and clinical information throughout the hospital or the enterprise via the Internet. Small and medium-sized community hospitals, private practices, and outpatient centers in rural areas will begin realizing the benefits of PACS already realized by the large tertiary care academic medical centers and research institutions. Hand-held devices and the World Wide Web are going to change the way people communicate and do business. The impact on health care, including radiology, will be huge. Computer-aided diagnosis, decision support tools, virtual imaging, and guidance systems will transform our practice as value-added applications utilizing the technologies pushed by PACS development efforts. Outcomes data and the electronic medical record (EMR) will drive our interactions with referring physicians, and we expect the radiologist to become the informaticist, a new version of the medical management consultant.
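A back-of-the-envelope check of the bandwidth claim above; the study size and link speed are illustrative figures of that era, not numbers from the article:

```python
# Transfer time for a many-megabyte imaging study over a fast network.
study_size_mb = 200   # illustrative multi-image study
link_mbps = 100       # illustrative switched Fast Ethernet link

transfer_seconds = study_size_mb * 8 / link_mbps   # megabytes -> megabits
print(f"{study_size_mb} MB over {link_mbps} Mb/s: ~{transfer_seconds:.0f} s")
# -> ~16 s, i.e. "within seconds of examination completion"
```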

The anatomy of language: contributions from functional neuroimaging.

This article illustrates how functional neuroimaging can be used to test the validity of neurological and cognitive models of language. Three models of language are described: the 19th Century neurological model, which describes both the anatomy and the cognitive components of auditory and visual word processing, and two 20th Century cognitive models that are not constrained by anatomy but emphasise two different routes to reading not present in the neurological model. A series of functional imaging studies are then presented which show that, as predicted by the 19th Century neurologists, auditory and visual word repetition engages the left posterior superior temporal and posterior inferior frontal cortices. More specifically, the roles Wernicke and Broca assigned to these regions lie, respectively, in the posterior superior temporal sulcus and the anterior insula. In addition, a region in the left posterior inferior temporal cortex is activated for word retrieval, thereby providing a second route to reading, as predicted by the 20th Century cognitive models. This region and its function may have been missed by the 19th Century neurologists because selective damage to it is rare. The angular gyrus, previously linked to the visual word form system, is shown to be part of a distributed semantic system that can be accessed by objects and faces as well as speech. Other components of the semantic system include several regions in the inferior and middle temporal lobes. From these functional imaging results, a new anatomically constrained model of word processing is proposed which reconciles the anatomical ambitions of the 19th Century neurologists with the cognitive finesse of the 20th Century cognitive models. The review focuses on single word processing and does not attempt to discuss how words are combined to generate sentences or how several languages are learned and interchanged. Progress in unravelling these and other related issues will depend on the integration of behavioural, computational and neurophysiological approaches, including neuroimaging.
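A toy sketch of the dual-route idea the 20th Century cognitive models emphasise: a lexical route that retrieves stored whole-word pronunciations, and a sublexical route that assembles a pronunciation from grapheme-phoneme rules. The lexicon and rules below are invented illustrations, not part of any cited model:

```python
# Lexical route: whole-word pronunciations (needed for irregular words).
LEXICON = {"cat": "kæt", "yacht": "jɒt"}

# Sublexical route: grapheme-to-phoneme rules (handles novel words).
GP_RULES = {"c": "k", "a": "æ", "t": "t", "d": "d", "y": "j", "o": "ɒ", "h": ""}

def read_aloud(word: str) -> str:
    if word in LEXICON:
        return LEXICON[word]                               # lexical route
    return "".join(GP_RULES.get(ch, "?") for ch in word)   # sublexical route

print(read_aloud("cat"))    # known word: both routes agree -> kæt
print(read_aloud("dat"))    # nonword: only the rules can read it -> dæt
print(read_aloud("yacht"))  # irregular: lexicon gives jɒt; rules would give jækt
```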

Separate neural subsystems within 'Wernicke's area'.

Over time, both the functional and anatomical boundaries of 'Wernicke's area' have become so broad as to be meaningless. We have re-analysed four functional neuroimaging (PET) studies, three previously published and one unpublished, to identify anatomically separable, functional subsystems in the left superior temporal cortex posterior to primary auditory cortex. From the results we identified a posterior stream of auditory processing. One part, directed along the supratemporal cortical plane, responded to both non-speech and speech sounds, including the sound of the speaker's own voice. Activity in its most posterior and medial part, at the junction with the inferior parietal lobe, was linked to speech production rather than perception. The second, more lateral and ventral part lay in the posterior left superior temporal sulcus, a region that responded to an external source of speech. In addition, this region was activated by the recall of lists of words during verbal fluency tasks. The results are compatible with the hypothesis that the posterior superior temporal cortex is specialized for processes involved in the mimicry of sounds, including repetition, the specific role of the posterior left superior temporal sulcus being to transiently represent phonetic sequences, whether heard or internally generated and rehearsed. These processes are central to the acquisition of long-term lexical memories of novel words.