Slow tonic muscle fibers in the thyroarytenoid muscles of human vocal folds: a possible specialization for speech.

Most of the sounds of human speech are produced by vibration of the vocal folds, yet the biomechanics and control of these vibrations are poorly understood. In this study the muscle within the vocal fold, the thyroarytenoid muscle (TA), was examined for the presence and distribution of slow tonic muscle fibers (STF), a rare muscle fiber type with unique contraction properties. Nine human TAs were frozen and serially sectioned in the frontal plane. The presence and distribution pattern of STF in each TA were examined by immunofluorescence microscopy using the monoclonal antibodies (mAbs) ALD-19 and ALD-58, which react with the slow tonic myosin heavy chain (MyHC) isoform. In addition, TA muscle samples from adjacent frozen sections were examined for the slow tonic MyHC isoform by electrophoretic immunoblotting. STF were detected in all nine TAs, and the presence of the slow tonic MyHC isoform was confirmed in the immunoblots. The STF were distributed predominantly in the medial aspect of the TA, a distinct muscle compartment called the vocalis, which is the vibrating part of the vocal fold. STF do not contract with a twitch like most muscle fibers; instead, their contractions are prolonged, stable, precisely controlled, and fatigue resistant. The human voice is characterized by a stable sound with a wide frequency spectrum that can be precisely modulated, and STF may contribute to this ability. At present, the evidence suggests that STF are not present in the vocal folds of other mammals (including other primates); STF may therefore be a unique human specialization for speech.

Effects of gravitational load on jaw movements in speech.

External loads arising from the orientation of body segments relative to gravity can affect the achievement of movement goals. The degree to which subjects adjust control signals to compensate for these loads reflects the extent to which the forces affecting motion are represented neurally. In the present study we assessed whether subjects, when speaking, compensate for loads caused by the orientation of the head relative to gravity. We used a mathematical model of the jaw to predict the effects of control signals that are not adjusted for changes in head orientation. The simulations predicted a systematic change in sagittal-plane jaw orientation and horizontal position resulting from changes to the orientation of the head. We conducted an empirical study in which subjects were tested under the same conditions. With one exception, the empirical results were consistent with the simulations. In both the simulations and the empirical study, the jaw was rotated closer to occlusion and translated in an anterior direction when the head was in the prone orientation. When the head was in the supine orientation, the jaw was rotated away from occlusion. The findings suggest that the nervous system does not completely compensate for changes in head orientation relative to gravity. A second study was conducted to assess possible changes in acoustical patterns attributable to changes in head orientation. The frequencies of the first (F1) and second (F2) formants associated with the steady-state portion of vowels were measured. As in the kinematic study, systematic differences in the values of F1 and F2 were observed with changes in head orientation. The acoustical analysis thus further supports the conclusion that control signals are not completely adjusted to offset forces arising from changes in orientation.
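The paper's core argument, that a motor command left unadjusted for gravity yields an orientation-dependent jaw posture, can be illustrated with a toy static model. The sketch below is not the authors' jaw model: it simply balances a linear muscle-stiffness torque around a fixed commanded angle against the extra gravitational torque introduced by tilting the head, and every parameter value in it is hypothetical.

/** Toy static model (not the authors' model): equilibrium jaw angle when a
 *  fixed motor command meets a gravity torque that varies with head pitch.
 *  All parameter values are hypothetical, chosen only for illustration. */
public class JawGravityToy {
    static final double K = 0.8;   // muscle stiffness, N*m/rad (hypothetical)
    static final double M = 0.25;  // effective jaw mass, kg (hypothetical)
    static final double G = 9.81;  // gravitational acceleration, m/s^2
    static final double R = 0.05;  // center-of-mass lever arm, m (hypothetical)

    /** Equilibrium where stiffness torque balances the tilt-dependent part of
     *  the gravity torque: K * (theta - thetaUpright) = deltaTorque.
     *  The baseline upright gravity torque is folded into thetaUpright;
     *  sin(pitch) is zero when upright and has opposite signs for prone vs.
     *  supine tilt (the sign convention here is arbitrary). */
    static double equilibriumAngle(double thetaUpright, double headPitch) {
        double deltaTorque = M * G * R * Math.sin(headPitch);
        return thetaUpright + deltaTorque / K;
    }

    public static void main(String[] args) {
        double upright = 0.20; // upright jaw-opening angle, rad (hypothetical)
        System.out.printf("upright: %.4f rad%n", equilibriumAngle(upright, 0.0));
        System.out.printf("prone  : %.4f rad%n", equilibriumAngle(upright, -Math.PI / 2));
        System.out.printf("supine : %.4f rad%n", equilibriumAngle(upright,  Math.PI / 2));
    }
}

With these made-up numbers the prone equilibrium closes by about 0.15 rad and the supine equilibrium opens by the same amount, reproducing only the qualitative pattern reported above: an uncorrected command produces opposite jaw displacements for prone and supine head orientations.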

Interarticulator phasing, locus equations, and degree of coarticulation.

A locus equation plots the frequency of the second formant at vowel onset against the target frequency of the same formant for the vowel, across different vowel contexts in consonant-vowel sequences. It has generally been assumed that the slope of the locus equation reflects the degree of coarticulation between the consonant and the vowel, with a steeper slope indicating more coarticulation. This study examined the articulatory basis for this assumption. Four subjects produced VCV sequences of the consonants /b, d, g/ and the vowels /i, a, u/. The movements of the tongue and the lips were recorded using a magnetometer system. One articulatory measure was the temporal phasing between the onset of the lip closing movement for the bilabial consonant and the onset of the tongue movement from the first to the second vowel in a VCV sequence. A second measure was the magnitude of the tongue movement during the oral stop closure, averaged across four receivers on the tongue. A third measure was the magnitude of the tongue movement from the onset of the second vowel to the tongue position for that vowel. When compared with the corresponding locus equations, none of these measures supported the assumption that the slope serves as an index of the degree of coarticulation between the consonant and the vowel.
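In concrete terms, a locus equation is a straight-line fit F2onset = k * F2vowel + c across vowel contexts, and the slope k is the quantity whose interpretation the study tests. The sketch below fits such a line by ordinary least squares; the formant values in it are invented for illustration and are not the study's data.

/** Minimal ordinary-least-squares fit of a locus equation,
 *  F2onset = slope * F2vowel + intercept, across vowel contexts.
 *  The formant values are invented for illustration. */
public class LocusEquation {
    public static void main(String[] args) {
        // Hypothetical F2 values (Hz) at the vowel target and at vowel onset
        // for one consonant in three vowel contexts, e.g. /i/, /a/, /u/.
        double[] f2Vowel = {2300, 1200, 900};
        double[] f2Onset = {2050, 1500, 1350};

        int n = f2Vowel.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx  += f2Vowel[i];
            sy  += f2Onset[i];
            sxx += f2Vowel[i] * f2Vowel[i];
            sxy += f2Vowel[i] * f2Onset[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;

        // Under the assumption the study questions, a slope near 1 would
        // signal strong CV coarticulation and a slope near 0 almost none.
        System.out.printf("slope = %.3f, intercept = %.1f Hz%n", slope, intercept);
    }
}

With these invented values the fit is slope = 0.5 and intercept = 900 Hz; the study's point is that its articulatory measures of coarticulation did not track such slopes.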

A Java speech implementation of the Mini Mental Status Exam.

The Folstein Mini Mental Status Exam (MMSE) is a simple, widely used, verbally administered test of cognitive function. The Java Speech Application Programming Interface (JSAPI) is a new, cross-platform interface for both speech recognition and speech synthesis in the Java environment. To evaluate the suitability of the JSAPI for interactive patient-interview applications, a JSAPI implementation of the MMSE was developed. The MMSE contains questions that vary in structure in order to assess different cognitive functions, and this variability provided an excellent testbed for evaluating the strengths and weaknesses of the JSAPI. The application is based on the Java 2 platform and a JSAPI interface to the IBM ViaVoice recognition engine. Design and implementation issues are discussed. Preliminary usability studies demonstrate that an automated MMSE may be a useful screening tool for cognitive disorders and changes.
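For readers unfamiliar with the JSAPI, the sketch below shows the general shape of a JSAPI 1.0 synthesis call that an application like this might use to speak an exam prompt. It is a generic illustration, not code from the MMSE application itself, and it assumes a JSAPI-compliant engine (such as ViaVoice with its JSAPI bridge) is installed and registered.

import java.util.Locale;
import javax.speech.Central;
import javax.speech.synthesis.Synthesizer;
import javax.speech.synthesis.SynthesizerModeDesc;

/** Generic JSAPI 1.0 synthesis sketch (not the MMSE application's code):
 *  speak one exam-style prompt through whatever engine is registered. */
public class SpeakPrompt {
    public static void main(String[] args) throws Exception {
        // Ask the Central registry for any synthesizer matching the locale.
        Synthesizer synth = Central.createSynthesizer(
                new SynthesizerModeDesc(Locale.ENGLISH));
        synth.allocate();   // acquire engine resources
        synth.resume();     // leave the paused state so queued text plays

        // Queue plain text and block until the engine has finished speaking.
        synth.speakPlainText("What year is it?", null);
        synth.waitEngineState(Synthesizer.QUEUE_EMPTY);

        synth.deallocate(); // release the engine
    }
}

Recognition follows the same Central/allocate pattern via the javax.speech.recognition package, with grammars constraining what the engine listens for; the MMSE's variable question structure is what exercises that grammar machinery.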

MedSpanish: a language tool for the emergency department.

Language barriers frequently impede the ability of health care professionals to provide the highest quality care to their patients. Spanish-speaking people are rapidly becoming the largest minority population in the United States. To facilitate access to appropriate medical care that would not be inhibited by miscommunication or the lack of a trained translator, the MedSpanish Web site was developed for use in the Emergency Department. The site contains common Spanish vocabulary, with translations and audio clips, for use in such a setting. The various sections are formatted so that they can easily be turned into pocket cards rather than relying on the availability of a computer in a medical emergency. While MedSpanish is not designed to replace a trained translator, it offers an effective alternative when such translation services are not available.

Quality-of-service improvements from coupling a digital chest unit with integrated speech recognition, information, and picture archiving and communications systems.

Speech recognition reporting for chest examinations was introduced and tightly integrated with a Radiology Information System (RIS) and a Picture Archiving and Communications System (PACS). A feature of this integration was the one-to-one coupling of the case displayed on the workstation with speech recognition reporting for that particular examination and patient only. The utility of the resulting, wholly integrated electronic environment was then compared with that of the previous analog chest unit and dedicated wet processor, in which hard-copy examinations were reported by direct dictation to a typist. Improvements in quality of service over the previous work environment include (1) immediate release of the patient, (2) a decreased rate of repeat radiographs, (3) improved image quality, (4) decreased time for the examination to become available for interpretation, (5) automatic hanging of current and previous images, (6) ad hoc availability of images, (7) the capability for the radiologist to immediately review and correct the transcribed report, (8) decreased time for clinicians to view results, and (9) increased examination capacity per room.

The physiologic development of speech motor control: lip and jaw coordination.

This investigation was designed to describe the development of lip and jaw coordination during speech and to evaluate the potential influence of speech motor development on phonologic development. Productions of syllables containing bilabial consonants were observed from speakers in four age groups: 1-year-olds, 2-year-olds, 6-year-olds, and young adults. A video-based movement tracking system was used to transduce movement of the upper lip, lower lip, and jaw. The coordinative organization of these articulatory gestures was shown to change dramatically during the first several years of life and to continue to undergo refinement past age 6. The present results are consistent with three primary phases in the development of lip and jaw coordination for speech: integration, differentiation, and refinement. Each of these developmental processes entails distinct coordinative constraints on early articulatory movement. It is suggested that these constraints have predictable consequences for the sequence of phonologic development.

Conscious and unconscious processing of nonverbal predictability in Wernicke's area.

The association between nonverbal predictability and brain activation was examined using functional magnetic resonance imaging in humans. Participants viewed four squares displayed horizontally across a screen and counted the occurrences of a particular color. A repeating spatial sequence with varying levels of predictability was embedded within a random color presentation. Both Wernicke's area and its right homolog displayed a negative correlation with temporal predictability, and this effect was independent of individuals' conscious awareness of the sequence. When individuals were made aware of the underlying sequential predictability, a widespread network of cortical regions displayed activity that correlated with the predictability. Conscious processing of predictability produced a positive correlation with activity in right prefrontal cortex but a negative correlation in posterior parietal cortex. These results suggest that conscious processing of predictability invokes a large-scale cortical network, whereas Wernicke's area processes predictive events in time independently of awareness and may not be exclusively associated with language.