Regulation of parkinsonian speech volume: the effect of interlocutor distance.
This study examined the automatic regulation of speech volume over distance in hypophonic patients with Parkinson's disease and in age- and sex-matched controls. There were two speech settings: conversation and the recitation of sequential material (for example, counting). The perception of interlocutor speech volume by patients with Parkinson's disease and controls over varying distances was also examined, and was found to be slightly discrepant. For speech production, controls significantly increased overall speech volume for conversation relative to sequential material. Patients with Parkinson's disease were unable to achieve this overall increase for conversation, and consistently spoke at a softer volume than controls at all distances (intercept reduction). However, patients were still able to increase volume over greater distances in a similar way to controls for both conversation and sequential material, thus showing a normal pattern of volume regulation (slope similarity). It is suggested that the mechanism of speech volume regulation is intact in Parkinson's disease, but that its gain is reduced. These findings are reminiscent of skeletal motor control studies in Parkinson's disease, in which the amplitude of movement is diminished but its relation to another factor is preserved (for example, stride length still increases as cadence, that is, stepping rate, increases).
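The intercept-reduction versus slope-similarity contrast described above can be sketched as two parallel regressions of speech level on log distance; all numbers below are invented for illustration, not taken from the study:

```python
# Illustrative sketch (hypothetical numbers, not the study's data):
# "intercept reduction with slope similarity" means the fitted lines for
# patients and controls are roughly parallel, but the patient line sits lower.
import numpy as np

distances_m = np.array([0.5, 1.0, 2.0, 4.0])      # interlocutor distances
control_db = np.array([62.0, 65.0, 68.0, 71.0])   # assumed control speech levels
patient_db = control_db - 6.0                     # same slope, uniformly softer

# Fit level against log2(distance): a doubling of distance adds a fixed dB step
c_slope, c_icpt = np.polyfit(np.log2(distances_m), control_db, 1)
p_slope, p_icpt = np.polyfit(np.log2(distances_m), patient_db, 1)

slope_diff = c_slope - p_slope  # ~0 dB/doubling: regulation over distance intact
icpt_diff = c_icpt - p_icpt     # ~6 dB: reduced overall gain at every distance
```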
Interarticulator phasing, locus equations, and degree of coarticulation.
A locus equation plots the frequency of the second formant at vowel onset against the target frequency of the same formant in the vowel, for a given consonant across different vowel contexts. It has generally been assumed that the slope of the locus equation reflects the degree of coarticulation between the consonant and the vowel, with a steeper slope indicating more coarticulation. This study examined the articulatory basis for this assumption. Four subjects produced VCV sequences combining the consonants /b, d, g/ with the vowels /i, a, u/. The movements of the tongue and the lips were recorded using a magnetometer system. One articulatory measure was the temporal phasing between the onset of the lip closing movement for the bilabial consonant and the onset of the tongue movement from the first to the second vowel in a VCV sequence. A second measure was the magnitude of the tongue movement during the oral stop closure, averaged across four receivers on the tongue. A third measure was the magnitude of the tongue movement from the onset of the second vowel to the tongue position for that vowel. When compared with the corresponding locus equations, none of these measures supported the assumption that the slope serves as an index of the degree of coarticulation between the consonant and the vowel.
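As a point of reference, a locus equation is simply a linear regression of F2 at vowel onset on the F2 vowel target; the sketch below fits one for a hypothetical /d/ series (the frequencies are assumed, not the study's data):

```python
# Hypothetical sketch of a locus equation: regress F2 at vowel onset on the
# F2 vowel target across vowel contexts. The fitted slope is the quantity
# conventionally read as a coarticulation index (values near 1 = more
# coarticulation, near 0 = less). Frequencies below are invented.
import numpy as np

# (F2 target, F2 onset) in Hz for /di/, /da/, /du/ -- assumed values
f2_target = np.array([2300.0, 1200.0, 900.0])
f2_onset = np.array([1900.0, 1550.0, 1450.0])

slope, intercept = np.polyfit(f2_target, f2_onset, 1)
# A slope around 0.3 would conventionally be read as modest coarticulation
```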
Strength of German accent under altered auditory feedback.
Borden's (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. A speaker's second language (L2) is vulnerable in comparison with the native language, so according to this hypothesis alteration to feedback should have a detrimental effect on it. Here, we specifically examined whether altered auditory feedback affects accent strength when speakers speak their L2. The experiment had three stages. First, six German speakers who were fluent in English (their L2) were recorded under six conditions: normal listening, amplified voice level, frequency-shifted voice, delayed auditory feedback, and slowed and accelerated speech rates. Second, judges were trained to rate accent strength; training was assessed by whether it successfully separated German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked the recordings of each speaker from the first stage in order of increasing strength of German accent. The results show that accents were more pronounced under the frequency-shifted and delayed auditory feedback conditions than under the normal or amplified feedback conditions. Control tests ensured that listeners were judging accent rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden's hypothesis and other accounts of why altered auditory feedback disrupts speech control.
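Of the feedback alterations used, delayed auditory feedback is the simplest to state precisely: the speaker hears their own signal after a fixed lag. A minimal sketch, assuming mono float samples at 48 kHz:

```python
# Minimal sketch of delayed auditory feedback (DAF) as a delay line.
# Assumptions: mono float samples, 48 kHz sample rate; disruption is
# typically reported to peak near a ~200 ms delay.
SR = 48000  # assumed sample rate, Hz

def daf(samples, delay_s=0.2):
    """Return the feedback stream: the input padded with delay_s of silence."""
    lag = int(SR * delay_s)
    return [0.0] * lag + list(samples)
```

Frequency-shifted feedback, by contrast, requires a pitch-shifting step and is not reducible to a one-line operation, which is why only the delay is sketched here.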
Intensive voice treatment (LSVT) for patients with Parkinson's disease: a 2 year follow up.
OBJECTIVES: To assess the long term (24 month) effects of the Lee Silverman voice treatment (LSVT), a method designed to improve vocal function in patients with Parkinson's disease. METHODS: Thirty three patients with idiopathic Parkinson's disease were stratified and randomly assigned to two treatment groups. One group received the LSVT, which emphasises high phonatory-respiratory effort. The other group received respiratory therapy (RET), which emphasises high respiratory effort alone. Patients in both treatment groups sustained vowel phonation, read a passage, and produced a monologue under identical conditions before, immediately after, and 24 months after speech treatment. Change in vocal function was measured by means of acoustic analyses of voice loudness (measured as sound pressure level, SPL) and inflection in voice fundamental frequency (measured as semitone standard deviation, STSD). RESULTS: The LSVT was significantly more effective than the RET in improving (increasing) SPL and STSD immediately post-treatment and in maintaining those improvements at the 2 year follow up. CONCLUSIONS: The findings provide evidence for the efficacy of the LSVT, and for the long term maintenance of its effects, in the treatment of voice and speech disorders in patients with idiopathic Parkinson's disease.
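The STSD outcome measure can be made concrete with a small sketch; this follows the common definition (12 semitones per octave relative to an arbitrary reference), which may differ in detail from the paper's exact computation:

```python
# Sketch of semitone standard deviation (STSD), a common measure of F0
# inflection: convert an F0 track from Hz to semitones, then take the
# standard deviation. The reference frequency is arbitrary and cancels out.
import math

def hz_to_semitones(f0_hz, ref_hz=100.0):
    # 12 semitones per octave, relative to an arbitrary reference
    return 12.0 * math.log2(f0_hz / ref_hz)

def stsd(f0_track_hz):
    st = [hz_to_semitones(f) for f in f0_track_hz]
    mean = sum(st) / len(st)
    return math.sqrt(sum((s - mean) ** 2 for s in st) / len(st))

# A monotone voice (flat F0) yields STSD near 0; inflected speech, more
flat = stsd([120.0, 120.0, 120.0, 120.0])
inflected = stsd([100.0, 140.0, 110.0, 160.0])
```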
Mice and humans perceive multiharmonic communication sounds in the same way.
Vowels and voiced consonants of human speech, and most mammalian vocalizations, consist of harmonically structured sounds. The frequency contours of formants in these sounds determine their spectral shape and timbre and, in human speech, carry important phonetic and prosodic information. Steady-state portions of vowels are discriminated and identified mainly on the basis of harmonics or formants that have been resolved by the critical-band filters of the auditory system and then grouped together. Speech-analog processing and perception of vowel-like communication sounds in mammalian vocal repertoires have not been demonstrated so far. Here, we synthesize 11 call models and a tape loop with natural wriggling calls of mouse pups and show that house mice perceive this communication call in the same way as we perceive speech vowels: they need the presence of a minimum number of formants (three formants, in this case at 3.8, 7.6, and 11.4 kHz), they resolve formants by the critical-band mechanism, they group formants together for call identification, they perceive the formant structure rather continuously, and they may detect the missing fundamental of a harmonic complex; all of this occurs in a natural communication situation without any training or behavioral constraints. Thus, wriggling-call perception in mice is comparable with unconditioned vowel discrimination and perception in prelinguistic human infants, and points to evolutionarily old rules for handling speech sounds in the human auditory system, up to the perceptual level.
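The three-formant call structure lends itself to a minimal synthesis sketch. Since 7.6 and 11.4 kHz are integer multiples of 3.8 kHz, removing the lowest component leaves a waveform whose periodicity still corresponds to the 3.8 kHz missing fundamental; all parameters below are assumptions, not the study's call models:

```python
# Minimal synthesis sketch (assumed parameters): a wriggling-call-like model
# as a harmonic complex with components at 3.8, 7.6, and 11.4 kHz. The second
# and third components are integer multiples of the first, so even with the
# 3.8 kHz component removed, the waveform's period is still 1/3.8 kHz
# (the missing fundamental).
import math

SR = 48000   # assumed sample rate, Hz
F0 = 3800.0  # fundamental / lowest formant, Hz

def call_model(duration_s=0.1, components=(1, 2, 3)):
    n = int(SR * duration_s)
    return [sum(math.sin(2 * math.pi * k * F0 * t / SR) for k in components)
            for t in range(n)]

full = call_model()                             # 3.8 + 7.6 + 11.4 kHz
no_fundamental = call_model(components=(2, 3))  # fundamental removed
```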
Congenital amusia: a disorder of fine-grained pitch discrimination.
We report the first documented case of congenital amusia, a disorder referring to a musical disability that cannot be explained by prior brain lesion, hearing loss, cognitive deficits, socioaffective disturbance, or lack of environmental stimulation. The impairment was diagnosed in a middle-aged woman, hereafter referred to as Monica, who lacks most basic musical abilities, including melodic discrimination and recognition, despite normal audiometry and above-average intellectual, memory, and language skills. The results of psychophysical tests show that Monica has severe difficulty detecting pitch changes. The data suggest that music-processing difficulties may result from problems in fine-grained discrimination of pitch, much in the same way as many language-processing difficulties arise from deficiencies in auditory temporal resolution.
Improving the classroom listening skills of children with Down syndrome by using sound-field amplification.
Many children with Down syndrome have fluctuating conductive hearing losses that further impede their speech, language, and academic development. It is within the school environment, where access to auditory information is crucial, that many children with Down syndrome are especially disadvantaged. Conductive hearing impairment, which is often fluctuating and undetected, reduces the child's ability to extract important information from the auditory signal. Unfortunately, classroom design and acoustics make extracting the speech signal harder still: speech intensity is reduced by the increased distance of the student from the teacher, and excessive background noise masks the signal. One potential solution is sound-field amplification, which provides uniform amplification of the teacher's voice through a microphone and loudspeakers. This investigation examined the efficacy of sound-field amplification for four children with Down syndrome. Measures of speech perception were taken with and without the sound-field system; the children perceived significantly more speech in all conditions where the sound-field system was used (p < .0001). Importantly, listening performance with the sound-field system was not degraded when the signal-to-noise ratio was reduced by increasing the level of background noise. In summary, sound-field amplification provides improved access to the speech signal for children with Down syndrome and, as a consequence, can lead to improved classroom success.
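The signal-to-noise ratio manipulation can be sketched numerically; the levels below are invented to illustrate why raising the teacher's speech level offsets added background noise:

```python
# Hedged sketch (invented levels, not the study's measurements):
# signal-to-noise ratio in dB from speech and noise RMS amplitudes. Raising
# the teacher's speech level at the child's ear preserves SNR even when the
# background noise level is also raised.
import math

def snr_db(speech_rms, noise_rms):
    # dB ratio of two amplitudes: 20 * log10(amplitude ratio)
    return 20.0 * math.log10(speech_rms / noise_rms)

baseline = snr_db(1.0, 0.5)         # unamplified speech, moderate noise
amplified_noisy = snr_db(2.0, 1.0)  # amplified speech, doubled noise: same SNR
```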
Timing interference to speech in altered listening conditions.
A theory is outlined that explains the disruption that occurs when auditory feedback is altered. Its key claim is that the number of, and relationship between, inputs to a timekeeper operative during speech control affect speech performance. The effects of altered auditory feedback depend on the extra input provided to the timekeeper. Different disruption is predicted for auditory feedback that is out of synchrony with other speech activity (e.g., delayed auditory feedback, DAF) than for synchronous forms of altered feedback (e.g., frequency-shifted feedback, FSF). Stimulus manipulations that can be made synchronously with speech are predicted to cause disruption equivalent to the synchronous form of altered feedback. Three experiments are reported. In all of them, subjects repeated a syllable at a fixed rate (Wing and Kristofferson, 1973). Overall timing variance was decomposed into the variance of a timekeeper (Cv) and the variance of a motor process (Mv). Experiment 1 validated Wing and Kristofferson's method for estimating Cv in a speech task by showing that only this variance component increased when subjects repeated syllables at different rates. Experiment 2 showed that DAF increased Cv relative both to conditions with no altered sound (experiment 1) and to FSF. In experiment 3, sections of the subject's output sequence were increased in amplitude. In one condition, subjects merely heard this sound; in a second, they made a duration decision about it. When no response was made, results were like those with FSF; when a response was made, Cv increased at longer repetition periods. The finding that DAF, duration decisions, and repetition period principally affect Cv, whereas synchronous alterations that require no decision (amplitude-increased sections with no response, and FSF) do not, supports the hypothesis that the timekeeping process is affected in different ways by synchronized and asynchronized inputs.
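The Wing and Kristofferson (1973) decomposition used throughout these experiments can be sketched as follows; the simulation parameters are invented for illustration:

```python
# Sketch of the Wing & Kristofferson (1973) decomposition: inter-response
# intervals I_n = C_n + M_{n+1} - M_n, where C is the timekeeper and M is a
# motor delay. Then Var(I) = Cv + 2*Mv, and the lag-1 autocovariance of I
# equals -Mv, so both variance components are estimable from the intervals.
import random

def wk_decompose(intervals):
    n = len(intervals)
    mean = sum(intervals) / n
    d = [i - mean for i in intervals]
    total_var = sum(x * x for x in d) / n
    lag1 = sum(d[k] * d[k + 1] for k in range(n - 1)) / (n - 1)
    mv = -lag1                 # motor variance estimate
    cv = total_var - 2.0 * mv  # timekeeper variance estimate
    return cv, mv

# Simulate: 500 ms target, timekeeper sd 20 ms, motor sd 10 ms (assumed)
random.seed(0)
N = 20000
C = [random.gauss(500.0, 20.0) for _ in range(N)]
M = [random.gauss(0.0, 10.0) for _ in range(N + 1)]
I = [C[k] + M[k + 1] - M[k] for k in range(N)]

cv_hat, mv_hat = wk_decompose(I)  # expect roughly cv ~ 400, mv ~ 100
```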