Evaluating the effects of functional communication training in the presence and absence of establishing operations. (1/18)

We conducted functional analyses of aberrant behavior with 4 children with developmental disabilities. We then implemented functional communication training (FCT) using different mands across two contexts: one in which the establishing operation (EO) relevant to the function of aberrant behavior was present and one in which that EO was absent. The mand used in the EO-present context served the same function as aberrant behavior, and the mand used in the EO-absent context served a different function from the one identified via the functional analysis. In addition, a free-play (control) condition was conducted for all children. Increases in relevant manding were observed in the EO-present context for 3 of the 4 participants. Decreases in aberrant behavior were achieved by the end of the treatment analysis for all 4 participants. Irrelevant mands were rarely observed in the EO-absent context for 3 of the 4 participants. Evaluating the effectiveness of FCT across different contexts allowed a further analysis of manding when the establishing operations were present or absent. The contributions of this study to the understanding of functional equivalence are also discussed.

Assessment of a response bias for aggression over functionally equivalent appropriate behavior. (2/18)

We evaluated the effects of a dense (fixed-ratio 1) schedule of reinforcement for an 11-year-old boy's mands for toys while aggression produced the same toys on various schedules, which were chosen on the basis of a progressive-ratio probe. Based on the probe session data, we accurately predicted that aggression would be more probable than mands when the schedules were equal or only slightly discrepant, but that mands would be more probable when the schedule discrepancy was large.
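The prediction logic described in this abstract can be illustrated with a small sketch. The function name, parameter names, and the breakpoint-based decision rule below are illustrative assumptions rather than the study's actual procedure; they simply show how a progressive-ratio breakpoint might be used to predict which of two functionally equivalent responses dominates under concurrent ratio schedules.

```python
# Hypothetical sketch (not the study's procedure): predicting response
# allocation from a progressive-ratio breakpoint when mands are reinforced
# on FR 1 and aggression on a varying fixed-ratio requirement.

def predict_dominant_response(aggression_ratio: int, breakpoint_ratio: int) -> str:
    """Predict the more probable response given the current aggression
    requirement and the largest ratio completed during the probe."""
    if aggression_ratio <= breakpoint_ratio:
        # Equal or slightly discrepant schedules: the bias toward aggression
        # is expected to hold even though mands are cheaper.
        return "aggression"
    # Large schedule discrepancy: allocation is expected to shift to mands.
    return "mands"


if __name__ == "__main__":
    # Example: aggression evaluated at increasing requirements against FR 1 mands.
    for agg_fr in (1, 2, 5, 10, 20):
        print(f"FR {agg_fr} aggression ->",
              predict_dominant_response(agg_fr, breakpoint_ratio=8))
```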

Differential use of vocal and gestural communication by chimpanzees (Pan troglodytes) in response to the attentional status of a human (Homo sapiens). (3/18)

This study examined the communicative behavior of 49 captive chimpanzees (Pan troglodytes), particularly their use of vocalizations, manual gestures, and other auditory- or tactile-based behaviors as a means of gaining an inattentive audience's attention. A human (Homo sapiens) experimenter held a banana while oriented either toward or away from the chimpanzee. The chimpanzees' behavior was recorded for 60 s. Chimpanzees emitted vocalizations sooner and were more likely to produce vocalizations as their first communicative behavior when the human was oriented away from them. Chimpanzees used manual gestures more frequently and more quickly when the human was oriented toward them. These results replicate the findings of earlier studies on chimpanzee gestural communication and provide new information about the intentional and functional use of their vocalizations.

Nonword imitation by children with cochlear implants: consonant analyses. (4/18)

OBJECTIVES: To complete detailed linguistic analyses of archived recordings of pediatric cochlear implant users' imitations of nonwords, and to gain insight into the children's developing phonological systems and the wide range of variability in nonword responses. DESIGN: Nonword repetition task: imitation of 20 auditory-only, English-sounding nonwords. SETTING: Central Institute for the Deaf "Education of the Deaf Child" research program, St. Louis, MO. PARTICIPANTS: Eighty-eight 8- to 10-year-old experienced pediatric cochlear implant users. MAIN OUTCOME MEASURES: Several consonant accuracy scores based on the linguistic structure (voicing, place, and manner of articulation) of the consonants being imitated, and an analysis of the errors produced for all consonants imitated incorrectly. RESULTS: Seventy-six children provided a response to at least 75% of the nonword stimuli. In these children's responses, 33% of the target consonants were imitated correctly, 25% were deleted, and substitutions were provided for the remaining 42%. The children tended to reproduce target consonants with coronal place (which involve a mid-vocal-tract constriction) correctly more often than other consonants. Poorer performers tended to produce more deletions than better performers, but their production errors tended to follow the same patterns. CONCLUSIONS: Poorer performance on labial consonants suggests that scores were affected by the lack of visual cues such as lip closure. Oral communication users tended to perform better than total communication users, indicating that oral communication methods benefit the development of pediatric cochlear implant users' phonological processing skills.
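As a rough illustration of feature-based consonant scoring of the kind described in the outcome measures, the sketch below classifies each imitated consonant as correct, a deletion, or a substitution, and records which features a substitution preserves. The feature table and scoring rules are invented for illustration and do not reproduce the study's actual coding scheme.

```python
# Hypothetical feature-based consonant scoring sketch (illustrative only).
from typing import Optional

# Each consonant is described by (voicing, place, manner of articulation).
FEATURES = {
    "p": ("voiceless", "labial",  "stop"),
    "b": ("voiced",    "labial",  "stop"),
    "t": ("voiceless", "coronal", "stop"),
    "d": ("voiced",    "coronal", "stop"),
    "k": ("voiceless", "dorsal",  "stop"),
    "s": ("voiceless", "coronal", "fricative"),
    "m": ("voiced",    "labial",  "nasal"),
    "n": ("voiced",    "coronal", "nasal"),
}

def score_consonant(target: str, produced: Optional[str]) -> dict:
    """Classify one imitated consonant as correct, a deletion, or a
    substitution, noting which features a substitution preserves."""
    if produced is None:
        return {"outcome": "deletion"}
    if produced == target:
        return {"outcome": "correct"}
    preserved = [name
                 for name, t, p in zip(("voicing", "place", "manner"),
                                       FEATURES[target], FEATURES[produced])
                 if t == p]
    return {"outcome": "substitution", "features_preserved": preserved}

# Example: /b/ imitated as /p/ keeps place and manner but not voicing;
# no response for /t/ counts as a deletion.
print(score_consonant("b", "p"))
print(score_consonant("t", None))
```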

Hearing mothers and their deaf children: the relationship between early, ongoing mode match and subsequent mental health functioning in adolescence. (5/18)

In the few studies that have been conducted, researchers have typically found that deaf adolescents have more mental health difficulties than their hearing peers and that, within the deaf groups, those who use spoken language have better mental health functioning than those who use sign language. This study investigated the hypothesis that mental health functioning in adolescence is related to an early and consistent mode match between mother and child rather than to the child's use of speech or sign itself. Using a large existing 15-year longitudinal database on children and adolescents with severe and profound deafness, 57 adolescents of hearing parents were identified for whom data on language experience (the child's and the mother's) and mental health functioning (from a culturally and linguistically adapted form of the Achenbach Youth Self Report) were available. Three groups were identified: auditory/oral (A/O), sign match (SM), and sign mismatch (SMM). As hypothesized, no significant difference in mental health functioning was found between the A/O and SM groups, but a significant difference was found favoring a combined A/O and SM group over the SMM group. These results support the importance of an early and consistent mode match between deaf children and hearing mothers, regardless of communication modality.

The development of analogical reasoning in deaf children and their parents' communication mode. (6/18)

The purpose of this article is to analyze the results of a study of the development of analogical reasoning in deaf children from two different linguistic environments (deaf children of deaf parents, who use sign language, and deaf children of hearing parents, who use spoken language) and in hearing children, and to compare the two groups of deaf children with the group of hearing children. To estimate the development of children's analogical reasoning, especially their understanding of different logical relations, two age groups were singled out in each population of children: younger (9- and 10-year-olds) and older (12- and 13-year-olds). In this way it is possible to assess the influence of early and consistent sign-language communication on the development of the conceptual system in deaf children and to establish whether early and consistent sign-language communication with deaf children affects their mental development to the same extent as early and consistent spoken-language communication with hearing children. The children were given three series of analogy tasks based on different logical relations: (a) a series of verbal analogy tasks (the relations of opposite, part-whole, and causality); (b) a series of numerical analogy tasks (the relations of class membership, opposite, and part-whole); and (c) a series of figural-geometric analogy tasks (the relations of opposite and part-whole). It was found that early and consistent sign-language communication with deaf children plays nearly the same role in the development of verbal, numerical, and spatial reasoning by analogy as early and consistent spoken-language communication plays with hearing children.

Grammatical Subjects in home sign: Abstract linguistic structure in adult primary gesture systems without linguistic input. (7/18)

Language ordinarily emerges in young children as a consequence of both linguistic experience (for example, exposure to a spoken or signed language) and innate abilities (for example, the ability to acquire certain types of language patterns). One way to discern which aspects of language acquisition are controlled by experience and which arise from innate factors is to remove or manipulate linguistic input. However, experimental manipulations that involve depriving a child of language input are impossible. The present work examines the communication systems resulting from natural situations of language deprivation and thus explores the inherent tendency of humans to build communication systems of particular kinds, without any conventional linguistic input. We examined the gesture systems that three isolated deaf Nicaraguans (ages 14-23 years) have developed for use with their hearing families. These deaf individuals have had no contact with any conventional language, spoken or signed. To communicate with their families, they have each developed a gestural communication system within the home called "home sign." Our analysis focused on whether these systems show evidence of the grammatical category of Subject. Subjects are widely considered to be universal to human languages. Using specially designed elicitation tasks, we show that home signers also demonstrate the universal characteristics of Subjects in their gesture productions, despite the fact that their communicative systems have developed without exposure to a conventional language. These findings indicate that abstract linguistic structure, particularly the grammatical category of Subject, can emerge in the gestural modality without linguistic input.

An ideal observer analysis of variability in visual-only speech. (8/18)

Normal-hearing observers typically have some ability to "lipread," or understand visual-only speech without an accompanying auditory signal. However, talkers vary in how easy they are to lipread. Such variability could arise from differences in the visual information available in talkers' speech, human perceptual strategies that are better suited to some talkers than others, or some combination of these factors. A comparison of human and ideal observer performance in a visual-only speech recognition task found that although talkers do vary in how much physical information they produce during speech, human perceptual strategies also play a role in talker variability.
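To make the ideal observer comparison concrete, here is a minimal sketch of a maximum a posteriori observer for a closed-set identification task. The word set, likelihoods, and priors are toy values chosen for illustration; the paper's actual stimuli and model are not reproduced here.

```python
# Hypothetical ideal observer sketch for closed-set visual-only word
# identification (toy values, not the paper's model).
import numpy as np

WORDS = ["bat", "pat", "mat"]

# likelihood[w, e] = P(visual evidence pattern e | word w) for one talker.
likelihood = np.array([
    [0.70, 0.20, 0.10],   # "bat"
    [0.25, 0.60, 0.15],   # "pat"
    [0.15, 0.15, 0.70],   # "mat"
])
prior = np.full(len(WORDS), 1 / len(WORDS))  # equal priors over the closed set

def ideal_observer_accuracy(likelihood: np.ndarray, prior: np.ndarray) -> float:
    """Expected accuracy of an observer that responds with the word maximizing
    P(word | evidence). For each evidence pattern, the probability mass the
    MAP rule gets right is the largest joint probability in that column."""
    joint = likelihood * prior[:, None]   # P(word, evidence)
    return float(joint.max(axis=0).sum())

print(f"Ideal observer accuracy for this toy talker: "
      f"{ideal_observer_accuracy(likelihood, prior):.2f}")
# A human lipreader's accuracy with the same talker could then be compared
# with this ceiling to separate stimulus information from perceptual strategy.
```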