Parenting a child with a cochlear implant: a critical incident study. (65/256)

This study aimed to describe and categorize the attributes that parents of young children with cochlear implants (CIs) perceive as facilitating their coping with the parenting experience. I interviewed 15 hearing mothers and 13 hearing fathers (including 12 married couples) whose children had CIs, using the critical incident technique, which asked parents to describe significant incidents (observable behaviors, thoughts, feelings) that facilitated their parenting experience. A total of 430 critical incidents were documented and sorted into 20 categories. Further analyses supported the validity and reliability of the proposed categorical system. Results indicated various sources of influence on parents' coping experience, associated with social-contextual aspects (e.g., professionals' support, sharing experience with others, family's/friends' consistent involvement, intervention services), with the parents themselves (e.g., taking action, personal resources, incorporating deafness into daily life), and with the child (e.g., child characteristics, identifying progress and success). The current research substantiates the soundness of implementing early intervention models such as the developmental systems model (Guralnick, 2001) and the support approach to early intervention (McWilliam & Scott, 2001), which coincide with ecological theory and recognize that families need various combinations of resources, social support, information, and services to help them address the stressors associated with parenting in general and parenting a child with special needs in particular.

Do you hear voices? Problems in assessment of mental status in deaf persons with severe language deprivation. (66/256)

When mental health clinicians perform mental status examinations, they examine patients' language patterns because abnormal language patterns, sometimes referred to as language dysfluency, may indicate a thought disorder. Performing such examinations with deaf patients is a far more complex task, especially with traditionally underserved deaf people who have severe language deficits in their best language or communication modality. Many deaf patients suffer language deprivation due to late and inadequate exposure to American Sign Language (ASL). They are also language dysfluent, but their language dysfluency is usually not due to mental illness. Others are language dysfluent due to brain disorders such as aphasia. This paper examines the difficulties of performing a mental status examination with deaf patients. Issues involved in evaluating for hallucinations, delusions, and disorganized thinking are reviewed, and guidelines are offered for the differential diagnosis of language dysfluency related to thought disorder versus language dysfluency related to language deprivation.

Discriminating signs: perceptual precursors to acquiring a visual-gestural language. (67/256)

We tested hearing 6- and 10-month-olds' ability to discriminate among three American Sign Language (ASL) parameters (location, handshape, and movement) as well as a grammatical marker (facial expression). ASL-naive infants were habituated to a signer articulating a two-handed symmetrical sign in neutral space. During test, infants viewed novel two-handed signs that varied in only one parameter or in facial expression. Infants detected changes in the signer's facial expression and in the location of the sign but provided no evidence of detecting the changes in handshape or movement. These findings are consistent with children's production errors in ASL and reveal that infants can distinguish among some parameters of ASL more easily than others.

The hands have it: number representations in adult deaf signers. (68/256)

This study examines a wide range of numerical representations (i.e., quantity, knowledge of multiplication facts, and use of parity information) in adult deaf signers. We introduce a modified version of the number bisection task, with sequential stimulus presentation, which allows for a systematic examination of mathematical skills in deaf individuals in different modalities (number signs in streaming video vs. Arabic digit displays). Reaction times and accuracy measures indicated that deaf signers make use of several representations simultaneously when bisecting number triplets, paralleling earlier findings in hearing individuals. Furthermore, some differences were obtained between the 2 display modalities, with effects being less prominent in the Arabic digit mode, suggesting that mathematical abilities in deaf signers should be assessed in their native sign language.

The transition from fingerspelling to English print: facilitating English decoding. (69/256)

Fingerspelling is an integral part of American Sign Language (ASL), and it is also an important aspect of becoming bilingual in English and ASL. Even though fingerspelling is based on English orthography, the development of fingerspelling does not parallel the development of reading in hearing children. Research reveals that deaf children may initially treat fingerspelled words as lexical items rather than as a series of letters representing English orthography, and only later begin to link handshapes to English graphemes. The purpose of this study was to determine whether teaching deaf students unknown English vocabulary with a training method that uses fingerspelling and phonological patterns resembling those found in lexicalized fingerspelling would increase their ability to learn both the fingerspelled and orthographic versions of a word. Twenty-one deaf students (aged 4-14 years) participated. Results show that students were better able to recognize and write the printed English word, as well as fingerspell it, when training incorporated more lexicalized fingerspelling. The discussion focuses on the degree to which fingerspelling can serve as a visual phonological bridge to aid the decoding of English print.

Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus. (70/256)

In fingerspelling, different hand configurations are used to represent the different letters of the alphabet. Signers use this method of representing written language to fill lexical gaps in a signed language. Using fMRI, we compared the cortical networks supporting the perception of fingerspelled, signed, written, and pictorial stimuli in deaf native signers of British Sign Language (BSL). In order to examine the effects of linguistic knowledge, hearing participants who knew neither fingerspelling nor a signed language were also tested. All input forms activated a left fronto-temporal network, including portions of the left inferior temporal and mid-fusiform gyri, in both groups. To examine the extent to which activation in this region was influenced by orthographic structure, two contrasts of orthographic and non-orthographic stimuli were made: one using static stimuli (text vs. pictures), the other using dynamic stimuli (fingerspelling vs. signed language). Greater activation in the left and right inferior temporal and mid-fusiform gyri was found for pictures than for text in both deaf and hearing groups. In the fingerspelling vs. signed language contrast, a significant interaction identified locations within the left and right mid-fusiform gyri showing greater activation for fingerspelling than for signed language in deaf but not hearing participants. These results are discussed in light of recent proposals that the mid-fusiform gyrus may act as an integration region, mediating between visual input and higher-order stimulus properties.

The neural correlates of sign versus word production. (71/256)

The production of sign language involves two large articulators (the hands) moving through space and contacting the body. In contrast, speech production requires small movements of the tongue and vocal tract with no observable spatial contrasts. Nonetheless, both language types exhibit a sublexical layer of structure with similar properties (e.g., segments, syllables, feature hierarchies). To investigate which neural areas are involved in modality-independent language production and which are tied specifically to the input-output mechanisms of signed and spoken language, we reanalyzed PET data collected from 29 deaf signers and 64 hearing speakers who participated in a series of separate studies. Participants were asked to overtly name concrete objects from distinct semantic categories in either American Sign Language (ASL) or in English. The baseline task required participants to judge the orientation of unknown faces (overtly responding 'yes'/'no' for upright/inverted). A random effects analysis revealed that the left mesial temporal cortex and the left inferior frontal gyrus were equally involved in both speech and sign production, suggesting a modality-independent role for these regions in lexical access. Within the left parietal lobe, two regions were more active for sign than for speech: the supramarginal gyrus (SMG; peak coordinates: -60, -35, +27) and the superior parietal lobule (SPL; peak coordinates: -26, -51, +54). Activation in these regions may be linked to modality-specific output parameters of sign language. Specifically, activation within the left SMG may reflect aspects of phonological processing in ASL (e.g., selection of hand configuration and place of articulation features), whereas activation within the SPL may reflect proprioceptive monitoring of motoric output.

Parents sharing books with young deaf children in spoken English and in BSL: the common and diverse features of different language settings. (72/256)

Twelve parents of young deaf children were recorded sharing books with their deaf child: six from families using British Sign Language (BSL) and six from families using spoken English. Although all families were engaged in sharing books with their deaf child and committed to promoting literacy development, they approached the task differently and had different expectations of its outcome. The sign bilingual families concentrated on using the book to promote BSL development, engaging in discussion around the book but without referring to the text, whereas the spoken language families focused on features of the text and were less inclined to use the book to promote wider knowledge. Implications for early intervention and support are drawn from the data.