Acoustics
Sound Spectrography
Phonation
Voice Quality
Sound
Vocal Cords
Speech Production Measurement
Psychoacoustics
Voice
Speech Perception
Ultrasonics
Cues
Viscoelastic properties of f-actin, microtubules, f-actin/alpha-actinin, and f-actin/hexokinase determined in microliter volumes with a novel nondestructive method.
A nondestructive method to determine viscoelastic properties of gels and fluids involves an oscillating glass fiber serving as a sensor for the viscosity of the surrounding fluid. Extremely small displacements (typically 1-100 nm) are caused by the glass rod oscillating at its resonance frequency. These displacements are analyzed using a phase-sensitive acoustic microscope. Alterations of the elastic modulus of a fluid or gel change the propagation speed of a longitudinal acoustic wave. The system allows the study of quantities as small as 10 microliters with temporal resolution >1 Hz. For 2-100 microM f-actin gels, a final viscosity of 1.3-9.4 mPa s and a final elastic modulus of 2.229-2.254 GPa (corresponding to 1493-1501 m/s sound velocity) were determined. For 10- to 100-microM microtubule gels (native, without stabilization by taxol), a final viscosity of 1.5-124 mPa s and a final elastic modulus of 2.288-2.547 GPa (approximately 1513-1596 m/s) were determined. During polymerization, the sound velocity increased by up to +1.3 m/s (approximately 1.69 kPa) in low-concentration actin solutions and decreased by up to -7 m/s (approximately 49 kPa) at high actin concentrations. On polymerization of tubulin, the change in sound velocity likewise decreased with concentration (+48 to -12 m/s, approximately 2.3-0.1 MPa, for 10- to 100-microM tubulin). This decrease was interpreted as a nematic phase transition of the actin filaments and microtubules with increasing concentration. 2 mM ATP (when compared to 0.2 mM ATP) increased the polymerization rate, final viscosity, and elastic modulus of f-actin (17 microM). The actin-binding glycolytic enzyme hexokinase also accelerated the polymerization rate and increased the final viscosity, but the elastic modulus (2.26 GPa) was less than for f-actin polymerized in the presence of 0.2 mM ATP (2.28 GPa).
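The correspondence drawn above between sound velocity and elastic modulus follows from the longitudinal-wave relation K = ρc². A minimal check, assuming the gel density is close to that of water (1000 kg/m³ — an assumption, since the abstract does not state the density):

```python
# Longitudinal elastic modulus from sound velocity: K = rho * c**2.
# rho = 1000 kg/m^3 (water-like gel) is an assumption; the abstract
# does not state the gel density.

def elastic_modulus_gpa(c_m_per_s, rho_kg_per_m3=1000.0):
    """Return the longitudinal elastic modulus in GPa for sound speed c."""
    return rho_kg_per_m3 * c_m_per_s ** 2 / 1e9

# The f-actin and microtubule figures quoted above:
print(elastic_modulus_gpa(1493))  # ~2.229 GPa (lower f-actin value)
print(elastic_modulus_gpa(1596))  # ~2.547 GPa (upper microtubule value)
```

The computed values reproduce the GPa figures reported in the abstract, which is consistent with a water-like density having been used there.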
What is distinct about infants' "colic" cries?

AIMS: To investigate (1) whether colic cries are acoustically distinct from pre-feed "hunger" cries, and (2) the role of the acoustic properties of these cries, versus their other properties, in accounting for parents' concerns about colic. DESIGN: From a community sample, infants were selected who met Wessel colic criteria for amounts of crying and whose mothers identified colic bouts. Using acoustic analyses, the most intense segments of nine colic bouts were compared with matched segments from pre-feed cries presumed to reflect hunger. RESULTS: The colic cries did not have a higher pitch or proportion of dysphonation than the pre-feed cries. They did contain more frequent, shorter utterances, but these resembled normal cries investigated in other studies. There is no evidence that colic cries have distinct acoustic features that are reproducible across samples and studies, that identify a discrete clinical condition, and that are identified accurately by parents. CONCLUSIONS: The most reliable finding is that colic cries convey diffuse acoustic and audible information that a baby is highly aroused or distressed. Non-acoustic features, including the prolonged, hard-to-soothe, and unexplained nature of the cries, may be specific to colic cries and more important for parents. These properties might reflect temperament-like dispositions.

Voice-controlled robotic arm in laparoscopic surgery.
AIM: To report on our experience with a voice-directed robotic arm for scope management in different procedures for "solo-surgery" and in complex laparoscopic operations. METHODS: A chip card with orders for the robotic arm is individually manufactured for every user. The surgeon gives orders through a microphone, and the optic field is thus under the direct command of the surgeon. RESULTS: We analyzed 200 cases of laparoscopic procedures (gallbladder, stomach, colon, and hernia repair) done with the robotic arm. In each procedure the robotic arm worked precisely; voice understanding was exact and functioned flawlessly. A hundred "solo-surgery" operations were performed by a single surgeon. Another 96 complex videoscopic procedures were performed by a surgeon and one assistant. In comparison to other surgical procedures, operative time was not prolonged, and the number of ports used remained unchanged. CONCLUSION: Using the robotic arm in some procedures abolishes the need for assistance. Further benefits of robotic assistance include greater stability of view, less inadvertent smearing of the lens, and the absence of fatigue. The robotic arm can be used successfully in every operating theater by all surgeons using laparoscopy.

Cochlear function: hearing in the fast lane.
The cochlea amplifies sound over a wide range of frequencies. Outer hair cells have been thought to play a mechanical part in this amplification, but it has been unclear whether they act rapidly enough. Recent work suggests that outer hair cells can indeed work at frequencies that cover the auditory range.

Advances in photoacoustic noninvasive glucose testing.
We report here on in vitro and in vivo experiments that are intended to explore the feasibility of photoacoustic spectroscopy as a tool for the noninvasive measurement of blood glucose. The in vivo results from oral glucose tests on eight subjects showed good correlation with clinical measurements but indicated that physiological factors and person-to-person variability are important. In vitro measurements showed that the sensitivity of the glucose measurement is unaffected by the presence of common blood analytes but that there can be substantial shifts in baseline values. The results indicate the need for spectroscopic data to develop algorithms for the detection of glucose in the presence of other analytes.

Mosquito hearing: sound-induced antennal vibrations in male and female Aedes aegypti.
Male mosquitoes are attracted by the flight sounds of conspecific females. In males only, the antennal flagellum bears a large number of long hairs and is therefore said to be plumose. As early as 1855, it was proposed that this remarkable antennal anatomy served as a sound-receiving structure. In the present study, the sound-induced vibrations of the antennal flagellum in male and female Aedes aegypti were compared, and the functional significance of the flagellar hairs for audition was examined. In both males and females, the antennae are resonantly tuned mechanical systems that move as simple forced damped harmonic oscillators when acoustically stimulated. The best frequency of the female antenna is around 230 Hz; that of the male is around 380 Hz, which corresponds approximately to the fundamental frequency of female flight sounds. The antennal hairs of males are resonantly tuned to frequencies between approximately 2600 and 3100 Hz and are therefore stiffly coupled to, and move together with, the flagellar shaft when stimulated at biologically relevant frequencies around 380 Hz. Because of this stiff coupling, forces acting on the hairs can be transmitted to the shaft and thus to the auditory sensory organ at the base of the flagellum, a process that is proposed to improve acoustic sensitivity. Indeed, the mechanical sensitivity of the male antenna not only exceeds the sensitivity of the female antenna but also those of all other arthropod movement receivers studied so far.
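The "simple forced damped harmonic oscillator" behavior reported for the antenna can be sketched with the standard amplitude-response formula. The 380 Hz male best frequency comes from the abstract; the quality factor used here is purely illustrative, not a measured value:

```python
import math

# Amplitude response |H(f)| of a forced damped harmonic oscillator,
# normalized to the static (f -> 0) response. f0 is the resonance
# frequency; q (quality factor) is an illustrative assumption.

def amplitude_ratio(f, f0=380.0, q=2.0):
    r = f / f0
    return 1.0 / math.sqrt((1 - r ** 2) ** 2 + (r / q) ** 2)

print(amplitude_ratio(380))   # 2.0: at resonance the ratio equals q
print(amplitude_ratio(100))   # ~1.06: well below resonance
```

Stimulation near the resonance frequency yields the largest vibration amplitude, which is why the male antenna responds best near the female wing-beat frequency.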
Experience-dependent modification of ultrasound auditory processing in a cricket escape response.

The ultrasound acoustic startle response (ASR) of crickets (Teleogryllus oceanicus) is a defense against echolocating bats. The ASR to a test pulse can be habituated by a train of ultrasound prepulses. We found that this conditioning paradigm modified both the gain and the lateral direction of the startle response. Habituation reduced the slope of the intensity/response relationship but did not alter stimulus threshold, so habituation extended the dynamic range of the ASR to higher stimulus intensities. Prepulses from the side (90 or 270 degrees azimuth) had a priming effect upon the lateral direction of the ASR, increasing the likelihood that test pulses from the front (between -22 and +22 degrees) would evoke responses towards the same side as prepulse-induced responses. The plasticity revealed by these experiments could alter the efficacy of the ASR as an escape response and might indicate experience-dependent modification of auditory perception. We also examined stimulus control of habituation by prepulse intensity or direction. Only suprathreshold prepulses induced habituation. Prepulses from one side habituated the responses to test pulses from either the ipsilateral or contralateral side, but habituation was strongest for the prepulse-ipsilateral side. We suggest that habituation of the ASR occurs in the brain, after the point in the pathway where the threshold is mediated, and that directional priming results from a second process of plasticity distinct from that underlying habituation. These inferences bring us a step closer to identifying the neural substrates of plasticity in the ASR pathway.

Vocal tract length and acoustics of vocalization in the domestic dog (Canis familiaris).
The physical nature of the vocal tract results in the production of formants during vocalization. In some animals (including humans), receivers can derive information about sender characteristics (such as body size) on the basis of formant characteristics. Domestication and selective breeding have resulted in high variability of head size and shape in the dog (Canis familiaris), suggesting that there might be large differences in vocal tract length, which could cause formant characteristics to affect interbreed communication. Lateral radiographs were made of dogs from several breeds ranging in size from a Yorkshire terrier (2.5 kg) to a German shepherd (50 kg) and were used to measure vocal tract length. In addition, we recorded an acoustic signal (growling) from some dogs. Significant correlations were found between vocal tract length, body mass, and formant dispersion, suggesting that formant dispersion can deliver information about the body size of the vocalizer. Because of the low correlation between vocal tract length and the first formant, we predict a non-uniform vocal tract shape.

Acoustics is a branch of physics that deals with the study of sound, its production, transmission, and effects. In a medical context, acoustics may refer to the use of sound waves in medical procedures such as:
1. Diagnostic ultrasound: This technique uses high-frequency sound waves to create images of internal organs and tissues. It is commonly used during pregnancy to monitor fetal development, but it can also be used to diagnose a variety of medical conditions, including heart disease, cancer, and musculoskeletal injuries.
2. Therapeutic ultrasound: This technique uses low-frequency sound waves to promote healing and reduce pain and inflammation in muscles, tendons, and ligaments. It is often used to treat soft tissue injuries, arthritis, and other musculoskeletal conditions.
3. Otology: Acoustics also plays a crucial role in the field of otology, which deals with the study and treatment of hearing and balance disorders. The shape, size, and movement of the outer ear, middle ear, and inner ear all affect how sound waves are transmitted and perceived. Abnormalities in any of these structures can lead to hearing loss, tinnitus, or balance problems.
In summary, acoustics is an important field of study in medicine that has applications in diagnosis, therapy, and the understanding of various medical conditions related to sound and hearing.
Speech acoustics is a subfield of acoustic phonetics that deals with the physical properties of speech sounds, such as frequency, amplitude, and duration. It involves the study of how these properties are produced by the vocal tract and perceived by the human ear. Speech acousticians use various techniques to analyze and measure the acoustic signals produced during speech, including spectral analysis, formant tracking, and pitch extraction. This information is used in a variety of applications, such as speech recognition, speaker identification, and hearing aid design.
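One of the techniques mentioned, pitch extraction, is commonly implemented with autocorrelation: the lag of the strongest off-zero autocorrelation peak corresponds to one glottal period. A minimal sketch on a synthetic vowel-like signal (all parameters are illustrative choices, not clinical settings):

```python
import numpy as np

# Estimate fundamental frequency (pitch) by autocorrelation. The
# search range f_min..f_max brackets typical adult speaking pitch;
# these bounds are illustrative assumptions.

def estimate_pitch(signal, sample_rate, f_min=75.0, f_max=400.0):
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / f_max)   # shortest period considered
    lag_max = int(sample_rate / f_min)   # longest period considered
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sample_rate / best_lag

sr = 16000
t = np.arange(0, 0.1, 1 / sr)
# 200 Hz fundamental plus two weaker harmonics, roughly vowel-like
x = np.sin(2*np.pi*200*t) + 0.5*np.sin(2*np.pi*400*t) + 0.25*np.sin(2*np.pi*600*t)
print(round(estimate_pitch(x, sr)))  # 200
```

Real pitch trackers add voicing detection and octave-error correction, but the core idea is this periodicity search.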
Sound spectrography, also known as voice spectrography, is a diagnostic procedure in which a person's speech sounds are analyzed and displayed as a visual pattern called a spectrogram. This test is used to evaluate voice disorders, speech disorders, and hearing problems. It can help identify patterns of sound production and reveal any abnormalities in the vocal tract or hearing mechanism.
During the test, a person is asked to produce specific sounds or sentences, which are then recorded and analyzed by a computer program. The program breaks down the sound waves into their individual frequencies and amplitudes, and displays them as a series of horizontal lines on a graph. The resulting spectrogram shows how the frequencies and amplitudes change over time, providing valuable information about the person's speech patterns and any underlying problems.
Sound spectrography is a useful tool for diagnosing and treating voice and speech disorders, as well as for researching the acoustic properties of human speech. It can also be used to evaluate hearing aids and other assistive listening devices, and to assess the effectiveness of various treatments for hearing loss and other auditory disorders.
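The frequency-versus-time decomposition described above is, computationally, a short-time Fourier transform: a window slides along the recording and the magnitude spectrum of each windowed frame becomes one column of the spectrogram. A minimal sketch (window length and hop size are illustrative values, not clinical settings):

```python
import numpy as np

# Build a magnitude spectrogram by windowing the signal and taking
# the FFT of each frame. frame_len and hop are illustrative choices.

def spectrogram(signal, frame_len=256, hop=128):
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    # rows: time frames, columns: frequency bins
    return np.array(frames)

sr = 8000
t = np.arange(0, 0.5, 1 / sr)
tone = np.sin(2 * np.pi * 1000 * t)   # a steady 1 kHz tone
spec = spectrogram(tone)
peak_bin = spec[0].argmax()
print(peak_bin * sr / 256)            # 1000.0 (Hz)
```

A steady tone shows up as energy concentrated in one frequency bin across all frames, which is exactly the horizontal band one sees on a clinical spectrogram.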
Phonation is the process of sound production in speech, singing, or crying. It involves the vibration of the vocal folds (also known as the vocal cords) in the larynx, which is located in the neck. When air from the lungs passes through the vibrating vocal folds, it causes them to vibrate and produce sound waves. These sound waves are then shaped into speech sounds by the articulatory structures of the mouth, nose, and throat.
Phonation is a critical component of human communication and is used in various forms of verbal expression, such as speaking, singing, and shouting. It requires precise control of the muscles that regulate the tension, mass, and length of the vocal folds, as well as the air pressure and flow from the lungs. Dysfunction in phonation can result in voice disorders, such as hoarseness, breathiness, or loss of voice.
Voice quality, in the context of medicine and particularly in otolaryngology (ear, nose, and throat medicine), refers to the characteristic sound of an individual's voice that can be influenced by various factors. These factors include the vocal fold vibration, respiratory support, articulation, and any underlying medical conditions.
A change in voice quality might indicate a problem with the vocal folds or surrounding structures, neurological issues affecting the nerves that control vocal fold movement, or other medical conditions. Examples of terms used to describe voice quality include breathy, hoarse, rough, strained, or tense. A detailed analysis of voice quality is often part of a speech-language pathologist's assessment and can help in diagnosing and managing various voice disorders.
In the context of medicine, particularly in the field of auscultation (the act of listening to the internal sounds of the body), "sound" refers to the noises produced by the functioning of the heart, lungs, and other organs. These sounds are typically categorized into two types:
1. **Low-pitched sounds**: These are low-frequency sounds heard, for example, when there is turbulent flow of blood or when two body structures rub against each other. An example would be the heart sound known as "S1," which is produced by the closure of the mitral and tricuspid valves at the beginning of systole (contraction of the heart's ventricles).
2. **High-pitched sounds**: These are sharper, higher-frequency sounds that can provide valuable diagnostic information. An example would be lung sounds, which include breath sounds like those heard during inhalation and exhalation, as well as adventitious sounds like crackles, wheezes, and pleural friction rubs.
It's important to note that these medical "sounds" are not the same as the everyday definition of sound, which refers to the sensation produced by stimulation of the auditory system by vibrations.
Vocal cords, also known as vocal folds, are specialized bands of muscle, membrane, and connective tissue located within the larynx (voice box). They are essential for speech, singing, and other sounds produced by the human voice. The vocal cords vibrate when air from the lungs is passed through them, creating sound waves that vary in pitch and volume based on the tension, length, and mass of the vocal cords. These sound waves are then further modified by the resonance chambers of the throat, nose, and mouth to produce speech and other vocalizations.
Speech intelligibility is a term used in audiology and speech-language pathology to describe the ability of a listener to correctly understand spoken language. It is a measure of how well speech can be understood by others, and is often assessed through standardized tests that involve the presentation of recorded or live speech at varying levels of loudness and/or background noise.
Speech intelligibility can be affected by various factors, including hearing loss, cognitive impairment, developmental disorders, neurological conditions, and structural abnormalities of the speech production mechanism. Factors related to the speaker, such as speaking rate, clarity, and articulation, as well as factors related to the listener, such as attention, motivation, and familiarity with the speaker or accent, can also influence speech intelligibility.
Poor speech intelligibility can have significant impacts on communication, socialization, education, and employment opportunities, making it an important area of assessment and intervention in clinical practice.
Speech production measurement is the quantitative analysis and assessment of various parameters and characteristics of spoken language, such as speech rate, intensity, duration, pitch, and articulation. These measurements can be used to diagnose and monitor speech disorders, evaluate the effectiveness of treatment, and conduct research in fields such as linguistics, psychology, and communication disorders. Speech production measurement tools may include specialized software, hardware, and techniques for recording, analyzing, and visualizing speech data.
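Two of the parameters listed, intensity and duration, can be computed directly from a recorded waveform. A minimal sketch (the dBFS reference convention and the synthetic test signal are illustrative assumptions):

```python
import numpy as np

# RMS intensity in dB relative to full scale (dBFS: 0 dB = RMS of 1.0)
# and total duration. The reference convention is an illustrative choice.

def intensity_db(signal):
    rms = np.sqrt(np.mean(np.square(signal)))
    return 20 * np.log10(rms)

def duration_s(signal, sample_rate):
    return len(signal) / sample_rate

sr = 16000
x = 0.1 * np.sin(2 * np.pi * 150 * np.arange(sr) / sr)  # 1 s, quiet tone
print(round(intensity_db(x), 1))  # -23.0 (sine RMS = amplitude / sqrt(2))
print(duration_s(x, sr))          # 1.0
```

Pitch and articulation measures require more elaborate analysis (autocorrelation, formant tracking), but intensity and duration reduce to these few lines.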
Animal vocalization refers to the production of sound by animals through the use of the vocal organs, such as the larynx in mammals or the syrinx in birds. These sounds can serve various purposes, including communication, expressing emotions, attracting mates, warning others of danger, and establishing territory. The complexity and diversity of animal vocalizations are vast, with some species capable of producing intricate songs or using specific calls to convey different messages. In a broader sense, animal vocalizations can also include sounds produced through other means, such as stridulation in insects.
Psychoacoustics is a branch of psychophysics that deals with the study of the psychological and physiological responses to sound. It involves understanding how people perceive, interpret, and react to different sounds, including speech, music, and environmental noises. This field combines knowledge from various areas such as psychology, acoustics, physics, and engineering to investigate the relationship between physical sound characteristics and human perception. Research in psychoacoustics has applications in fields like hearing aid design, noise control, music perception, and communication systems.
Occupational noise is defined as exposure to excessive or harmful levels of sound in the workplace that has the potential to cause adverse health effects such as hearing loss, tinnitus, and stress-related symptoms. The measurement of occupational noise is typically expressed in units of decibels (dB), and the permissible exposure limits are regulated by organizations such as the Occupational Safety and Health Administration (OSHA) in the United States.
Exposure to high levels of occupational noise can lead to permanent hearing loss, which is often irreversible. It can also interfere with communication and concentration, leading to decreased productivity and increased risk of accidents. Therefore, it is essential to implement appropriate measures to control and reduce occupational noise exposure in the workplace.
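As an illustration of how such limits work: OSHA's general-industry permissible exposure limit is commonly stated as a 90 dBA eight-hour time-weighted average with a 5-dB exchange rate, so the allowable exposure time halves for every 5 dB above 90. A minimal sketch of that rule (consult the actual regulation for compliance purposes):

```python
# Allowable exposure time under a criterion level with an exchange
# rate: T = 8 / 2**((L - criterion) / exchange) hours. Defaults match
# the commonly cited OSHA values (90 dBA, 5-dB exchange); NIOSH
# recommends a stricter 85 dBA limit with a 3-dB exchange rate.

def allowable_hours(level_dba, criterion=90.0, exchange=5.0):
    return 8.0 / 2 ** ((level_dba - criterion) / exchange)

print(allowable_hours(90))   # 8.0 hours
print(allowable_hours(95))   # 4.0 hours
print(allowable_hours(100))  # 2.0 hours
```

The halving behavior is why even modest increases in workplace noise levels sharply reduce safe exposure time.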
In medical terms, the term "voice" refers to the sound produced by vibration of the vocal cords caused by air passing out from the lungs during speech, singing, or breathing. It is a complex process that involves coordination between respiratory, phonatory, and articulatory systems. Any damage or disorder in these systems can affect the quality, pitch, loudness, and flexibility of the voice.
The medical field dealing with voice disorders is called Phoniatrics or Voice Medicine. Voice disorders can present as hoarseness, breathiness, roughness, strain, weakness, or a complete loss of voice, which can significantly impact communication, social interaction, and quality of life.
Speech perception is the process by which the brain interprets and understands spoken language. It involves recognizing and discriminating speech sounds (phonemes), organizing them into words, and attaching meaning to those words in order to comprehend spoken language. This process requires the integration of auditory information with prior knowledge and context. Factors such as hearing ability, cognitive function, and language experience can all impact speech perception.
Ultrasonics is a branch of physics and acoustics that deals with the study and application of sound waves with frequencies higher than the upper limit of human hearing, typically 20 kilohertz or above. In the field of medicine, ultrasonics is commonly used in diagnostic and therapeutic applications through the use of medical ultrasound.
Diagnostic medical ultrasound, also known as sonography, uses high-frequency sound waves to produce images of internal organs, tissues, and bodily structures. A transducer probe emits and receives sound waves that bounce off body structures and reflect back to the probe, creating echoes that are then processed into an image. This technology is widely used in various medical specialties, such as obstetrics and gynecology, cardiology, radiology, and vascular medicine, to diagnose a range of conditions and monitor the health of organs and tissues.
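The echo timing described above maps directly to depth: using the conventional average soft-tissue sound speed of about 1540 m/s, depth = c·t/2, where the factor of 2 accounts for the round trip to the reflector and back. A minimal sketch:

```python
# Depth of a reflector from echo round-trip time: d = c * t / 2.
# c = 1540 m/s is the conventional average value for soft tissue.

SOFT_TISSUE_SPEED = 1540.0  # m/s

def echo_depth_cm(round_trip_us):
    seconds = round_trip_us * 1e-6
    return SOFT_TISSUE_SPEED * seconds / 2 * 100  # metres -> cm

# An echo returning 130 microseconds after the transmitted pulse:
print(round(echo_depth_cm(130), 2))  # 10.01 cm
```

Scanners apply this same conversion to every echo to place reflectors at the correct depth in the image.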
Therapeutic ultrasound, on the other hand, uses lower-frequency sound waves to generate heat within body tissues, promoting healing, increasing local blood flow, and reducing pain and inflammation. This modality is often used in physical therapy and rehabilitation settings to treat soft tissue injuries, joint pain, and musculoskeletal disorders.
In summary, ultrasonics in medicine refers to the use of high-frequency sound waves for diagnostic and therapeutic purposes, providing valuable information about internal body structures and facilitating healing processes.
In the context of medicine, particularly in audiology and otolaryngology (ear, nose, and throat specialty), "noise" is defined as unwanted or disturbing sound in the environment that can interfere with communication, rest, sleep, or cognitive tasks. It can also refer to sounds that are harmful to hearing, such as loud machinery noises or music, which can cause noise-induced hearing loss if exposure is prolonged or at high enough levels.
In some medical contexts, "noise" may also refer to non-specific signals or interfering factors in diagnostic tests and measurements that can make it difficult to interpret results accurately.
Acoustic stimulation refers to the use of sound waves or vibrations to elicit a response in an individual, typically for the purpose of assessing or treating hearing, balance, or neurological disorders. In a medical context, acoustic stimulation may involve presenting pure tones, speech sounds, or other types of auditory signals through headphones, speakers, or specialized devices such as bone conduction transducers.
The response to acoustic stimulation can be measured using various techniques, including electrophysiological tests like auditory brainstem responses (ABRs) or otoacoustic emissions (OAEs), behavioral observations, or functional imaging methods like fMRI. Acoustic stimulation is also used in therapeutic settings, such as auditory training programs for hearing impairment or vestibular rehabilitation for balance disorders.
It's important to note that acoustic stimulation should be administered under the guidance of a qualified healthcare professional to ensure safety and effectiveness.
In the context of medicine, "cues" generally refer to specific pieces of information or signals that can help healthcare professionals recognize and respond to a particular situation or condition. These cues can come in various forms, such as:
1. Physical examination findings: For example, a patient's abnormal heart rate or blood pressure reading during a physical exam may serve as a cue for the healthcare professional to investigate further.
2. Patient symptoms: A patient reporting chest pain, shortness of breath, or other concerning symptoms can act as a cue for a healthcare provider to consider potential diagnoses and develop an appropriate treatment plan.
3. Laboratory test results: Abnormal findings on laboratory tests, such as elevated blood glucose levels or abnormal liver function tests, may serve as cues for further evaluation and diagnosis.
4. Medical history information: A patient's medical history can provide valuable cues for healthcare professionals when assessing their current health status. For example, a history of smoking may increase the suspicion for chronic obstructive pulmonary disease (COPD) in a patient presenting with respiratory symptoms.
5. Behavioral or environmental cues: In some cases, behavioral or environmental factors can serve as cues for healthcare professionals to consider potential health risks. For instance, exposure to secondhand smoke or living in an area with high air pollution levels may increase the risk of developing respiratory conditions.
Overall, "cues" in a medical context are essential pieces of information that help healthcare professionals make informed decisions about patient care and treatment.
In the field of medicine, "time factors" refer to the duration of symptoms or time elapsed since the onset of a medical condition, which can have significant implications for diagnosis and treatment. Understanding time factors is crucial in determining the progression of a disease, evaluating the effectiveness of treatments, and making critical decisions regarding patient care.
For example, in stroke management, "time is brain," meaning that rapid intervention within a specific time frame (usually within 4.5 hours) is essential to administering tissue plasminogen activator (tPA), a clot-busting drug that can minimize brain damage and improve patient outcomes. Similarly, in trauma care, the "golden hour" concept emphasizes the importance of providing definitive care within the first 60 minutes after injury to increase survival rates and reduce morbidity.
Time factors also play a role in monitoring the progression of chronic conditions like diabetes or heart disease, where regular follow-ups and assessments help determine appropriate treatment adjustments and prevent complications. In infectious diseases, time factors are crucial for initiating antibiotic therapy and identifying potential outbreaks to control their spread.
Overall, "time factors" encompass the significance of recognizing and acting promptly in various medical scenarios to optimize patient outcomes and provide effective care.