Optimal spatial frequencies for discrimination of motion direction in optic flow patterns. (1/224)

Spatial frequency tuning functions were measured for direction discrimination of optic flow patterns. Three subjects discriminated the direction of a curved motion path using computer-generated optic flow patterns composed of randomly positioned dots. Performance was measured with unfiltered patterns and with patterns that were spatially filtered across a range of spatial frequencies (center spatial frequencies of 0.4, 0.8, 1.6, 3.2, 6.4, and 9.6 c/deg). The same subjects discriminated the direction of uniform, translational motion on the fronto-parallel plane. The uniform motion patterns were also composed of randomly positioned dots that were either unfiltered or filtered with the same spatial filters used for the optic flow patterns. The peak spatial frequency was the same for both the optic flow and uniform motion patterns. For both types of motion, a narrow band (1.5 octaves) of optimal spatial frequencies was sufficient to support the same level of performance as found with unfiltered, broadband patterns. Additional experiments demonstrated that the peak spatial frequency for the optic flow patterns varies with mean image speed in the same manner as has been reported for moving sinusoidal gratings. These findings confirm the hypothesis that the outputs of the local motion mechanisms thought to underlie the perception of uniform motion provide the inputs to, and constrain the operation of, the mechanism that processes self-motion from optic flow patterns.  (+info)
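As an illustration of the kind of spatial filtering described above, the sketch below band-pass filters a random-dot frame around one of the listed center frequencies with a roughly 1.5-octave bandwidth. It is a minimal sketch, not the authors' stimulus code: the isotropic log-Gaussian filter shape, the dot density, the image size, and the pixels-per-degree scaling are all assumptions made only for illustration.

```python
import numpy as np

def bandpass_filter_dots(image, center_cpd, pixels_per_deg, bandwidth_octaves=1.5):
    """Band-pass filter a square image around a center spatial frequency (c/deg).

    Filter shape: an isotropic log-Gaussian in radial frequency whose full width
    at half height spans `bandwidth_octaves` octaves (assumed, for illustration).
    """
    n = image.shape[0]
    freqs = np.fft.fftfreq(n, d=1.0 / pixels_per_deg)      # cycles per degree
    radius = np.sqrt(freqs[None, :] ** 2 + freqs[:, None] ** 2)
    radius[0, 0] = 1e-9                                     # avoid log(0) at DC
    # Convert the octave bandwidth (FWHM in log2 frequency) to a log-frequency sigma.
    sigma_log = bandwidth_octaves * np.log(2) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gain = np.exp(-np.log(radius / center_cpd) ** 2 / (2.0 * sigma_log ** 2))
    gain[0, 0] = 0.0                                        # remove the mean luminance term
    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

# Example: filter a sparse random-dot frame at a 1.6 c/deg center frequency.
rng = np.random.default_rng(0)
frame = (rng.random((256, 256)) < 0.05).astype(float)       # ~5% dot density (assumed)
filtered = bandpass_filter_dots(frame, center_cpd=1.6, pixels_per_deg=32.0)
```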

The reorganization of sensorimotor function in children after hemispherectomy. A functional MRI and somatosensory evoked potential study. (2/224)

Children who have suffered extensive unilateral brain injury early in life may show a remarkable degree of residual sensorimotor function. It is generally believed that this reflects the high capacity of the immature brain for cerebral reorganization. In this study, we investigated 17 patients who had undergone hemispherectomy for relief from seizures; eight of the patients had congenital brain damage and nine had sustained their initial insult at the age of 1 year or older. Sensorimotor functions of the hand were investigated using functional MRI (fMRI) during a passive movement task, somatosensory evoked potentials (SEPs) arising from electrical and vibration stimulation, and behavioural tests including grip strength, double simultaneous stimulation and joint position sense. On fMRI, two of the eight patients studied with this technique (one with congenital damage and one with damage acquired at the age of 3 years) showed activation in the sensorimotor cortex of the remaining hemisphere with passive movement of the hemiplegic hand. The location of the ipsilateral brain activation was similar to that found on movement of the normal contralateral hand, although the latter was greater in spatial extent. In one of these patients, a greater role was demonstrated for the ipsilateral secondary sensorimotor area (compared with the ipsilateral primary sensorimotor area) for movement of the hemiplegic hand than for movement of the normal hand. Median nerve stimulation of the hemiplegic hand showed reproducible early-latency ipsilateral SEP components in the remaining sensorimotor cortex in 10 of the 17 patients (five with congenital and five with acquired disease). Five of the patients who demonstrated ipsilateral electrical SEPs also showed ipsilateral vibration SEPs (two with congenital and three with acquired disease). The behavioural tests revealed residual sensorimotor function in 14 of the patients; however, not all of the patients who exhibited ipsilateral SEP or fMRI responses had residual sensorimotor function in the hemiplegic hand. Ipsilateral sensorimotor responses were demonstrated both in patients with congenital disease and those with acquired disease, suggesting that factors additional to aetiology and age at injury may influence the degree of residual sensorimotor function and cerebral reorganization.  (+info)

Sensory integration in the perception of movements at the human metacarpophalangeal joint. (3/224)

These experiments were designed to investigate illusions of movements of the fingers produced by combined feedback from muscle spindle receptors and receptors located in different regions of the skin of the hand. Vibration (100 Hz) applied in cyclic bursts (4 s 'on', 4 s 'off') over the tendons of the finger extensors of the right wrist produced illusions of flexion-extension of the fingers. Cutaneous receptors were activated by local skin stretch and electrical stimulation. Illusory movements at the metacarpophalangeal (MCP) joints were measured from voluntary matching movements made with the left hand. Localised stretch of the dorsal skin over specific MCP joints altered vibration-induced illusions in 8/10 subjects. For the group, this combined stimulation produced movement illusions at MCP joints under, adjacent to, and two joints away from the stretched region of skin that were 176 +/- 33, 122 +/- 9 and 67 +/- 11 % of the size of those from vibration alone, respectively. Innocuous electrical stimulation over the same skin regions, but not at the digit tips, also 'focused' the sensation of movement to the stimulated digit. Stretch of the dorsal skin and compression of the ventral skin around one MCP joint altered the vibration-induced illusions in all subjects. The illusions became more focused, being 295 +/- 57, 116 +/- 18 and 65 +/- 7 % of the corresponding vibration-induced illusions at MCP joints that were under, adjacent to, and two joints away from the stimulated regions of skin, respectively. These results show that feedback from cutaneous and muscle spindle receptors is continuously integrated for the perception of finger movements. The contribution from the skin was not simply a general facilitation of sensations produced by muscle receptors but, when the appropriate regions of skin were stimulated, movement illusions were focused to the joint under the stimulated skin. One role for cutaneous feedback from the hand may be to help identify which finger joint is moving.  (+info)
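For concreteness, the arithmetic behind values such as "176 +/- 33 %" can be sketched as follows: each subject's matched illusion under combined stimulation is expressed as a percentage of that subject's vibration-alone illusion, and the group value is reported as mean +/- s.e.m. The numbers below are hypothetical and only illustrate the computation, not the study's data.

```python
import numpy as np

def percent_of_vibration_alone(combined_deg, vibration_alone_deg):
    """Per-subject illusion size under combined stimulation, expressed as a
    percentage of the same subject's vibration-alone illusion; returns the
    group mean and standard error of that percentage."""
    pct = 100.0 * np.asarray(combined_deg, float) / np.asarray(vibration_alone_deg, float)
    return pct.mean(), pct.std(ddof=1) / np.sqrt(pct.size)

# Hypothetical matched-movement amplitudes (degrees) for the joint under the
# stretched skin, one value per subject.
mean_pct, sem_pct = percent_of_vibration_alone([21, 18, 25, 30, 16], [12, 11, 15, 17, 10])
```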

A few minor suggestions. (4/224)

We agree with almost all of the analysis in this excellent presentation of the molecular view of avoidance behavior. A few suggestions are made as follows: Referring to response-generated stimuli as "readily observable" seems not quite right for the kinesthetic components of such stimuli, although their scientific legitimacy is not questioned. Interpreting response-generated stimuli as a form of positive reinforcement is contested, and an alternative interpretation is offered. A possibly simpler interpretation of the Sidman (1962) two-lever experiment is suggested. We question Dinsmoor's (2001) explanation for warning stimuli not being avoided, except for the reference to the weakness of third-order conditioning effects. A final question is raised regarding the nature of the variables that are responsible for the momentary evocation of the avoidance response.  (+info)

Neural model for processing the influence of visual orientation on visually perceived eye level (VPEL). (5/224)

An individual line or a combination of lines viewed in darkness has a large influence on the elevation to which an observer sets a target so that it is perceived to lie at eye level (VPEL). These influences are systematically related to the orientation of pitched-from-vertical lines on pitched plane(s) and to the lengths of the lines, as well as to the orientations of lines of 'equivalent pitch' that lie on frontoparallel planes. A three-stage model processes the visual influence: The first stage processes the orientations of the lines in parallel, utilizing two classes of orientation-sensitive neural units in each hemisphere, with the two classes sensitive to opposing ranges of orientations; the signal delivered by each class is of opposite sign in the two hemispheres. The second stage generates the total visual influence from the parallel combination of inputs delivered by the four groups of the first stage, and a third stage combines the total visual influence from the second stage with signals from the body-referenced mechanism that contains information about the position and orientation of the eyes, head, and body. The circuit equation describing the combined influence of n separate inputs from stage 1 on the output of the stage 2 integrating neuron is derived for n stimulus lines that possess any combination of orientations and lengths; each of the n lines is assumed to stimulate one of the groups of orientation-sensitive units in visual cortex (stage 1) whose signals converge onto a dendrite of the integrating neuron (stage 2), and to produce changes in postsynaptic membrane conductance (g(i)) and potential (V(i)) there. The net current from the n dendrites results in a voltage change (V(A)) at the initial segment of the axon of the integrating neuron. Nerve impulse frequency proportional to this voltage change signals the total visual influence on the perceived elevation of the visual field. The circuit equation corresponding to the total visual influence for n equal-length inducing lines is V(A) = sum V(i) / [n + (g(A)/g(S))], where the potential change due to line i, V(i), is proportional to line orientation, g(A) is the conductance at the axon's summing point, and g(S) = g(i) for each i in the equal-length case; the net conductance change due to a line is proportional to the line's length. The circuit equation is interpreted as a basis for quantitative predictions from the model that can be compared to psychophysical measurements of the elevation of VPEL. The interpretation provides the predicted relation for the visual influence on VPEL, V, produced by n inducing lines each of length l: V = a + [k(1) sum theta(i)] / [n + (k(2)/l)], where theta(i) is the orientation of line i, a is the effect of the body-referenced mechanism, and k(1) and k(2) are constants. The model's output is fitted to the results of five sets of experiments in which the elevation of VPEL, measured with a small target in the median plane, is systematically influenced by distantly located 1-line or 2-line inducing stimuli varying in orientation and length and viewed in otherwise total darkness with gaze restricted to the median plane; each line is located at 25 degrees eccentricity to either the left or the right of the median plane. The model predicts the negatively accelerated growth of VPEL with line length for each orientation and the change of the slope constant of the linear combination rule among lines from 1.00 (linear summation; short lines) to 0.61 (near-averaging; long lines).
Fits to the data are obtained over a range of orientations from -30 degrees to +30 degrees of pitch for 1-line visual fields with lengths from 3 degrees to 64 degrees, for parallel 2-line visual fields over the same range of lengths and orientations, for short and long 2-line combinations in which each of the two members may have any orientation (parallel or nonparallel pairs), and for the well-illuminated and fully structured pitchroom. In addition, the results of similar experiments with 2-line stimuli of equivalent pitch in the frontoparallel plane were also fitted by the model. The model accounts for more than 98% of the variance of the results in each case.  (+info)
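A compact way to see how the model's combination rule moves from summation toward averaging is to evaluate the circuit equation directly. The sketch below implements V = a + [k(1) sum theta(i)] / [n + (k(2)/l)] from the abstract; the parameter values a, k(1), and k(2) are placeholders for illustration, not the fitted constants.

```python
import numpy as np

def vpel_visual_influence(orientations_deg, length_deg, a=0.0, k1=1.0, k2=1.0):
    """Predicted visual influence on VPEL for n inducing lines of equal length:
    V = a + k1 * sum(theta_i) / (n + k2 / l).
    a: body-referenced contribution; k1, k2: constants (placeholder values here)."""
    theta = np.asarray(orientations_deg, dtype=float)
    return a + k1 * theta.sum() / (theta.size + k2 / length_deg)

# One long 30-degree-pitched line vs. two parallel such lines: as length grows
# (k2/l -> 0) the denominator approaches n, so the rule moves from summation
# (short lines) toward averaging of the individual line effects (long lines).
one_line = vpel_visual_influence([30.0], length_deg=64.0)
two_lines = vpel_visual_influence([30.0, 30.0], length_deg=64.0)
```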

Influences of visual pitch and visual yaw on visually perceived eye level (VPEL) and straight ahead (VPSA) for erect and rolled-to-horizontal observers. (6/224)

Localization within the space in front of an observer can be specified along two orthogonal physical dimensions: elevation ('up', 'down') and horizontal ('left', 'right'). For the erect observer, these correspond to egocentric dimensions along the long and short axes of the body, respectively. However, when subjects are rolled-to-horizontal (lying on their sides), the correspondence between the physical and egocentric dimensions is reversed. Employing egocentric coordinates, localization can be referred to a central perceptual point: visually perceived eye level (VPEL) along the long axis of the body, and visually perceived straight ahead (VPSA) along the short axis of the body. In the present experiment, measurements of VPEL and of VPSA were made on each of eight subjects who were either erect or rolled-to-horizontal while monocularly viewing, in otherwise complete darkness, a long 2-line stimulus (two parallel lines, each 64 degrees long, separated by 50 degrees) that was centered on the eye of the observer and was tilted out of the frontoparallel plane by a variable amount and direction (from -30 degrees to +30 degrees in 10 degrees steps). The stimulus tilt was either around an axis through the center of the two eyes (pitch; VPEL was measured) or around the long axis of the body that passed through the center of the viewing eye (yaw; VPSA was measured). The localization settings showed large variations that were systematic with stimulus tilt. The slopes of the functions plotting the deviations from veridicality against the orientation of the 2-line stimulus ('induction functions') were larger for the rolled-to-horizontal observer than for the erect observer for both VPEL and VPSA, and for a given body orientation were larger for the VPEL discrimination than for the VPSA discrimination; the influences of body orientation in physical space and of the direction of the discrimination relative to the body were linearly additive. Both the y-intercepts of the induction functions and the central perceptual point measured in complete darkness were lower when the norm setting by the subject was along the vertical than when it was along the horizontal; this held for both the VPEL and VPSA discriminations. The systematic effects of body orientation on the slopes and of line orientation on the y-intercepts and dark values result from an effect of gravity on the settings and fit well with a general principle: any departure from erect posture increases the induction effects of the visual stimulus. The effect of gravity is consistent with the effect of gravity reported in previous work on the VPEL discrimination in high-g environments.  (+info)
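The central quantity in this abstract is the slope of the 'induction function' relating the VPEL (or VPSA) setting to the tilt of the 2-line stimulus. A minimal sketch of that fit is shown below; the settings are hypothetical, invented only to show the least-squares slope and y-intercept being extracted.

```python
import numpy as np

# Hypothetical VPEL settings (deg) for one observer at each stimulus pitch (deg).
stimulus_pitch = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])
vpel_setting = np.array([-12.5, -8.1, -4.2, -0.4, 3.9, 7.8, 12.1])

# Slope and y-intercept of the induction function (deviation of the setting vs.
# stimulus orientation). Larger slopes for rolled-to-horizontal observers would
# correspond to the stronger visual induction reported above.
slope, intercept = np.polyfit(stimulus_pitch, vpel_setting, 1)
```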

The stationarity hypothesis: an allocentric criterion in visual perception. (7/224)

Because extraretinal information has long been considered to play little or no role in spatial vision, the study of structure from motion (SfM) has confounded a moving observer perceiving a stationary object with a non-moving observer perceiving a rigid object undergoing equal and opposite motion. However, it has recently been shown that extraretinal information does play an important role in the extraction of structure from motion by enhancing motion cues for objects that are stationary in an allocentric, world-fixed reference frame (Nature 409 (2001) 85). Here, we test whether stationarity per se is a criterion in SfM by pitting it against rigidity. We have created stimuli that, for a moving observer, offer two interpretations: one that is rigid but non-stationary, another that is more stationary but less rigid. In two experiments, with subjects reporting either structure or motion, we show that stationary, non-rigid solutions are preferred over rigid, non-stationary solutions, and that when no perfectly stationary solution is available, the visual system prefers the solution that is most stationary. These results demonstrate that allocentric criteria, derived from extraretinal information, participate in reconstructing the visual scene.  (+info)

The subjective vertical and the sense of self orientation during active body tilt. (8/224)

Previous testing of the ability to set a luminous line to the direction of gravity in passively-tilted subjects, in darkness, has revealed a remarkable pattern of systematic errors at tilts beyond 60 degrees, as if body tilt is undercompensated or underestimated (Aubert or A-effect). We investigated whether these consistent deviations from orientation constancy can be avoided during active body tilt, where more potential cues about body tilt (e.g. proprioception and efference copy) are available. The effects of active body tilt on the subjective vertical and on the perception of self tilt were studied in six subjects. After adopting a laterally-tilted posture, while standing in a dark room, they indicated the subjective vertical by adjusting a visual line and gave their verbal estimate of head orientation, expressed on a clock scale. Head roll tilts covered the range from -150 degrees to +150 degrees. The subjective vertical results showed no sign of improvement. Actively-tilted subjects still exhibited the same pattern of systematic errors that characterised their performance during passive tilt. Random errors in this task showed a steep monotonic increase with tilt angle, as in earlier passive tilt experiments. By contrast, verbal head-tilt estimates in the active experiments showed a clear improvement and were now almost devoid of systematic errors, but the noise level remained high. Various models are discussed in an attempt to clarify how these task-related differences and the selective improvement of the self-tilt estimates in the active experiments may have come about.  (+info)
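The two error measures discussed above can be made explicit with a short sketch: at a given roll tilt, the systematic error is the mean signed deviation of the line settings from gravitational vertical (errors toward the body axis correspond to the A-effect), and the random error is their standard deviation. The settings below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical subjective-vertical settings (deg from true vertical) from six
# subjects at 120 deg of active roll tilt; negative = error toward the body axis.
errors_deg = np.array([-35.0, -42.0, -28.0, -50.0, -31.0, -44.0])

systematic_error = errors_deg.mean()        # A-effect: tilt undercompensation
random_error = errors_deg.std(ddof=1)       # variability of the settings
```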