Neural dynamics of motion integration and segmentation within and across apertures. (73/677)

A neural model is developed of how motion integration and segmentation processes, both within and across apertures, compute global motion percepts. Figure-ground properties, such as occlusion, influence which motion signals determine the percept. For visible apertures, a line's extrinsic terminators do not specify true line motion. For invisible apertures, a line's intrinsic terminators create veridical feature-tracking signals. Sparse feature-tracking signals can be amplified before they propagate across position and are integrated with ambiguous motion signals within line interiors. This integration process determines the global percept. It arises from several processing stages: directional transient cells respond to image transients and feed a directional short-range filter that selectively boosts feature-tracking signals with the help of competitive signals. A long-range filter then inputs to directional cells that pool signals over multiple orientations, opposite contrast polarities, and depths. These stages are proposed to occur no later than cortical area MT. The directional cells activate a directional grouping network, proposed to occur within cortical area MST, in which directions compete to determine a local winner. Enhanced feature-tracking signals typically win over ambiguous motion signals. Model MST cells that encode the winning direction feed back to model MT cells, where they boost directionally consistent cell activities and suppress inconsistent activities over the spatial region to which they project. This feedback accomplishes directional and depthful motion capture within that region. Model simulations include the barberpole illusion, motion capture, the spotted barberpole, the triple barberpole, the occluded translating square illusion, motion transparency, and the chopsticks illusion. Qualitative explanations of illusory contours from translating terminators and of plaid adaptation are also given.
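
The grouping-and-feedback step can be pictured with a minimal toy sketch in Python (made-up activities and a winner-take-all rule, not the model's actual shunting equations): sparse, amplified feature-tracking signals at the line ends win the directional competition in a pooled, MST-like stage, and the winning direction is fed back to boost consistent and suppress inconsistent MT-like activities until the whole region is captured.

```python
import numpy as np

n_pos, n_dir = 20, 8                       # positions along the line, motion directions
mt = np.full((n_pos, n_dir), 0.2)          # ambiguous interior signals spread over directions
mt[0, 3] = mt[-1, 3] = 1.0                 # amplified feature-tracking signals at the line ends

for _ in range(10):                        # MT -> MST -> MT feedback loop
    pooled = mt.sum(axis=0)                # MST-like cell pools MT activity over its region
    winner = int(pooled.argmax())          # directional competition picks a local winner
    feedback = np.where(np.arange(n_dir) == winner, 1.5, 0.5)
    mt *= feedback                         # boost consistent, suppress inconsistent directions
    mt /= mt.sum(axis=1, keepdims=True)    # keep total activity at each position bounded

print(mt.argmax(axis=1))                   # every position is now captured by direction 3
```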

Transducer models of head-centred motion perception. (74/677)

By adding retinal velocity and pursuit eye-movement velocity, one can determine the motion of an object with respect to the head. It would seem likely that the visual system carries out a similar computation by summing extra-retinal eye-velocity signals with retinal motion signals. Perceived head-centred motion may therefore be determined by differences in the way these signals encode speed. For example, if extra-retinal signals provide the lower estimate of speed, then moving objects will appear slower when pursued (the Aubert-Fleischl phenomenon) and stationary objects will appear to move opposite to an eye movement (the Filehne illusion). Most previous work proposes that these illusions exist because retinal signals encode retinal motion accurately while extra-retinal signals underestimate eye speed. A more general model is presented in which both signals could be in error. Two types of input/output speed relationship are examined: the first uses linear speed transducers and the second non-linear speed transducers, the latter based on power laws. It is shown that studies of the Aubert-Fleischl phenomenon and the Filehne illusion reveal only the gain ratio or power ratio. We also consider general velocity matching and show that, in theory, matching functions are constrained by the gain ratio alone in the linear case. In the non-linear case, however, individual transducer shapes are revealed, albeit up to an unknown scaling factor. The experiments show that the Aubert-Fleischl phenomenon and the Filehne illusion are adequately described by linear speed transducers with a gain ratio less than one. For some observers, this is also the case in general velocity-matching experiments. For other observers, however, behaviour is non-linear and, according to the transducer model, indicates the existence of expansive non-linearities in speed encoding. This surprising result is discussed in relation to other theories of head-centred motion perception and the possible strategies some observers might adopt when judging stimulus motion during an eye movement.
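
The linear-transducer case can be illustrated with a minimal sketch (illustrative gains, not the paper's fitted values): perceived head-centred speed is the sum of a retinal-motion signal and an extra-retinal eye-velocity signal, each scaled by its own gain, and a gain ratio below one reproduces both illusions.

```python
g_retinal, g_eye = 1.0, 0.8        # assumed transducer gains (gain ratio 0.8 < 1)

def head_centred_speed(retinal_speed, eye_speed):
    # Sum of the retinal and extra-retinal speed estimates.
    return g_retinal * retinal_speed + g_eye * eye_speed

target_speed = 10.0                # deg/s

# Aubert-Fleischl: a pursued target (no retinal slip) looks slower than the
# same target viewed during fixation.
print(head_centred_speed(0.0, target_speed))        # 8.0 during pursuit
print(head_centred_speed(target_speed, 0.0))        # 10.0 during fixation

# Filehne: a stationary object viewed during pursuit (retinal slip opposite to
# the eye) appears to drift against the direction of the eye movement.
print(head_centred_speed(-target_speed, target_speed))   # -2.0, illusory motion
```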

Cue interactions, border ownership and illusory contours. (75/677)

When two retinally adjacent image regions both claim 'ownership' of their common boundary based on different visual cues, their perceptual competition could result in (1) cue averaging, in which the common boundary is not strongly perceived as owned by either region, or (2) perceptual bistability, in which the competing interpretations alternate in conscious perception over time. We report that when the perception of one or another illusory surface depends on the outcome of such a competition, the alternative percepts primarily exhibit bistability rather than averaging (or mutual weakening). More generally, we suggest that mutually inconsistent perceptual interpretations of sensory data will tend to exhibit bistability to the extent that they require significant constructive activity by vision. When one interpretation is more 'literal' (i.e. less constructive), it will tend to block alternative percepts. Put somewhat differently, when competing visual cues specify different preferred (but not necessary) interpretations, the likely perceptual outcome is bistability rather than cue averaging. However, inconsistent visual cues can also result in perceptual bistability if the interpretations they specify are so incommensurable that simply averaging them would not provide useful information for perception.

A flash-lag effect in random motion. (76/677)

The flash-lag effect refers to the phenomenon in which a flash presented adjacent to a continuously moving object is perceived to lag behind it. To test three previously proposed hypotheses (motion extrapolation, positional averaging, and differential latency), a new stimulus configuration was introduced to which the three hypotheses give different predictions. Instead of continuous motion, a randomly jumping bar was used as the moving stimulus, relative to which the position of the flash was judged. The results were visualized as a spatiotemporal correlogram, in which the response to a flash was plotted at its spatiotemporal offset relative to the position and onset of the jumping bar. Actual human performance was not consistent with any of the original hypotheses. However, all the results were explained well if the differential latency was assumed to fluctuate considerably, with its probability density function approximated by a Gaussian. The model also fit well with previously published data on the flash-lag effect.
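
The fluctuating-latency idea can be illustrated with a made-up Monte Carlo (not the authors' fitting procedure): assume the flash is matched against the bar frame seen one differential latency later, with that latency drawn from a Gaussian; the temporal weighting of the resulting correlogram then mirrors the Gaussian density.

```python
import numpy as np

rng = np.random.default_rng(0)
lat_mean, lat_sd = 80.0, 40.0            # assumed mean/SD of the differential latency (ms)
frame_ms, half_span = 10.0, 10           # correlogram sampled every 10 ms over +/- 100 ms

# On each simulated trial the flash is matched against the bar's position one
# differential latency after flash onset; count which bar frame that falls in.
latencies = rng.normal(lat_mean, lat_sd, size=20000)
frames = np.clip(np.round(latencies / frame_ms).astype(int), -half_span, half_span)
weights = np.bincount(frames + half_span, minlength=2 * half_span + 1)

for offset, w in zip(range(-half_span, half_span + 1), weights / weights.sum()):
    print(f"{offset * frame_ms:+6.0f} ms  {w:.3f}")   # temporal profile ~ Gaussian density
```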

Darkness filling-in: a neural model of darkness induction. (77/677)

A model of darkness induction based on a neural filling-in mechanism is proposed. The model borrows principles from both Land's Retinex theory and the BCS/FCS filling-in model of Grossberg and colleagues. The main novel assumption of the induction model is that darkness filling-in signals, which originate at luminance borders, are partially blocked when they cross other borders. The percentage of the filling-in signal that is blocked is proportional to the log luminance ratio across the border that does the blocking. The model is used to give a quantitative account of data from a brightness-matching experiment in which a decremental test disk was surrounded by two concentric rings. The luminances of the rings were varied independently to modulate the brightness of the test. Observers adjusted the luminance of a comparison disk, surrounded by a single ring of higher luminance, to match the test disk in brightness.
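
A toy one-dimensional version of the blocking rule (illustrative luminances and blocking constant, not the paper's quantitative fit) can make the assumption concrete: the darkness signal generated at the outer-ring/inner-ring border is attenuated as it crosses the test/inner-ring border, by a fraction proportional to the log luminance ratio across that blocking border.

```python
import math

k_block = 0.3                                     # assumed blocking constant
test, ring1, ring2 = 20.0, 60.0, 90.0             # disk and ring luminances (illustrative)

def border_darkness(outside, inside):
    """Darkness signal generated at a border, filling in toward the darker side."""
    return math.log(outside / inside)

def blocked_fraction(ratio_across_blocking_border):
    """Fraction of a filling-in signal blocked at a border it must cross."""
    return min(1.0, k_block * abs(math.log(ratio_across_blocking_border)))

d_direct = border_darkness(ring1, test)              # generated at the test/ring1 border
d_remote = border_darkness(ring2, ring1)              # generated at the ring1/ring2 border
d_remote *= 1.0 - blocked_fraction(ring1 / test)      # attenuated crossing the test/ring1 border

print(d_direct + d_remote)   # total darkness filled into the test disk (arbitrary units)
```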

Anisotropy in judging the absolute direction of motion. (78/677)

The angular dependence of precision in motion perception is well established as the oblique effect. Recently, it has been shown that the visual system also exhibits anisotropic behaviour with respect to the accuracy of judging the absolute direction of motion of random-dot fields. This study aimed to investigate whether this angle-dependent directional bias is a general phenomenon of motion perception. Our results demonstrate, for single translating tilted lines viewed foveally, an extraordinary illusion with perceptual deviations of up to 35 degrees from veridical. Not only is the magnitude of these deviations substantially larger than that for random dots, but the general pattern of the illusion also differs from that found for dot fields. Significant differences in the bias as a function of line tilt and line length suggest that the illusion does not result from fixed inaccuracies of the visual system in the computation of direction of motion. Potential sources of these large biases are motion integration mechanisms, which were also found to be anisotropic. The anisotropic nature and the surprisingly large magnitude of the effect make it a necessary consideration in analyses of motion experiments and in modelling studies.

Shifts in the population response in the middle temporal visual area parallel perceptual and motor illusions produced by apparent motion. (79/677)

We recorded behavioral, perceptual, and neural responses to targets that provided apparent visual motion consisting of a sequence of stationary flashes. Increasing the flash separation degrades the quality of motion, but some separations evoked larger smooth pursuit responses from both humans and monkeys than did smooth motion. The same flash separations also produced an increase in perceived speed in humans. Recordings from single neurons in the middle temporal visual area (MT) of awake monkeys revealed a potential basis for the illusion in the population response. Apparent motion produced diminished neural responses relative to smooth motion; however, neurons with slow preferred speeds were more affected than those with fast preferred speeds. Increasing the flash separation thus caused the population response to become diminished in amplitude and to shift so that the most active neurons had higher preferred speeds. The entire constellation of effects of apparent motion on the magnitude and latency of the initial pursuit response was accounted for if the MT population response was decoded by (1) creating an opponent motion signal for each neuron by treating its preferred- and opposite-direction responses as those of a pair of oppositely tuned neurons, and (2) computing the vector average of these opponent motion signals. Other ways of decoding the population response recorded in MT failed to account for one or more aspects of behavior. We conclude that the effects of apparent motion on both pursuit and perception can be accounted for if target speed is estimated from the MT population response by a neural computation that implements a vector average based on opponent motion.
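
A compact sketch of that decoding rule (with an assumed log-Gaussian speed tuning standing in for recorded MT responses) shows how attenuating slow-preferring neurons pushes the opponent vector average toward higher speeds.

```python
import numpy as np

preferred_speeds = np.logspace(0, 6, 25, base=2.0)   # 1-64 deg/s, log-spaced

def mt_response(target_speed, pref):
    """Assumed log-Gaussian speed tuning for a model MT neuron."""
    return np.exp(-0.5 * (np.log2(target_speed / pref) / 1.2) ** 2)

def decode_speed(resp_pref, resp_opposite):
    opponent = resp_pref - resp_opposite              # opponent motion signal per neuron
    return np.sum(opponent * preferred_speeds) / np.sum(opponent)   # vector average

target = 16.0                                         # deg/s
r_pref = mt_response(target, preferred_speeds)        # preferred-direction responses
r_opp = 0.1 * r_pref                                  # weak opposite-direction responses (assumed)

# Model apparent motion simply as a response reduction that hits slow-preferring
# neurons hardest, shifting the active population toward higher preferred speeds.
attenuation = np.clip(preferred_speeds / preferred_speeds.max(), 0.3, 1.0)

print(decode_speed(r_pref, r_opp))                              # readout for smooth motion
print(decode_speed(r_pref * attenuation, r_opp * attenuation))  # larger readout for apparent motion
```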

Visual summation of luminance lines and illusory contours induced by pictorial, motion, and disparity cues. (80/677)

Illusory contours, where no contrast exists in the image, can be seen between pairs of spatially separate but aligned real inducing contours defined by pictorial cues (luminance contrasts or offset gratings), kinetic contrast, or binocular disparity contrast. Previous studies have shown that the detection of a thin luminous line is facilitated when the line is superimposed on illusory contours whose inducing flanking elements are defined by luminance contrast. Using a spatial forced-choice technique, I show that luminous lines summate with illusory contours induced by luminance contrast, offset gratings, motion contrast, and disparity contrast when the line is superimposed on the illusory contour. Control experiments show that the positional cues offered by the inducing contours cannot account for these results. It is suggested that real luminous lines or edges and illusory contours activate common neural mechanisms in the brain, irrespective of the stimulus attributes that induce the illusory contour.