Target detection against narrow band noise backgrounds.

We studied the detectability of narrow band random noise targets embedded in narrow band random noise backgrounds as a function of differences in center frequency, spatial frequency bandwidth and orientation bandwidth between the target and the immediately adjacent background. Unlike in most target detection experiments, the targets were not added to the background; they replaced the underlying background texture. Simulations showed that target detection probabilities could be accounted for by a simple transformation on the summed outputs of a two-layer filter model similar to the complex channels model proposed by Graham, Beck and Sutter (Graham, N., Beck, J., & Sutter, A. (1992). Vision Research, 32, 719-743). Subsequently, the model was tested on the detection of camouflaged vehicle targets, with encouraging results.
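The two-layer (filter-rectify-filter, or "complex channels") computation referred to above can be sketched in one dimension as follows. All filter parameters here are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

def gabor_1d(n, freq, sigma):
    """Odd-symmetric 1-D Gabor kernel (a generic bandpass filter)."""
    x = np.arange(n) - n // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.sin(2 * np.pi * freq * x)

def complex_channel_response(signal, f1=0.25, sigma1=4.0, f2=0.02, sigma2=40.0):
    """One 'complex channel': first-stage filter, pointwise rectification,
    second-stage filter tuned to a coarser scale, then a pooled sum."""
    stage1 = np.convolve(signal, gabor_1d(65, f1, sigma1), mode="same")
    rectified = np.abs(stage1)                      # pointwise nonlinearity
    stage2 = np.convolve(rectified, gabor_1d(257, f2, sigma2), mode="same")
    return np.sum(np.abs(stage2))                   # pooled channel output

# A contrast-modulated noise region drives the channel harder than
# unmodulated noise, because rectification exposes the envelope to the
# second-stage filter tuned to the modulation frequency.
rng = np.random.default_rng(0)
noise = rng.standard_normal(1024)
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 0.02 * np.arange(1024))
plain = complex_channel_response(noise)
modulated = complex_channel_response(noise * envelope)
```

The pooled output of such channels, after a simple transformation, is what the abstract reports as predicting detection probability.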

The perception and discrimination of speed in complex motion.

Random dot kinematograms were used to simulate radial, rotational and spiral optic flow. The stimuli were designed so that, while dot speed increased linearly with distance from the centre of the display, the density of dots remained uniform throughout their presentation. In two experiments, subjects were required to perform a temporal 2AFC speed discrimination task. Experiment 1 measured the perceived speed of a range of optic flow patterns against a rotational comparison stimulus. Radial motions were found to appear faster than rotations by approximately 10%, with a smaller but significant effect for spirals. Experiment 2 measured discrimination thresholds for pairs of similar optic flow stimuli identical in all respects except mean speed. No consistent differences were observed between the speed discrimination thresholds of radial, rotational and spiral motions and a control stimulus with the same speed profile in which motion followed fixed random trajectories. The perceived speed results are interpreted in terms of a model satisfying constraints on motion-in-depth and object rigidity, while speed discrimination appears to be based upon the pooled responses of elementary motion detectors.
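A frame update for such a radial-flow kinematogram, with dot speed proportional to eccentricity and escaped dots re-plotted to keep density roughly uniform, might be sketched as follows (all parameter values are assumptions, not the study's settings):

```python
import numpy as np

def radial_flow_step(xy, gain=0.05, field_radius=1.0, rng=None):
    """One frame of expanding radial flow: each dot moves outward with
    displacement gain * r, i.e. speed linear in eccentricity. Dots that
    leave the field are re-plotted uniformly over the disc, a common
    (approximate) way to keep dot density roughly constant."""
    if rng is None:
        rng = np.random.default_rng()
    xy = xy * (1.0 + gain)                         # radial step, linear in r
    escaped = np.linalg.norm(xy, axis=1) > field_radius
    n = int(escaped.sum())
    if n:
        rad = field_radius * np.sqrt(rng.random(n))  # uniform over the disc
        ang = 2 * np.pi * rng.random(n)
        xy[escaped] = np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
    return xy

rng = np.random.default_rng(1)
rad = np.sqrt(rng.random(500))
ang = 2 * np.pi * rng.random(500)
dots = np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
for _ in range(20):
    dots = radial_flow_step(dots, rng=rng)
```

Contracting flow corresponds to a negative gain, and rotational or spiral flow to applying a rotation in place of (or in addition to) the radial step.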

Second-order motion discrimination by feature-tracking.

When a plaid pattern (the sum of two high spatial frequency gratings oriented +/- 84 degrees from vertical) jumps horizontally by 3/8 of its spatial period, its contrast envelope, a second-order pattern, moves in the opposite direction to its luminance waveform. Observers report that the pattern moves in the direction of the contrast envelope when the jumps are repeated at intervals of more than 125 ms, and in the direction of the luminance profile when they are repeated at shorter intervals. When a pedestal [Lu, Z.-L. & Sperling, G. (1995). Vision Research, 35, 2697-2722] is added to the moving plaid, a higher contrast is required to see motion of the contrast envelope but not to see motion of the luminance profile, suggesting that the motion of the contrast envelope is sensed by a mechanism that tracks features. Static plaids with spatial parameters different from those of the moving pattern are less effective at raising the contrast required to see the motion of the contrast envelope, and simple gratings of low or high spatial frequency are almost completely ineffective, suggesting that the feature-tracking mechanism is selective for the type of pattern being tracked and rejects distortion products and zero-crossings.

On the mechanism for scale invariance in orientation-defined textures.

Texture perception is generally found to be scale invariant, that is, the perceived properties of textures do not change with viewing distance. Previously, Kingdom, F. A. A., Keeble, D. R. T., & Moulden, B. (Vision Research, 1995, 35, 79-91) showed that the orientation modulation function (OMF), which describes sensitivity to sinusoidal modulations of micropattern orientation as a function of modulation spatial frequency, was scale invariant: peak sensitivity occurred at a modulation spatial frequency that was invariant with viewing distance when modulation frequency was plotted in object units (e.g. cycles per cm). We have attempted to determine the mechanism underlying the scale invariant properties of the OMF. We first confirmed that the OMF was scale invariant using Gabor-micropattern textures. We then measured OMFs at a number of viewing distances, while holding constant various stimulus features in the retinal image. The question was which stimulus feature(s) disrupted scale invariance when manipulated in this way. We found that the scale (size) of the micropatterns was a critical factor and that the most important scale parameter was the micropatterns' carrier spatial frequency. Micropattern length and density were shown to have a small influence on scale invariance, while micropattern width had no influence at all. These results are consistent with the idea that scale invariance in orientation-defined textures is a consequence of 'second-stage' texture-sensitive mechanisms being tied in spatial scale selectivity to their 'first-stage' luminance-contrast-sensitive inputs.
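An orientation-modulated Gabor-micropattern texture of the kind used in such experiments can be sketched as below; patch size, carrier frequency and modulation depth are illustrative assumptions:

```python
import numpy as np

def gabor_patch(size, carrier_freq, theta, sigma):
    """One Gabor micropattern: an oriented sinusoidal carrier in a
    Gaussian envelope. theta is the carrier orientation in radians."""
    ax = np.arange(size) - size / 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * carrier_freq * xr)

def om_texture(width=256, patch=32, mod_freq=1 / 256, mod_amp=np.pi / 4,
               carrier_freq=0.2, sigma=6.0):
    """Tile micropatterns whose orientation varies sinusoidally across x,
    i.e. a sinusoidal orientation modulation at spatial frequency
    mod_freq (cycles per pixel) with amplitude mod_amp (radians)."""
    img = np.zeros((width, width))
    for ix in range(0, width, patch):
        theta = mod_amp * np.sin(2 * np.pi * mod_freq * (ix + patch / 2))
        for iy in range(0, width, patch):
            img[iy:iy + patch, ix:ix + patch] = gabor_patch(
                patch, carrier_freq, theta, sigma)
    return img

texture = om_texture()
```

Sensitivity to mod_freq as a function of viewing distance is what the OMF measures; the carrier_freq parameter is the one the abstract identifies as the critical scale factor.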

Simultaneous color constancy: how surface color perception varies with the illuminant.

In two experiments, simultaneous color constancy was measured using simulations of illuminated surfaces presented on a CRT monitor. Subjects saw two identical Mondrians side-by-side: one Mondrian rendered under a standard illuminant, the other rendered under one of several test illuminants. The matching field was adjusted under the test illuminant so that it (a) had the same hue, saturation, and brightness (appearance match) or (b) looked as if it were cut from the same piece of paper (surface match) as a test surface under the standard illuminant. Matches were set for three different surface collections. The surface matches showed a much higher level of constancy than the appearance matches. The adjustment in the surface matches was nearly complete in the L and M cone data, and deviations from perfect constancy were mainly due to failures in the adjustment of the S cone signals. Besides this difference in amount of adjustment, the appearance and surface matches showed two major similarities. First, both types of matches were well described by simple parametric models. In particular, a model based on the notion of von Kries adjustment provided a good, although not perfect, description of the data. Second, for both types of matches the illuminant adjustment was largely independent of the surface collection in the image. The two types of matches thus differed only quantitatively; there was no qualitative difference between them.
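The von Kries adjustment mentioned above is a diagonal transform: each cone class is rescaled independently by the ratio of the illuminants' cone coordinates. A minimal sketch, with purely illustrative numbers (not data from the study):

```python
import numpy as np

def von_kries_match(cones_std, illum_std, illum_test):
    """Predict the cone signals of a constant match under the test
    illuminant by scaling each cone class (L, M, S) by the ratio of
    the test and standard illuminants' cone coordinates."""
    scale = np.asarray(illum_test) / np.asarray(illum_std)
    return np.asarray(cones_std) * scale

# Hypothetical (L, M, S) signals of a surface under the standard
# illuminant, and the two illuminants' own cone coordinates.
surface = np.array([0.40, 0.30, 0.10])
illum_standard = np.array([1.00, 1.00, 1.00])
illum_test = np.array([1.20, 0.95, 0.60])
predicted = von_kries_match(surface, illum_standard, illum_test)
```

A fully von Kries observer would set the match at `predicted`; the abstract's finding is that this diagonal model describes both match types well, with the main deviations in the S cone scaling.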

Remodelling colour contrast: implications for visual processing and colour representation.

Colour contrast describes the influence of one colour on the perception of colours in neighbouring areas. This study addresses two issues: (i) the accurate representation of the colour changes; (ii) the underlying visual mechanisms. Observers viewed a haploscopic display in which a standard display was presented to one eye and a matching display to the other. The matches could be represented accurately using a diagram that is a logarithmic transformation of the MacLeod-Boynton (1979) (r, b) chromaticity diagram. Since haploscopic presentation has been described as isolating retinal processes (Whittle, P., & Challands, P.D.C. (1969). The effect of background luminance on the brightness of flashes. Vision Research, 9, 1095-1110; Chichilnisky, E.J., & Wandell, B.A. (1995). Photoreceptor sensitivity changes explain color appearance shifts induced by large uniform backgrounds in dichoptic matching. Vision Research, 35, 239-254), the results are discussed in terms of receptor sensitivity changes and the ratio of receptor contrasts.
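One simple reading of such a diagram is a plot of the logarithms of the MacLeod-Boynton coordinates r = L/(L+M) and b = S/(L+M). A sketch, assuming natural logarithms (the study's exact scaling may differ):

```python
import numpy as np

def macleod_boynton(L, M, S):
    """MacLeod-Boynton (1979) chromaticity: r = L/(L+M), b = S/(L+M),
    where L, M, S are cone excitations."""
    lum = L + M
    return L / lum, S / lum

def log_mb(L, M, S):
    """Logarithmically transformed (r, b) coordinates (natural log
    assumed here for illustration)."""
    r, b = macleod_boynton(L, M, S)
    return np.log(r), np.log(b)

# Hypothetical cone excitations for one stimulus.
log_r, log_b = log_mb(0.7, 0.3, 0.02)
```

In such log coordinates, multiplicative receptor sensitivity changes become additive shifts, which is one reason a logarithmic diagram can represent contrast-induced matches compactly.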

Ocular responses to radial optic flow and single accelerated targets in humans.

Self-movement in a structured environment induces retinal image motion called optic flow. On the one hand, optic flow provides information about the direction of self-motion; on the other, it presents large-field visual motion that elicits eye movements for the purpose of image stabilization. We investigated oculomotor behavior in humans during the presentation of radial optic flow fields which simulated forward or backward self-motion. Different conditions and oculomotor tasks were compared. In one condition, subjects had to actively pursue single dots in a radial flow pattern. In a second condition, subjects had to pursue single dots over a dark background; these dots accelerated or decelerated in the same way as single dots in radial optic flow. In a third condition, subjects were asked to passively view the entire optic flow stimulus. Smooth pursuit eye movements with high gain were observed when dots were actively pursued. This was true both for single dots moving over a homogeneous background and for single dots in the optic flow. Passive viewing of optic flow stimuli evoked eye movements that resembled an optokinetic nystagmus. Slow phase eye movements tracked the motion of elements in the optic flow. Gain was low for simulated forward self-motion (expanding optic flow) and high for simulated backward self-motion (contracting optic flow). Thus, voluntary pursuit and passive optokinetic responses yielded different gain for the tracking of elements of an expanding optic flow pattern. During passive viewing of the optic flow stimulus, gaze was usually at or near the focus of radial flow. Our results give insight into oculomotor performance and the requirements of image stabilization during self-motion, and into the role of gaze strategy in the detection of the direction of heading.

A self-organizing neural system for learning to recognize textured scenes.

A self-organizing ARTEX model is developed to categorize and classify textured image regions. ARTEX specializes the FACADE model of how the visual cortex sees, and the ART model of how temporal and prefrontal cortices interact with the hippocampal system to learn visual recognition categories and their names. FACADE processing generates a vector of boundary and surface properties, notably texture and brightness properties, by utilizing multi-scale filtering, competition, and diffusive filling-in. Its context-sensitive local measures of textured scenes can be used to recognize scenic properties that gradually change across space, as well as abrupt texture boundaries. ART incrementally learns recognition categories that classify FACADE output vectors, class names of these categories, and their probabilities. Top-down expectations within ART encode learned prototypes that pay attention to expected visual features. When novel visual information creates a poor match with the best existing category prototype, a memory search selects a new category with which to classify the novel data. ARTEX is compared with psychophysical data, and is benchmarked on classification of natural textures and synthetic aperture radar images. It outperforms state-of-the-art systems that use rule-based, backpropagation, and K-nearest neighbor classifiers.
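The ART match-and-search step described above can be illustrated with a deliberately minimal fuzzy-ART-style sketch. This is a simplification of the full model (ordered search rather than the choice function, no complement coding, illustrative vigilance and learning-rate values):

```python
import numpy as np

def art_classify(x, prototypes, vigilance=0.8, beta=0.5):
    """Minimal ART-style step: search prototypes for one whose match
    with input x passes the vigilance test; update it if found, else
    recruit a new category. Vectors lie in [0, 1]; fuzzy AND is min."""
    for j, w in enumerate(prototypes):
        match = np.minimum(x, w).sum() / x.sum()   # vigilance criterion
        if match >= vigilance:
            # partial learning toward the fuzzy AND of input and prototype
            prototypes[j] = beta * np.minimum(x, w) + (1 - beta) * w
            return j
    prototypes.append(x.copy())                    # poor match: new category
    return len(prototypes) - 1

protos = []
a = art_classify(np.array([0.9, 0.1]), protos)    # first input: category 0
b = art_classify(np.array([0.85, 0.15]), protos)  # close match: category 0
c = art_classify(np.array([0.1, 0.9]), protos)    # novel input: category 1
```

Raising the vigilance parameter forces finer categories; in ARTEX the inputs x would be FACADE boundary/surface feature vectors rather than raw pixels.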