Journal of Vision
Abstracts of articles published in the <a href="https://www.psychology-online.net/guide/links/link-5737.html">Journal of Vision</a>. Full-text versions of the articles are available in the journal.
A controversial hypothesis within the domain of sensory research is that observers are able to use visual and auditory distance cues to maintain perceptual synchrony—despite the differential velocities of light and sound. Here we show that observers are categorically unable to utilize such distance cues. Nevertheless, given a period of adaptation to the naturally occurring audiovisual asynchrony associated with each viewing distance, a temporal recalibration mechanism helps to perceptually compensate for the effects of distance-induced auditory delays. These effects demonstrate a novel functionality of temporal recalibration with clear ecological benefits.
|
How do selective and constructive visual mechanisms interact to determine the outcome of conscious perception? Binocular rivalry involves selective perception of one of two competing monocular images, whereas visual phantoms involve perceptual filling-in between two low-contrast collinear gratings. Recently, we showed that visual phantoms lead to neural filling-in of activity in V1 and V2, which can be dynamically gated by rivalry suppression (M. Meng, D. A. Remus, & F. Tong, 2005). Here, we used psychophysical methods to study the temporal dynamics of filling-in, by applying rivalry or flash suppression to trigger the suppression or appearance of visual phantoms. Experiments revealed that phantom filling-in involves an active, time-dependent process that depends on the phenomenal visibility of the phantom-inducing gratings. Shortly after the inducing gratings became dominant during rivalry, the likelihood of perceiving phantoms in the intervening gap increased over time, with larger gaps requiring more time for filling-in. In contrast, suppression of the inducing gratings promptly led to the disappearance of visual phantoms, with response times independent of gap size. The fact that binocular rivalry can prevent the formation of visual phantoms rules out the possibility that rivalry suppression occurs after the site of phantom filling-in. This study provides novel evidence that visual phantoms result from a slow time-dependent filling-in mechanism; possible models to account for its time course are discussed.
|
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale–space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or “phantom” edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale–space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches.
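The two-stage filtering scheme described above can be sketched in one dimension. The Gaussian-derivative kernels are standard; the sign flip before the second rectification (so that a luminance edge yields a positive peak) and the s² scale normalization (chosen so that the winning scale tracks the edge blur) are illustrative implementation assumptions, not details taken from the paper.

```python
import numpy as np

def gauss_d1(x, s):
    # first derivative of a Gaussian with scale s (odd-symmetric kernel)
    return -x / (s**3 * np.sqrt(2 * np.pi)) * np.exp(-x**2 / (2 * s**2))

def gauss_d2(x, s):
    # second derivative of a Gaussian with scale s (even-symmetric kernel)
    return (x**2 - s**2) / (s**5 * np.sqrt(2 * np.pi)) * np.exp(-x**2 / (2 * s**2))

def edge_response(profile, scales):
    """Return (peak response, position, scale) of the strongest edge."""
    n = len(profile)
    x = np.arange(-(n // 2), n - n // 2)
    best = (0.0, None, None)
    for s in scales:
        # stage 1: odd-symmetric first-derivative filter, half-wave rectified
        r1 = np.maximum(np.convolve(profile, gauss_d1(x, s), mode="same"), 0.0)
        # stage 2: second-derivative filter; the sign flip makes a rising
        # luminance edge produce a positive peak before rectification
        r2 = np.maximum(np.convolve(r1, -gauss_d2(x, s), mode="same"), 0.0)
        # s**2 normalization (an assumption) so responses are comparable
        # across scales and the winning scale tracks the edge blur
        r2 *= s**2
        if r2.max() > best[0]:
            best = (float(r2.max()), int(r2.argmax()), s)
    return best
```

For a step edge blurred with a Gaussian of standard deviation b, the analytic peak of this normalized response lies at s = b, so the scale of the winning filter reads out the edge blur, as the model requires.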
|
Much of our interaction with the visual world requires us to isolate some currently important objects from other less important objects. This task becomes more difficult when objects move, or when our field of view moves relative to the world, requiring us to track these objects over space and time. Previous experiments have shown that observers can track a maximum of about 4 moving objects. A natural explanation for this capacity limit is that the visual system is architecturally limited to handling a fixed number of objects at once, a so-called "magical number 4" of visual attention. In contrast to this view, Experiment 1 shows that tracking capacity is not fixed. At slow speeds it is possible to track up to 8 objects, and yet there are fast speeds at which only a single object can be tracked. Experiment 2 suggests that the limit on tracking is related to the spatial resolution of attention. These findings suggest that the number of objects that can be tracked is primarily set by a flexibly allocated resource, which has important implications for the mechanisms of object tracking and for the relationship between object tracking and other cognitive processes.
|
Fast interceptive actions, such as catching a ball, rely upon accurate and precise information from vision. Recent models rely on flexible combinations of visual angle and its rate of expansion, of which the tau parameter is a specific case. When an object approaches an observer, however, its trajectory may introduce bias into tau-like parameters, rendering these computations unacceptable as the sole source of information for interceptive actions. Here we show that observers' knowledge of object size influences their action timing, and that known size combined with image expansion simplifies the computations required for interceptive actions and provides a route for experience to influence interceptive action.
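As a toy illustration of the tau variable mentioned above: for a small visual angle θ, tau = θ/θ̇ approximates the time to contact of an object approaching at constant speed, using optical variables alone; and if the object's physical size is known, its distance is itself recoverable from θ. The numbers below are illustrative, not taken from the study.

```python
import math

def visual_angle(size, distance):
    # angle (radians) subtended by an object of physical size at a distance
    return 2.0 * math.atan(size / (2.0 * distance))

def tau(theta, theta_dot):
    # tau = theta / (d theta / dt): for small angles this approximates
    # time to contact without knowing size, distance, or speed
    return theta / theta_dot

# illustrative scenario: a 0.22 m ball approaching at 10 m/s from 20 m away
size, distance, speed, dt = 0.22, 20.0, 10.0, 0.01
th_now = visual_angle(size, distance)
th_next = visual_angle(size, distance - speed * dt)
ttc_from_tau = tau(th_now, (th_next - th_now) / dt)
ttc_true = distance / speed  # 2.0 s

# the known-size route: distance ~ size / theta (small-angle assumption),
# which is the simplification the abstract argues experience can exploit
dist_from_size = size / th_now
```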
|
We studied the perceptual integration of contours consisting of Gabor elements positioned along a smooth path, embedded among distractor elements. Contour elements either formed tangents to the path (“snakes”) or were perpendicular to it (“ladders”). Perfectly straight snakes and ladders were easily detected in the fovea but, at an eccentricity of 6°, only the snakes were detectable. The disproportionate impairment of peripheral ladder detection remained when we brought foveal performance away from ceiling by jittering the orientations of the elements. We propose that the failure to detect peripheral ladders is a form of crowding, the phenomenon observed when identification of peripherally located letters is disrupted by flanking letters. D. G. Pelli, M. Palomares, and N. J. Majaj (2004) outlined a model in which simple feature detectors are followed by integration fields, which are involved in tasks, such as letter identification, that require the outputs of several detectors. They proposed that crowding occurs because small integration fields are absent from the periphery, leading to inappropriate feature integration by large peripheral integration fields. We argue that the “association field,” which has been proposed to mediate contour integration (D. J. Field, A. Hayes, & R. F. Hess, 1993), is a type of integration field. Our data are explained by an elaboration of Pelli et al.'s model, in which weak ladder integration competes with strong snake integration. In the fovea, the association fields were small, and the model integrated snakes and ladders with little interference. In the periphery, the association fields were large, and integration of ladders was severely disrupted by interference from spurious snake contours. In contrast, the model easily detected snake contours in the periphery. 
In a further demonstration of the possible link between contour integration and crowding, we ran our contour integration model on groups of three-letter stimuli made from short line segments. Our model showed several key properties of crowding: The critical spacing for crowding to occur was independent of the size of the target letter, scaled with eccentricity, and was greater on the peripheral side of the target.
|
Studies of visual search performance with shaded stimuli, in which the target is rotated by 180° relative to the distracters, typically demonstrate more efficient performance in stimuli with vertical compared to horizontal shading gradients. In addition, performance is usually better for vertically shaded stimuli with top-light (seen as convex) distracters compared to those with bottom-light (seen as concave) distracters. These findings have been cited as evidence for the use of the prior assumptions of overhead lighting and convexity in the interpretation of shaded stimuli and suggest that these priors affect preattentive processing. Here we attempt to modify these priors by providing observers with visual–haptic training in an environment inconsistent with their priors. Observers' performance was measured in a visual search task and a shape judgment task before and after training. Following training, we found a reduced asymmetry between visual search performance with convex and concave distracters, suggesting a modification of the convexity prior. However, although evidence of a change in the light-from-above prior was found in the shape judgment task, no change was found in the visual search task. We conclude that experience can modify the convexity prior at a preattentive stage in processing; however, our training did not modify the light-from-above prior that is measured via visual search.
|
We present a numerical analysis of rendered pairs of rooms, in which the spectral power distribution of the illuminant in one room matched the surface reflectance function in the other room, and vice versa. We ask whether the rooms can be distinguished and on what cues this discrimination is based. Using accurately rendered three-dimensional (3D) scenes, we found that room pairs can be distinguished based on indirect illumination, as suggested by A. L. Gilchrist and A. Jacobsen (1984). In a simulated color constancy scenario, we show that indirect illumination plays a pivotal role, as areas of indirect illumination undergo a smaller appearance change than areas of direct illumination. Our study confirms that indirect illumination can play a critical role in surface color recovery and shows how computer rendering programs, which model the light–object interaction according to the laws of physics, are valuable tools that can be used to analyze and explore what image information is available to the visual system from 3D scenes.
|
The position of a flash presented during pursuit is mislocalized in the direction of the pursuit. Although this has been explained by a temporal mismatch between the slow visual processing of the flash and fast efferent signals on eye positions, here we show that spatial contexts also play an important role in determining the flash position. We put various continuously lit objects (walls) between the veridical and to-be-mislocalized positions of the flash. These walls significantly reduced the mislocalization of the flash, preventing the flash from being mislocalized beyond the wall (Experiment 1). When the wall was shortened or had a hole in its center, the shape of the mislocalized flash was vertically shortened as if cut off or funneled by the wall (Experiment 2). The wall also induced color interactions; a red wall made a green flash appear yellowish if it was in the path of mislocalization (Experiment 3). Finally, these flash–wall interactions could be induced even when the walls were presented after the disappearance of the flash (Experiment 4). These results indicate that various features (position, shape, and color) of a flash during pursuit are determined within an integration window that is spatially and temporally broad, providing new insight into the mechanisms generating eye-movement mislocalizations.
|
The viewpoint aftereffect is a perceptual illusion in which, after adapting to an object/face viewed from one side (e.g., 30° to the left of center), when the same object/face is subsequently presented near the front view, the perceived viewing direction is biased in a direction opposite to that of the adapted viewpoint (e.g., 2° to the right). In this study, we measured face viewpoint aftereffects when the adapting and testing faces differed in identity and gender and when their vertical orientations were inverted. The aftereffect showed a strong transfer following adaptation to other faces. This effect was slightly attenuated when the adapting and test face stimuli were made more dissimilar, suggesting the existence of neurons jointly tuned to both face view and structure. However, the transfer from cross-adaptation to an inverted face was much weaker, indicating that the neural coding of upright and inverted faces in high-level visual cortex differs and that a major part of face viewpoint coding occurs at the level where faces are holistically represented.
|
Category statistics
Articles: 10
regular: 10
Last added: 5.11.2007