Monday, March 12, 2012

Modifying guided search: preattentive object files

Many models of visual search behaviour consist of a first stage in which basic features are processed "in parallel" at all locations across the visual field and a second, limited-capacity stage in which processing is restricted to a single item or location. Perhaps the best known of these models is Feature Integration Theory (FIT) (e.g. Treisman, 1993). The original FIT was the starting point for our Guided Search model (Wolfe, 1994).

The heart of Guided Search is the proposal that the parallel first stage can guide the spatial deployment of the limited resources of the second stage. For example, consider a conjunction search for a red vertical line among green vertical lines and red horizontal lines. There is good reason to believe that no first stage mechanism is specifically designed to be sensitive to conjunctions of colour and orientation. Nevertheless, searches for conjunctions of this sort are quite efficient; more efficient than they ought to be if second stage resources were deployed from item to item in a random, serial search. This efficiency can be obtained if information is combined from two first stage feature processors. If a colour processor guides attention toward all red items while an orientation processor guides attention toward the vertical items, attention would be guided most strongly toward the red vertical items. Even if we assume that guidance is not perfect, the combination of these two sources of information will make the search for a conjunction more efficient than it would have been in the absence of guidance.
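
The summation idea can be sketched in a few lines of Python. This is an illustrative toy, not the published Guided Search implementation: items, the two hypothetical feature maps, and the one-point activation values are all assumptions made for the example.

```python
# Toy sketch of guidance by summed feature maps (assumed values, not the
# published model). Each display item is a (colour, orientation) pair.

def guidance_score(item, target_colour="red", target_orientation="vertical"):
    colour, orientation = item
    colour_signal = 1.0 if colour == target_colour else 0.0          # "red" map
    orientation_signal = 1.0 if orientation == target_orientation else 0.0  # "vertical" map
    return colour_signal + orientation_signal  # summed guidance signal

display = [("green", "vertical"), ("red", "horizontal"),
           ("red", "vertical"),   # the conjunction target
           ("green", "horizontal")]

# Second-stage attention is deployed in order of decreasing guidance,
# so the red vertical item (score 2.0) is visited first.
visit_order = sorted(display, key=guidance_score, reverse=True)
```

Because the target is the only item that receives activation from both maps, it tops the ranking even though neither map alone can distinguish it from the distractors that share one of its features.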

In models like Guided Search and Feature Integration, the first stage is usually assumed to be composed of a set of modules that process features in some sort of spatiotopic array. Indeed, the term "Feature Integration" derives from the hypothesis that attention is needed to bind these features into an object (creating what Treisman and Kahneman dubbed an "object file"; Kahneman, Treisman, & Gibbs, 1992). However, a significant body of recent work indicates that attention can be deployed to objects. This suggests that some representation of objects exists preattentively. Our recent research has uncovered two properties of what we will call "preattentive object files".

First, preattentive object files collect features, but the relationships of those features to each other are not known until attention is deployed to the object. For example, imagine a "plus" composed of a green vertical and a red horizontal line. Search for this target item among green horizontal/red vertical plusses is very inefficient (RT × set size slope of 47 msec/item for target-present trials). Both target and distractors form preattentive object files containing the attributes "red", "green", "vertical", and "horizontal". The relationships between these features are known only after the arrival of attention. If the horizontal segments of the plusses are connected, the plusses are broken into vertical segments and long horizontal lines. Now search for the green vertical target is efficient (5.5 msec/item) because each plus is split into two preattentive object files, making it possible to guide attention to, say, green and vertical.
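
One way to make this concrete is to model a preattentive object file as an unordered bag of features, with no record of which feature belongs to which part. The representation below is a hypothetical sketch for illustration, not a claim about the actual data structure the visual system uses.

```python
# Hypothetical sketch: a preattentive object file as an unordered set of
# features. Binding (which feature goes with which part) is deliberately lost.

def object_file(parts):
    """Pool the features of all parts into one unordered bag."""
    return frozenset(feature for part in parts for feature in part)

# A green-vertical/red-horizontal plus vs. a red-vertical/green-horizontal plus:
target_plus = object_file([("green", "vertical"), ("red", "horizontal")])
distractor_plus = object_file([("red", "vertical"), ("green", "horizontal")])

# Without binding, the two object files are identical, so guidance cannot
# distinguish target from distractor.
print(target_plus == distractor_plus)  # True

# Connecting the horizontals splits each plus into two object files; the
# green-vertical segment now carries a unique feature bundle.
target_segment = object_file([("green", "vertical")])
distractor_segments = [object_file([("red", "vertical")]),
                       object_file([("green", "horizontal")]),
                       object_file([("red", "horizontal")])]
print(target_segment in distractor_segments)  # False
```

The toy reproduces the logic of the result: before the split, target and distractor object files contain the same four features and search must proceed without guidance; after the split, the green-vertical bundle is unique and attention can be guided straight to it.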

The second curious property of preattentive object files is that they have no global shape. Some local form properties are known preattentively. For example, an item with a line terminator can be found efficiently among items without a terminator. However, in a variety of search tasks in which target identification relied on global shape, search was very inefficient (slopes from 30 to 90 msec/item).

In sum, prior to the arrival of attention, the visual scene is divided into objects. These may not be the final perceptual objects, but these "preattentive object files" are more than just spatially coincident features. Features like colour, orientation, size, etc. are attached to the preattentive object file. Global shape information is not. The relationships between features, including the relationships that will define global shape, are not available until attention binds the contents of the preattentive object file into a perceptible object.

Kahneman, D., Treisman, A. M., & Gibbs, B. J. (1992). The reviewing of object files: Object-specific integration of information. Cognitive Psychology, 24, 175-219.

Treisman, A. (1993). In A. Baddeley & L. Weiskrantz (Eds.), Attention: Selection, awareness, and control. Oxford: Clarendon Press.

Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1(2), 202-238.
