When a camouflaged animal sits in front of the appropriate background, the animal is effectively invisible. As soon as the animal moves, however, it becomes easily visible even though there is still no static shape information: its shape is perceived solely from the pattern of changes over time. This process, referred to as Spatiotemporal Boundary Formation (SBF), can be initiated by a wide range of texture transformations, including changes in the visibility, shape, or color of individual texture elements. Shipley and colleagues have gathered a wealth of psychophysical data on SBF and have presented a mathematical proof of how the orientation of local edge segments (LESs) can be recovered from as few as 3 element changes (Shipley and Kellman, 1997). Here, we extend this proof to the extraction of global form and motion. More specifically, we present a model that recovers the orientations of LESs from a dataset consisting of the relative spatiotemporal locations of the element changes. The recovered orientations of as few as 2 LESs can then be used to extract the global motion, which in turn determines the relative spatiotemporal location and minimal length of the LESs. Computational simulations show that the model captures the major psychophysical aspects of SBF, including a dependency on the spatiotemporal density of element changes, a sensitivity to spurious changes, an ability to extract more than one figure at a time, and a tolerance for non-constant global motion. Unlike Shipley and Kellman's earlier proof, which required that pairs of element changes be represented as local motion vectors, the present model merely encodes the relative spatiotemporal locations of the changes. This relative encoding scheme yields several emergent properties that are strikingly similar to the perception of aperture-viewed figures (anorthoscopic perception), offering the possibility of unifying the two phenomena within a single mathematical model.
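The recovery of an LES from a handful of element changes can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `recover_edge`, the event format, and the SVD-based solution are assumptions. It rests on the same premise as the Shipley and Kellman (1997) proof: each element change at spatiotemporal location (x, y, t) is assumed to lie on a straight edge translating along its unit normal (a, b) at speed s, so each event satisfies the linear constraint a·x + b·y − s·t − c = 0. Three events therefore fix the edge orientation up to sign:

```python
import numpy as np

def recover_edge(events):
    """Sketch (assumed helper, not the published model): recover the
    normal orientation (degrees) and normal speed of a straight edge
    from >= 3 element-change events, each given as (x, y, t).

    Each event is assumed to satisfy a*x + b*y - s*t - c = 0, with
    (a, b) the unit edge normal, s the speed along that normal, and c
    the edge offset at t = 0.  Three events yield a homogeneous linear
    system in (a, b, s, c), solved here via the SVD null space (a
    least-squares fit when more than three events are supplied)."""
    E = np.asarray(events, dtype=float)            # shape (n, 3): x, y, t
    A = np.column_stack([E[:, 0], E[:, 1], -E[:, 2], -np.ones(len(E))])
    v = np.linalg.svd(A)[2][-1]                    # smallest singular vector
    a, b, s, c = v / np.hypot(v[0], v[1])          # rescale to a unit normal
    if s < 0:                                      # resolve the sign ambiguity
        a, b, s, c = -a, -b, -s, -c
    return np.degrees(np.arctan2(b, a)), s
```

For example, three changes produced by an edge whose normal points at 30 degrees and which advances at speed 2 are recovered exactly; with more than three events the same null-space computation becomes a least-squares fit, which is one plausible way a model could tolerate the spurious changes mentioned above.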