We present an integrated method to match multiple features, including points, regions, and lines, in two perspective images, and simultaneously segment them such that all features in each segment share the same 3D motion. The method uses a local affine (first-order) approximation of the displacement field under the assumption of locally rigid motion. Each distinct motion is represented in the image plane by a distinct set of values for six displacement parameters. To compute the values of these parameters, the 6D parameter space is split into two 3D spaces, each of which is exhaustively searched coarse-to-fine. This yields two results simultaneously: correspondences between features, and segmentation of the features into subsets corresponding to locally rigid patches of moving objects. Since matching is based on a 2D approximation of 3D motion, problems due to motion or object boundaries and occlusion can be avoided. Large motion is also handled, unlike in methods based on flow fields. Integrated use of multiple features not only yields a larger number of features (an overconstrained system) but also reduces the number of candidate matches per feature, making matching less ambiguous. Experimental results are presented for four pairs of real images.
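The split of the six-parameter affine displacement model into two independent 3D searches can be illustrated with a minimal Python sketch. This is not the paper's implementation: the voting score, search grids, tolerance, and toy point data are all assumptions chosen for brevity, and the hypothetical `affine_vote` helper searches only a single coarse level rather than a full coarse-to-fine hierarchy. The horizontal displacement u = a11*x + a12*y + tx depends only on (a11, a12, tx), and the vertical displacement on (a21, a22, ty), so each triple can be searched separately.

```python
import numpy as np
from itertools import product

def affine_vote(pts1, pts2, axis, grid, tol=1.0):
    """One of the two independent 3D searches: exhaustively score each
    parameter triple (p, q, r) of the model  coord' = p*x + q*y + r
    along the given axis (0 = horizontal, 1 = vertical). A feature in
    image 1 votes for a triple if its predicted coordinate lands within
    `tol` of some feature coordinate in image 2."""
    best, best_score = None, -1
    for p, q, r in grid:
        pred = p * pts1[:, 0] + q * pts1[:, 1] + r
        # distance from each prediction to every image-2 coordinate
        d = np.abs(pred[:, None] - pts2[:, axis][None, :])
        score = int(np.sum(d.min(axis=1) < tol))
        if score > best_score:
            best, best_score = (p, q, r), score
    return best, best_score

# Toy data (an assumption): 20 points undergoing a pure translation
# (3, -2), i.e. A = identity, t = (3, -2).
rng = np.random.default_rng(0)
pts1 = rng.uniform(0, 50, size=(20, 2))
pts2 = pts1 + np.array([3.0, -2.0])

# Coarse grids over each 3D parameter space (illustrative resolution).
lin = [0.9, 1.0, 1.1]           # candidate values near 1 for a11 / a22
cross = [-0.1, 0.0, 0.1]        # candidate values near 0 for a12 / a21
off = list(np.arange(-4.0, 5.0))  # candidate translations
grid_u = product(lin, cross, off)   # (a11, a12, tx)
grid_v = product(cross, lin, off)   # (a21, a22, ty)

u_params, n_u = affine_vote(pts1, pts2, 0, grid_u)
v_params, n_v = affine_vote(pts1, pts2, 1, grid_v)
print(u_params, n_u)  # best horizontal triple and its vote count
print(v_params, n_v)  # best vertical triple and its vote count
```

In the full method each peak in either 3D space corresponds to one locally rigid motion, so the feature subsets voting for different peaks directly give the motion segmentation; a finer grid around each coarse peak would then refine the parameters.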
ASJC Scopus subject areas
- Signal Processing
- Computer Vision and Pattern Recognition