Visual tracking using pertinent patch selection and masking

Dae Youn Lee, Jae Young Sim, Chang-Su Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

35 Citations (Scopus)

Abstract

A novel visual tracking algorithm using patch-based appearance models is proposed in this paper. We first divide the bounding box of a target object into multiple patches and then select only pertinent patches, which occur repeatedly near the center of the bounding box, to construct the foreground appearance model. We also divide the input image into non-overlapping blocks, construct a background model at each block location, and integrate these background models for tracking. Using the appearance models, we obtain an accurate foreground probability map. Finally, we estimate the optimal object position by maximizing the likelihood, which is obtained by convolving the foreground probability map with the pertinence mask. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art tracking algorithms significantly in terms of center position errors and success rates.
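The final localization step described in the abstract — convolving the foreground probability map with the pertinence mask and taking the maximum likelihood position — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, the toy inputs, and the sliding-window formulation (correlation expressed as an explicit loop) are assumptions for clarity.

```python
import numpy as np

def estimate_position(prob_map, mask):
    """Slide the pertinence mask over the foreground probability map and
    return the (row, col) of the window maximizing the summed, masked
    foreground probability (i.e., the likelihood of the abstract)."""
    mh, mw = mask.shape
    ph, pw = prob_map.shape
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ph - mh + 1):
        for c in range(pw - mw + 1):
            # Correlation of the mask with this window of the probability map.
            score = np.sum(prob_map[r:r + mh, c:c + mw] * mask)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos  # top-left corner of the best-matching window
```

For example, with a 20x20 probability map containing a 5x5 block of high foreground probability at rows/columns 8-12 and an all-ones 5x5 mask, the function returns (8, 8), aligning the mask with the block.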

Original language: English
Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher: IEEE Computer Society
Pages: 3486-3493
Number of pages: 8
ISBN (Print): 9781479951178
DOI: 10.1109/CVPR.2014.446
Publication status: Published - 2014 Jan 1
Event: 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014 - Columbus, United States
Duration: 2014 Jun 23 - 2014 Jun 28

Other

Other: 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014
Country: United States
City: Columbus
Period: 14/6/23 - 14/6/28

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Lee, D. Y., Sim, J. Y., & Kim, C-S. (2014). Visual tracking using pertinent patch selection and masking. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 3486-3493). [6909841] IEEE Computer Society. https://doi.org/10.1109/CVPR.2014.446

Visual tracking using pertinent patch selection and masking. / Lee, Dae Youn; Sim, Jae Young; Kim, Chang-Su. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2014. p. 3486-3493. 6909841.

Lee, DY, Sim, JY & Kim, C-S 2014, Visual tracking using pertinent patch selection and masking. in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition., 6909841, IEEE Computer Society, pp. 3486-3493, 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, United States, 14/6/23. https://doi.org/10.1109/CVPR.2014.446
Lee DY, Sim JY, Kim C-S. Visual tracking using pertinent patch selection and masking. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society. 2014. p. 3486-3493. 6909841 https://doi.org/10.1109/CVPR.2014.446
Lee, Dae Youn ; Sim, Jae Young ; Kim, Chang-Su. / Visual tracking using pertinent patch selection and masking. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2014. pp. 3486-3493
@inproceedings{98c65bc9243f46d8a3b850b2ee7639c3,
title = "Visual tracking using pertinent patch selection and masking",
abstract = "A novel visual tracking algorithm using patch-based appearance models is proposed in this paper. We first divide the bounding box of a target object into multiple patches and then select only pertinent patches, which occur repeatedly near the center of the bounding box, to construct the foreground appearance model. We also divide the input image into non-overlapping blocks, construct a background model at each block location, and integrate these background models for tracking. Using the appearance models, we obtain an accurate foreground probability map. Finally, we estimate the optimal object position by maximizing the likelihood, which is obtained by convolving the foreground probability map with the pertinence mask. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art tracking algorithms significantly in terms of center position errors and success rates.",
author = "Lee, {Dae Youn} and Sim, {Jae Young} and Chang-Su Kim",
year = "2014",
month = "1",
day = "1",
doi = "10.1109/CVPR.2014.446",
language = "English",
isbn = "9781479951178",
pages = "3486--3493",
booktitle = "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
publisher = "IEEE Computer Society",

}

TY - GEN

T1 - Visual tracking using pertinent patch selection and masking

AU - Lee, Dae Youn

AU - Sim, Jae Young

AU - Kim, Chang-Su

PY - 2014/1/1

Y1 - 2014/1/1

N2 - A novel visual tracking algorithm using patch-based appearance models is proposed in this paper. We first divide the bounding box of a target object into multiple patches and then select only pertinent patches, which occur repeatedly near the center of the bounding box, to construct the foreground appearance model. We also divide the input image into non-overlapping blocks, construct a background model at each block location, and integrate these background models for tracking. Using the appearance models, we obtain an accurate foreground probability map. Finally, we estimate the optimal object position by maximizing the likelihood, which is obtained by convolving the foreground probability map with the pertinence mask. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art tracking algorithms significantly in terms of center position errors and success rates.

AB - A novel visual tracking algorithm using patch-based appearance models is proposed in this paper. We first divide the bounding box of a target object into multiple patches and then select only pertinent patches, which occur repeatedly near the center of the bounding box, to construct the foreground appearance model. We also divide the input image into non-overlapping blocks, construct a background model at each block location, and integrate these background models for tracking. Using the appearance models, we obtain an accurate foreground probability map. Finally, we estimate the optimal object position by maximizing the likelihood, which is obtained by convolving the foreground probability map with the pertinence mask. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art tracking algorithms significantly in terms of center position errors and success rates.

UR - http://www.scopus.com/inward/record.url?scp=84911406830&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84911406830&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2014.446

DO - 10.1109/CVPR.2014.446

M3 - Conference contribution

AN - SCOPUS:84911406830

SN - 9781479951178

SP - 3486

EP - 3493

BT - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

PB - IEEE Computer Society

ER -