Tracking-by-segmentation using superpixel-wise neural network

Se Ho Lee, Won Dong Jang, Chang-Su Kim

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

A tracking-by-segmentation algorithm, which tracks and segments a target object in a video sequence, is proposed in this paper. In the first frame, we segment out the target object within a user-annotated bounding box. Then, we divide subsequent frames into superpixels. We develop a superpixel-wise neural network for tracking-by-segmentation, called TBSNet, which extracts multi-level convolutional features of each superpixel and yields the foreground probability of the superpixel as the output. We train TBSNet in two stages. First, we perform offline training to enable TBSNet to discriminate general objects from the background. Second, during tracking, we fine-tune TBSNet to distinguish the target object from non-targets and to adapt to color changes and shape variations of the target object. Finally, we perform conditional random field optimization to further improve the segmentation quality. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art trackers on four challenging data sets.
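The per-frame pipeline in the abstract (superpixel partition → per-superpixel foreground probability → binary mask) can be sketched as follows. This is not the authors' TBSNet: the grid partition is a toy stand-in for a real superpixel algorithm, the color-rule classifier is a placeholder for the network's multi-level convolutional features, and the CRF refinement step is omitted. All names here (`grid_superpixels`, `segment_frame`, `red_classifier`) are illustrative assumptions.

```python
import numpy as np

def grid_superpixels(h, w, cell=8):
    """Toy superpixel stand-in: partition the frame into square cells.
    (The paper uses proper superpixels; a grid keeps the sketch simple.)"""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    n_cols = (w + cell - 1) // cell
    return rows[:, None] * n_cols + cols[None, :]   # (h, w) label map

def foreground_probability(frame, labels, classifier):
    """Score each superpixel with a classifier applied to its mean color.
    In the paper, TBSNet scores multi-level CNN features instead."""
    n = labels.max() + 1
    probs = np.zeros(n)
    for s in range(n):
        mask = labels == s
        probs[s] = classifier(frame[mask].mean(axis=0))
    return probs

def segment_frame(frame, classifier, cell=8, thresh=0.5):
    """One tracking-by-segmentation step: superpixels -> per-superpixel
    foreground probability -> binary mask (CRF refinement omitted)."""
    labels = grid_superpixels(frame.shape[0], frame.shape[1], cell)
    probs = foreground_probability(frame, labels, classifier)
    return probs[labels] > thresh   # broadcast probabilities back to pixels

# Placeholder "network": foreground = red-dominant regions (demo assumption).
red_classifier = lambda rgb: float(rgb[0] > rgb[1] and rgb[0] > rgb[2])

frame = np.zeros((32, 32, 3))
frame[8:24, 8:24, 0] = 1.0          # a red square as the "target"
mask = segment_frame(frame, red_classifier)
```

In the paper's setting, the classifier would be fine-tuned online during tracking so that the foreground probabilities adapt to the target's appearance changes; the mask would then be refined by conditional random field optimization rather than a simple threshold.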

Original language: English
Article number: 8476565
Pages (from-to): 54982-54993
Number of pages: 12
Journal: IEEE Access
Volume: 6
DOI: 10.1109/ACCESS.2018.2872735
ISSN: 2169-3536
Publisher: Institute of Electrical and Electronics Engineers Inc.
Publication status: Published - 2018 Jan 1


Keywords

  • object segmentation
  • object tracking
  • Tracking-by-segmentation
  • visual tracking

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
