Contour knowledge transfer for salient object detection

Xin Li, Fan Yang, Hong Cheng, Wei Liu, Dinggang Shen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In recent years, deep Convolutional Neural Networks (CNNs) have broken all records in salient object detection. However, training such a deep model requires a large number of manual annotations. Our goal is to overcome this limitation by automatically converting an existing deep contour detection model into a salient object detection model, without using any manual salient object masks. To this end, we create a deep network architecture, the Contour-to-Saliency Network (C2S-Net), by grafting a new branch onto a well-trained contour detection network. Our C2S-Net thus has two branches performing two different tasks: (1) predicting contours with the original contour branch, and (2) estimating a per-pixel saliency score for each image with the newly added saliency branch. To bridge the gap between these two tasks, we further propose a contour-to-saliency transfer method that automatically generates, from the outputs of the contour branch, salient object masks that can be used to train the saliency branch. Finally, we introduce a novel alternating training pipeline that gradually updates the network parameters. In this scheme, the contour branch generates saliency masks for training the saliency branch, while the saliency branch, in turn, feeds saliency knowledge back, in the form of saliency-aware contour labels, for fine-tuning the contour branch. The proposed method achieves state-of-the-art performance on five well-known benchmarks, outperforming existing fully supervised methods while maintaining high efficiency.
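The alternating scheme described in the abstract can be sketched as a simple loop. This is an illustrative sketch only, not the authors' code: the class and function names (`C2SNetSketch`, `contour_to_saliency`, `saliency_aware_contour_labels`) are hypothetical stand-ins, and the branch-update calls are stubs that merely record which pseudo-labels each branch was trained on.

```python
# Hypothetical sketch of C2S-Net's alternating training pipeline.
# The two "branches" are stubs; in the real model they are CNN heads
# producing per-pixel contour and saliency maps.

class C2SNetSketch:
    def __init__(self):
        self.saliency_updates = []  # pseudo-labeled data used to train the saliency branch
        self.contour_updates = []   # pseudo-labeled data used to fine-tune the contour branch

    def predict_contours(self, images):
        # Stub: the real contour branch outputs per-pixel contour maps.
        return [f"contours({im})" for im in images]

    def predict_saliency(self, images):
        # Stub: the real saliency branch outputs per-pixel saliency scores.
        return [f"saliency({im})" for im in images]

    def train_saliency_branch(self, images, masks):
        self.saliency_updates.append(list(zip(images, masks)))

    def finetune_contour_branch(self, images, labels):
        self.contour_updates.append(list(zip(images, labels)))


def contour_to_saliency(contour_maps):
    # Stand-in for the contour-to-saliency transfer step that turns predicted
    # contours into pseudo ground-truth salient object masks.
    return [f"mask({c})" for c in contour_maps]


def saliency_aware_contour_labels(saliency_maps):
    # Stand-in for deriving saliency-aware contour labels from the saliency
    # branch's predictions (e.g. boundaries of the binarized saliency map).
    return [f"contour_label({s})" for s in saliency_maps]


def alternating_training(net, images, rounds=2):
    for _ in range(rounds):
        # Step 1: contour branch -> pseudo saliency masks -> train saliency branch.
        masks = contour_to_saliency(net.predict_contours(images))
        net.train_saliency_branch(images, masks)
        # Step 2: saliency branch -> saliency-aware contour labels ->
        # fine-tune the contour branch, closing the loop.
        labels = saliency_aware_contour_labels(net.predict_saliency(images))
        net.finetune_contour_branch(images, labels)
    return net
```

Each round tightens the loop: better contours yield better pseudo saliency masks, and better saliency predictions yield better contour labels, so neither branch ever needs manual salient object masks.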

Original language: English
Title of host publication: Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings
Editors: Yair Weiss, Vittorio Ferrari, Cristian Sminchisescu, Martial Hebert
Publisher: Springer Verlag
Pages: 370-385
Number of pages: 16
ISBN (Print): 9783030012663
DOI: https://doi.org/10.1007/978-3-030-01267-0_22
Publication status: Published - 2018 Jan 1
Externally published: Yes
Event: 15th European Conference on Computer Vision, ECCV 2018 - Munich, Germany
Duration: 2018 Sep 8 – 2018 Sep 14

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11219 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 15th European Conference on Computer Vision, ECCV 2018
Country: Germany
City: Munich
Period: 18/9/8 – 18/9/14

Keywords

  • Deep learning
  • Saliency detection
  • Transfer learning

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Li, X., Yang, F., Cheng, H., Liu, W., & Shen, D. (2018). Contour knowledge transfer for salient object detection. In Y. Weiss, V. Ferrari, C. Sminchisescu, & M. Hebert (Eds.), Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings (pp. 370-385). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11219 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-01267-0_22
