View-independent human action recognition based on a stereo camera

Myung Cheol Roh, Ho Keun Shin, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

1 Citation (Scopus)

Abstract

Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been carried out actively. However, in our 3D living space the viewpoint is dynamic: a person may be observed from any position and in any direction. To overcome this viewpoint dependency, we propose the Volume Motion Template (VMT) and the Projected Motion Template (PMT). The VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto the 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which an action can be described in the greatest detail in 2D space. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. Experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.
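For readers unfamiliar with the Motion History Image on which the VMT builds, the update rule can be sketched as follows. This is a minimal illustration in NumPy, not the paper's implementation: the function names are hypothetical, and the max-intensity projection is only a simple stand-in for the paper's PMT, which projects onto the plane orthogonal to an optimal virtual viewpoint.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=1):
    """One Motion History Image update step (Bobick & Davis style):
    pixels where motion is detected are set to tau; all other
    pixels decay by delta, clamped at zero."""
    return np.where(motion_mask, tau, np.maximum(mhi - delta, 0))

def update_vmt(vmt, motion_voxels, tau=255, delta=1):
    """Illustrative 3D extension in the spirit of the paper's VMT:
    the same recency-weighted update applied to a voxel motion volume
    (e.g., reconstructed from stereo depth)."""
    return np.where(motion_voxels, tau, np.maximum(vmt - delta, 0))

def project_along_axis(vmt, axis=0):
    """Orthographic max-intensity projection of the volume onto a 2D
    plane -- a simplified stand-in for generating a PMT from the VMT."""
    return vmt.max(axis=axis)
```

Recent motion thus appears brightest and older motion fades, so a single template encodes where and how recently motion occurred; the 3D version makes that encoding independent of the camera viewpoint before projection.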

Original language: English
Title of host publication: Proceedings of the 2009 Chinese Conference on Pattern Recognition, CCPR 2009, and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR
Pages: 832-836
Number of pages: 5
DOI: 10.1109/CCPR.2009.5343991
Publication status: Published - 2009 Dec 1
Event: 2009 Chinese Conference on Pattern Recognition, CCPR 2009 and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR - Nanjing, China
Duration: 2009 Nov 4 - 2009 Nov 6



Keywords

  • Human action recognition
  • Motion history image
  • View-independence
  • Volume motion template

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Software

Cite this

Roh, M. C., Shin, H. K., & Lee, S. W. (2009). View-independent human action recognition based on a stereo camera. In Proceedings of the 2009 Chinese Conference on Pattern Recognition, CCPR 2009, and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR (pp. 832-836). [5343991] https://doi.org/10.1109/CCPR.2009.5343991

@inproceedings{4702c7fd62fe48108da1ca176efca213,
title = "View-independent human action recognition based on a stereo camera",
abstract = "Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been carried out actively. However, in our 3D living space the viewpoint is dynamic: a person may be observed from any position and in any direction. To overcome this viewpoint dependency, we propose the Volume Motion Template (VMT) and the Projected Motion Template (PMT). The VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto the 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which an action can be described in the greatest detail in 2D space. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. Experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.",
keywords = "Human action recognition, Motion history image, View-independence, Volume motion template",
author = "Roh, {Myung Cheol} and Shin, {Ho Keun} and Lee, {Seong Whan}",
year = "2009",
month = "12",
day = "1",
doi = "10.1109/CCPR.2009.5343991",
language = "English",
isbn = "9781424441990",
pages = "832--836",
booktitle = "Proceedings of the 2009 Chinese Conference on Pattern Recognition, CCPR 2009, and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR",

}

TY - GEN

T1 - View-independent human action recognition based on a stereo camera

AU - Roh, Myung Cheol

AU - Shin, Ho Keun

AU - Lee, Seong Whan

PY - 2009/12/1

Y1 - 2009/12/1

N2 - Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been carried out actively. However, in our 3D living space the viewpoint is dynamic: a person may be observed from any position and in any direction. To overcome this viewpoint dependency, we propose the Volume Motion Template (VMT) and the Projected Motion Template (PMT). The VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto the 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which an action can be described in the greatest detail in 2D space. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. Experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.

AB - Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been carried out actively. However, in our 3D living space the viewpoint is dynamic: a person may be observed from any position and in any direction. To overcome this viewpoint dependency, we propose the Volume Motion Template (VMT) and the Projected Motion Template (PMT). The VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto the 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which an action can be described in the greatest detail in 2D space. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. Experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.

KW - Human action recognition

KW - Motion history image

KW - View-independence

KW - Volume motion template

UR - http://www.scopus.com/inward/record.url?scp=74549134124&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=74549134124&partnerID=8YFLogxK

U2 - 10.1109/CCPR.2009.5343991

DO - 10.1109/CCPR.2009.5343991

M3 - Conference contribution

SN - 9781424441990

SP - 832

EP - 836

BT - Proceedings of the 2009 Chinese Conference on Pattern Recognition, CCPR 2009, and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR

ER -