View-invariant 3D action recognition using spatiotemporal self-similarities from depth camera

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding)

8 Citations (Scopus)

Abstract

Viewpoint change is an important problem in the study of human action recognition. In this paper, we propose spatial features embedded in a spatiotemporal self-similarity matrix (SSM) for action recognition that is robust to viewpoint changes in depth sequences. The spatial features represent the discriminative density of 3D point clouds in a 3D grid. We construct the spatiotemporal SSM from the spatial features as they change over the frames: each entry of the SSM is the Euclidean distance between the spatial features of two frames. The spatiotemporal SSM captures the similarity structure of a human action in a way that is robust to viewpoint changes and to varying action-sequence lengths. The method is evaluated on the ACTA2 dataset, which contains multi-view RGB-D human action data, and on the MSRAction3D dataset. The experimental validation shows that the spatiotemporal SSM is a good solution to the problem of viewpoint changes in depth sequences.
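The abstract describes two steps: a per-frame spatial feature (the density of the 3D point cloud in a 3D grid) and a spatiotemporal SSM built from pairwise Euclidean distances between those features. A minimal sketch of that pipeline in Python/NumPy is given below; the grid size, grid bounds, and density normalization are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def grid_density_feature(points, bins=8, bounds=(-1.0, 1.0)):
    """Per-frame spatial feature: normalized density of a 3D point
    cloud in a bins x bins x bins grid (parameters are assumptions)."""
    hist, _ = np.histogramdd(points, bins=(bins, bins, bins),
                             range=[bounds] * 3)
    hist = hist.ravel().astype(float)
    total = hist.sum()
    return hist / total if total > 0 else hist

def self_similarity_matrix(frames):
    """Spatiotemporal SSM: entry (i, j) is the Euclidean distance
    between the grid-density features of frames i and j."""
    feats = np.stack([grid_density_feature(f) for f in frames])
    diff = feats[:, None, :] - feats[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy usage: five synthetic "depth frames" as random point clouds.
rng = np.random.default_rng(0)
frames = [rng.uniform(-1.0, 1.0, size=(200, 3)) for _ in range(5)]
ssm = self_similarity_matrix(frames)
```

The SSM is symmetric with a zero diagonal by construction, which is the property that makes it a view-stable descriptor: a rigid change of viewpoint perturbs the per-frame features but largely preserves the pattern of frame-to-frame similarities.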

Original language: English
Title of host publication: Proceedings - International Conference on Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 501-505
Number of pages: 5
ISBN (Print): 9781479952083
DOI: 10.1109/ICPR.2014.95
Publication status: Published - 2014 Jan 1
Event: 22nd International Conference on Pattern Recognition, ICPR 2014 - Stockholm, Sweden
Duration: 2014 Aug 24 - 2014 Aug 28



ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

Cite this

Lee, A. R., Suk, H-I., & Lee, S. W. (2014). View-invariant 3D action recognition using spatiotemporal self-similarities from depth camera. In Proceedings - International Conference on Pattern Recognition (pp. 501-505). [6976806] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICPR.2014.95

