A novel online action detection framework from untrimmed video streams

Da Hye Yoon, Nam Gyu Cho, Seong Whan Lee

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Online temporal action localization from an untrimmed video stream is a challenging problem in computer vision. It is challenging for two reasons: i) an untrimmed video stream may contain more than one action instance, interspersed with background scenes, and ii) in an online setting, only past and current information is available. Therefore, temporal priors, such as the average action duration in the training data, which have been exploited by previous action detection methods, are unsuitable for this task because of the high intra-class variation in human actions. We propose a novel online action detection framework that treats an action as a set of temporally ordered subclasses and leverages a future frame generation network to cope with the limited-information issue outlined above. Additionally, we augment the data by varying the lengths of videos so that the proposed method can learn the high intra-class variation in human actions. We evaluate our method on two benchmark datasets, THUMOS'14 and ActivityNet, in an online temporal action localization scenario and demonstrate performance comparable to state-of-the-art methods proposed for offline settings.
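The abstract mentions augmenting training data by varying video lengths so the model is exposed to high intra-class duration variation. A minimal sketch of that idea (not the authors' code; the scale factors and uniform index resampling here are illustrative assumptions) could look like this:

```python
# Illustrative sketch of length-varying data augmentation for action clips.
# A clip is resampled to several target lengths by uniformly mapping output
# positions back to source frame indices, simulating faster/slower actions.

def resample_clip(frames, target_len):
    """Return the clip resampled to target_len frames via uniform index mapping."""
    n = len(frames)
    if target_len <= 0 or n == 0:
        return []
    # Each output position i picks the source frame nearest to i * n / target_len.
    return [frames[min(n - 1, int(i * n / target_len))] for i in range(target_len)]

def length_augment(frames, scales=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Produce copies of the clip at several relative lengths (scales assumed)."""
    return [resample_clip(frames, max(1, int(len(frames) * s))) for s in scales]

clip = list(range(8))                    # stand-in for 8 video frames
augmented = length_augment(clip)
print([len(c) for c in augmented])       # → [4, 6, 8, 12, 16]
```

In practice the frames would be image tensors rather than integers, but the temporal resampling logic is the same: each augmented copy preserves the action's frame ordering while changing its apparent duration.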

Original language: English
Article number: 107396
Journal: Pattern Recognition
Volume: 106
DOIs
Publication status: Published - Oct 2020

Keywords

  • 3D convolutional neural network
  • Future frame generation
  • Long short-term memory
  • Online action detection
  • Untrimmed video stream

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

