Human interaction recognition framework based on interacting body part attention

Dong Gyu Lee, Seong Whan Lee

Research output: Contribution to journal › Article › peer-review

Abstract

Human activity recognition in videos has been widely studied and has recently made significant advances with deep learning approaches; however, it remains a challenging task. In this paper, we propose a novel framework that simultaneously considers both implicit and explicit representations of human interactions by fusing information from the local image region where the interaction actively occurs, the primitive motion and posture of each subject's body parts, and the co-occurrence of overall appearance change. Human interactions vary depending on how the body parts of each person interact with those of the other. The proposed method captures subtle differences between interactions using interacting body part attention: semantically important body parts that interact with other objects are given more weight during feature representation. The combined feature of the interacting body part attention-based individual representation and the co-occurrence descriptor of the full-body appearance change is fed into a long short-term memory network to model the temporal dynamics over time in a single framework. Experimental results on five widely used public datasets demonstrate the effectiveness of the proposed method in recognizing human interactions from videos.
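The fusion the abstract describes, attention-weighted body-part features concatenated with a co-occurrence descriptor and passed through an LSTM, can be sketched as below. This is a minimal illustrative sketch only: all dimensions, weight initializations, and helper names are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- illustrative, not from the paper.
num_parts, part_dim, cooc_dim, hidden = 5, 8, 6, 16

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_body_parts(part_feats, interaction_scores):
    """Weight each body part's feature by how strongly that part
    interacts with the other subject (softmax attention)."""
    w = softmax(interaction_scores)   # (num_parts,), sums to 1
    return w @ part_feats             # attention-pooled feature, (part_dim,)

def lstm_step(x, h, c, W, U, b):
    """One vanilla LSTM cell step over the fused per-frame feature."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o, g = sig(i), sig(f), sig(o), np.tanh(g)
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Random stand-ins for one frame's extracted features.
part_feats = rng.standard_normal((num_parts, part_dim))
scores = rng.standard_normal(num_parts)   # per-part "interactivity" scores
cooc = rng.standard_normal(cooc_dim)      # co-occurrence appearance descriptor

# Fuse: attention-pooled body-part feature + co-occurrence descriptor.
fused = np.concatenate([attend_body_parts(part_feats, scores), cooc])

# Feed the fused feature into one LSTM step to model temporal dynamics.
x_dim = part_dim + cooc_dim
W = rng.standard_normal((4 * hidden, x_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
h, c = lstm_step(fused, h, c, W, U, b)
print(h.shape)  # (16,)
```

In a real system the per-part interaction scores and features would come from a pose estimator and learned sub-networks, and the LSTM would be unrolled over all video frames.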

Original language: English
Article number: 108645
Journal: Pattern Recognition
Volume: 128
DOIs
Publication status: Published - 2022 Aug

Keywords

  • Human activity recognition
  • Human-human interaction
  • Interacting body part attention

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
