Empathetic video experience through timely multimodal interaction

Myunghee Lee, Jeonghyun Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we describe a video playing system, named "Empatheater," that is controlled by multimodal interaction. As the video plays, the user must interact and emulate predefined video "events" through multimodal guidance and whole-body interaction (e.g., following the main character's motions or gestures). Without timely interaction, the video stops. The system shows guidance information on how to react properly and continue the video playback. The purpose of such a system is to provide an indirect experience of the given video content by prompting the user to mimic and empathize with the main character. The user is given the illusion (suspension of disbelief) of playing an active role in the unfolding video content. We discuss various features of the newly proposed interactive medium. In addition, we report the results of a pilot study carried out to evaluate its user experience compared to passive video viewing and keyboard-based video control.
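The interaction model described in the abstract can be read as an event-gated playback loop: the video advances until a predefined event, pauses, shows guidance, and resumes only after the expected whole-body interaction is detected. The sketch below is only an illustration of that loop, not the paper's implementation; the names (`VideoEvent`, `detect_gesture`, the example events and timings) are assumptions introduced here, and real gesture recognition would replace the keyboard stand-in.

```python
import time
from dataclasses import dataclass
from typing import List


@dataclass
class VideoEvent:
    """A predefined point in the video where the viewer must act (hypothetical structure)."""
    timestamp: float   # seconds into the video
    gesture: str       # expected whole-body gesture, e.g. "wave"
    guidance: str      # hint shown while the video is paused


def detect_gesture(expected: str) -> bool:
    """Stand-in for a real multimodal recognizer (camera, depth sensor, voice).

    Here the user simply types the gesture name to simulate performing it.
    """
    return input(f"Perform '{expected}' (type it to simulate): ").strip() == expected


def play(events: List[VideoEvent], duration: float) -> None:
    """Advance playback and pause at each event until the expected interaction occurs."""
    position = 0.0
    for event in sorted(events, key=lambda e: e.timestamp):
        # Simulate playing up to the next event (sleep shortened for the demo).
        time.sleep(min(event.timestamp - position, 0.1))
        position = event.timestamp
        print(f"[{position:5.1f}s] Video paused. Guidance: {event.guidance}")
        while not detect_gesture(event.gesture):
            print("Not quite - follow the on-screen guidance and try again.")
        print("Interaction matched; resuming playback.")
    print(f"Playback finished at {duration:.1f}s.")


if __name__ == "__main__":
    play([VideoEvent(12.0, "wave", "Wave like the main character"),
          VideoEvent(45.5, "jump", "Jump together with the hero")],
         duration=90.0)
```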

Original language: English
Title of host publication: International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010
DOIs: https://doi.org/10.1145/1891903.1891948
Publication status: Published - 2010 Dec 1
Event: 1st International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010 - Beijing, China
Duration: 2010 Nov 8 - 2010 Nov 10

Other

Other: 1st International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010
Country: China
City: Beijing
Period: 10/11/8 - 10/11/10

Keywords

  • empathy
  • interactive video
  • multimodality
  • user experience
  • user guidance

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Software
  • Education

Cite this

Lee, M., & Kim, J. (2010). Empathetic video experience through timely multimodal interaction. In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010 [1891948]. https://doi.org/10.1145/1891903.1891948