Empathetic video experience through timely multimodal interaction

Myunghee Lee, Jeonghyun Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we describe a video playing system, named "Empatheater," that is controlled by multimodal interaction. As the video plays, the user must interact and emulate predefined video "events" through multimodal guidance and whole-body interaction (e.g., following the main character's motions or gestures). Without timely interaction, the video stops. The system shows guidance information on how to react properly and continue video playback. The purpose of such a system is to provide an indirect experience (of the given video content) by prompting the user to mimic and empathize with the main character. The user is given the illusion (suspended disbelief) of playing an active role in the unfolding video content. We discuss various features of the newly proposed interactive medium. In addition, we report the results of a pilot study carried out to evaluate its user experience compared to passive video viewing and keyboard-based video control.

Original language: English
Title of host publication: International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010
DOI: 10.1145/1891903.1891948
Publication status: Published - 2010 Dec 1
Event: 1st International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010 - Beijing, China
Duration: 2010 Nov 8 - 2010 Nov 10

Other

Other: 1st International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010
Country: China
City: Beijing
Period: 10/11/8 - 10/11/10


Keywords

  • empathy
  • interactive video
  • multimodality
  • user experience
  • user guidance

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Software
  • Education

Cite this

Lee, M., & Kim, J. (2010). Empathetic video experience through timely multimodal interaction. In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010 [1891948] https://doi.org/10.1145/1891903.1891948

@inproceedings{c9bc3ddd264f439fb790c065e83191ea,
title = "Empathetic video experience through timely multimodal interaction",
abstract = "In this paper, we describe a video playing system, named {"}Empatheater,{"} that is controlled by multimodal interaction. As the video plays, the user must interact and emulate predefined video {"}events{"} through multimodal guidance and whole-body interaction (e.g., following the main character's motions or gestures). Without timely interaction, the video stops. The system shows guidance information on how to react properly and continue video playback. The purpose of such a system is to provide an indirect experience (of the given video content) by prompting the user to mimic and empathize with the main character. The user is given the illusion (suspended disbelief) of playing an active role in the unfolding video content. We discuss various features of the newly proposed interactive medium. In addition, we report the results of a pilot study carried out to evaluate its user experience compared to passive video viewing and keyboard-based video control.",
keywords = "empathy, interactive video, multimodality, user experience, user guidance",
author = "Myunghee Lee and Jeonghyun Kim",
year = "2010",
month = "12",
day = "1",
doi = "10.1145/1891903.1891948",
language = "English",
isbn = "9781450304146",
booktitle = "International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010",

}

TY - GEN

T1 - Empathetic video experience through timely multimodal interaction

AU - Lee, Myunghee

AU - Kim, Jeonghyun

PY - 2010/12/1

Y1 - 2010/12/1

N2 - In this paper, we describe a video playing system, named "Empatheater," that is controlled by multimodal interaction. As the video plays, the user must interact and emulate predefined video "events" through multimodal guidance and whole-body interaction (e.g., following the main character's motions or gestures). Without timely interaction, the video stops. The system shows guidance information on how to react properly and continue video playback. The purpose of such a system is to provide an indirect experience (of the given video content) by prompting the user to mimic and empathize with the main character. The user is given the illusion (suspended disbelief) of playing an active role in the unfolding video content. We discuss various features of the newly proposed interactive medium. In addition, we report the results of a pilot study carried out to evaluate its user experience compared to passive video viewing and keyboard-based video control.

AB - In this paper, we describe a video playing system, named "Empatheater," that is controlled by multimodal interaction. As the video plays, the user must interact and emulate predefined video "events" through multimodal guidance and whole-body interaction (e.g., following the main character's motions or gestures). Without timely interaction, the video stops. The system shows guidance information on how to react properly and continue video playback. The purpose of such a system is to provide an indirect experience (of the given video content) by prompting the user to mimic and empathize with the main character. The user is given the illusion (suspended disbelief) of playing an active role in the unfolding video content. We discuss various features of the newly proposed interactive medium. In addition, we report the results of a pilot study carried out to evaluate its user experience compared to passive video viewing and keyboard-based video control.

KW - empathy

KW - interactive video

KW - multimodality

KW - user experience

KW - user guidance

UR - http://www.scopus.com/inward/record.url?scp=78650960122&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=78650960122&partnerID=8YFLogxK

U2 - 10.1145/1891903.1891948

DO - 10.1145/1891903.1891948

M3 - Conference contribution

AN - SCOPUS:78650960122

SN - 9781450304146

BT - International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010

ER -