TY - GEN
T1 - Empathetic video experience through timely multimodal interaction
AU - Lee, Myunghee
AU - Kim, Gerard J.
PY - 2010
Y1 - 2010
N2 - In this paper, we describe a video playing system, named "Empatheater," that is controlled by multimodal interaction. As the video plays, the user must interact and emulate predefined video "events" through multimodal guidance and whole-body interaction (e.g., following the main character's motions or gestures). Without the timely interaction, the video stops. The system shows guidance information as to how to react properly and continue the video playback. The purpose of such a system is to provide an indirect experience (of the given video content) by prompting the user to mimic and empathize with the main character. The user is given the illusion (suspended disbelief) of playing an active role in the unfolding video content. We discuss various features of the newly proposed interactive medium. In addition, we report on the results of a pilot study carried out to evaluate its user experience compared to passive video viewing and keyboard-based video control.
AB - In this paper, we describe a video playing system, named "Empatheater," that is controlled by multimodal interaction. As the video plays, the user must interact and emulate predefined video "events" through multimodal guidance and whole-body interaction (e.g., following the main character's motions or gestures). Without the timely interaction, the video stops. The system shows guidance information as to how to react properly and continue the video playback. The purpose of such a system is to provide an indirect experience (of the given video content) by prompting the user to mimic and empathize with the main character. The user is given the illusion (suspended disbelief) of playing an active role in the unfolding video content. We discuss various features of the newly proposed interactive medium. In addition, we report on the results of a pilot study carried out to evaluate its user experience compared to passive video viewing and keyboard-based video control.
KW - empathy
KW - interactive video
KW - multimodality
KW - user experience
KW - user guidance
UR - http://www.scopus.com/inward/record.url?scp=78650960122&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=78650960122&partnerID=8YFLogxK
U2 - 10.1145/1891903.1891948
DO - 10.1145/1891903.1891948
M3 - Conference contribution
AN - SCOPUS:78650960122
SN - 9781450304146
T3 - International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010
BT - International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010
T2 - 1st International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ICMI-MLMI 2010
Y2 - 8 November 2010 through 10 November 2010
ER -