TY - JOUR
T1 - Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions
AU - Lee, Jaedong
AU - Lee, Changhyeon
AU - Kim, Jeonghyun
PY - 2017/6/19
Y1 - 2017/6/19
N2 - We consider a multimodal method for smart-watch text entry, called “Vouch,” which combines touch and voice input. Touch input is familiar and has good ergonomic accessibility, but is limited by the fat-finger problem (or equivalently, the screen size) and is sensitive to user motion. Voice input is mostly immune to slow user motion, but its reliability may suffer from environmental noise. Together, however, such characteristics can complement each other when coping with the difficult smart-watch operating conditions. With Vouch, the user makes an approximate touch among the densely distributed alphabetic keys; the accompanying voice input can be used to effectively disambiguate the target from among possible candidates, if not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to the conventional unimodal method. We focus particularly on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach would show more reliable recognition performance and higher usability.
KW - Multimodal interaction
KW - Smart watch input
KW - Touch input
KW - Voice input
UR - http://www.scopus.com/inward/record.url?scp=85021063208&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85021063208&partnerID=8YFLogxK
U2 - 10.1007/s12193-017-0246-y
DO - 10.1007/s12193-017-0246-y
M3 - Article
AN - SCOPUS:85021063208
SP - 1
EP - 11
JO - Journal on Multimodal User Interfaces
JF - Journal on Multimodal User Interfaces
SN - 1783-7677
ER -