Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions

Jaedong Lee, Changhyeon Lee, Jeonghyun Kim

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

We consider a multimodal method for smart-watch text entry, called “Vouch,” which combines touch and voice input. Touch input is familiar and has good ergonomic accessibility, but is limited by the fat-finger problem (or equivalently, the screen size) and is sensitive to user motion. Voice input is mostly immune to slow user motion, but its reliability may suffer from environmental noise. Together, however, such characteristics can complement each other when coping with the difficult smart-watch operating conditions. With Vouch, the user makes an approximate touch among the densely distributed alphabetic keys; the accompanying voice input can be used to effectively disambiguate the target from among possible candidates, if not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to the conventional unimodal method. We focus particularly on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach would show more reliable recognition performance and higher usability.
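The abstract only outlines the disambiguation idea at a high level. As a rough illustration of how an imprecise touch might be combined with a speech recognizer's candidate list, the sketch below scores each voice candidate by its recognition confidence weighted by how close the touch landed to that candidate's key. The key coordinates, the Gaussian touch model, and the multiplicative score fusion are assumptions made for this sketch only, not the algorithm published in the paper.

import math

# Hypothetical key centers on a small watch face, in pixels. The actual
# layout used in the paper is not reproduced here; only a few keys are
# listed to keep the sketch short.
KEY_CENTERS = {
    "q": (10, 10), "w": (30, 10), "e": (50, 10),
    "a": (20, 30), "s": (40, 30), "d": (60, 30),
}

def touch_likelihood(touch_xy, key_xy, sigma=15.0):
    # Gaussian likelihood of a key given an imprecise ("fat-finger") touch.
    dx = touch_xy[0] - key_xy[0]
    dy = touch_xy[1] - key_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def fuse(touch_xy, voice_hypotheses):
    # voice_hypotheses: list of (character, confidence) pairs, e.g. an
    # ASR n-best list. Each voice candidate is weighted by how close the
    # touch landed to that candidate's key; the best joint score wins.
    scores = {}
    for char, voice_conf in voice_hypotheses:
        if char in KEY_CENTERS:
            scores[char] = voice_conf * touch_likelihood(touch_xy, KEY_CENTERS[char])
    if not scores:
        # Fall back to plain touch input if voice produced nothing usable.
        return max(KEY_CENTERS, key=lambda k: touch_likelihood(touch_xy, KEY_CENTERS[k]))
    return max(scores, key=scores.get)

# An ambiguous touch between "w" and "e" is resolved by the voice candidates.
print(fuse((40, 12), [("e", 0.6), ("a", 0.3), ("d", 0.1)]))  # prints "e"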

Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: Journal on Multimodal User Interfaces
DOI: 10.1007/s12193-017-0246-y
Publication status: Accepted/In press - 2017 Jun 19

Fingerprint

Watches
Speech recognition
Ergonomics
Experiments

Keywords

  • Multimodal interaction
  • Smart watch input
  • Touch input
  • Voice input

ASJC Scopus subject areas

  • Signal Processing
  • Human-Computer Interaction

Cite this

Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions. / Lee, Jaedong; Lee, Changhyeon; Kim, Jeonghyun.

In: Journal on Multimodal User Interfaces, 19.06.2017, p. 1-11.

Research output: Contribution to journal › Article

@article{85faefb0d1f04f72873d38fdae1fec71,
title = "Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions",
abstract = "We consider a multimodal method for smart-watch text entry, called “Vouch,” which combines touch and voice input. Touch input is familiar and has good ergonomic accessibility, but is limited by the fat-finger problem (or equivalently, the screen size) and is sensitive to user motion. Voice input is mostly immune to slow user motion, but its reliability may suffer from environmental noise. Together, however, such characteristics can complement each other when coping with the difficult smart-watch operating conditions. With Vouch, the user makes an approximate touch among the densely distributed alphabetic keys; the accompanying voice input can be used to effectively disambiguate the target from among possible candidates, if not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to the conventional unimodal method. We focus particularly on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach would show more reliable recognition performance and higher usability.",
keywords = "Multimodal interaction, Smart watch input, Touch input, Voice input",
author = "Jaedong Lee and Changhyeon Lee and Jeonghyun Kim",
year = "2017",
month = "6",
day = "19",
doi = "10.1007/s12193-017-0246-y",
language = "English",
pages = "1--11",
journal = "Journal on Multimodal User Interfaces",
issn = "1783-7677",
publisher = "Springer Verlag",

}

TY - JOUR

T1 - Vouch

T2 - multimodal touch-and-voice input for smart watches under difficult operating conditions

AU - Lee, Jaedong

AU - Lee, Changhyeon

AU - Kim, Jeonghyun

PY - 2017/6/19

Y1 - 2017/6/19

N2 - We consider a multimodal method for smart-watch text entry, called “Vouch,” which combines touch and voice input. Touch input is familiar and has good ergonomic accessibility, but is limited by the fat-finger problem (or equivalently, the screen size) and is sensitive to user motion. Voice input is mostly immune to slow user motion, but its reliability may suffer from environmental noise. Together, however, such characteristics can complement each other when coping with the difficult smart-watch operating conditions. With Vouch, the user makes an approximate touch among the densely distributed alphabetic keys; the accompanying voice input can be used to effectively disambiguate the target from among possible candidates, if not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to the conventional unimodal method. We focus particularly on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach would show more reliable recognition performance and higher usability.

AB - We consider a multimodal method for smart-watch text entry, called “Vouch,” which combines touch and voice input. Touch input is familiar and has good ergonomic accessibility, but is limited by the fat-finger problem (or equivalently, the screen size) and is sensitive to user motion. Voice input is mostly immune to slow user motion, but its reliability may suffer from environmental noise. Together, however, such characteristics can complement each other when coping with the difficult smart-watch operating conditions. With Vouch, the user makes an approximate touch among the densely distributed alphabetic keys; the accompanying voice input can be used to effectively disambiguate the target from among possible candidates, if not identify the target outright. We present a prototype implementation of the proposed multimodal input method and compare its performance and usability to the conventional unimodal method. We focus particularly on the potential improvement under difficult operating conditions, such as when the user is in motion. The comparative experiment validates our hypothesis that the Vouch multimodal approach would show more reliable recognition performance and higher usability.

KW - Multimodal interaction

KW - Smart watch input

KW - Touch input

KW - Voice input

UR - http://www.scopus.com/inward/record.url?scp=85021063208&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85021063208&partnerID=8YFLogxK

U2 - 10.1007/s12193-017-0246-y

DO - 10.1007/s12193-017-0246-y

M3 - Article

AN - SCOPUS:85021063208

SP - 1

EP - 11

JO - Journal on Multimodal User Interfaces

JF - Journal on Multimodal User Interfaces

SN - 1783-7677

ER -