Head movement module in ACT-R for multi-display environment

Hyungseok Oh, Seongsik Jo, Rohae Myung

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

1 Citation (Scopus)

Abstract

The ACT-R cognitive architecture handles only a single display interface, so it must be extended to describe real environments that involve more than one display. This paper therefore proposes a method for describing human performance in a multi-display environment by developing a head module, because the behavior of searching for objects beyond the preferred visual angle of ±15° cannot be modeled with the visual module in ACT-R. The results show that an ACT-R model with the head module is necessary when performing tasks in a multi-display environment. In addition, a separate ACT-R model was developed for cases involving a different head movement pattern, such as peripheral vision.
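The abstract's central threshold, that targets beyond the preferred visual angle of ±15° cannot be reached by eye movement alone, can be sketched with a short check. This is a hypothetical illustration only, not the authors' actual ACT-R head module; the function names and the screen-geometry formulation are assumptions for exposition.

```python
import math

# Preferred visual angle from the abstract: targets beyond +/-15 deg
# are assumed to require a head movement in addition to an eye movement.
PREFERRED_VISUAL_ANGLE_DEG = 15.0

def visual_angle_deg(target_offset_cm: float, viewing_distance_cm: float) -> float:
    """Angle between the line of gaze and a target, from flat-screen geometry."""
    return math.degrees(math.atan2(target_offset_cm, viewing_distance_cm))

def requires_head_movement(target_offset_cm: float, viewing_distance_cm: float) -> bool:
    """True when the target lies outside the preferred visual angle."""
    angle = visual_angle_deg(target_offset_cm, viewing_distance_cm)
    return abs(angle) > PREFERRED_VISUAL_ANGLE_DEG

# A target 40 cm off-axis at a 60 cm viewing distance (~33.7 deg) would need
# a head movement; one 10 cm off-axis (~9.5 deg) stays within eye range.
print(requires_head_movement(40.0, 60.0))  # True
print(requires_head_movement(10.0, 60.0))  # False
```

In a multi-display setup, a second monitor typically sits well outside this ±15° cone, which is why the paper argues the standard visual module alone cannot model such search behavior.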

Original language: English
Title of host publication: Proceedings of the Human Factors and Ergonomics Society
Pages: 1836-1839
Number of pages: 4
DOIs: 10.1177/1071181311551382
Publication status: Published - 2011 Nov 28
Event: 55th Human Factors and Ergonomics Society Annual Meeting, HFES 2011 - Las Vegas, NV, United States
Duration: 2011 Sep 19 - 2011 Sep 23

Other

Other: 55th Human Factors and Ergonomics Society Annual Meeting, HFES 2011
Country: United States
City: Las Vegas, NV
Period: 11/9/19 - 11/9/23

ASJC Scopus subject areas

  • Human Factors and Ergonomics


Cite this

Oh, H., Jo, S., & Myung, R. (2011). Head movement module in ACT-R for multi-display environment. In Proceedings of the Human Factors and Ergonomics Society (pp. 1836-1839). https://doi.org/10.1177/1071181311551382