Head movement module in ACT-R for multi-display environment

Hyungseok Oh, Seongsik Jo, Rohae Myung

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

1 Citation (Scopus)

Abstract

The ACT-R cognitive architecture handles only a single display interface, so it must be extended to describe real environments that involve more than one display. This paper therefore proposes a method for describing human performance in a multi-display environment by developing a head module, because the behavior of searching for an object beyond the preferred visual angle of ±15° cannot be modeled with the visual module in ACT-R. The results show that an ACT-R model with the head module is necessary when performing tasks in a multi-display environment. In addition, a separate ACT-R model was developed for cases where a different head movement pattern is involved, such as peripheral vision.
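A minimal sketch of the idea the abstract describes: visual attention shifts beyond a preferred visual angle additionally require a head movement. This is not the authors' implementation; only the ±15° threshold comes from the abstract, and the function names and timing parameters here are hypothetical.

```python
# Preferred visual angle from the abstract; all other values are assumptions.
PREFERRED_VISUAL_ANGLE = 15.0  # degrees

def eccentricity_deg(gaze_deg: float, target_deg: float) -> float:
    """Horizontal angular distance between the current gaze and a target."""
    return abs(target_deg - gaze_deg)

def attend(gaze_deg: float, target_deg: float) -> dict:
    """Decide whether an eye movement alone suffices or a head movement
    is also needed. Timing constants are illustrative, not from the paper."""
    ecc = eccentricity_deg(gaze_deg, target_deg)
    if ecc <= PREFERRED_VISUAL_ANGLE:
        # Target is within the preferred visual angle: eye shift only.
        return {"head_movement": False, "time_s": 0.05}
    # Beyond the preferred angle: rotate the head toward the target,
    # with a cost that grows with the required rotation.
    head_rotation = ecc - PREFERRED_VISUAL_ANGLE
    return {"head_movement": True, "time_s": 0.05 + 0.002 * head_rotation}

# A target on a second display at 40 degrees of eccentricity
# exceeds the preferred angle and triggers a head movement.
result = attend(gaze_deg=0.0, target_deg=40.0)
```

The point of the sketch is the gating rule: within ±15° the visual module alone suffices, beyond it a head module must contribute before the visual encoding can proceed.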

Original language: English
Title of host publication: Proceedings of the Human Factors and Ergonomics Society
Pages: 1836-1839
Number of pages: 4
DOI: 10.1177/1071181311551382
Publication status: Published - 2011 Nov 28
Event: 55th Human Factors and Ergonomics Society Annual Meeting, HFES 2011 - Las Vegas, NV, United States
Duration: 2011 Sep 19 - 2011 Sep 23



ASJC Scopus subject areas

  • Human Factors and Ergonomics

Cite this

Oh, H., Jo, S., & Myung, R. (2011). Head movement module in ACT-R for multi-display environment. In Proceedings of the Human Factors and Ergonomics Society (pp. 1836-1839). https://doi.org/10.1177/1071181311551382

@inproceedings{e4160337dc554aa78c6f5dfaaddc5274,
title = "Head movement module in ACT-R for multi-display environment",
abstract = "The ACT-R cognitive architecture handles only a single display interface, so it must be extended to describe real environments that involve more than one display. This paper therefore proposes a method for describing human performance in a multi-display environment by developing a head module, because the behavior of searching for an object beyond the preferred visual angle of ±15° cannot be modeled with the visual module in ACT-R. The results show that an ACT-R model with the head module is necessary when performing tasks in a multi-display environment. In addition, a separate ACT-R model was developed for cases where a different head movement pattern is involved, such as peripheral vision.",
author = "Hyungseok Oh and Seongsik Jo and Rohae Myung",
year = "2011",
month = "11",
day = "28",
doi = "10.1177/1071181311551382",
language = "English",
isbn = "9780945289395",
pages = "1836--1839",
booktitle = "Proceedings of the Human Factors and Ergonomics Society",

}
