An Eyes and Hands Model: Extending Visual and Motor Modules for Cognitive Architectures.
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
An Eyes and Hands Model: Extending Visual and Motor Modules for Cognitive Architectures.
Author:
Tehranchi, Farnaz.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020.
Description:
132 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-03, Section: A.
Contained By:
Dissertations Abstracts International, 83-03A.
Subject:
Industrial engineering.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28767677
ISBN:
9798535589190
Thesis (Ph.D.)--The Pennsylvania State University, 2020.
One form of artificial intelligence simulates human intelligence and behavior. These simulations are not always complete and not always interactive. Adding a new type of memory and extending the visual and motor modules of an existing cognitive architecture offers a promising approach for simulating human behavior. This dissertation presents an Eyes and Hands model, a new approach that enables cognitive models to interact with the world. To support this approach, the Java Segmentation and Manipulation (JSegMan) tool was built. JSegMan builds upon Java packages to segment and manipulate the screen. JSegMan also generates operating-system commands to perform actions on interfaces. Cognitive architectures provide a unified theory of cognition for developing and simulating cognition and human behavior. The Eyes and Hands model extends two cognitive architecture modules, along with JSegMan, to facilitate interaction. Eyes and hands models can be used to explore the role of interaction in human behavior. In this dissertation, three Eyes and Hands models were developed: (a) the Dismal model, which completed a spreadsheet task in the Dismal mode of Emacs; (b) the Biased-coin model, based on an existing two-choice experiment; and (c) the Excel model, which completed the spreadsheet task in the Excel task environment. I conducted two studies to investigate the model's visual attention and response time. In the first study, learners' eye movement data were recorded to predict learning. The results showed that with eye movement data, learners' performance could be predicted correctly 76% of the time. Therefore, where users are looking is important and should be considered in the simulation. In the second study, participants' response time and eye movements were recorded. The Excel model was built upon this study. A simple Eyes and Hands Error model was built to demonstrate how the model's time is allocated to error detection, error correction, and different types of knowledge. The results suggested that further analysis is required to investigate human errors.
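The record does not reproduce JSegMan's source. As a purely illustrative sketch of the screen-segmentation idea the abstract describes (the class and method names below are invented for this example and are not JSegMan's API), a minimal Java program that scans a bitmap for a uniquely colored target region, the way a model's "eyes" might locate a button before its "hands" act on it:

```java
import java.awt.image.BufferedImage;

// Illustrative only: locate a solid-color "button" in a synthetic screen bitmap.
// JSegMan's real implementation and API are not shown in this record.
public class ScreenSegmentSketch {
    // Returns {x, y} of the first pixel matching targetRGB (opaque), or null.
    static int[] findTarget(BufferedImage screen, int targetRGB) {
        int wanted = 0xFF000000 | targetRGB; // force opaque alpha for comparison
        for (int y = 0; y < screen.getHeight(); y++) {
            for (int x = 0; x < screen.getWidth(); x++) {
                if (screen.getRGB(x, y) == wanted) {
                    return new int[] { x, y };
                }
            }
        }
        return null; // target not on screen
    }

    public static void main(String[] args) {
        // Build a 100x100 "screen" with a red 10x10 button at (40, 60).
        BufferedImage screen = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        for (int y = 60; y < 70; y++)
            for (int x = 40; x < 50; x++)
                screen.setRGB(x, y, 0xFFFF0000);
        int[] hit = findTarget(screen, 0xFF0000);
        System.out.println(hit[0] + "," + hit[1]); // top-left of the button
    }
}
```

Once a target is located, the "hands" side could drive the cursor and keyboard with the standard `java.awt.Robot` class (which requires a real display, so it is omitted from this runnable sketch).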
Subjects--Topical Terms:
Industrial engineering.
Subjects--Index Terms:
Memory
LDR 03411nmm a2200445 4500
001 2347675
005 20220823142328.5
008 241004s2020 ||||||||||||||||| ||eng d
020 $a 9798535589190
035 $a (MiAaPQ)AAI28767677
035 $a AAI28767677
040 $a MiAaPQ $c MiAaPQ
100 1 $a Tehranchi, Farnaz. $3 3686957
245 1 3 $a An Eyes and Hands Model: Extending Visual and Motor Modules for Cognitive Architectures.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300 $a 132 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-03, Section: A.
500 $a Advisor: Ritter, Frank E.; Passonneau, Rebecca.
502 $a Thesis (Ph.D.)--The Pennsylvania State University, 2020.
520 $a One form of artificial intelligence simulates human intelligence and behavior. These simulations are not always complete and not always interactive. Adding a new type of memory and extending the visual and motor modules of an existing cognitive architecture offers a promising approach for simulating human behavior. This dissertation presents an Eyes and Hands model, a new approach that enables cognitive models to interact with the world. To support this approach, the Java Segmentation and Manipulation (JSegMan) tool was built. JSegMan builds upon Java packages to segment and manipulate the screen. JSegMan also generates operating-system commands to perform actions on interfaces. Cognitive architectures provide a unified theory of cognition for developing and simulating cognition and human behavior. The Eyes and Hands model extends two cognitive architecture modules, along with JSegMan, to facilitate interaction. Eyes and hands models can be used to explore the role of interaction in human behavior. In this dissertation, three Eyes and Hands models were developed: (a) the Dismal model, which completed a spreadsheet task in the Dismal mode of Emacs; (b) the Biased-coin model, based on an existing two-choice experiment; and (c) the Excel model, which completed the spreadsheet task in the Excel task environment. I conducted two studies to investigate the model's visual attention and response time. In the first study, learners' eye movement data were recorded to predict learning. The results showed that with eye movement data, learners' performance could be predicted correctly 76% of the time. Therefore, where users are looking is important and should be considered in the simulation. In the second study, participants' response time and eye movements were recorded. The Excel model was built upon this study. A simple Eyes and Hands Error model was built to demonstrate how the model's time is allocated to error detection, error correction, and different types of knowledge. The results suggested that further analysis is required to investigate human errors.
590 $a School code: 0176.
650 4 $a Industrial engineering. $3 526216
650 4 $a Computer science. $3 523869
650 4 $a Artificial intelligence. $3 516317
650 4 $a Electrical engineering. $3 649834
650 4 $a Software. $2 gtt. $3 619355
650 4 $a Dissertations & theses. $3 3560115
650 4 $a Cognitive models. $3 3686958
650 4 $a Design. $3 518875
650 4 $a Cognition & reasoning. $3 3556293
650 4 $a Human-computer interaction. $3 560071
653 $a Memory
653 $a Visual modules
653 $a Motor module
653 $a Cognitive models
653 $a Segmentation
653 $a Screen manipulation
653 $a Interaction
653 $a Eyes and hands error
653 $a Human error
653 $a Simulated human behavior
690 $a 0984
690 $a 0800
690 $a 0546
690 $a 0544
690 $a 0389
710 2 $a The Pennsylvania State University.
773 0 $t Dissertations Abstracts International $g 83-03A.
790 $a 0176
791 $a Ph.D.
792 $a 2020
793 $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28767677
Holdings
Barcode: W9470113
Location: Electronic Resources
Circulation category: 11. Online Reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Attachments: 0