Where Do You Look? Relating Visual Attention to Learning Outcomes and URL Parsing.
Record Type: Electronic resources : Monograph/item
Title: Where Do You Look? Relating Visual Attention to Learning Outcomes and URL Parsing.
Author: Ramkumar, Niveta.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2021
Description: 63 p.
Notes: Source: Masters Abstracts International, Volume: 83-01.
Contained By: Masters Abstracts International, 83-01.
Subject: Engineering.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28499221
ISBN: 9798516964688
Thesis (M.S.)--University of New Hampshire, 2021.
This item must not be sold to any third party vendors.
Visual behavior provides a dynamic trail of where attention is directed. It is considered the behavioral interface between engagement and gaining information, and researchers have used it for several decades to study users' behavior. This thesis focuses on employing visual attention to understand users' behavior in two contexts: 3D learning and gauging URL safety. Such understanding is valuable for improving interactive tools and interface designs. In the first chapter, we present results from studying learners' visual behavior while engaging with tangible and virtual 3D representations of objects. This is a replication of a recent study, which we extended using eye tracking. By analyzing the visual behavior, we confirmed the original study's results and added more quantitative explanations for the corresponding learning outcomes. Among other things, our results indicated that users allocate similar visual attention while analyzing virtual and tangible learning material. In the next chapter, we present the outcomes of a user study in which participants were instructed to classify a set of URLs while wearing an eye tracker. Much effort is spent on teaching users how to detect malicious URLs, but there has been significantly less focus on understanding exactly how and why users routinely fail to vet URLs properly. This user study aims to fill that void by shedding light on the underlying processes that users employ to gauge a URL's trustworthiness at the time of scanning. Our findings suggest that users have a cap on the amount of cognitive resources they are willing to expend on vetting a URL, and that they tend to believe that the presence of "www" in the domain name indicates that the URL is safe.
ISBN: 9798516964688
Subjects--Topical Terms: Engineering.
Subjects--Index Terms: Eye tracking
LDR 02907nmm a2200409 4500
001 2348796
005 20220908125421.5
008 241004s2021 ||||||||||||||||| ||eng d
020    $a 9798516964688
035    $a (MiAaPQ)AAI28499221
035    $a AAI28499221
040    $a MiAaPQ $c MiAaPQ
100 1  $a Ramkumar, Niveta. $3 3688165
245 10 $a Where Do You Look? Relating Visual Attention to Learning Outcomes and URL Parsing.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 63 p.
500    $a Source: Masters Abstracts International, Volume: 83-01.
500    $a Advisor: Kun, Andrew.
502    $a Thesis (M.S.)--University of New Hampshire, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a Visual behavior provides a dynamic trail of where attention is directed. It is considered the behavioral interface between engagement and gaining information, and researchers have used it for several decades to study users' behavior. This thesis focuses on employing visual attention to understand users' behavior in two contexts: 3D learning and gauging URL safety. Such understanding is valuable for improving interactive tools and interface designs. In the first chapter, we present results from studying learners' visual behavior while engaging with tangible and virtual 3D representations of objects. This is a replication of a recent study, which we extended using eye tracking. By analyzing the visual behavior, we confirmed the original study's results and added more quantitative explanations for the corresponding learning outcomes. Among other things, our results indicated that users allocate similar visual attention while analyzing virtual and tangible learning material. In the next chapter, we present the outcomes of a user study in which participants were instructed to classify a set of URLs while wearing an eye tracker. Much effort is spent on teaching users how to detect malicious URLs, but there has been significantly less focus on understanding exactly how and why users routinely fail to vet URLs properly. This user study aims to fill that void by shedding light on the underlying processes that users employ to gauge a URL's trustworthiness at the time of scanning. Our findings suggest that users have a cap on the amount of cognitive resources they are willing to expend on vetting a URL, and that they tend to believe that the presence of "www" in the domain name indicates that the URL is safe.
590    $a School code: 0141.
650  4 $a Engineering. $3 586835
650  4 $a Computer engineering. $3 621879
650  4 $a Educational technology. $3 517670
650  4 $a Computer science. $3 523869
650  4 $a Research. $3 531893
650  4 $a Augmented reality. $3 1620831
650  4 $a Collaboration. $3 3556296
650  4 $a URLs. $3 3681655
650  4 $a Questionnaires. $3 529568
650  4 $a Data analysis. $2 bisacsh $3 3515250
650  4 $a Codes. $3 3560019
650  4 $a Virtual reality. $3 527460
650  4 $a Museums. $3 569592
650  4 $a Eye movements. $3 3564691
650  4 $a Cameras. $3 524039
650  4 $a Interactive learning. $3 3561476
650  4 $a Experiments. $3 525909
650  4 $a Design. $3 518875
650  4 $a Archaeology. $3 558412
650  4 $a Algorithms. $3 536374
653    $a Eye tracking
653    $a Human-computer interaction
653    $a Image processing
653    $a Learning
653    $a Security
690    $a 0537
690    $a 0984
690    $a 0464
690    $a 0710
690    $a 0389
690    $a 0324
710 2  $a University of New Hampshire. $b Electrical and Computer Engineering. $3 3426590
773 0  $t Masters Abstracts International $g 83-01.
790    $a 0141
791    $a M.S.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28499221
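For readers unfamiliar with MARC, the LDR (leader) line above is a fixed-length string whose character positions encode record-level metadata. A minimal sketch in plain Python of slicing a MARC 21 leader into named fields follows; the 24-character sample leader is padded for illustration (whitespace in the displayed record may have been collapsed), and the field boundaries follow the MARC 21 bibliographic format:

```python
# Minimal sketch: slice a 24-character MARC 21 leader into named positional
# fields. Field boundaries follow the MARC 21 bibliographic format.

def parse_leader(leader: str) -> dict:
    """Split a 24-character MARC 21 leader into its positional fields."""
    if len(leader) != 24:
        raise ValueError("MARC 21 leader must be exactly 24 characters")
    return {
        "record_length": leader[0:5],      # 00-04: logical record length
        "record_status": leader[5],        # 05: n = new, c = corrected, ...
        "type_of_record": leader[6],       # 06: m = computer file
        "bibliographic_level": leader[7],  # 07: m = monograph/item
        "character_coding": leader[9],     # 09: a = UCS/Unicode
        "base_address": leader[12:17],     # 12-16: base address of data
        "entry_map": leader[20:24],        # 20-23: "4500" in MARC 21
    }

# Sample leader padded to 24 characters for illustration:
fields = parse_leader("02907nmm a2200409 a 4500")
print(fields["type_of_record"], fields["bibliographic_level"])  # prints "m m"
```

For production use, a dedicated MARC library (e.g. pymarc) would be the usual choice; the sketch above only illustrates how the leader's positional layout is read.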
Items (1 record):

Inventory Number: W9471234
Location Name: 電子資源 (Electronic resources)
Item Class: 11.線上閱覽_V (Online reading)
Material Type: 電子書 (E-book)
Call Number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of Reservations: 0