Vision-Guided Policy Learning for Complex Tasks.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Vision-Guided Policy Learning for Complex Tasks.
Author: Ye, Xin.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Description: 131 p.
Note: Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained by: Dissertations Abstracts International, 83-03B.
Subject: Computer science.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28648241
ISBN: 9798535547701
Vision-Guided Policy Learning for Complex Tasks.
Ye, Xin.
Vision-Guided Policy Learning for Complex Tasks.
- Ann Arbor : ProQuest Dissertations & Theses, 2021 - 131 p.
Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Thesis (Ph.D.)--Arizona State University, 2021.
This item must not be sold to any third party vendors.
The field of computer vision has achieved tremendous progress over recent years with innovations in deep learning and neural networks. The advances have unprecedentedly enabled an intelligent agent to understand the world from its visual observations, such as recognizing an object, detecting the object's position, and estimating the distance to the object. It then comes to a question of how such visual understanding can be used to support the agent's decisions over its actions to perform a task. This dissertation aims to study this question in which several methods are presented to address the challenges in learning a desirable action policy from the agent's visual inputs for the agent to perform a task well. Specifically, this dissertation starts with learning an action policy from high dimensional visual observations by improving the sample efficiency. The improved sample efficiency is achieved through a denser reward function defined upon the visual understanding of the task, and an efficient exploration strategy equipped with a hierarchical policy. It further studies the generalizable action policy learning problem. The generalizability is achieved for both a fully observable task with local environment dynamic captured by visual representations, and a partially observable task with global environment dynamic captured by a novel graph representation. Finally, this dissertation explores learning from human-provided priors, such as natural language instructions and demonstration videos for better generalization ability.
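The abstract mentions improving sample efficiency with a denser reward function defined from the agent's visual understanding of the task. As a rough, hypothetical sketch of that general idea (not the dissertation's actual method), a sparse success reward can be supplemented with a shaping term based on a visually estimated distance to the target; the function name, scales, and constants below are illustrative assumptions.

def dense_reward(prev_dist, curr_dist, reached_goal,
                 success_bonus=10.0, shaping_scale=1.0, step_cost=0.01):
    """Shaped reward for a vision-based goal-reaching task (illustrative only).

    prev_dist / curr_dist: distance to the target estimated from the agent's
    visual observations (e.g., object detection plus depth estimation).
    reached_goal: whether the sparse task goal was achieved on this step.
    """
    sparse = success_bonus if reached_goal else 0.0    # sparse task reward
    shaping = shaping_scale * (prev_dist - curr_dist)  # dense term: reward progress toward the target
    return sparse + shaping - step_cost                # small per-step cost discourages long paths

# Hypothetical usage inside a rollout loop:
# r = dense_reward(prev_dist=2.4, curr_dist=2.1, reached_goal=False)  # -> 0.29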
ISBN: 9798535547701
Subjects--Topical Terms:
Computer science.
Subjects--Index Terms:
Vision-guided policy
Vision-Guided Policy Learning for Complex Tasks.
LDR    02718nmm a2200385 4500
001    2283852
005    20211115071703.5
008    220723s2021 ||||||||||||||||| ||eng d
020    $a 9798535547701
035    $a (MiAaPQ)AAI28648241
035    $a AAI28648241
040    $a MiAaPQ $c MiAaPQ
100 1  $a Ye, Xin. $3 1269287
245 10 $a Vision-Guided Policy Learning for Complex Tasks.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 131 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500    $a Advisor: Yang, Yezhou.
502    $a Thesis (Ph.D.)--Arizona State University, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a The field of computer vision has achieved tremendous progress over recent years with innovations in deep learning and neural networks. The advances have unprecedentedly enabled an intelligent agent to understand the world from its visual observations, such as recognizing an object, detecting the object's position, and estimating the distance to the object. It then comes to a question of how such visual understanding can be used to support the agent's decisions over its actions to perform a task. This dissertation aims to study this question in which several methods are presented to address the challenges in learning a desirable action policy from the agent's visual inputs for the agent to perform a task well. Specifically, this dissertation starts with learning an action policy from high dimensional visual observations by improving the sample efficiency. The improved sample efficiency is achieved through a denser reward function defined upon the visual understanding of the task, and an efficient exploration strategy equipped with a hierarchical policy. It further studies the generalizable action policy learning problem. The generalizability is achieved for both a fully observable task with local environment dynamic captured by visual representations, and a partially observable task with global environment dynamic captured by a novel graph representation. Finally, this dissertation explores learning from human-provided priors, such as natural language instructions and demonstration videos for better generalization ability.
590    $a School code: 0010.
650  4 $a Computer science. $3 523869
650  4 $a Robots. $3 529507
650  4 $a Robotics. $3 519753
650  4 $a Information science. $3 554358
653    $a Vision-guided policy
653    $a Policy learning
653    $a Complex tasks
653    $a Computer vision
653    $a Deep learning
653    $a Desirable action policy
690    $a 0984
690    $a 0771
690    $a 0723
710 2  $a Arizona State University. $b Computer Science. $3 1676136
773 0  $t Dissertations Abstracts International $g 83-03B.
790    $a 0010
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28648241
Holdings
Barcode: W9435585
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0