Autonomous qualitative learning of distinctions and actions in a developing agent.
Record type:
Language material, printed : Monograph/item
Title/Author:
Autonomous qualitative learning of distinctions and actions in a developing agent.
Author:
Mugan, Jonathan William.
Physical description:
189 p.
Notes:
Source: Dissertation Abstracts International, Volume: 71-11, Section: B, page: 6889.
Contained By:
Dissertation Abstracts International, 71-11B.
Subject:
Engineering, Robotics.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3428996
ISBN:
9781124281346
Mugan, Jonathan William.
Autonomous qualitative learning of distinctions and actions in a developing agent. - 189 p.
Source: Dissertation Abstracts International, Volume: 71-11, Section: B, page: 6889.
Thesis (Ph.D.)--The University of Texas at Austin, 2010.
How can an agent bootstrap up from a pixel-level representation to autonomously learn high-level states and actions using only domain-general knowledge? This thesis attacks a piece of this problem: it assumes that an agent has a set of continuous variables describing the environment and a set of continuous motor primitives, and it addresses how such an agent can learn a set of useful states and effective higher-level actions through autonomous experience with the environment. Methods exist for learning models of the environment, and methods exist for planning; however, for autonomous learning these methods have been used almost exclusively in discrete environments.
ISBN: 9781124281346
Subjects--Topical Terms: Engineering, Robotics.
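The abstract describes a qualitative representation in which the agent initially only distinguishes whether each continuous variable is increasing, decreasing, or remaining steady, and later refines its discretization of continuous values. The following is a minimal illustrative sketch of that idea, not code from the thesis; the epsilon threshold, the landmark list, and the example trajectory are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not code from the dissertation.
# It shows one simple way to map a continuous variable onto the qualitative
# values the abstract mentions (increasing, decreasing, steady) and onto
# regions defined by learned landmark values.

def qualitative_direction(previous, current, epsilon=1e-3):
    """Map the change in a continuous variable to a qualitative direction."""
    delta = current - previous
    if delta > epsilon:
        return "increasing"
    if delta < -epsilon:
        return "decreasing"
    return "steady"

def qualitative_magnitude(value, landmarks):
    """Discretize a continuous value against a sorted list of landmark values."""
    for i, landmark in enumerate(sorted(landmarks)):
        if value < landmark:
            return i          # region below the i-th landmark
    return len(landmarks)     # region above all landmarks

if __name__ == "__main__":
    # A hypothetical hand position approaching a block at x = 0.5.
    trajectory = [0.10, 0.18, 0.31, 0.49, 0.50, 0.50]
    landmarks = [0.5]         # a landmark the agent might have learned
    for prev, curr in zip(trajectory, trajectory[1:]):
        print(qualitative_direction(prev, curr), qualitative_magnitude(curr, landmarks))
```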
LDR    02821nam 2200301 4500
001    1405157
005    20111206130422.5
008    130515s2010 ||||||||||||||||| ||eng d
020    $a 9781124281346
035    $a (UMI)AAI3428996
035    $a AAI3428996
040    $a UMI $c UMI
100 1  $a Mugan, Jonathan William. $3 1684515
245 10 $a Autonomous qualitative learning of distinctions and actions in a developing agent.
300    $a 189 p.
500    $a Source: Dissertation Abstracts International, Volume: 71-11, Section: B, page: 6889.
500    $a Adviser: Benjamin J. Kuipers.
502    $a Thesis (Ph.D.)--The University of Texas at Austin, 2010.
520    $a How can an agent bootstrap up from a pixel-level representation to autonomously learn high-level states and actions using only domain-general knowledge? This thesis attacks a piece of this problem: it assumes that an agent has a set of continuous variables describing the environment and a set of continuous motor primitives, and it addresses how such an agent can learn a set of useful states and effective higher-level actions through autonomous experience with the environment. Methods exist for learning models of the environment, and methods exist for planning; however, for autonomous learning these methods have been used almost exclusively in discrete environments.
520    $a This thesis proposes attacking the problem of learning high-level states and actions in continuous environments by using a qualitative representation to bridge the gap between continuous and discrete variable representations. In this approach, the agent begins with a broad discretization and initially can only tell whether the value of each variable is increasing, decreasing, or remaining steady. The agent then simultaneously learns a qualitative representation (discretization) and a set of predictive models of the environment. It converts these models into plans to form actions, and it uses those learned actions to explore the environment.
520    $a The method is evaluated using a simulated robot with realistic physics. The robot sits at a table that holds one or two blocks, as well as other distractor objects that are out of reach. The agent autonomously explores the environment without being given a task. After learning, the agent is given various tasks to determine whether it has learned the necessary states and actions to complete them. The results show that the agent was able to use this method to learn to perform the tasks autonomously.
590    $a School code: 0227.
650  4 $a Engineering, Robotics. $3 1018454
650  4 $a Computer Science. $3 626642
690    $a 0771
690    $a 0984
710 2  $a The University of Texas at Austin. $3 718984
773 0  $t Dissertation Abstracts International $g 71-11B.
790 10 $a Kuipers, Benjamin J., $e advisor
790    $a 0227
791    $a Ph.D.
792    $a 2010
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3428996
Holdings (1 item):
Barcode: W9168296
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Usage type: Normal (general use)
Loan status: On shelf
Holds: 0