Scalable Bayesian Reinforcement Learning.
Record Type: Electronic resources : Monograph/item
Title/Author: Scalable Bayesian Reinforcement Learning. / Lee, Gilwoo.
Author: Lee, Gilwoo.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2020.
Description: 113 p.
Notes: Source: Dissertations Abstracts International, Volume: 82-05, Section: B.
Contained By: Dissertations Abstracts International, 82-05B.
Subject: Artificial intelligence.
Online resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28091288
ISBN: 9798684669699
Scalable Bayesian Reinforcement Learning. / Lee, Gilwoo. - Ann Arbor : ProQuest Dissertations & Theses, 2020. - 113 p.
Source: Dissertations Abstracts International, Volume: 82-05, Section: B.
Thesis (Ph.D.)--University of Washington, 2020.
This item must not be sold to any third party vendors.
Informed and robust decision making in the face of uncertainty is critical for robots operating in unstructured environments. We formulate this problem as Bayesian Reinforcement Learning (BRL) over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms scale poorly to continuous state and action spaces. This thesis proposes a set of BRL algorithms that scale to complex control tasks. Our algorithms build on the following insight: robotics problems have structural priors that we can use to produce approximate models and experts that the agent can leverage. First, we propose an algorithm that improves a nominal model and policy with data-driven semi-parametric learning and optimal control. Then, we look into more general BRL tasks with complex latent models. We propose algorithms that combine batch reinforcement learning with experts to scale to complex latent tasks. Finally, through simulated and physical experiments, we demonstrate that our algorithms drastically outperform existing adaptive RL methods.
ISBN: 9798684669699
Subjects--Topical Terms: Artificial intelligence.
Subjects--Index Terms: Adaptive Control
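The abstract above frames the work as Bayesian Reinforcement Learning (BRL) over latent MDPs. As a point of reference only, here is a minimal, generic sketch of that formulation for a toy discrete problem: the agent keeps a posterior over a small set of candidate latent MDPs, acts by posterior (Thompson) sampling, and updates the belief from observed transitions. Everything in the snippet (the chain dynamics, the "slip" parameter, the function names) is hypothetical and illustrative; it is not the thesis's method, which targets continuous state and action spaces.

```python
# Generic sketch of BRL over latent MDPs (illustrative only; all values
# and dynamics are hypothetical, not taken from the thesis).
import numpy as np

rng = np.random.default_rng(0)


def make_chain_mdp(slip):
    """3-state chain; action 0 tries to move right, action 1 left."""
    P = np.zeros((2, 3, 3))  # P[a, s, s']
    for s in range(3):
        P[0, s, min(s + 1, 2)] = 1.0 - slip
        P[0, s, max(s - 1, 0)] += slip
        P[1, s, max(s - 1, 0)] = 1.0 - slip
        P[1, s, min(s + 1, 2)] += slip
    return P


def reward(s_next):
    return 1.0 if s_next == 2 else 0.0  # goal: reach the rightmost state


def greedy_action(P, s):
    # One-step lookahead on expected reward; enough for this toy chain.
    values = [sum(P[a, s, s2] * reward(s2) for s2 in range(3)) for a in range(2)]
    return int(np.argmax(values))


# Two candidate latent MDPs differing in an unobserved "slip" parameter.
mdps = [make_chain_mdp(slip=0.1), make_chain_mdp(slip=0.6)]
belief = np.array([0.5, 0.5])  # prior over which latent MDP is active
true_P = mdps[1]               # the environment the agent actually faces

s = 0
for t in range(20):
    k = rng.choice(len(mdps), p=belief)     # Thompson sampling: draw a latent MDP
    a = greedy_action(mdps[k], s)           # act as if the sampled MDP were true
    s_next = rng.choice(3, p=true_P[a, s])  # environment transition

    # Bayesian belief update from the observed (s, a, s') transition.
    likelihood = np.array([m[a, s, s_next] for m in mdps])
    belief = belief * likelihood
    belief /= belief.sum()
    s = s_next

print("posterior over latent MDPs:", np.round(belief, 3))
```

Per the abstract, the thesis itself works with continuous state and action spaces and leverages approximate models and experts rather than an exact discrete posterior; the snippet only pins down the basic vocabulary (belief over latent MDPs, Bayesian update, acting under the posterior).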
LDR  02216nmm a2200361 4500
001  2281729
005  20210920103353.5
008  220723s2020 ||||||||||||||||| ||eng d
020  $a 9798684669699
035  $a (MiAaPQ)AAI28091288
035  $a AAI28091288
040  $a MiAaPQ $c MiAaPQ
100 1  $a Lee, Gilwoo. $3 3560437
245 1 0  $a Scalable Bayesian Reinforcement Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300  $a 113 p.
500  $a Source: Dissertations Abstracts International, Volume: 82-05, Section: B.
500  $a Advisor: Srinivasa, Siddhartha S.
502  $a Thesis (Ph.D.)--University of Washington, 2020.
506  $a This item must not be sold to any third party vendors.
520  $a Informed and robust decision making in the face of uncertainty is critical for robots operating in unstructured environments. We formulate this problem as Bayesian Reinforcement Learning (BRL) over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms scale poorly to continuous state and action spaces. This thesis proposes a set of BRL algorithms that scale to complex control tasks. Our algorithms build on the following insight: robotics problems have structural priors that we can use to produce approximate models and experts that the agent can leverage. First, we propose an algorithm that improves a nominal model and policy with data-driven semi-parametric learning and optimal control. Then, we look into more general BRL tasks with complex latent models. We propose algorithms that combine batch reinforcement learning with experts to scale to complex latent tasks. Finally, through simulated and physical experiments, we demonstrate that our algorithms drastically outperform existing adaptive RL methods.
590  $a School code: 0250.
650  4 $a Artificial intelligence. $3 516317
650  4 $a Robotics. $3 519753
650  4 $a Statistical physics. $3 536281
650  4 $a Applied mathematics. $3 2122814
653  $a Adaptive Control
653  $a Bayesian Reinforcement Learning
653  $a Robust Reinforcement Learning
690  $a 0800
690  $a 0771
690  $a 0217
690  $a 0364
710 2  $a University of Washington. $b Computer Science and Engineering. $3 2097608
773 0  $t Dissertations Abstracts International $g 82-05B.
790  $a 0250
791  $a Ph.D.
792  $a 2020
793  $a English
856 4 0  $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28091288
Items
Inventory Number: W9433462
Location Name: 電子資源 (Electronic resources)
Item Class: 11.線上閱覽_V (Online reading)
Material type: 電子書 (E-book)
Call number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments: