A reinforcement learning approach to predictive control design: Autonomous vehicle applications.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: A reinforcement learning approach to predictive control design: Autonomous vehicle applications.
Author: Jardine, P. Travis.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018
Physical description: 140 p.
Note: Source: Dissertation Abstracts International, Volume: 76-07C.
Contained by: Dissertation Abstracts International, 76-07C.
Subject: Electrical engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10857294
Jardine, P. Travis.
A reinforcement learning approach to predictive control design: Autonomous vehicle applications. - Ann Arbor : ProQuest Dissertations & Theses, 2018 - 140 p.
Source: Dissertation Abstracts International, Volume: 76-07C.
Thesis (Ph.D.)--Queen's University (Canada), 2018.
This research investigates the use of learning techniques to select control parameters in the Model Predictive Control (MPC) of autonomous vehicles. The general problem of having a vehicle track a target while adhering to constraints and minimizing control effort is defined. We further expand the problem to consider a vehicle for which the underlying dynamics are not well known. A game of Finite Action-Set Learning Automata (FALA) is used to select the weighting parameters in the MPC cost function. Fast Orthogonal Search (FOS) is combined with a Kalman Filter to simultaneously identify the model while estimating the system states. Planar inequality constraints are used to avoid spherical obstacles. The performance of these techniques is assessed for applications involving ground and aerial vehicles. Simulation and experimental results demonstrate that the combined FOS-FALA architecture reduces the overall number of design parameters that must be selected. The amount of reduction depends on the specific application. For the differential drive robot case considered here, the number of parameters was reduced from six to one. Furthermore, the learning strategy links the selection of these parameters to the desired performance. This is a significant improvement over the typical approach of trial and error.
Subjects--Topical Terms: Electrical engineering.
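The parameter-selection idea in the abstract can be illustrated with a short, self-contained Python sketch. This is not the dissertation's code: the candidate weight values, the reward shaping, and the run_episode() placeholder are illustrative assumptions; only the linear reward-inaction (L_R-I) update of a single finite action-set learning automaton is the standard technique the abstract names.

import numpy as np

# Sketch: one finite action-set learning automaton (FALA) picking an MPC
# weighting parameter from a discrete candidate set via the linear
# reward-inaction (L_R-I) update. Candidate values, reward shaping and
# run_episode() are hypothetical placeholders, not the author's code.

candidate_weights = np.array([0.1, 0.5, 1.0, 5.0, 10.0])   # finite action set
probs = np.full(len(candidate_weights), 1.0 / len(candidate_weights))
step = 0.1                                                  # L_R-I step size

def run_episode(weight):
    """Placeholder for a closed-loop MPC run returning a scalar cost
    (e.g. tracking error plus control effort) for the chosen weight."""
    return (weight - 1.0) ** 2   # pretend 1.0 is the best-performing weight

rng = np.random.default_rng(0)
for _ in range(300):
    action = rng.choice(len(candidate_weights), p=probs)    # sample an action
    cost = run_episode(candidate_weights[action])
    reward = 1.0 / (1.0 + cost)                             # map cost into [0, 1]
    # L_R-I update: reinforce the sampled action, shrink the others.
    probs = probs * (1.0 - step * reward)
    probs[action] += step * reward
    probs /= probs.sum()                                    # numerical guard

print("selected weight:", candidate_weights[np.argmax(probs)])

In this form the automaton gradually concentrates its probability mass on the weight whose closed-loop cost is lowest, which is the sense in which the learning links parameter selection to the desired performance.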
LDR    02292nmm a2200301 4500
001    2163405
005    20181022132250.5
008    190424s2018 ||||||||||||||||| ||eng d
035    $a (MiAaPQ)AAI10857294
035    $a (MiAaPQ)QueensUCan197424245
035    $a AAI10857294
040    $a MiAaPQ $c MiAaPQ
100 1  $a Jardine, P. Travis. $3 3351424
245 12 $a A reinforcement learning approach to predictive control design: Autonomous vehicle applications.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 140 p.
500    $a Source: Dissertation Abstracts International, Volume: 76-07C.
500    $a Advisers: Shahram Yousefi; Sidney Givigi.
502    $a Thesis (Ph.D.)--Queen's University (Canada), 2018.
520    $a This research investigates the use of learning techniques to select control parameters in the Model Predictive Control (MPC) of autonomous vehicles. The general problem of having a vehicle track a target while adhering to constraints and minimizing control effort is defined. We further expand the problem to consider a vehicle for which the underlying dynamics are not well known. A game of Finite Action-Set Learning Automata (FALA) is used to select the weighting parameters in the MPC cost function. Fast Orthogonal Search (FOS) is combined with a Kalman Filter to simultaneously identify the model while estimating the system states. Planar inequality constraints are used to avoid spherical obstacles. The performance of these techniques is assessed for applications involving ground and aerial vehicles. Simulation and experimental results demonstrate that the combined FOS-FALA architecture reduces the overall number of design parameters that must be selected. The amount of reduction depends on the specific application. For the differential drive robot case considered here, the number of parameters was reduced from six to one. Furthermore, the learning strategy links the selection of these parameters to the desired performance. This is a significant improvement over the typical approach of trial and error.
590    $a School code: 0283.
650  4 $a Electrical engineering. $3 649834
650  4 $a Computer engineering. $3 621879
650  4 $a Automotive engineering. $3 2181195
690    $a 0544
690    $a 0464
690    $a 0540
710 2  $a Queen's University (Canada). $3 1017786
773 0  $t Dissertation Abstracts International $g 76-07C.
790    $a 0283
791    $a Ph.D.
792    $a 2018
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10857294
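The 520 abstract also mentions replacing spherical keep-out regions with planar inequality constraints. A minimal sketch of that standard construction follows; the function name, the 3-D example values, and the use of the vehicle's current position to pick the tangent plane are assumptions for illustration, not the dissertation's implementation.

import numpy as np

# Sketch: approximate "stay outside a sphere" with one linear half-space,
# the tangent plane of the sphere facing the vehicle. Returns (n, b) such
# that n @ p >= b is a convex constraint usable inside an MPC problem.
def planar_obstacle_constraint(vehicle_pos, obstacle_center, radius):
    direction = vehicle_pos - obstacle_center
    n = direction / np.linalg.norm(direction)   # outward unit normal
    b = float(n @ obstacle_center) + radius     # offset of the tangent plane
    return n, b

# Hypothetical example: obstacle of radius 2 at the origin, vehicle at (5, 0, 1).
n, b = planar_obstacle_constraint(np.array([5.0, 0.0, 1.0]),
                                  np.array([0.0, 0.0, 0.0]), 2.0)
candidate = np.array([3.0, 0.5, 1.0])           # a predicted position to check
print("satisfies planar constraint:", bool(n @ candidate >= b))

Any point satisfying the half-space constraint lies outside the sphere, so the linear constraint acts as a conservative, convex stand-in for the nonconvex avoidance condition.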
Holdings (1 item)
Barcode: W9362952
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0