Feedforward Learning Control for Multi Actuator Hard Drives and Freeform 3D Printers.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Feedforward Learning Control for Multi Actuator Hard Drives and Freeform 3D Printers.
Author: Chen, Zhi.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Extent: 110 p.
Note: Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained By: Dissertations Abstracts International, 83-03B.
Subject: Mechanical engineering.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28491639
ISBN: 9798535551432
Thesis (Ph.D.)--University of California, Berkeley, 2021.
This item must not be sold to any third party vendors.
This work addresses feedforward disturbance rejection problems and trajectory tracking problems for both small-scale linear systems and large-scale nonlinear systems. The feedforward disturbance rejection problem, acoustic control problem, and trajectory tracking problem can all be formulated in a unified manner as an optimization problem to minimize the difference between the plant output, under time-varying disturbances, and the desired reference response. For a single-input-single-output (SISO) linear system, the compensator or the tracking controller can be represented by an FIR or IIR filter. When the statistics of the reference signals are unknown, an adaptive filter is used to find the optimal controller parameters with recursive algorithms. The least mean squares (LMS) algorithm and recursive least squares (RLS) have been widely used for feedforward adaptive control. However, when the reference is a sequence of impulses or wavelets, these algorithms may converge slowly or even diverge.

In the first part of this work, a novel iterative batch least squares (IBLS) learning algorithm is developed for adaptive filtering with a reference consisting of a sequence of impulses or wavelets. The algorithm is formulated as a stochastic Newton optimization method with batch processing. The IBLS algorithm has been applied to a multi-actuator hard disk drive (HDD) to attenuate the vibration generated by the seeking actuator.

For a large-scale multi-input-multi-output (MIMO) nonlinear system, the track following problem can be hard to solve. Recently, significant progress has been made in deep reinforcement learning, which provides the flexibility to solve complex tasks from high-dimensional sensory inputs without knowing the dynamics of the environment; to do so, deep neural networks are used to approximate the action-value function, or Q-function. In the second part of this thesis, we develop a modified deep deterministic policy gradient (DDPG) algorithm to address the trajectory following problem in a large-scale system with unknown dynamics. In this method, the reward function is defined as a function of the system state and its reference, and it is maximized when the state of the system follows the desired trajectory. The modified DDPG algorithm is applied to a freeform 3D printing system to neutralize the effect of gravity and build a filament with the desired shape.
Subjects--Topical Terms: Mechanical engineering.
Subjects--Index Terms: 3 dimensional printers
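For orientation only: the abstract above notes that the feedforward compensator can be represented as an FIR filter whose coefficients are adapted with the LMS algorithm when the reference statistics are unknown. The sketch below runs a textbook LMS update for an adaptive FIR filter on synthetic signals; it is not the IBLS algorithm developed in the thesis, and the filter length, step size, reference signal, and "unknown" path are assumed values chosen purely for illustration.

import numpy as np

# Minimal LMS adaptive FIR filter (illustrative sketch, not the thesis algorithm).
# The filter learns weights w so that w^T x[n] tracks a desired signal d[n],
# here generated by passing the reference through an arbitrary demo path.

rng = np.random.default_rng(0)

n_taps = 16          # FIR filter length (assumed)
mu = 0.01            # LMS step size (assumed)
n_samples = 5000

# Reference signal measured upstream (e.g. a vibration pickup); white noise here.
x = rng.standard_normal(n_samples)

# "Unknown" path from the reference to the point to be tracked/cancelled.
true_path = np.array([0.6, -0.3, 0.15, 0.05])
d = np.convolve(x, true_path)[:n_samples]

w = np.zeros(n_taps)       # adaptive FIR weights
x_buf = np.zeros(n_taps)   # most recent reference samples, newest first
err = np.zeros(n_samples)

for n in range(n_samples):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = x[n]
    y = w @ x_buf          # filter output
    e = d[n] - y           # residual that a sensor would measure
    w += mu * e * x_buf    # LMS gradient-descent update
    err[n] = e

print("mean squared error, first 500 samples:", np.mean(err[:500] ** 2))
print("mean squared error, last 500 samples :", np.mean(err[-500:] ** 2))

The abstract also describes, for the modified DDPG formulation, a reward defined on the system state and its reference that is largest when the state follows the desired trajectory. The exact reward used in the thesis is not given here; a quadratic tracking penalty of the following form is one common assumption:

import numpy as np

def tracking_reward(state, reference, weight=1.0):
    # Illustrative reward: zero when the state matches the reference,
    # increasingly negative as the tracking error grows.
    error = np.asarray(state, dtype=float) - np.asarray(reference, dtype=float)
    return -weight * float(error @ error)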
LDR 03532nmm a2200337 4500
001 2283811
005 20211115071654.5
008 220723s2021 ||||||||||||||||| ||eng d
020 $a 9798535551432
035 $a (MiAaPQ)AAI28491639
035 $a AAI28491639
040 $a MiAaPQ $c MiAaPQ
100 1 $a Chen, Zhi. $3 940397
245 1 0 $a Feedforward Learning Control for Multi Actuator Hard Drives and Freeform 3D Printers.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300 $a 110 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500 $a Advisor: Horowitz, Roberto.
502 $a Thesis (Ph.D.)--University of California, Berkeley, 2021.
506 $a This item must not be sold to any third party vendors.
520 $a This work addresses feedforward disturbance problems and trajectory tracking problems for both small-scale linear systems and large-scale nonlinear systems. The feedforward disturbance rejection problem, acoustic control problem, and trajectory tracking problem can all be formulated in a united manner as an optimization problem to minimize the difference between the plant output, under time-varying disturbances, and the desired reference response. For a single-input-single-output (SISO) linear system, the compensator or the tracking controller can be represented by an FIR filter or IIR filter. When statistics of the reference signals are unknown, an adaptive filter is used to find the optimal controller parameters based on some recursive algorithms. The least mean squares (LMS) algorithm and recursive least squares (RLS) have been widely used for feedforward adaptive control. However, when the reference is a sequence of impulses or wavelets, these algorithms may converge slowly or even diverge. In the first part of this work, a novel iterative batch least squares (IBLS) learning algorithm is developed for adaptive filtering with the reference consisting of a sequence of impulses or wavelets. The algorithm is formulated as a stochastic Newton optimization method with batch processing. The IBLS algorithm has been applied to a multi-actuator hard disk drive (HDD) to attenuate the vibration generated by the seeking actuator. For a large-scale multi-input-multi-output (MIMO) nonlinear system, the track following problem could be hard to solve. Recently, significant progress has been made in deep reinforcement learning that provides the flexibility to solve complex tasks from high-dimensional sensory inputs without knowing the dynamics of the environment. To do so, deep neural networks were used to approximate the action-value function or Q-function. In the second part of this thesis, we developed a modified deep deterministic policy gradient (DDPG) algorithm to address the trajectory following problem in a large-scale system with unknown dynamics. In this method, the reward function is defined as a function of the system state and its reference, which is maximized as long as the state of the system follows the desired trajectory. The modified DDPG algorithm is applied to a freeform 3D printing system to neutralize the effect of gravity and build a filament with the desired shape.
590 $a School code: 0028.
650 4 $a Mechanical engineering. $3 649730
650 4 $a Computer science. $3 523869
653 $a 3 dimensional printers
653 $a Multi actuator hard drives
653 $a Learning control
690 $a 0548
690 $a 0984
710 2 $a University of California, Berkeley. $b Mechanical Engineering. $3 1043692
773 0 $t Dissertations Abstracts International $g 83-03B.
790 $a 0028
791 $a Ph.D.
792 $a 2021
793 $a English
856 4 0 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28491639
Holdings (1 item):
Barcode: W9435544
Location: Electronic resources
Circulation category: Online viewing (11.線上閱覽_V)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0