Error Bounds and Applications for Stochastic Approximation with Non-decaying Gain.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Error Bounds and Applications for Stochastic Approximation with Non-decaying Gain.
Author: Zhu, Jingyi.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2020.
Description: 308 p.
Note: Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
Contained by: Dissertations Abstracts International, 82-03B.
Subject: Statistics.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28068816
ISBN: 9798662430402
Zhu, Jingyi. Error Bounds and Applications for Stochastic Approximation with Non-decaying Gain. - Ann Arbor : ProQuest Dissertations & Theses, 2020. - 308 p.
Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
Thesis (Ph.D.)--The Johns Hopkins University, 2020.
This item must not be sold to any third party vendors.
This work analyzes stochastic approximation algorithms with non-decaying gains as applied to time-varying problems. The setting is to minimize a sequence of scalar-valued loss functions f_k(·) at sampling times τ_k, or to locate the root of a sequence of vector-valued functions g_k(·) at τ_k, with respect to a parameter θ ∈ R^p. The available information is the noise-corrupted observation(s) of either f_k(·) or g_k(·) evaluated at only one or two design points. In this time-varying setup, the gain has to be bounded away from zero so that the recursive estimate θ̂_k can maintain its momentum in tracking the time-varying optimum θ*_k. Given that {θ*_k} is perpetually varying, the best property θ̂_k can have is to stay near the solution θ*_k (concentration behavior), in place of the convergence that is unattainable here. Chapter 3 provides a bound for the root-mean-squared error and a bound for the mean absolute deviation. The only assumption imposed on {θ*_k} is that the average distance between two consecutive underlying optimal parameter vectors is bounded from above; overall, the bounds are applicable under a mild assumption on the time-varying drift and a modest restriction on the observation noise and the bias term. After establishing the tracking capability in Chapter 3, Chapter 4 discusses the concentration behavior of θ̂_k. The weak-convergence limit of the continuous interpolation of θ̂_k is shown to follow the trajectory of a non-autonomous ordinary differential equation. The formula for variation of parameters is then applied to derive a computable upper bound for the probability that θ̂_k deviates from θ*_k beyond a certain threshold. Chapters 3 and 4 are probabilistic arguments and may not provide much guidance on gain-tuning strategies for a single experiment run.
Therefore, Chapter 5 discusses a data-dependent gain-tuning strategy based on estimating the Hessian information and the noise level. Overall, this work answers two questions: "what is the estimate for the dynamical system θ*_k?" and "how much can we trust θ̂_k as an estimate for θ*_k?"
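The tracking behavior described in the abstract can be illustrated with a minimal constant-gain stochastic gradient sketch. All specifics here (the quadratic loss f_k(x) = ½‖x − θ*_k‖², the random-walk drift model, and the gain value) are illustrative assumptions, not the dissertation's actual algorithms or bounds:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 2            # dimension of theta
a = 0.1          # constant (non-decaying) gain, bounded away from zero
noise_sd = 0.5   # observation-noise level on the gradient
drift_sd = 0.02  # per-step drift of the true optimum

theta_star = np.zeros(p)         # time-varying optimum theta*_k
theta_hat = 5.0 * np.ones(p)     # recursive estimate theta-hat_k

errors = []
for k in range(2000):
    # The optimum drifts; its average per-step movement is bounded above,
    # which mirrors the only assumption imposed on {theta*_k}.
    theta_star = theta_star + drift_sd * rng.standard_normal(p)
    # Noisy gradient observation of f_k(x) = 0.5 * ||x - theta*_k||^2.
    g = (theta_hat - theta_star) + noise_sd * rng.standard_normal(p)
    # Constant-gain SA update: the gain a never decays to zero.
    theta_hat = theta_hat - a * g
    errors.append(np.linalg.norm(theta_hat - theta_star))

# theta-hat_k does not converge, but its error concentrates near a plateau
# determined by the gain, the noise level, and the drift.
print(f"mean error, first 100 steps: {np.mean(errors[:100]):.3f}")
print(f"mean error, last 100 steps:  {np.mean(errors[-100:]):.3f}")
```

Because the target keeps moving, the error never shrinks to zero; instead it settles into a band around the drifting optimum, which is the concentration behavior the RMS and mean-absolute-deviation bounds quantify.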
ISBN: 9798662430402
Subjects--Topical Terms: Statistics.
Subjects--Index Terms: Stochastic approximation
LDR  03541nmm a2200409 4500
001  2277352
005  20210521101704.5
008  220723s2020 ||||||||||||||||| ||eng d
020  $a 9798662430402
035  $a (MiAaPQ)AAI28068816
035  $a (MiAaPQ)0098vireo5334Zhu
035  $a AAI28068816
040  $a MiAaPQ $c MiAaPQ
100 1  $a Zhu, Jingyi. $3 3555664
245 10 $a Error Bounds and Applications for Stochastic Approximation with Non-decaying Gain.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300  $a 308 p.
500  $a Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
500  $a Advisor: Hobbs, Benjamin F.
502  $a Thesis (Ph.D.)--The Johns Hopkins University, 2020.
506  $a This item must not be sold to any third party vendors.
520  $a [Abstract; identical to the abstract given above.]
590  $a School code: 0098.
650  4 $a Statistics. $3 517247
650  4 $a Artificial intelligence. $3 516317
650  4 $a Systems science. $3 3168411
653  $a Stochastic approximation
653  $a Non-decaying gain
653  $a Constant gain
653  $a Error bound
653  $a Time-varying systems
653  $a ODE limit
653  $a Second-order algorithms
690  $a 0463
690  $a 0790
690  $a 0800
710 2  $a The Johns Hopkins University. $b Applied Mathematics and Statistics. $3 3550441
773 0  $t Dissertations Abstracts International $g 82-03B.
790  $a 0098
791  $a Ph.D.
792  $a 2020
793  $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28068816
Holdings (1 item):
Barcode: W9429086
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0