Statistics Meets Optimization: Computational Guarantees for Statistical Learning Algorithms.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Statistics Meets Optimization: Computational Guarantees for Statistical Learning Algorithms.
Author: Yang, Fan.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018
Physical description: 141 p.
Note: Source: Dissertation Abstracts International, Volume: 80-08(E), Section: B.
Contained By: Dissertation Abstracts International 80-08B(E).
Subject: Electrical engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10931587
ISBN: 9781392035030
LDR 03307nmm a2200337 4500
001 2205137
005 20190718100536.5
008 201008s2018 ||||||||||||||||| ||eng d
020    $a 9781392035030
035    $a (MiAaPQ)AAI10931587
035    $a (MiAaPQ)berkeley:18303
035    $a AAI10931587
040    $a MiAaPQ $c MiAaPQ
100 1  $a Yang, Fan. $3 1020735
245 1 0 $a Statistics Meets Optimization: Computational Guarantees for Statistical Learning Algorithms.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 141 p.
500    $a Source: Dissertation Abstracts International, Volume: 80-08(E), Section: B.
500    $a Adviser: Martin J. Wainwright.
502    $a Thesis (Ph.D.)--University of California, Berkeley, 2018.
520    $a Modern technological advances have prompted massive-scale data collection in many modern fields such as artificial intelligence and the traditional sciences alike. This has led to an increasing need for scalable machine learning algorithms and statistical methods to draw conclusions about the world. In all data-driven procedures, the data scientist faces the following fundamental questions: How should I design the learning algorithm, and how long should I run it? Which samples should I collect for training, and how many are sufficient to generalize conclusions to unseen data? These questions relate to statistical and computational properties of both the data and the algorithm. This thesis explores their role in the areas of non-convex optimization, non-parametric estimation, active learning, and multiple testing.
520    $a In the first part, we provide insights of a different flavor concerning the interplay between statistical and computational properties of first-order methods applied to common estimation procedures. The expectation-maximization (EM) algorithm estimates the parameters of a latent variable model by running a first-order method on a non-convex landscape. We identify and characterize a general class of hidden Markov models for which linear convergence of EM to a statistically optimal point is provable for a large initialization radius. For non-parametric estimation problems, functional gradient descent algorithms (also called boosting) are used to estimate the best fit in infinite-dimensional function spaces. We develop a new proof technique showing that stopping the algorithm early may instead also yield an optimal estimator without explicit regularization. In fact, the same key quantities (localized complexities) underlie both traditional penalty-based and algorithmic regularization.
520    $a In the second part of the thesis, we explore how data collected adaptively, using constantly updated estimates, can lead to a significant reduction in sample complexity for multiple hypothesis testing problems. In particular, we show how adaptive strategies can be used to simultaneously control the false discovery rate over multiple tests and return the best alternative (among many) for each test with optimal sample complexity in an online manner.
590    $a School code: 0028.
650  4 $a Electrical engineering. $3 649834
650  4 $a Statistics. $3 517247
650  4 $a Computer science. $3 523869
690    $a 0544
690    $a 0463
690    $a 0984
710 2  $a University of California, Berkeley. $b Electrical Engineering and Computer Sciences. $3 2096274
773 0  $t Dissertation Abstracts International $g 80-08B(E).
790    $a 0028
791    $a Ph.D.
792    $a 2018
793    $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10931587
Holdings (1 item)
Barcode: W9381686
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Usage type: Normal
Loan status: On shelf
Reservations: 0