Statistics Meets Optimization: Computational Guarantees for Statistical Learning Algorithms.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Statistics Meets Optimization: Computational Guarantees for Statistical Learning Algorithms. / Yang, Fan.
Author:
Yang, Fan.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2018.
Description:
141 p.
Notes:
Source: Dissertation Abstracts International, Volume: 80-08(E), Section: B.
Contained By:
Dissertation Abstracts International, 80-08B(E).
Subject:
Electrical engineering. - Statistics. - Computer science.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10931587
ISBN:
9781392035030
LDR    03307nmm a2200337 4500
001    2205137
005    20190718100536.5
008    201008s2018 ||||||||||||||||| ||eng d
020    $a 9781392035030
035    $a (MiAaPQ)AAI10931587
035    $a (MiAaPQ)berkeley:18303
035    $a AAI10931587
040    $a MiAaPQ $c MiAaPQ
100 1  $a Yang, Fan. $3 1020735
245 10 $a Statistics Meets Optimization: Computational Guarantees for Statistical Learning Algorithms.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 141 p.
500    $a Source: Dissertation Abstracts International, Volume: 80-08(E), Section: B.
500    $a Adviser: Martin J. Wainwright.
502    $a Thesis (Ph.D.)--University of California, Berkeley, 2018.
520    $a Modern technological advances have prompted massive-scale data collection in fields such as artificial intelligence and the traditional sciences alike. This has led to an increasing need for scalable machine learning algorithms and statistical methods to draw conclusions about the world. In all data-driven procedures, the data scientist faces the following fundamental questions: How should I design the learning algorithm, and how long should I run it? Which samples should I collect for training, and how many are sufficient to generalize conclusions to unseen data? These questions concern statistical and computational properties of both the data and the algorithm. This thesis explores their role in the areas of non-convex optimization, non-parametric estimation, active learning, and multiple testing.
520    $a In the first part, we provide insights of various flavors concerning the interplay between the statistical and computational properties of first-order methods applied to common estimation procedures. The expectation-maximization (EM) algorithm estimates the parameters of a latent variable model by running a first-order method on a non-convex landscape. We identify and characterize a general class of hidden Markov models for which linear convergence of EM to a statistically optimal point is provable for a large initialization radius. For non-parametric estimation problems, functional gradient descent (also called boosting) algorithms are used to estimate the best fit in infinite-dimensional function spaces. We develop a new proof technique showing that stopping the algorithm early may also yield an optimal estimator without explicit regularization. In fact, the same key quantities (localized complexities) underlie both traditional penalty-based and algorithmic regularization.
520    $a In the second part of the thesis, we explore how data collected adaptively, based on a constantly updated estimate, can lead to significant reductions in sample complexity for multiple hypothesis testing problems. In particular, we show how adaptive strategies can be used to simultaneously control the false discovery rate over multiple tests and return the best alternative (among many) for each test, with optimal sample complexity, in an online manner.
590    $a School code: 0028.
650  4 $a Electrical engineering. $3 649834
650  4 $a Statistics. $3 517247
650  4 $a Computer science. $3 523869
690    $a 0544
690    $a 0463
690    $a 0984
710 2  $a University of California, Berkeley. $b Electrical Engineering and Computer Sciences. $3 2096274
773 0  $t Dissertation Abstracts International $g 80-08B(E).
790    $a 0028
791    $a Ph.D.
792    $a 2018
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10931587
Items
Inventory Number: W9381686
Location Name: Electronic Resources (電子資源)
Item Class: 11. Online Reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0