Optimal learning in high dimensions.
Li, Yan.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Optimal learning in high dimensions. / Li, Yan.
Author:
Li, Yan.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2016.
Description:
185 p.
Notes:
Source: Dissertation Abstracts International, Volume: 78-05(E), Section: B.
Contained By:
Dissertation Abstracts International, 78-05B(E).
Subject:
Operations research.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10239622
ISBN:
9781369378078
Li, Yan. Optimal learning in high dimensions. - Ann Arbor : ProQuest Dissertations & Theses, 2016. - 185 p.
Source: Dissertation Abstracts International, Volume: 78-05(E), Section: B.
Thesis (Ph.D.)--Princeton University, 2016.
Collecting information in the course of sequential decision-making can be extremely challenging in high-dimensional settings, where the measurement budget is much smaller than both the number of alternatives and the number of parameters in the model. In the parametric setting, we derive a knowledge gradient policy for high-dimensional sparse additive belief models, in which there are hundreds or even thousands of features but only a small fraction of them carry explanatory power. This policy is a novel hybrid of Bayesian ranking and selection and a frequentist learning approach, the Lasso. In particular, our method uses a B-spline basis of finite order to approximate the nonparametric additive model and the functional ANOVA model. Theoretically, we provide estimation error bounds for the posterior mean estimate and the functional estimate. We also demonstrate how this method can be applied to learn the structure of large RNA molecules. In the nonparametric setting, we explore high-dimensional sparse belief functions without placing any assumptions on the model structure. A knowledge gradient policy in the framework of regularized regression trees is developed. This policy provides an effective and efficient method for sequential information collection as well as feature selection for nonparametric belief models. We also show how this method can be used in two clinical settings: identifying optimal clinical pathways for patients, and reducing medical expenses by finding the best doctors for a sequence of patients.
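The abstract centers on knowledge gradient policies, which score each alternative by the expected improvement in the best posterior mean after one more measurement. As a point of reference, here is a minimal sketch of the classic knowledge gradient for independent normal beliefs, not the sparse additive or tree-based variants developed in the thesis; the means, variances, and noise level below are purely illustrative.

```python
import math

def kg_scores(mu, sigma2, lam):
    """Knowledge-gradient value of one more measurement of each alternative,
    under independent normal beliefs (posterior means mu, variances sigma2)
    and Gaussian measurement noise with variance lam."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # normal pdf
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # normal cdf
    scores = []
    for x in range(len(mu)):
        # Std. dev. of the predictive change in the posterior mean of x
        sigma_tilde = sigma2[x] / math.sqrt(sigma2[x] + lam)
        # Normalized distance to the best competing mean
        best_other = max(mu[i] for i in range(len(mu)) if i != x)
        zeta = -abs(mu[x] - best_other) / sigma_tilde
        # KG factor: sigma_tilde * E[max(0, Z + zeta)] for standard normal Z
        scores.append(sigma_tilde * (zeta * Phi(zeta) + phi(zeta)))
    return scores

# Illustrative beliefs: alternative 2 has a slightly lower mean but a much
# more uncertain belief, so measuring it is the most informative choice.
means = [1.0, 1.2, 0.8]
variances = [4.0, 1.0, 9.0]
scores = kg_scores(means, variances, lam=2.0)
choice = max(range(len(scores)), key=scores.__getitem__)
```

The high-dimensional setting of the thesis replaces these independent beliefs with sparse additive or regression-tree belief models, but the one-step look-ahead structure of the policy is the same.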
ISBN: 9781369378078
Subjects--Topical Terms: Operations research.
LDR
:02452nmm a2200289 4500
001
2122556
005
20170922124926.5
008
180830s2016 ||||||||||||||||| ||eng d
020
$a
9781369378078
035
$a
(MiAaPQ)AAI10239622
035
$a
AAI10239622
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Li, Yan.
$3
1028952
245
1 0
$a
Optimal learning in high dimensions.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2016
300
$a
185 p.
500
$a
Source: Dissertation Abstracts International, Volume: 78-05(E), Section: B.
500
$a
Adviser: Warren B. Powell.
502
$a
Thesis (Ph.D.)--Princeton University, 2016.
520
$a
Collecting information in the course of sequential decision-making can be extremely challenging in high-dimensional settings, where the measurement budget is much smaller than both the number of alternatives and the number of parameters in the model. In the parametric setting, we derive a knowledge gradient policy for high-dimensional sparse additive belief models, in which there are hundreds or even thousands of features but only a small fraction of them carry explanatory power. This policy is a novel hybrid of Bayesian ranking and selection and a frequentist learning approach, the Lasso. In particular, our method uses a B-spline basis of finite order to approximate the nonparametric additive model and the functional ANOVA model. Theoretically, we provide estimation error bounds for the posterior mean estimate and the functional estimate. We also demonstrate how this method can be applied to learn the structure of large RNA molecules. In the nonparametric setting, we explore high-dimensional sparse belief functions without placing any assumptions on the model structure. A knowledge gradient policy in the framework of regularized regression trees is developed. This policy provides an effective and efficient method for sequential information collection as well as feature selection for nonparametric belief models. We also show how this method can be used in two clinical settings: identifying optimal clinical pathways for patients, and reducing medical expenses by finding the best doctors for a sequence of patients.
590
$a
School code: 0181.
650
4
$a
Operations research.
$3
547123
650
4
$a
Statistics.
$3
517247
690
$a
0796
690
$a
0463
710
2
$a
Princeton University.
$b
Operations Research and Financial Engineering.
$3
2096743
773
0
$t
Dissertation Abstracts International
$g
78-05B(E).
790
$a
0181
791
$a
Ph.D.
792
$a
2016
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10239622
Items (1 record):
Inventory Number: W9333171
Location Name: Electronic Resources
Item Class: 01. Loan (Book)_YB
Material Type: E-book
Call Number: EB
Usage Class: Normal
Loan Status: On shelf
No. of reservations: 0