Optimal learning in high dimensions.
Li, Yan.
Record type:
Bibliographic record - electronic resource : Monograph/item
Title / Author:
Optimal learning in high dimensions. / Li, Yan.
Author:
Li, Yan.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2016.
Extent:
185 p.
Notes:
Source: Dissertation Abstracts International, Volume: 78-05(E), Section: B.
Contained by:
Dissertation Abstracts International, 78-05B(E).
Subject:
Operations research.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10239622
ISBN:
9781369378078
Dissertation note:
Thesis (Ph.D.)--Princeton University, 2016.
Abstract:
Collecting information in the course of sequential decision-making can be extremely challenging in high-dimensional settings, where the measurement budget is much smaller than both the number of alternatives and the number of parameters in the model. In the parametric setting, we derive a knowledge gradient policy for high-dimensional sparse additive belief models, in which there are hundreds or even thousands of features but only a small subset of them carries explanatory power. This policy is a novel hybrid of Bayesian ranking and selection with a frequentist learning approach, the Lasso. In particular, our method uses a B-spline basis of finite order to approximate the nonparametric additive model and the functional ANOVA model. Theoretically, we provide estimation error bounds for the posterior mean estimate and the functional estimate. We also demonstrate how the method can be applied to learn the structure of large RNA molecules. In the nonparametric setting, we explore high-dimensional sparse belief functions without placing any assumptions on the model structure, and develop a knowledge gradient policy in the framework of regularized regression trees. This policy provides an effective and efficient method for sequential information collection as well as feature selection for nonparametric belief models. We also show how this method can be used in two clinical settings: identifying optimal clinical pathways for patients, and reducing medical expenses while finding the best doctors for a sequence of patients.
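The abstract pairs a knowledge gradient (KG) measurement policy with a Lasso-estimated sparse belief model. The dissertation's actual hybrid algorithm is not reproduced here; purely as orientation, the Python sketch below runs a generic KG loop for ranking and selection with independent normal beliefs and then fits a Lasso to the few observations collected, recovering which features carry signal. The problem sizes, priors, noise model, and the loose way the two pieces sit side by side are all illustrative assumptions, not the author's method.

# Rough, illustrative sketch only -- NOT the algorithm from the dissertation.
# It shows the two ingredients the abstract names: a knowledge-gradient (KG)
# measurement policy for ranking and selection, and a Lasso fit that recovers
# a sparse linear model from the small number of observations collected.
import math
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

M, p = 200, 50                       # alternatives, features (toy sizes)
X = rng.normal(size=(M, p))          # one feature row per alternative
theta_true = np.zeros(p)
theta_true[:5] = rng.normal(size=5)  # only 5 features carry signal (sparse truth)
truth = X @ theta_true               # true mean value of each alternative
noise_var = 1.0                      # known measurement noise variance

mu = np.zeros(M)                     # prior means, one per alternative
var = np.full(M, 25.0)               # prior variances

def kg_scores(mu, var, noise_var):
    """KG factor for independent normal beliefs (standard ranking-and-selection form)."""
    sigma_tilde = var / np.sqrt(var + noise_var)
    best_other = np.array([np.delete(mu, i).max() for i in range(len(mu))])
    zeta = -np.abs(mu - best_other) / np.maximum(sigma_tilde, 1e-12)
    return sigma_tilde * (zeta * norm.cdf(zeta) + norm.pdf(zeta))

obs_X, obs_y = [], []
budget = 40                          # measurement budget far below M (toy value)
for _ in range(budget):
    i = int(np.argmax(kg_scores(mu, var, noise_var)))      # alternative to measure
    y = truth[i] + rng.normal(scale=math.sqrt(noise_var))  # noisy observation
    new_var = 1.0 / (1.0 / var[i] + 1.0 / noise_var)       # conjugate normal update
    mu[i] = new_var * (mu[i] / var[i] + y / noise_var)
    var[i] = new_var
    obs_X.append(X[i]); obs_y.append(y)

# A Lasso fit on the collected data recovers which features matter; in the
# dissertation a sparse estimate of this kind drives the belief model itself.
lasso = Lasso(alpha=0.1).fit(np.array(obs_X), np.array(obs_y))
print("features kept by Lasso:", np.flatnonzero(lasso.coef_))
print("believed best:", int(np.argmax(mu)), "| true best:", int(np.argmax(truth)))

The dissertation replaces the independent-normal belief used here with sparse additive belief models (and, in the nonparametric part, regularized regression trees), which is what makes measurement selection workable when the features number in the hundreds or thousands.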
MARC record:
LDR    02452nmm a2200289 4500
001    2122556
005    20170922124926.5
008    180830s2016 ||||||||||||||||| ||eng d
020    $a 9781369378078
035    $a (MiAaPQ)AAI10239622
035    $a AAI10239622
040    $a MiAaPQ $c MiAaPQ
100 1  $a Li, Yan. $3 1028952
245 10 $a Optimal learning in high dimensions.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2016
300    $a 185 p.
500    $a Source: Dissertation Abstracts International, Volume: 78-05(E), Section: B.
500    $a Adviser: Warren B. Powell.
502    $a Thesis (Ph.D.)--Princeton University, 2016.
520    $a Collecting information in the course of sequential decision-making can be extremely challenging in high-dimensional settings, where the measurement budget is much smaller than both the number of alternatives and the number of parameters in the model. In the parametric setting, we derive a knowledge gradient policy for high-dimensional sparse additive belief models, in which there are hundreds or even thousands of features but only a small subset of them carries explanatory power. This policy is a novel hybrid of Bayesian ranking and selection with a frequentist learning approach, the Lasso. In particular, our method uses a B-spline basis of finite order to approximate the nonparametric additive model and the functional ANOVA model. Theoretically, we provide estimation error bounds for the posterior mean estimate and the functional estimate. We also demonstrate how the method can be applied to learn the structure of large RNA molecules. In the nonparametric setting, we explore high-dimensional sparse belief functions without placing any assumptions on the model structure, and develop a knowledge gradient policy in the framework of regularized regression trees. This policy provides an effective and efficient method for sequential information collection as well as feature selection for nonparametric belief models. We also show how this method can be used in two clinical settings: identifying optimal clinical pathways for patients, and reducing medical expenses while finding the best doctors for a sequence of patients.
590    $a School code: 0181.
650  4 $a Operations research. $3 547123
650  4 $a Statistics. $3 517247
690    $a 0796
690    $a 0463
710 2  $a Princeton University. $b Operations Research and Financial Engineering. $3 2096743
773 0  $t Dissertation Abstracts International $g 78-05B(E).
790    $a 0181
791    $a Ph.D.
792    $a 2016
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10239622
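For readers not used to MARC notation (a three-character tag, up to two indicator digits, then $-prefixed subfields), the short sketch below parses a few lines in the textual layout used above into a tag-to-subfields mapping. It is only an illustration of the structure: real MARC data is exchanged as ISO 2709 or MARCXML and is normally handled with a dedicated library such as pymarc, and the regular expression here assumes the exact spacing shown in this record.

# Minimal sketch: turn textual MARC lines (as laid out above) into a dict
# mapping tag -> list of subfield dicts.  Illustration only.
import re
from collections import defaultdict

# A few data-field lines copied from the record above.
marc_lines = [
    "100 1  $a Li, Yan. $3 1028952",
    "245 10 $a Optimal learning in high dimensions.",
    "260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2016",
    "650  4 $a Operations research. $3 547123",
    "856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10239622",
]

record = defaultdict(list)
for line in marc_lines:
    tag = line[:3]                     # e.g. "245"
    body = line[line.index("$"):]      # subfield data starts at the first "$"
    subfields = {code: value.strip()
                 for code, value in re.findall(r"\$(\w)\s+([^$]*)", body)}
    record[tag].append(subfields)

print(record["245"][0]["a"])   # Optimal learning in high dimensions.
print(record["856"][0]["u"])   # link to the ProQuest full text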
Holdings:
Barcode: W9333171
Location: Electronic resources
Circulation category: 01.外借(書)_YB
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0