Some perspectives of sparse statistical modeling.
Record type:
Bibliographic record - electronic resource : Monograph/item
Title/Author:
Some perspectives of sparse statistical modeling.
Author:
Zou, Hui.
Physical description:
101 p.
Notes:
Source: Dissertation Abstracts International, Volume: 66-08, Section: B, page: 4310.
Contained By:
Dissertation Abstracts International 66-08B.
Subject:
Statistics.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3186437
ISBN:
9780542287398
Dissertation note:
Thesis (Ph.D.)--Stanford University, 2005.
Abstract:
In this thesis we develop some new sparse modeling techniques and related theory. We first point out the fundamental drawbacks of the lasso in some scenarios: (1) the number of predictors (greatly) exceeds the number of observations; (2) the predictors are highly correlated and form "groups". A typical example where these scenarios naturally occur is the gene selection problem in microarray analysis. We then propose the elastic net, a new regularization and variable selection method, to improve upon the lasso. In this domain we show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors is much bigger than the number of samples. We also propose an algorithm called LARS-EN for efficiently computing the entire elastic-net regularization path, much like the LARS algorithm does for the lasso.
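The abstract describes the elastic net as a penalized least-squares method that combines the lasso's L1 penalty with a ridge (L2) penalty, with the LARS-EN algorithm computing the full regularization path. As a rough illustration only, not taken from the dissertation, the sketch below fits an elastic net in the "many more predictors than observations" setting the abstract highlights, using scikit-learn's ElasticNet estimator; scikit-learn solves the same lasso-plus-ridge criterion by coordinate descent rather than LARS-EN, and all variable names and parameter values here are illustrative assumptions.

# A minimal sketch, not from the dissertation: an elastic net fit when the
# number of predictors greatly exceeds the number of observations.
# scikit-learn's ElasticNet minimizes a lasso + ridge penalized least-squares
# criterion (by coordinate descent, not the LARS-EN algorithm named above).
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n, p = 50, 200                                   # p greatly exceeds n
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)    # a pair of strongly correlated predictors
beta = np.zeros(p)
beta[:5] = 2.0                                   # sparse true coefficient vector
y = X @ beta + 0.5 * rng.normal(size=n)

# l1_ratio blends the two penalties: 1.0 is the pure lasso, 0.0 is pure ridge.
fit = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10_000).fit(X, y)
print("selected predictors:", np.flatnonzero(fit.coef_))

Because of the grouping effect the abstract mentions, the strongly correlated pair of predictors tends to enter or leave the selected set together rather than one arbitrarily standing in for the other.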
Some perspectives of sparse statistical modeling.
LDR    03016nmm 2200289 4500
001    1827998
005    20061228142248.5
008    130610s2005 eng d
020    $a 9780542287398
035    $a (UnM)AAI3186437
035    $a AAI3186437
040    $a UnM $c UnM
100 1  $a Zou, Hui. $3 1916910
245 10 $a Some perspectives of sparse statistical modeling.
300    $a 101 p.
500    $a Source: Dissertation Abstracts International, Volume: 66-08, Section: B, page: 4310.
500    $a Adviser: Trevor Hastie.
502    $a Thesis (Ph.D.)--Stanford University, 2005.
520    $a In this thesis we develop some new sparse modeling techniques and related theory. We first point out the fundamental drawbacks of the lasso in some scenarios: (1) the number of predictors (greatly) exceeds the number of observations; (2) the predictors are highly correlated and form "groups". A typical example where these scenarios naturally occur is the gene selection problem in microarray analysis. We then propose the elastic net, a new regularization and variable selection method, to improve upon the lasso. In this domain we show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors is much bigger than the number of samples. We also propose an algorithm called LARS-EN for efficiently computing the entire elastic-net regularization path, much like the LARS algorithm does for the lasso.
520    $a In the second part of the thesis, we propose a principled approach called SPCA for modifying PCA based on a novel sparse PCA criterion, in which an elastic net constraint is used to produce sparse loadings. To solve the optimization problem in SPCA, we consider an alternating algorithm which iterates between the elastic net and the reduced-rank Procrustes rotation. SPCA allows flexible control of the sparse structure of the resulting loadings and has the ability to identify important variables.
520    $a In the third part of the thesis, we study the degrees of freedom of the lasso in the framework of SURE theory. We prove that the number of non-zero coefficients is an unbiased estimate for the degrees of freedom of the lasso---a conclusion requiring no special assumption on the predictors. Our analysis also provides mathematical support for a related conjecture by Efron et al. (2004). As an application, various model selection criteria---Cp, AIC and BIC---are defined, which, along with the LARS algorithm, provide a principled and efficient approach to obtaining the optimal lasso fit.
590    $a School code: 0212.
650  4 $a Statistics. $3 517247
690    $a 0463
710 20 $a Stanford University. $3 754827
773 0  $t Dissertation Abstracts International $g 66-08B.
790 10 $a Hastie, Trevor, $e advisor
790    $a 0212
791    $a Ph.D.
792    $a 2005
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3186437
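The second and third 520 notes in the MARC record above are also concrete enough to illustrate. The SPCA note describes an alternating algorithm that switches between an elastic net step for the loadings and a reduced-rank Procrustes rotation. Using notation assumed from the published sparse PCA literature rather than quoted from the dissertation, the criterion being alternated over can be written as

\begin{aligned}
(\hat{A},\hat{B}) = \arg\min_{A,B}\;
  &\sum_{i=1}^{n} \bigl\lVert x_i - A B^{\top} x_i \bigr\rVert^{2}
  + \lambda \sum_{j=1}^{k} \lVert \beta_j \rVert_{2}^{2}
  + \sum_{j=1}^{k} \lambda_{1,j} \lVert \beta_j \rVert_{1} \\
  &\text{subject to } A^{\top} A = I_{k},
\end{aligned}

where B = [\beta_1, \ldots, \beta_k] holds the sparse loadings: with A fixed, each \beta_j is an elastic net problem; with B fixed, A is updated by the reduced-rank Procrustes rotation.

The third 520 note states that the number of non-zero lasso coefficients is an unbiased estimate of the lasso's degrees of freedom, which is what makes Cp-type selection along the LARS path inexpensive. Below is a minimal sketch under that reading, using scikit-learn's lars_path; the noise-variance estimate and all variable names are illustrative assumptions, not from the dissertation.

# A minimal sketch, not from the dissertation: a Cp-style criterion along the
# lasso path, taking df = number of non-zero coefficients as the abstract states.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = 1.5                                   # sparse true coefficient vector
y = X @ beta + rng.normal(size=n)

alphas, _, coefs = lars_path(X, y, method="lasso")   # coefs has shape (p, n_steps)

sigma2 = np.var(y - X @ coefs[:, -1])            # crude noise-variance estimate (assumption)
cp = []
for j in range(coefs.shape[1]):
    resid = y - X @ coefs[:, j]
    df = np.count_nonzero(coefs[:, j])           # unbiased df estimate per the abstract
    cp.append(resid @ resid / sigma2 - n + 2 * df)

best = int(np.argmin(cp))
print("Cp-optimal step:", best,
      "with", np.count_nonzero(coefs[:, best]), "non-zero coefficients")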
Holdings
Barcode: W9218861
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0