Classification methods for high-dimensional sparse data.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Classification methods for high-dimensional sparse data.
Author: Xiong, Tao.
Pagination: 121 p.
Notes: Source: Dissertation Abstracts International, Volume: 68-02, Section: B, page: 1096.
Contained by: Dissertation Abstracts International, 68-02B.
Subject: Engineering, Electronics and Electrical.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3250170
LDR 03279nmm 2200289 4500
001 1833378
005 20071004071623.5
008 130610s2007 eng d
035 $a (UMI)AAI3250170
035 $a AAI3250170
040 $a UMI $c UMI
100 1 $a Xiong, Tao. $3 1922082
245 10 $a Classification methods for high-dimensional sparse data.
300 $a 121 p.
500 $a Source: Dissertation Abstracts International, Volume: 68-02, Section: B, page: 1096.
500 $a Adviser: Vladimir S. Cherkassky.
502 $a Thesis (Ph.D.)--University of Minnesota, 2007.
520 $a Estimation of predictive classification models from high-dimensional, low sample size (HDLSS) data is becoming increasingly important in applications such as gene microarray analysis, image-based object recognition, and functional magnetic resonance imaging (fMRI) analysis. In these applications, the dimensionality of the data vector is much larger than the sample size. Such sparse data sets present new challenges for classification learning methods.
520 $a Currently used algorithms include (a) dimensionality-reduction methods such as Linear Discriminant Analysis (LDA) and (b) margin-based methods such as the Support Vector Machine (SVM). Both approaches attempt to control model complexity, but in different ways. Even though SVM and LDA have been introduced as general-purpose methodologies, their performance varies greatly depending on the statistical characteristics of the available data. To gain a better understanding of these techniques, we analyze the properties of SVM and LDA classifiers applied to HDLSS data. We show that tuning the regularization parameter in Regularized LDA (RLDA) can alleviate the data-piling phenomenon, providing one explanation of why regularization improves the performance of LDA on HDLSS data. We then propose a very efficient algorithm for tuning the regularization parameter of RLDA. For SVM, we show that when the regularization parameter C is larger than a threshold (which can be computed explicitly), SVM classifiers perform similarly on HDLSS data regardless of C. This result provides guidelines for the practical application of SVM to real HDLSS data.
520 $a Another principled approach is to consider new learning formulations when dealing with HDLSS data. Multi-task learning (MTL) has recently been introduced to the machine learning literature. We propose a novel joint feature selection framework in the MTL setting, embedding the feature selection process into multi-task learning. The benefits of the proposed method are twofold. On the one hand, it compensates for the small sample size of the task at hand by using additional samples from related tasks, thus fully exploiting the benefits of multi-task learning. On the other hand, the feature selection mechanism reduces the essential dimensionality of the data, which can also improve generalization performance.
590 $a School code: 0130.
650 4 $a Engineering, Electronics and Electrical. $3 626636
650 4 $a Computer Science. $3 626642
690 $a 0544
690 $a 0984
710 20 $a University of Minnesota. $3 676231
773 0 $t Dissertation Abstracts International $g 68-02B.
790 10 $a Cherkassky, Vladimir S., $e advisor
790 $a 0130
791 $a Ph.D.
792 $a 2007
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3250170
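The second abstract paragraph (MARC 520) above reports two empirical findings: tuning the regularization (shrinkage) parameter of RLDA, and the insensitivity of a linear SVM to the parameter C once C exceeds a threshold on HDLSS data. Below is a minimal sketch of both experiments using scikit-learn; the synthetic HDLSS data, the parameter grids, and the use of cross-validation are illustrative assumptions, not the dissertation's own efficient tuning algorithm.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Synthetic HDLSS data: 40 samples, 2000 features (n << d), two classes.
    rng = np.random.default_rng(0)
    n, d = 40, 2000
    X = rng.standard_normal((n, d))
    y = np.repeat([0, 1], n // 2)
    X[y == 1, :20] += 1.0  # shift a few informative features for class 1

    # RLDA: sweep the shrinkage (regularization) parameter and compare
    # cross-validated accuracy; the dissertation tunes this parameter
    # with a dedicated, far more efficient algorithm.
    for alpha in [0.01, 0.1, 0.3, 1.0]:
        rlda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=alpha)
        print(f"RLDA shrinkage={alpha}:", cross_val_score(rlda, X, y, cv=5).mean())

    # Linear SVM: past some explicitly computable threshold, larger C
    # leaves the HDLSS solution (and accuracy) essentially unchanged.
    for C in [0.01, 1.0, 100.0, 10000.0]:
        svm = SVC(kernel="linear", C=C)
        print(f"SVM C={C}:", cross_val_score(svm, X, y, cv=5).mean())

On data of this shape, the SVM accuracies typically flatten out once C passes the hard-margin threshold, which mirrors the behavior the abstract describes.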
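The third abstract paragraph proposes a joint feature selection framework under multi-task learning. The dissertation's own formulation is not reproduced in this record; as a related, standard illustration of selecting features jointly across tasks, the sketch below uses scikit-learn's MultiTaskLasso (an L2,1-penalized model, shown in a regression setting for simplicity), whose penalty keeps or drops each feature jointly for all tasks. All data and parameter values are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import MultiTaskLasso

    # Three related tasks sharing the same small set of relevant
    # features -- a stand-in for the related-tasks setting. This is
    # NOT the dissertation's framework, just the standard multi-task
    # lasso demonstrating joint feature selection.
    rng = np.random.default_rng(1)
    n, d, tasks = 30, 200, 3
    X = rng.standard_normal((n, d))
    W = np.zeros((d, tasks))
    W[:5, :] = rng.standard_normal((5, tasks))  # 5 shared informative features
    Y = X @ W + 0.1 * rng.standard_normal((n, tasks))

    mtl = MultiTaskLasso(alpha=0.5).fit(X, Y)
    # The L2,1 penalty zeroes out whole rows of the coefficient matrix,
    # so a feature is kept or dropped jointly for all tasks.
    selected = np.flatnonzero(np.any(mtl.coef_ != 0, axis=0))
    print("Jointly selected features:", selected)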
Holdings
Barcode: W9224242
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: General use (Normal)
Loan status: On shelf
Hold requests: 0