Sparse machine learning models in bioinformatics.

Record type: Bibliographic - language material, printed : Monograph/item
Title/Author: Sparse machine learning models in bioinformatics.
Author: Li, Yifeng.
Description: 333 p.
Notes: Source: Dissertation Abstracts International, Volume: 75-05(E), Section: B.
Contained by: Dissertation Abstracts International, 75-05B(E).
Subject: Computer Science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=NR98643
ISBN: 9780494986431
MARC
LDR    05156nam a2200349 4500
001    1964663
005    20141010092632.5
008    150210s2014 ||||||||||||||||| ||eng d
020    $a 9780494986431
035    $a (MiAaPQ)AAINR98643
035    $a AAINR98643
040    $a MiAaPQ $c MiAaPQ
100 1  $a Li, Yifeng. $3 1030044
245 10 $a Sparse machine learning models in bioinformatics.
300    $a 333 p.
500    $a Source: Dissertation Abstracts International, Volume: 75-05(E), Section: B.
500    $a Advisers: Alioune Ngom; Luis Rueda.
502    $a Thesis (Ph.D.)--University of Windsor (Canada), 2014.
520    $a The meaning of parsimony is twofold in machine learning: the structure and/or the parameters of a model can be sparse. Sparse models have many strengths. First, sparsity is an important regularization principle that reduces model complexity and therefore avoids overfitting. Second, in many fields, bioinformatics for example, high-dimensional data may be generated by a small number of hidden factors, so it is more reasonable to use a properly sparse model than a dense one. Third, a sparse model is often easy to interpret. In this dissertation, we investigate sparse machine learning models and their applications in high-dimensional biological data analysis. We focus our research on the following five types of sparse models.
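To make the sparsity-as-regularization idea in the abstract concrete, here is a minimal Python sketch (not code from the dissertation): an L1-penalized linear model fitted to synthetic data with far more features than samples keeps only a handful of non-zero coefficients. The dataset, penalty value, and the use of scikit-learn's Lasso are illustrative assumptions.

    # Generic illustration of sparsity as regularization; data and alpha are invented.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_samples, n_features, n_informative = 60, 500, 5   # p >> n, few hidden factors
    X = rng.standard_normal((n_samples, n_features))
    true_coef = np.zeros(n_features)
    true_coef[:n_informative] = rng.standard_normal(n_informative)
    y = X @ true_coef + 0.1 * rng.standard_normal(n_samples)

    model = Lasso(alpha=0.1).fit(X, y)                   # alpha controls sparsity
    print("non-zero coefficients:", np.count_nonzero(model.coef_), "of", n_features)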
520    $a First, sparse representation is a parsimonious principle whereby a sample can be approximated by a sparse linear combination of basis vectors. We explore existing sparse representation models and propose our own sparse representation methods for high-dimensional biological data analysis. We derive different sparse representation models from a Bayesian perspective. Two generic dictionary learning frameworks are proposed, and kernel and supervised dictionary learning approaches are devised. Furthermore, we propose fast active-set and decomposition methods for the optimization of sparse coding models.
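As a concrete illustration of the sparse representation principle (each sample approximated by a sparse combination of dictionary atoms), the following hedged sketch uses scikit-learn's generic DictionaryLearning on synthetic data. It is not one of the dictionary learning frameworks proposed in the dissertation; all sizes and penalties are invented.

    # Sparse coding with a learned dictionary: X ≈ codes · D, with sparse codes.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 40))                       # 100 samples, 40 features

    dl = DictionaryLearning(n_components=20, alpha=1.0,      # alpha: sparsity penalty
                            transform_algorithm="lasso_lars",
                            transform_alpha=0.5, random_state=0)
    codes = dl.fit_transform(X)                              # sparse coefficients
    D = dl.components_                                       # dictionary (atoms x features)

    approx = codes @ D
    print("avg non-zeros per sample:", np.count_nonzero(codes, axis=1).mean())
    print("relative reconstruction error:", np.linalg.norm(X - approx) / np.linalg.norm(X))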
520    $a Second, gene-sample-time data are promising in clinical studies but computationally challenging. We propose sparse tensor decomposition methods and kernel methods for the dimensionality reduction and classification of such data. As extensions of matrix factorization, tensor decomposition techniques can reduce the dimensionality of gene-sample-time data dramatically, and the kernel methods can run very efficiently on such data.
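The following sketch only gestures at the idea of reducing gene-sample-time data: it unfolds a synthetic three-way array along the sample mode and applies a truncated SVD. The dissertation's sparse tensor decomposition and kernel methods are not reproduced here; the shapes and the rank are assumptions.

    # Mode unfolding + truncated SVD as a generic stand-in for tensor-based reduction.
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_samples, n_times = 200, 30, 12
    T = rng.standard_normal((n_genes, n_samples, n_times))   # gene x sample x time

    # Sample-mode unfolding: one row per sample, genes*time columns.
    X = T.transpose(1, 0, 2).reshape(n_samples, n_genes * n_times)

    rank = 5
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    reduced = U[:, :rank] * s[:rank]                          # samples x rank factors
    print("reduced representation:", reduced.shape)           # (30, 5)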
520    $a Third, we explore two sparse regularized linear models for multi-class problems in bioinformatics. Our first method is the nearest-border classification technique for data with many classes. Our second method is a hierarchical model that can simultaneously select features and classify samples. Our experiment on breast-tumor subtyping shows that this model outperforms the one-versus-all strategy in some cases.
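As a stand-in for the sparse regularized multi-class models above (the nearest-border and hierarchical methods themselves are not reproduced), this sketch shows the generic idea of simultaneous feature selection and classification using an L1-penalized multinomial logistic regression on synthetic data; every parameter is an illustrative assumption.

    # L1 penalty zeroes out coefficients, so classification and feature selection happen together.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=150, n_features=300, n_informative=10,
                               n_classes=4, n_clusters_per_class=1, random_state=0)

    clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
    clf.fit(X, y)

    selected = np.unique(np.nonzero(clf.coef_)[1])            # features used by any class
    print("features kept:", selected.size, "of", X.shape[1])
    print("training accuracy:", clf.score(X, y))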
520    $a Fourth, we propose spectral clustering approaches for clustering microarray time-series data. The approaches are based on two recently introduced transformations designed specifically for gene expression time-series data, namely the alignment-based and variation-based transformations. Both transformations were devised to take temporal relationships in the data into account, and have been shown to increase the ability of a clustering method to detect co-expressed genes. We investigate the performance of these transformations when combined with spectral clustering on two microarray time-series datasets, and discuss their strengths and weaknesses. Our experiments on two well-known real-life datasets show the superiority of the alignment-based transformation over the variation-based one for finding meaningful groups of co-expressed genes.
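A minimal sketch of spectral clustering on simulated expression time series follows. A simple correlation-based affinity stands in for the alignment-based and variation-based transformations mentioned above, which are not reproduced; the synthetic profiles and cluster count are assumptions.

    # Spectral clustering of gene time profiles with a precomputed similarity matrix.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 20)
    profiles = np.vstack(
        [np.sin(t + rng.normal(0, 0.3)) + 0.2 * rng.standard_normal(t.size) for _ in range(50)] +
        [np.cos(2 * t + rng.normal(0, 0.3)) + 0.2 * rng.standard_normal(t.size) for _ in range(50)]
    )                                                          # 100 genes x 20 time points

    affinity = (np.corrcoef(profiles) + 1.0) / 2.0             # similarity in [0, 1]
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    print("cluster sizes:", np.bincount(labels))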
520    $a Fifth, we propose the max-min high-order dynamic Bayesian network (MMHO-DBN) learning algorithm to reconstruct time-delayed gene regulatory networks. Due to the small sample size of the training data and the power-law nature of gene regulatory networks, the structure of the network is restricted by sparsity. We also apply qualitative probabilistic networks (QPNs) to interpret the interactions learned. Our experiments on both synthetic and real gene expression time-series data show that MMHO-DBN obtains better precision than some existing methods and runs very fast. The QPN analysis can accurately predict types of influences and synergies.
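The sketch below conveys the general flavour of sparse, time-delayed network reconstruction: each gene is regressed on the lagged expression of all genes with an L1 penalty, and non-zero coefficients are read as candidate edges. It is not the MMHO-DBN algorithm from the abstract, and the synthetic data, lag depth, and penalty are invented.

    # Lagged L1 regression as a generic stand-in for time-delayed network learning.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_genes, n_times, max_lag = 10, 60, 2
    E = rng.standard_normal((n_times, n_genes))               # time x gene expression

    edges = []
    for target in range(n_genes):
        # Predictors: expression of every gene at lags 1..max_lag.
        X = np.hstack([E[max_lag - lag : n_times - lag] for lag in range(1, max_lag + 1)])
        y = E[max_lag:, target]
        coef = Lasso(alpha=0.1).fit(X, y).coef_
        for idx in np.nonzero(coef)[0]:
            lag_block, source = divmod(idx, n_genes)
            edges.append((source, target, lag_block + 1))     # (regulator, target, delay)
    print("candidate time-delayed edges:", len(edges))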
520    $a Additionally, since many high-dimensional biological datasets are subject to missing values, we survey various strategies for learning models from incomplete data. We extend existing imputation methods, originally designed for two-way data, to gene-sample-time data. We also propose a pair-wise weighting method for computing kernel matrices from incomplete data. Computational evaluations show that both approaches work very robustly.
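The following is a hedged illustration of computing a kernel from incomplete data: each pairwise RBF entry is evaluated over only the co-observed features, with the squared distance rescaled by the fraction of features observed in both samples. It may differ from the pair-wise weighting method actually proposed in the dissertation; the data and gamma are assumptions.

    # Pairwise kernel entries from incomplete data, using co-observed features only.
    import numpy as np

    def rbf_kernel_incomplete(X, gamma=0.1):
        n, d = X.shape
        K = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                mask = ~np.isnan(X[i]) & ~np.isnan(X[j])       # co-observed features
                if not mask.any():
                    continue                                    # no overlap: leave 0
                diff = X[i, mask] - X[j, mask]
                sq_dist = (diff @ diff) * d / mask.sum()        # rescale to full length
                K[i, j] = K[j, i] = np.exp(-gamma * sq_dist)
        return K

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 30))
    X[rng.random(X.shape) < 0.2] = np.nan                       # 20% missing at random
    print(rbf_kernel_incomplete(X)[:3, :3])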
590    $a School code: 0115.
650  4 $a Computer Science. $3 626642
650  4 $a Biology, Bioinformatics. $3 1018415
690    $a 0984
690    $a 0715
710 2  $a University of Windsor (Canada). $b COMPUTER SCIENCE. $3 2093677
773 0  $t Dissertation Abstracts International $g 75-05B(E).
790    $a 0115
791    $a Ph.D.
792    $a 2014
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=NR98643
Holdings
Barcode: W9259662
Location: Electronic Resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0