Statistical Machine Learning Approaches for Data Integration and Graphical Models.
Record type: Bibliographic record - electronic resource : Monograph/item
Title/Author: Statistical Machine Learning Approaches for Data Integration and Graphical Models.
Author: Wang, Minjie.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Description: 218 p.
Notes: Source: Dissertations Abstracts International, Volume: 83-04, Section: B.
Contained By: Dissertations Abstracts International 83-04B.
Subject: Statistics.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28735646
ISBN: 9798535522371
Thesis (Ph.D.)--Rice University, 2021.
This item must not be sold to any third party vendors.
Abstract:
Unsupervised learning aims to identify underlying patterns in unlabeled data. In this thesis, we develop methodologies involving two popular unsupervised learning problems: clustering with application to data integration and graphical models. As the volume and variety of data grows, data integration, which analyzes multiple sources of data simultaneously, has gained increasing popularity. We study mixed multi-view data, where multiple sets of diverse features are measured on the same set of samples.

In the first project, by integrating all available data sources, we seek to uncover common group structure among the samples from unlabeled mixed multi-view data that may be hidden in individualistic cluster analyses of a single data view. To achieve this, we propose and develop a convex formalization that inherits the strong mathematical and empirical properties of increasingly popular convex clustering methods. Specifically, our Integrative Generalized Convex Clustering Optimization (iGecco) method employs different convex distances, losses, or divergences for each of the different data views with a joint convex fusion penalty that leads to common groups. Additionally, integrating mixed multi-view data is often challenging when each data source is high-dimensional. To perform feature selection in such scenarios, we develop an adaptive shifted group-lasso penalty that selects features by shrinking them towards their loss-specific centers. Our iGecco+ approach selects features from each data view that are best for determining the groups, often leading to improved integrative clustering. Through a series of numerical experiments and real data examples on text mining and genomics, we show that iGecco+ achieves superior empirical performance for high-dimensional mixed multi-view data.

In the second project, we seek more meaningful interpretations of clustering, which has often been challenging due to its unsupervised nature. Meanwhile, in many real-world scenarios, there are some noisy "supervising auxiliary variables", for instance, subjective diagnostic opinions, that are related to the observed heterogeneity of the unlabeled data. By leveraging information from both supervising auxiliary variables and unlabeled data, we seek to uncover more scientifically interpretable group structures that may be hidden by completely unsupervised analyses. We propose and develop a new statistical pattern discovery method named Supervised Convex Clustering (SCC) that borrows strength from both unlabeled data and the so-called supervising auxiliary variable in order to find more interpretable patterns with a joint convex fusion penalty.

Graphical models, statistical machine learning models defined on graphs, have been widely studied to understand conditional dependencies among a collection of random variables. In the third project, we consider graph selection in the presence of latent variables, a quite challenging problem in neuroscience where existing technologies can only record from a small subset of neurons. We propose an incredibly simple solution: apply a hard thresholding operator to existing graph selection methods, and demonstrate that thresholding the graphical Lasso, neighborhood selection, or CLIME estimators has superior theoretical properties in terms of graph selection consistency as well as stronger empirical results than existing approaches for the latent variable graphical model problem. We also demonstrate the applicability of our approach through a neuroscience case study on calcium-imaging data to estimate functional neural connections.
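The iGecco formulation summarized in the abstract pairs a view-specific convex loss for each data source with a joint convex fusion penalty. As a rough schematic only (the notation, weights, and norm below are placeholders of mine, not necessarily the exact formulation in the thesis), a generalized convex clustering objective of this kind takes the form

\[
\min_{U^{(1)},\dots,U^{(K)}}\;\; \sum_{k=1}^{K} \ell_k\!\bigl(X^{(k)}, U^{(k)}\bigr)
\;+\; \lambda \sum_{i<j} w_{ij}\,\Bigl\| \bigl(U^{(1)}_{i\cdot},\dots,U^{(K)}_{i\cdot}\bigr) - \bigl(U^{(1)}_{j\cdot},\dots,U^{(K)}_{j\cdot}\bigr) \Bigr\|,
\]

where X^{(k)} is the k-th data view, U^{(k)} holds one cluster-center row per sample for that view, and \ell_k is a convex distance, loss, or divergence suited to the data type of view k. Because the fusion penalty couples the rows of all views jointly, samples whose stacked centers are fused together are assigned to the same group across views; iGecco+ is described in the abstract as adding an adaptive shifted group-lasso penalty on top of this to select features within each view.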
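For the third project, the abstract describes the estimator as nothing more than hard-thresholding an existing graph selection method such as the graphical Lasso, neighborhood selection, or CLIME. The sketch below is only an illustration of that recipe, assuming scikit-learn's GraphicalLasso as the base estimator and arbitrary illustrative values for the regularization and threshold levels; it is not the tuned procedure from the thesis.

import numpy as np
from sklearn.covariance import GraphicalLasso

def thresholded_graphical_lasso(X, alpha=0.1, tau=0.05):
    """Select a conditional-dependence graph by hard-thresholding the
    off-diagonal entries of a graphical-Lasso precision matrix.

    X     : (n_samples, n_features) data matrix
    alpha : graphical-Lasso regularization strength (illustrative value)
    tau   : hard-threshold level (illustrative value)
    """
    precision = GraphicalLasso(alpha=alpha).fit(X).precision_
    adjacency = np.abs(precision) > tau     # keep only strong partial correlations
    np.fill_diagonal(adjacency, False)      # edges correspond to off-diagonal entries only
    return adjacency

# Toy usage on simulated data; with independent features few edges should survive.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
edges = thresholded_graphical_lasso(X)
print(int(edges.sum()) // 2, "edges selected")

Neighborhood selection or CLIME could stand in for GraphicalLasso with the same thresholding step, matching the abstract's claim that the hard-thresholding operator applies to any of the three base estimators.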
ISBN: 9798535522371
Subjects--Topical Terms: Statistics.
Subjects--Index Terms: Statistical machine learning
MARC record:
LDR 04857nmm a2200397 4500
001 2352153
005 20221118093825.5
008 241004s2021 ||||||||||||||||| ||eng d
020 $a 9798535522371
035 $a (MiAaPQ)AAI28735646
035 $a (MiAaPQ)0187rice3739Wang
035 $a AAI28735646
040 $a MiAaPQ $c MiAaPQ
100 1 $a Wang, Minjie. $3 3691774
245 1 0 $a Statistical Machine Learning Approaches for Data Integration and Graphical Models.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300 $a 218 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-04, Section: B.
500 $a Advisor: Allen, Genevera.
502 $a Thesis (Ph.D.)--Rice University, 2021.
506 $a This item must not be sold to any third party vendors.
590 $a School code: 0187.
650 4 $a Statistics. $3 517247
650 4 $a Statistical physics. $3 536281
650 4 $a Artificial intelligence. $3 516317
650 4 $a Information science. $3 554358
653 $a Statistical machine learning
653 $a Data integration
653 $a Clustering
653 $a Graphical models
653 $a Convex optimization
690 $a 0463
690 $a 0723
690 $a 0800
690 $a 0217
710 2 $a Rice University. $b Statistics. $3 3691775
773 0 $t Dissertations Abstracts International $g 83-04B.
790 $a 0187
791 $a Ph.D.
792 $a 2021
793 $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28735646
Holdings:
Barcode: W9474591
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0