Metric and Representation Learning.
Record type:
Bibliographic record, electronic resource : Monograph/item
Title/Author:
Metric and Representation Learning. /
Author:
Sonthalia, Rishi.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Description:
389 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-05, Section: B.
Contained By:
Dissertations Abstracts International, 83-05B.
Subject:
Applied mathematics.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28845725
ISBN:
9798471104914
MARC record:
LDR  04749nmm a2200409 4500
001  2351739
005  20221111101723.5
008  241004s2021 ||||||||||||||||| ||eng d
020    $a 9798471104914
035    $a (MiAaPQ)AAI28845725
035    $a (MiAaPQ)umichrackham003886
035    $a AAI28845725
040    $a MiAaPQ $c MiAaPQ
100 1  $a Sonthalia, Rishi. $3 3691315
245 10 $a Metric and Representation Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 389 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-05, Section: B.
500    $a Advisor: Gilbert, Anna; Nadakuditi, Raj Rao.
502    $a Thesis (Ph.D.)--University of Michigan, 2021.
506    $a This item must not be sold to any third party vendors.
506    $a This item must not be added to any third party search indexes.
520    $a All data has some inherent mathematical structure. I am interested in understanding the intrinsic geometric and probabilistic structure of data in order to design effective algorithms and tools that can be applied to machine learning and across all branches of science. The focus of this thesis is to increase the effectiveness of machine learning techniques by developing a mathematical and algorithmic framework with which, given any type of data, we can learn an optimal representation. Representation learning is done for many reasons: to repair corrupted data, or to learn a low-dimensional or simpler representation given high-dimensional data or a very complex representation of the data. It could also be that the current representation of the data does not capture its important geometric features. One of the many challenges in representation learning is determining how to judge the quality of the learned representation. In many cases, the consensus is that if $d$ is the natural metric on the representation, then this metric should provide meaningful information about the data. Many examples of this can be seen in areas such as metric learning, manifold learning, and graph embedding. However, most algorithms that solve these problems learn a representation in a metric space first and then extract a metric. A large part of my research explores what happens if the order is switched: learn the appropriate metric first and the embedding later. The philosophy behind this approach is that understanding the inherent geometry of the data is the most crucial part of representation learning. Often, studying the properties of the appropriate metric on the input data indicates the type of space we should seek for the representation, giving us more robust representations. Optimizing for the appropriate metric can also help overcome issues such as missing and noisy data. My projects fall into three areas of representation learning: 1) geometric and probabilistic analysis of representation learning methods; 2) methods for learning optimal metrics on large datasets; 3) applications. In the first area, we have three projects: first, designing optimal training data for denoising autoencoders; second, formulating a new optimal transport problem and understanding its geometric structure; third, analyzing the robustness to perturbations of the solutions obtained from the classical multidimensional scaling algorithm versus that of the true solutions to the multidimensional scaling problem. For learning an optimal metric, we are given a dissimilarity matrix $\hat{D}$, a function $f$, and a subset $S$ of the space of all metrics, and we want to find $D \in S$ that minimizes $f(D, \hat{D})$. In this thesis, we consider the version of the problem in which $S$ is the space of metrics defined on a fixed graph: given a graph $G$, we let $S$ be the space of all metrics defined via $G$. For this $S$, we consider a sparse objective function as well as convex objective functions. We also consider the problem of learning a tree, and we show how the ideas behind learning the optimal metric can be applied to dimensionality reduction in the presence of missing data. Finally, we look at an application to real-world data: reconstructing ancient Greek text.
590    $a School code: 0127.
650  4 $a Applied mathematics. $3 2122814
650  4 $a Mathematics education. $3 641129
650  4 $a Computer engineering. $3 621879
650  4 $a Computer science. $3 523869
650  4 $a Artificial intelligence. $3 516317
650  4 $a Information science. $3 554358
653    $a Metric learning
653    $a Representation learning
653    $a Machine learning
690    $a 0364
690    $a 0280
690    $a 0984
690    $a 0464
690    $a 0723
690    $a 0800
710 2  $a University of Michigan. $b Applied and Interdisciplinary Mathematics. $3 3182543
773 0  $t Dissertations Abstracts International $g 83-05B.
790    $a 0127
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28845725
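
Note: the metric-learning problem described in the abstract (field 520 above) can be stated compactly. The block below is a minimal LaTeX restatement using the abstract's own symbols ($\hat{D}$, $f$, $S$, $G$); the explicit form of the constraint set $S(G)$ is an illustrative assumption, not taken from the record.

\[
  D^{\ast} \in \operatorname*{arg\,min}_{D \in S(G)} f\bigl(D, \hat{D}\bigr),
  \qquad
  S(G) = \{\, D : D \text{ is a metric on the vertices of } G \text{ induced by nonnegative edge weights} \,\}.
\]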
Holdings (1 record):
Barcode: W9474177
Location: Electronic Resources
Circulation category: 11. Online Reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0