Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning.
Record type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning.
Author:
Huang, Bo.
Physical description:
153 p.
Notes:
Source: Dissertation Abstracts International, Volume: 75-08(E), Section: B.
Contained By:
Dissertation Abstracts International, 75-08B(E).
Subject:
Mathematics.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3617734
ISBN:
9781303855153
LDR
:05894nam a2200361 4500
001
1967469
005
20141124080923.5
008
150210s2014 ||||||||||||||||| ||eng d
020
$a
9781303855153
035
$a
(MiAaPQ)AAI3617734
035
$a
AAI3617734
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Huang, Bo.
$3
1917542
245
1 0
$a
Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning.
300
$a
153 p.
500
$a
Source: Dissertation Abstracts International, Volume: 75-08(E), Section: B.
500
$a
Adviser: Donald Goldfarb.
502
$a
Thesis (Ph.D.)--Columbia University, 2014.
520
$a
Sparse modeling is a rapidly developing topic that arises frequently in areas such as machine learning, data analysis and signal processing. One important application of sparse modeling is the recovery of a high-dimensional object from a relatively small number of noisy observations, which is the main focus of Compressed Sensing [14,22], Matrix Completion (MC) [13,34,68] and Robust Principal Component Analysis (RPCA) [12]. However, the power of sparse models is hampered by the unprecedented size of the data that has become more and more available in practice. Therefore, it has become increasingly important to better harness convex optimization techniques to take advantage of any underlying "sparsity" structure in problems of extremely large size.
520
$a
This thesis focuses on two main aspects of sparse modeling. From the modeling perspective, it extends convex programming formulations for matrix completion and robust principal component analysis problems to the case of tensors, and derives theoretical guarantees for exact tensor recovery under a framework of strongly convex programming. On the optimization side, an efficient first-order algorithm with the optimal convergence rate is proposed and studied for a wide range of linearly constrained sparse modeling problems.
520
$a
In Chapter 2, we generalize matrix completion and matrix RPCA models and rigorously study tractable models for provably recovering low-rank tensors. Unlike their matrix-based predecessors, current convex approaches for recovering low-rank tensors based on incomplete (tensor completion) and/or grossly corrupted (tensor robust principal component analysis) observations still suffer from a lack of theoretical guarantees, although they have been used in various recent applications and have exhibited promising empirical performance. In Chapter 2, we attempt to fill this gap. Specifically, we propose a class of convex recovery models (including strongly convex programs) that can be proved to guarantee exact recovery under certain conditions.
520
$a
In all of the sparse models for low-rank tensor recovery problems discussed in Chapter 2, the most popular convex relaxations currently being used minimize the sum of the nuclear norms (SNN) of the unfolding matrices of the tensor. In Chapter 3, we show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires O(r n^(K-1)) observations. In contrast, a certain (intractable) non-convex formulation needs only O(r^K + nrK) observations. We introduce a simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(r^⌊K/2⌋ n^⌈K/2⌉) observations. The lower bound for the SNN model follows from our new result on recovering signals with multiple structures (e.g. sparse, low-rank), which demonstrates the significant sub-optimality of the common approach of minimizing the sum of individual sparsity-inducing norms (e.g. l1 norm, nuclear norm). Our new formulation for low-rank tensor recovery shows how the sample complexity can be reduced by designing convex regularizers that exploit several structures jointly.
520
$a
In Chapter 4, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the sparse models discussed in Chapters 2 and 3. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm, first proposed by Stanley Osher and his collaborators, is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires O(1/ε) iterations to obtain an ε-optimal solution and that the ALB algorithm reduces this iteration complexity to O(1/√ε) while requiring almost the same computational effort on each iteration. Numerical results on compressed sensing and matrix completion problems are presented that demonstrate that the ALB method can be significantly faster than the LB method.
520
$a
In Chapter 5, we extend the arguments of Chapter 4 and apply the linearized Bregman (LB) method to solving linear programming problems. Numerical experiments show that neither the LB method nor the accelerated LB method works well on linear programming problems, especially on real data sets. Inspired by the observation that the linearized Bregman method, when solving linear programming problems, can be viewed as a sequence of box-constrained quadratic minimizations restricted to different supports of the solution variable, we propose new algorithms that combine the Conjugate Gradient/BFGS update with the linearized Bregman method. Numerical experiments on both synthetic and real data sets were conducted; the new CGLB algorithm not only significantly outperformed the accelerated linearized Bregman method proposed in Chapter 4, but was also comparable with the MATLAB solver on small- to medium-scale real data sets.
590
$a
School code: 0054.
650
4
$a
Mathematics.
$3
515831
650
4
$a
Statistics.
$3
517247
650
4
$a
Operations Research.
$3
626629
650
4
$a
Artificial Intelligence.
$3
769149
690
$a
0405
690
$a
0463
690
$a
0796
690
$a
0800
710
2
$a
Columbia University.
$b
Industrial Engineering.
$3
2104478
773
0
$t
Dissertation Abstracts International
$g
75-08B(E).
790
$a
0054
791
$a
Ph.D.
792
$a
2014
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3617734
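The abstract (520 fields above) opens by naming compressed sensing, matrix completion (MC) and robust principal component analysis (RPCA) as the canonical sparse-recovery problems. For orientation, their standard convex formulations are sketched below in textbook notation; this is background, not text quoted from the dissertation.

    \min_{x}   \|x\|_1                    \quad \text{s.t. } Ax = b                                   % compressed sensing / basis pursuit
    \min_{X}   \|X\|_*                    \quad \text{s.t. } X_{ij} = M_{ij}, \ (i,j) \in \Omega      % matrix completion
    \min_{L,S} \|L\|_* + \lambda \|S\|_1  \quad \text{s.t. } L + S = M                                % robust PCA

Here \|\cdot\|_* denotes the matrix nuclear norm (sum of singular values) and \Omega is the index set of observed entries.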
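The Chapter 3 paragraph compares the sum-of-nuclear-norms (SNN) relaxation with a new relaxation built on a more balanced matricization. Writing \mathcal{X} for a K-way tensor, \mathcal{X}_{(i)} for its mode-i unfolding, \mathcal{X}_{[j]} for a "square" reshaping that groups roughly half of the modes into rows, and \mathcal{A} for the Gaussian measurement operator, the two models can be sketched as follows; the reshaping notation is an assumption based on the published "square deal" formulation and may differ in detail from the thesis:

    \min_{\mathcal{X}} \ \sum_{i=1}^{K} \|\mathcal{X}_{(i)}\|_*   \quad \text{s.t. } \mathcal{A}(\mathcal{X}) = b     % SNN model
    \min_{\mathcal{X}} \ \|\mathcal{X}_{[j]}\|_*                  \quad \text{s.t. } \mathcal{A}(\mathcal{X}) = b     % square (balanced) matricization

The sample-complexity gap quoted in the abstract, roughly O(r n^(K-1)) for SNN versus O(r^⌊K/2⌋ n^⌈K/2⌉) for the new relaxation, is essentially the difference between penalizing each unfolding separately and penalizing one nearly square unfolding.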
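The Chapter 4 paragraph describes the linearized Bregman (LB) method as a gradient method applied to a dual formulation. Below is a minimal sketch of the un-accelerated iteration for the compressed-sensing setting; the problem instance, step size and stopping rule are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def shrink(v, lam):
        """Soft-thresholding: the proximal map of lam * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def linearized_bregman(A, b, mu=5.0, step=None, max_iter=5000, tol=1e-6):
        """Un-accelerated linearized Bregman iteration for
            min_x  mu * ||x||_1 + 0.5 * ||x||_2^2   subject to  A x = b,
        implemented as a gradient method on the (smooth) Lagrangian dual."""
        m, n = A.shape
        if step is None:
            # the dual gradient is ||A||_2^2-Lipschitz, so 1/||A||_2^2 is a safe step
            step = 1.0 / np.linalg.norm(A, 2) ** 2
        v = np.zeros(n)               # accumulates A^T times the dual variable
        x = shrink(v, mu)             # primal point induced by the current dual point
        for _ in range(max_iter):
            r = b - A @ x             # constraint residual = dual gradient direction
            if np.linalg.norm(r) <= tol * max(np.linalg.norm(b), 1.0):
                break
            v = v + step * (A.T @ r)  # dual gradient step
            x = shrink(v, mu)         # shrinkage recovers the sparse primal iterate
        return x

    # Illustrative synthetic instance (an assumption for demonstration, not thesis data):
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 200))
    x_true = np.zeros(200)
    x_true[:5] = rng.standard_normal(5)
    x_hat = linearized_bregman(A, A @ x_true)

The ALB variant analyzed in Chapter 4 keeps essentially the same per-iteration cost but adds a Nesterov-style extrapolation step to the dual update, which is what lowers the iteration count from O(1/ε) to O(1/√ε).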
Holdings
Barcode: W9262475
Location: Electronic resources
Circulation category: 11. Online Reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0
Remarks: (none)
Attachments: (none)