Accelerating Convex Optimization in Machine Learning by Leveraging Functional Growth Conditions.
Record Type:
Bibliographic - Electronic Resource : Monograph/item
Title/Author:
Accelerating Convex Optimization in Machine Learning by Leveraging Functional Growth Conditions.
Author:
Xu, Yi.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2019
Description:
240 p.
Notes:
Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
Contained By:
Dissertations Abstracts International, 81-04B.
Subject:
Computer science.
Electronic Resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13886145
ISBN:
9781085792813
Dissertation Note:
Thesis (Ph.D.)--The University of Iowa, 2019.
Restrictions on Access:
This item must not be sold to any third party vendors.
Abstract:
In recent years, unprecedented growth in the scale and dimensionality of data has raised significant computational challenges for traditional optimization algorithms, so it has become very important to develop efficient and effective optimization algorithms for solving machine learning problems. Many traditional algorithms (e.g., the gradient descent method) are black-box algorithms: simple to implement, but blind to the underlying geometrical properties of the objective function. A recent trend in accelerating these traditional black-box algorithms is to leverage geometrical properties of the objective function such as strong convexity. However, most existing methods rely heavily on knowledge of strong convexity, which makes them inapplicable to problems that are not strongly convex or whose strong convexity parameters are unknown. To bridge the gap between traditional black-box algorithms, which ignore the problem's geometry, and accelerated algorithms that require strong convexity, we ask: how can we develop algorithms that adapt to the objective function's underlying geometrical properties? To answer this question, this dissertation focuses on convex optimization problems and explores an error bound condition that characterizes the growth of the objective function around a global minimum. Under this error bound condition, we develop algorithms that (1) adapt to the problem's geometrical properties to enjoy faster convergence in stochastic optimization; (2) leverage the problem's structural regularizer to further improve the convergence speed; (3) address both deterministic and stochastic optimization problems with an explicit max-structural loss; and (4) leverage the objective function's smoothness to improve the convergence rate in stochastic optimization.

We first considered stochastic optimization problems with a general stochastic loss. We proposed two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees; they iteratively solve the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Second, we developed a new theory of the alternating direction method of multipliers (ADMM) with a new adaptive scheme for the penalty parameter, covering both deterministic and stochastic optimization problems with structured regularizers. Under a local error bound (LEB) condition, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of O(1/ε^(1-θ)) and O(1/ε^(2(1-θ))), respectively. Then, we considered a family of optimization problems with an explicit max-structural loss. We developed a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique and the accelerated gradient method and runs in stages. Under the LEB condition, it improves the iteration complexity from O(1/ε) to O(1/ε^(1-θ)), omitting a logarithmic factor, with θ ∈ (0,1]. Next, we proposed new restarted stochastic primal-dual (RSPD) algorithms for solving problems with a stochastic explicit max-structural loss, obtaining an iteration complexity better than O(1/ε^2) without a bilinear structure assumption; achieving faster convergence without that assumption had been a major challenge for the considered problem. Finally, we considered finite-sum optimization problems with a smooth loss and a simple regularizer. We proposed novel techniques to automatically search for the unknown parameter on the fly during optimization while maintaining almost the same convergence rate as in an oracle setting where the parameter is given. Under the Hölderian error bound (HEB) condition with θ ∈ (0,1/2), the proposed algorithm also enjoys intermediate faster convergence rates compared with its standard counterparts that assume only smoothness.
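For orientation, error bound conditions of the kind referenced above are typically stated as dist(x, Ω*) ≤ c · (f(x) − f*)^θ for x near the optimal set Ω*, where larger θ corresponds to sharper growth of the objective around its minimizers. The Python sketch below illustrates only the restarting idea described for ASSG (approximately solving the problem inside a ball around the previous stage's solution and shrinking the ball between stages); it is not the dissertation's algorithm, and the step-size and radius schedules, iteration counts, and toy objective are assumptions made purely for illustration.

import numpy as np

def project_to_ball(x, center, radius):
    """Project x onto the Euclidean ball with the given center and radius."""
    d = x - center
    norm = np.linalg.norm(d)
    if norm <= radius:
        return x
    return center + d * (radius / norm)

def restarted_subgradient(subgrad, x0, radius0=1.0, eta0=0.1,
                          num_stages=10, iters_per_stage=100):
    """Subgradient descent run in stages inside a shrinking local region.

    subgrad(x) must return a subgradient of the objective at x; the
    schedules below (halving the radius and step size each stage) are
    illustrative, not the tuned schedules analyzed in the dissertation.
    """
    center = np.asarray(x0, dtype=float)
    radius, eta = radius0, eta0
    for _ in range(num_stages):
        x = center.copy()
        avg = np.zeros_like(center)
        for t in range(iters_per_stage):
            g = subgrad(x)
            # Projected subgradient step, kept inside the current local region.
            x = project_to_ball(x - eta * g, center, radius)
            avg += (x - avg) / (t + 1)   # running average of the iterates
        center = avg          # restart the next stage around the averaged solution
        radius /= 2.0         # shrink the local region ...
        eta /= 2.0            # ... and the step size
    return center

# Toy usage: minimize f(x) = ||x||_1, for which sign(x) is a subgradient.
if __name__ == "__main__":
    print(restarted_subgradient(np.sign, x0=np.ones(5)))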
Subjects--Topical Terms:
Computer science.
Subjects--Index Terms:
Convex Optimization
MARC Record:
LDR    05130nmm a2200361 4500
001    2280562
005    20210907071056.5
008    220723s2019 ||||||||||||||||| ||eng d
020    $a 9781085792813
035    $a (MiAaPQ)AAI13886145
035    $a AAI13886145
040    $a MiAaPQ $c MiAaPQ
100 1  $a Xu, Yi. $3 1265878
245 10 $a Accelerating Convex Optimization in Machine Learning by Leveraging Functional Growth Conditions.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300    $a 240 p.
500    $a Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
500    $a Advisor: Yang, Tianbao.
502    $a Thesis (Ph.D.)--The University of Iowa, 2019.
506    $a This item must not be sold to any third party vendors.
590    $a School code: 0096.
650  4 $a Computer science. $3 523869
653    $a Convex Optimization
653    $a Local Error Bound
653    $a Structured regularizers
653    $a Explicit max-structural loss
653    $a Stochastic explicit max-structural loss
653    $a Finite-sum optimization problems
690    $a 0984
710 2  $a The University of Iowa. $b Computer Science. $3 1672962
773 0  $t Dissertations Abstracts International $g 81-04B.
790    $a 0096
791    $a Ph.D.
792    $a 2019
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13886145
Holdings:
Barcode: W9432295
Location: Electronic Resources (電子資源)
Circulation Category: 11.線上閱覽_V (online reading)
Material Type: E-book
Call Number: EB
Use Type: Normal
Loan Status: On shelf
Hold Status: 0