Accelerating Convex Optimization in Machine Learning by Leveraging Functional Growth Conditions.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Accelerating Convex Optimization in Machine Learning by Leveraging Functional Growth Conditions.
Author:
Xu, Yi.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2019.
Description:
240 p.
Notes:
Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
Contained By:
Dissertations Abstracts International, 81-04B.
Subject:
Computer science.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13886145
ISBN:
9781085792813
Thesis (Ph.D.)--The University of Iowa, 2019.
This item must not be sold to any third party vendors.
Abstract:
In recent years, unprecedented growth in the scale and dimensionality of data has raised serious computational challenges for traditional optimization algorithms, so it has become important to develop efficient and effective optimization algorithms for machine learning problems. Many traditional algorithms (e.g., the gradient descent method) are black-box algorithms: simple to implement, but blind to the underlying geometry of the objective function. A recent trend in accelerating these black-box algorithms is to leverage geometric properties of the objective function, such as strong convexity. However, most existing methods depend heavily on knowledge of strong convexity, which makes them inapplicable to problems that are not strongly convex or whose strong-convexity parameter is unknown. To bridge the gap between traditional black-box algorithms, which ignore the problem's geometry, and accelerated algorithms, which require strong convexity, this dissertation asks how to design algorithms that adapt to the objective function's underlying geometry. We focus on convex optimization problems and exploit an error bound condition that characterizes the growth of the objective function around a global minimum. Under this condition, we develop algorithms that (1) adapt to the problem's geometry to obtain faster convergence in stochastic optimization; (2) leverage the problem's structured regularizer to further improve the convergence speed; (3) handle both deterministic and stochastic optimization problems with explicit max-structural loss; and (4) leverage the objective function's smoothness to improve the convergence rate in stochastic optimization.

We first consider stochastic optimization problems with a general stochastic loss. We propose two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees; they iteratively solve the original problem approximately in a local region around a historical solution, gradually shrinking the region as the solution approaches the optimal set.

Second, we develop a new theory of the alternating direction method of multipliers (ADMM) with a new adaptive scheme for the penalty parameter, covering both deterministic and stochastic optimization problems with structured regularizers. Under the local error bound (LEB) condition, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of O(1/ε^(1-θ)) and O(1/ε^(2(1-θ))), respectively.

Third, we consider a family of optimization problems with an explicit max-structural loss. We develop a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique together with the accelerated gradient method and runs in stages. Under the LEB condition, HOPS improves the iteration complexity from O(1/ε) to O(1/ε^(1-θ)) (omitting a logarithmic factor), where θ ∈ (0,1].

Next, we propose new restarted stochastic primal-dual (RSPD) algorithms for problems with a stochastic explicit max-structural loss. Without assuming a bilinear structure, which is the main obstacle to faster convergence for this class of problems, we obtain an iteration complexity better than O(1/ε^2).
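As a rough illustration of the restart-and-shrink mechanism shared by the ASSG, HOPS, and RSPD methods described above, the following Python sketch runs projected subgradient descent in epochs, restarting from the epoch average and halving the radius of the local region each time. This is a simplified sketch under generic assumptions, not the dissertation's exact algorithms; the names restarted_subgradient, subgrad, and project, the step-size rule, and the halving schedule are hypothetical choices made for the example.

import numpy as np


def restarted_subgradient(subgrad, project, x0, D0, K, T):
    """Simplified restart-and-shrink sketch (illustrative only).

    subgrad(x)        -- returns a subgradient of the objective at x
    project(c, y, r)  -- Euclidean projection of y onto the ball B(c, r)
    x0, D0            -- initial point and initial radius of the local region
    K, T              -- number of epochs and iterations per epoch
    """
    x_center, D = np.asarray(x0, dtype=float), float(D0)
    for _ in range(K):
        x = x_center.copy()
        avg = np.zeros_like(x)
        eta = D / np.sqrt(T)                       # step size tied to the region size
        for _ in range(T):
            g = subgrad(x)
            x = project(x_center, x - eta * g, D)  # stay inside the local region
            avg += x / T
        x_center = avg                             # restart from the epoch average
        D *= 0.5                                   # shrink the region geometrically;
                                                   # a growth condition justifies this
    return x_center


if __name__ == "__main__":
    # Toy usage: minimize F(x) = ||x||_1, which grows sharply around x* = 0.
    def subgrad(x):
        return np.sign(x)

    def project(center, y, r):
        d = y - center
        n = np.linalg.norm(d)
        return y if n <= r else center + (r / n) * d

    x_hat = restarted_subgradient(subgrad, project, x0=np.ones(5),
                                  D0=5.0, K=10, T=200)
    print(np.round(x_hat, 4))                      # entries close to 0

Roughly speaking, in the dissertation's methods the per-epoch iteration budget and the shrinking schedule are chosen according to the growth exponent θ (and the constant in the error bound, or an estimate of it), which is what yields the improved complexities quoted above.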
Finally, we consider finite-sum optimization problems with a smooth loss and a simple regularizer. We propose techniques that automatically search for the unknown growth parameter on the fly during optimization while maintaining nearly the same convergence rate as an oracle variant that is given the parameter. Under the Hölderian error bound (HEB) condition with θ ∈ (0,1/2), the proposed algorithm enjoys intermediate convergence rates that are faster than those of its standard counterparts, which assume only smoothness.
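For reference, the growth conditions invoked throughout the abstract (local error bound and Hölderian error bound) are typically stated as in the LaTeX sketch below; the constants c and θ, the sublevel set S, and the optimal set X* are generic notation for this sketch rather than the dissertation's exact formulation.

% Generic statement of a Hölderian / local error bound (growth) condition:
% near the optimal set, the distance to the minimizers is controlled by a
% power of the objective gap.  Strong convexity corresponds to theta = 1/2.
\[
  \mathrm{dist}(x, X_*) \;\le\; c\,\bigl(F(x) - F_*\bigr)^{\theta}
  \quad \text{for all } x \in S,
\]
\[
  \text{equivalently,}\qquad
  F(x) - F_* \;\ge\; \frac{1}{c^{1/\theta}}\,\mathrm{dist}(x, X_*)^{1/\theta},
\]
% where F_* is the optimal value, X_* the optimal set, c > 0, and theta in (0,1].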
LDR    05130nmm a2200361 4500
001    2280562
005    20210907071056.5
008    220723s2019 ||||||||||||||||| ||eng d
020    $a 9781085792813
035    $a (MiAaPQ)AAI13886145
035    $a AAI13886145
040    $a MiAaPQ $c MiAaPQ
100 1  $a Xu, Yi. $3 1265878
245 10 $a Accelerating Convex Optimization in Machine Learning by Leveraging Functional Growth Conditions.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300    $a 240 p.
500    $a Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
500    $a Advisor: Yang, Tianbao.
502    $a Thesis (Ph.D.)--The University of Iowa, 2019.
506    $a This item must not be sold to any third party vendors.
590    $a School code: 0096.
650  4 $a Computer science. $3 523869
653    $a Convex Optimization
653    $a Local Error Bound
653    $a Structured regularizers
653    $a Explicit max-structural loss
653    $a Stochastic explicit max-structural loss
653    $a Finite-sum optimization problems
690    $a 0984
710 2  $a The University of Iowa. $b Computer Science. $3 1672962
773 0  $t Dissertations Abstracts International $g 81-04B.
790    $a 0096
791    $a Ph.D.
792    $a 2019
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13886145
Items (1 record):
Inventory Number: W9432295
Location Name: Electronic resources (電子資源)
Item Class: Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0