Distributed Optimization: Algorithms and Convergence Rates.
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Distributed Optimization: Algorithms and Convergence Rates. / Jakovetic, Dusan.
Author:
Jakovetic, Dusan.
Description:
229 p.
Notes:
Source: Dissertation Abstracts International, Volume: 74-12(E), Section: B.
Contained By:
Dissertation Abstracts International 74-12B(E).
Subject:
Engineering, Electronics and Electrical.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3573468
ISBN:
9781303436543
LDR
:03280nam a2200289 4500
001
1967876
005
20141121132935.5
008
150210s2013 ||||||||||||||||| ||eng d
020
$a
9781303436543
035
$a
(MiAaPQ)AAI3573468
035
$a
AAI3573468
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Jakovetic, Dusan.
$3
2104969
245
1 0
$a
Distributed Optimization: Algorithms and Convergence Rates.
300
$a
229 p.
500
$a
Source: Dissertation Abstracts International, Volume: 74-12(E), Section: B.
500
$a
Adviser: Jose M. F. Moura.
502
$a
Thesis (Ph.D.)--Carnegie Mellon University, 2013.
520
$a
This thesis develops and analyzes distributed algorithms for convex optimization in networks, when nodes cooperatively minimize the sum of their locally known costs subject to a global variable of common interest. This setup encompasses very relevant applications in networked systems, including distributed estimation and source localization in sensor networks, and distributed learning. Generally, the existing literature offers two types of distributed algorithms to solve the above problem: 1) distributed (consensus-based) gradient methods; and 2) distributed augmented Lagrangian methods; but both types present several limitations. 1) Distributed gradient-like methods have a slow practical convergence rate; further, they are usually studied for very general, non-differentiable costs, and the possibilities for speed-ups on more structured functions are not sufficiently explored. 2) Distributed augmented Lagrangian methods generally show good performance in practice, but there is a limited understanding of their convergence rates, especially how the rates depend on the underlying network.
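For orientation, the following is a minimal sketch of the first class above, a consensus-based distributed gradient method, on a toy problem. The data (a, b), the ring topology, the mixing matrix W, and the step-size schedule are illustrative assumptions, not taken from the thesis: each round, every node averages its copy of the variable with its neighbors through a doubly stochastic weight matrix and then takes a step along its own local gradient.

```python
import numpy as np

# Illustrative toy problem (not from the thesis): 4 nodes on a ring, node i holds
# the local quadratic cost f_i(x) = 0.5 * a[i] * (x - b[i])**2, and the network
# goal is to minimize the sum of all f_i over a single common variable x.
a = np.array([1.0, 2.0, 0.5, 1.5])
b = np.array([4.0, -1.0, 2.0, 0.0])

def local_grad(i, x):
    """Gradient of node i's local cost, evaluated at node i's own copy x."""
    return a[i] * (x - b[i])

# Symmetric, doubly stochastic mixing matrix respecting the ring topology.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = np.zeros(4)                 # each node's local copy of the common variable
for k in range(2000):
    alpha = 1.0 / (k + 10)      # diminishing step size (illustrative schedule)
    # One round: average with neighbors (consensus), then take a local gradient step.
    x = W @ x - alpha * np.array([local_grad(i, x[i]) for i in range(4)])

# All copies drift toward the minimizer of the aggregate cost,
# sum(a*b)/sum(a) = 0.6 for this toy data.
print(x)
```

Even on this tiny example, many communication rounds are needed before the copies agree and settle, which is the slow practical convergence the abstract refers to; the diminishing step size is what allows exact convergence rather than convergence to a neighborhood.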
520
$a
This thesis contributes to both classes of algorithms in several ways. We propose a new class of fast distributed gradient algorithms that are Nesterov-like. We achieve this by exploiting the structure of convex, differentiable costs with Lipschitz continuous and bounded gradients. We establish their fast convergence rates in terms of the number of per-node communications, per-node gradient evaluations, and the network spectral gap. Furthermore, we show that current distributed gradient methods cannot achieve the rates of our methods under the same function classes. Our distributed Nesterov-like gradient algorithms achieve guaranteed rates for both static and random networks, including the scenario with intermittently failing links or randomized communication protocols. With respect to distributed augmented Lagrangian methods, we consider both deterministic and randomized distributed methods, subsuming known methods but also introducing novel algorithms. Assuming twice continuously differentiable costs with a bounded Hessian, we establish global linear convergence rates, in terms of the number of per-node communications, and, unlike most of the existing work, in terms of the network spectral gap. We illustrate our methods with several applications in sensor networks and distributed learning.
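In the same spirit, here is a hedged sketch of how a Nesterov-style momentum term can be grafted onto the consensus-gradient update, reusing the illustrative ring example above. The exact update rules, step-size and momentum schedules, and assumptions analyzed in the thesis may differ; this only conveys the general idea of combining neighbor averaging with extrapolation.

```python
import numpy as np

# Same illustrative ring example as above; a, b, and W are toy assumptions.
a = np.array([1.0, 2.0, 0.5, 1.5])
b = np.array([4.0, -1.0, 2.0, 0.0])
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

def grads(z):
    """Stacked local gradients; node i evaluates its gradient at its own copy z[i]."""
    return a * (z - b)

x = np.zeros(4)        # primary sequence, one copy per node
y = np.zeros(4)        # auxiliary (momentum) sequence, one copy per node
for k in range(3000):
    alpha = 0.1 / (k + 1)          # diminishing step size (illustrative)
    x_prev = x
    # Mix the auxiliary copies with neighbors, then take a local gradient step.
    x = W @ y - alpha * grads(y)
    # Nesterov-style extrapolation on each node's local copy.
    beta = k / (k + 3.0)
    y = x + beta * (x - x_prev)

# The copies should move toward the aggregate minimizer (0.6 for this toy data).
print(x)
```

The rate guarantees quoted above (in terms of per-node communications, gradient evaluations, and the network spectral gap) are properties of the algorithms developed in the thesis under its stated assumptions, not of this toy script.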
590
$a
School code: 0041.
650
4
$a
Engineering, Electronics and Electrical.
$3
626636
650
4
$a
Engineering, Computer.
$3
1669061
690
$a
0544
690
$a
0464
710
2
$a
Carnegie Mellon University.
$3
1018096
773
0
$t
Dissertation Abstracts International
$g
74-12B(E).
790
$a
0041
791
$a
Ph.D.
792
$a
2013
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3573468
Items
Inventory Number: W9262882
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0