Distributed Optimization: Algorithms and Convergence Rates.
Record type:
Bibliographic - Language material, printed : Monograph/item
Title / Author:
Distributed Optimization: Algorithms and Convergence Rates.
Author:
Jakovetic, Dusan.
Extent:
229 p.
Notes:
Source: Dissertation Abstracts International, Volume: 74-12(E), Section: B.
Contained By:
Dissertation Abstracts International, 74-12B(E).
Subject:
Engineering, Electronics and Electrical.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3573468
ISBN:
9781303436543
LDR  03280nam a2200289 4500
001  1967876
005  20141121132935.5
008  150210s2013 ||||||||||||||||| ||eng d
020  $a 9781303436543
035  $a (MiAaPQ)AAI3573468
035  $a AAI3573468
040  $a MiAaPQ $c MiAaPQ
100  1  $a Jakovetic, Dusan. $3 2104969
245  10 $a Distributed Optimization: Algorithms and Convergence Rates.
300  $a 229 p.
500  $a Source: Dissertation Abstracts International, Volume: 74-12(E), Section: B.
500  $a Adviser: Jose M. F. Moura.
502  $a Thesis (Ph.D.)--Carnegie Mellon University, 2013.
520  $a This thesis develops and analyzes distributed algorithms for convex optimization in networks, where nodes cooperatively minimize the sum of their locally known costs subject to a global variable of common interest. This setup encompasses highly relevant applications in networked systems, including distributed estimation and source localization in sensor networks, and distributed learning. The existing literature generally offers two types of distributed algorithms for this problem: 1) distributed (consensus-based) gradient methods; and 2) distributed augmented Lagrangian methods; both types, however, present several limitations. 1) Distributed gradient-like methods have slow practical convergence rates; further, they are usually studied for very general, non-differentiable costs, and the possibilities for speed-ups on more structured functions are not sufficiently explored. 2) Distributed augmented Lagrangian methods generally show good performance in practice, but there is a limited understanding of their convergence rates, especially of how the rates depend on the underlying network.
520  $a This thesis contributes to both classes of algorithms in several ways. We propose a new class of fast distributed gradient algorithms that are Nesterov-like. We achieve this by exploiting the structure of convex, differentiable costs with Lipschitz continuous and bounded gradients. We establish their fast convergence rates in terms of the number of per-node communications, per-node gradient evaluations, and the network spectral gap. Furthermore, we show that current distributed gradient methods cannot achieve the rates of our methods under the same function classes. Our distributed Nesterov-like gradient algorithms achieve guaranteed rates for both static and random networks, including the scenario with intermittently failing links or randomized communication protocols. With respect to distributed augmented Lagrangian methods, we consider both deterministic and randomized distributed methods, subsuming known methods but also introducing novel algorithms. Assuming twice continuously differentiable costs with a bounded Hessian, we establish global linear convergence rates, in terms of the number of per-node communications, and, unlike most of the existing work, in terms of the network spectral gap. We illustrate our methods with several applications in sensor networks and distributed learning. (Illustrative sketches of the two algorithm classes appear after the record below.)
590  $a School code: 0041.
650  4  $a Engineering, Electronics and Electrical. $3 626636
650  4  $a Engineering, Computer. $3 1669061
690  $a 0544
690  $a 0464
710  2  $a Carnegie Mellon University. $3 1018096
773  0  $t Dissertation Abstracts International $g 74-12B(E).
790  $a 0041
791  $a Ph.D.
792  $a 2013
793  $a English
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3573468
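
The abstracts above contrast consensus-based distributed gradient methods with distributed augmented Lagrangian methods, and describe Nesterov-like accelerated variants of the former. As a rough illustration of the first class (a minimal sketch, not the thesis's exact algorithms), the Python code below shows a plain consensus-gradient step and a Nesterov-like accelerated step for minimizing the sum of per-node quadratic costs over a 4-node ring; the mixing matrix W, step size alpha, momentum beta, the quadratic costs, and the iteration count are all illustrative assumptions.

# Illustrative sketch only -- not the thesis's algorithms. Each of four nodes
# holds a private quadratic cost f_i(x) = 0.5*(x - b_i)^2, and the network-wide
# optimum of sum_i f_i is mean(b). W is a doubly stochastic mixing matrix.
import numpy as np

def consensus_gradient_step(x, W, grads, alpha):
    # Plain distributed gradient step: average with neighbors, then descend locally.
    g = np.array([grads[i](x[i]) for i in range(len(x))])
    return W @ x - alpha * g

def nesterov_like_step(x, x_prev, W, grads, alpha, beta):
    # Nesterov-like variant: mix and take the gradient step at an extrapolated point.
    y = x + beta * (x - x_prev)
    g = np.array([grads[i](y[i]) for i in range(len(y))])
    return W @ y - alpha * g, x

if __name__ == "__main__":
    b = np.array([1.0, 2.0, 3.0, 4.0])             # local data held by each node
    grads = [lambda z, bi=bi: z - bi for bi in b]  # gradient of 0.5*(z - bi)^2
    W = np.array([[0.50, 0.25, 0.00, 0.25],        # doubly stochastic ring weights
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    x = x_prev = np.zeros(4)
    for _ in range(300):
        x, x_prev = nesterov_like_step(x, x_prev, W, grads, alpha=0.1, beta=0.5)
    # With a constant step size the iterates reach a small neighborhood of the
    # network-wide optimum mean(b) = 2.5 rather than the exact optimum.
    print(x)

The extrapolation in nesterov_like_step is the standard Nesterov momentum pattern applied on top of the consensus mixing step; the slow practical convergence of the unaccelerated variant is the limitation the first abstract attributes to distributed gradient-like methods.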
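
For the second class, the following is a minimal sketch of a generic distributed augmented Lagrangian (method-of-multipliers style) scheme, assuming consensus is enforced through the graph Laplacian L and that the inner minimization is done inexactly with a few neighbor-only gradient steps. The penalty rho, step size alpha, iteration counts, and the quadratic costs are assumptions for illustration; the thesis's deterministic and randomized variants are not reproduced here.

# Illustrative sketch only -- a generic distributed augmented Lagrangian scheme,
# not the thesis's methods. Nodes minimize sum_i f_i(x_i) subject to the
# consensus constraint L x = 0, where L is the graph Laplacian of a 4-node ring.
import numpy as np

def distributed_aug_lagrangian(grads, L, rho, alpha, outer_iters, inner_iters):
    n = L.shape[0]
    x = np.zeros(n)    # one primal variable per node
    lam = np.zeros(n)  # dual variables for the consensus constraint
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Inexact primal minimization of the augmented Lagrangian
            #   sum_i f_i(x_i) + lam^T L x + (rho/2) x^T L x
            # by gradient steps; L @ x and L @ lam use only neighbors' values.
            g = np.array([grads[i](x[i]) for i in range(n)]) + L @ lam + rho * (L @ x)
            x = x - alpha * g
        lam = lam + rho * (L @ x)  # dual ascent on the consensus constraint
    return x

if __name__ == "__main__":
    b = np.array([1.0, 2.0, 3.0, 4.0])
    grads = [lambda z, bi=bi: z - bi for bi in b]  # f_i(z) = 0.5*(z - bi)^2
    L = np.array([[ 2., -1.,  0., -1.],            # ring Laplacian: degree 2,
                  [-1.,  2., -1.,  0.],            # neighbors at +/- 1 (mod 4)
                  [ 0., -1.,  2., -1.],
                  [-1.,  0., -1.,  2.]])
    x = distributed_aug_lagrangian(grads, L, rho=0.2, alpha=0.1,
                                   outer_iters=100, inner_iters=50)
    print(x)  # nodes approach consensus on mean(b) = 2.5

Alternating a few neighbor-only primal gradient sweeps with a dual ascent step is the basic pattern behind distributed augmented Lagrangian methods; how fast such schemes converge, and how the rate depends on the network spectral gap, is the question the abstracts say the thesis addresses.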
Holdings:
Barcode: W9262882
Location: Electronic Resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0