Improving Training Performance in Federated Learning.
Record type: Bibliographic record - electronic resource : Monograph/item
Title/Author: Improving Training Performance in Federated Learning. / Ying, Chen.
Author: Ying, Chen.
Description: 1 online resource (171 pages)
Notes: Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
Contained By: Dissertations Abstracts International, 84-12B.
Subject: Computer engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30250238 (click for full text, PQDT)
ISBN: 9798379763121
Ying, Chen.
Improving Training Performance in Federated Learning. - 1 online resource (171 pages)
Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
Thesis (Ph.D.)--University of Toronto (Canada), 2023.
Includes bibliographical references.
To utilize enormous data generated on numerous edge devices such as mobile phones to train a high-performance machine learning model while protecting users' data privacy, federated learning has been proposed and has become one of the most essential paradigms in distributed machine learning. Under the coordination of a central server, users collaboratively train a shared global model without sharing their data. They conduct local training with their data and only send their model updates to the central server for aggregating an improved global model. To improve training performance in federated learning, a myriad of new mechanisms have been proposed. However, our extensive evaluation indicated that most state-of-the-art mechanisms failed to perform as well as they claimed. We thus explored potential directions that can consistently decrease the elapsed training time for the global model to converge to a target accuracy, and found that changing the conventional synchronous aggregation paradigm, where the server does not conduct aggregation until it receives updates from all the selected users, to the asynchronous one, where the server aggregates without waiting for slow users, significantly improved the training performance. However, asynchronous federated learning has not been as widely studied as synchronous federated learning, and its existing mechanisms have not reached the best possible performance. We thus propose Blade, a new staleness-aware framework that seeks to improve the performance of asynchronous federated learning by designing new mechanisms in all important design aspects of the training process. In an extensive array of performance evaluations, Blade consistently showed its substantial performance superiority over its state-of-the-art competitors.

Under some scenarios of federated learning, users are institutions such as hospitals and banks, which implicitly require centrally storing data of their clients to conduct local training. To protect clients' data privacy, these scenarios motivated us to study three-layer federated learning, where institutions serve as edge servers on the middle layer, between the central server and clients. With empirical and theoretical studies, we observe that pruning and quantization could largely reduce communication overhead with a negligible reduction, sometimes even a slight increase, in training performance. Also, the number of clients' local training epochs affects the training performance. We thus propose two new mechanisms, FedSaw which prunes and quantizes updates, and Tempo which adaptively tunes the number of each client's local training epochs, to improve training performance in three-layer federated learning.

Inspired by the advantages of using asynchronous aggregation in two-layer federated learning, we investigate asynchronous three-layer federated learning which also demonstrates its superiority over synchronous three-layer federated learning in our empirical study. We thus modify Blade to adapt it to the three-layer setting. Experimental results show that Blade can also significantly improve training performance in three-layer federated learning.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
ISBN: 9798379763121
Subjects--Topical Terms: Computer engineering.
Subjects--Index Terms: Federated learning
Index Terms--Genre/Form: Electronic books.
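The abstract above contrasts two aggregation paradigms: synchronous aggregation, where the central server waits for updates from all selected users before aggregating, and asynchronous aggregation, where it merges each update as soon as it arrives, discounted by staleness (the setting the dissertation's Blade framework targets). The record does not reproduce the dissertation's algorithms, so the following is only a minimal sketch of the two paradigms; the toy model, the local_update rule, and the staleness_weight discount are illustrative assumptions, not Blade's actual mechanisms.

import numpy as np

def local_update(global_model, client_data, lr=0.1, epochs=1):
    # Placeholder client-side training: nudge the model toward the client's data mean.
    model = global_model.copy()
    for _ in range(epochs):
        model += lr * (client_data.mean(axis=0) - model)
    return model

def synchronous_round(global_model, client_datasets):
    # Synchronous paradigm: wait for every selected client, then average their models.
    updates = [local_update(global_model, data) for data in client_datasets]
    return np.mean(updates, axis=0)

def staleness_weight(staleness, alpha=0.6):
    # Illustrative discount only: the older the update, the less it contributes.
    return alpha / (1.0 + staleness)

def asynchronous_step(global_model, client_model, staleness):
    # Asynchronous paradigm: merge one (possibly stale) update as soon as it arrives.
    w = staleness_weight(staleness)
    return (1.0 - w) * global_model + w * client_model

# Toy run with a 3-parameter "model" and three clients holding different data.
rng = np.random.default_rng(0)
global_model = np.zeros(3)
client_datasets = [rng.normal(loc=i, size=(20, 3)) for i in range(3)]

global_model = synchronous_round(global_model, client_datasets)
stale_model = local_update(global_model, client_datasets[2])
global_model = asynchronous_step(global_model, stale_model, staleness=4)
print(global_model)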
LDR   04521nmm a2200409K 4500
001   2362370
005   20231027104017.5
006   m o d
007   cr mn ---uuuuu
008   241011s2023 xx obm 000 0 eng d
020   $a 9798379763121
035   $a (MiAaPQ)AAI30250238
035   $a AAI30250238
040   $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Ying, Chen. $3 3703088
245 10 $a Improving Training Performance in Federated Learning.
264 0 $c 2023
300   $a 1 online resource (171 pages)
336   $a text $b txt $2 rdacontent
337   $a computer $b c $2 rdamedia
338   $a online resource $b cr $2 rdacarrier
500   $a Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
500   $a Advisor: Li, Baochun.
502   $a Thesis (Ph.D.)--University of Toronto (Canada), 2023.
504   $a Includes bibliographical references
520   $a To utilize enormous data generated on numerous edge devices such as mobile phones to train a high-performance machine learning model while protecting users' data privacy, federated learning has been proposed and has become one of the most essential paradigms in distributed machine learning. Under the coordination of a central server, users collaboratively train a shared global model without sharing their data. They conduct local training with their data and only send their model updates to the central server for aggregating an improved global model. To improve training performance in federated learning, a myriad of new mechanisms have been proposed. However, our extensive evaluation indicated that most state-of-the-art mechanisms failed to perform as well as they claimed. We thus explored potential directions that can consistently decrease the elapsed training time for the global model to converge to a target accuracy, and found that changing the conventional synchronous aggregation paradigm, where the server does not conduct aggregation until it receives updates from all the selected users, to the asynchronous one, where the server aggregates without waiting for slow users, significantly improved the training performance. However, asynchronous federated learning has not been as widely studied as synchronous federated learning, and its existing mechanisms have not reached the best possible performance. We thus propose Blade, a new staleness-aware framework that seeks to improve the performance of asynchronous federated learning by designing new mechanisms in all important design aspects of the training process. In an extensive array of performance evaluations, Blade consistently showed its substantial performance superiority over its state-of-the-art competitors. Under some scenarios of federated learning, users are institutions such as hospitals and banks, which implicitly require centrally storing data of their clients to conduct local training. To protect clients' data privacy, these scenarios motivated us to study three-layer federated learning, where institutions serve as edge servers on the middle layer, between the central server and clients. With empirical and theoretical studies, we observe that pruning and quantization could largely reduce communication overhead with a negligible reduction, sometimes even a slight increase, in training performance. Also, the number of clients' local training epochs affects the training performance. We thus propose two new mechanisms, FedSaw which prunes and quantizes updates, and Tempo which adaptively tunes the number of each client's local training epochs, to improve training performance in three-layer federated learning. Inspired by the advantages of using asynchronous aggregation in two-layer federated learning, we investigate asynchronous three-layer federated learning which also demonstrates its superiority over synchronous three-layer federated learning in our empirical study. We thus modify Blade to adapt it to the three-layer setting. Experimental results show that Blade can also significantly improve training performance in three-layer federated learning.
533   $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538   $a Mode of access: World Wide Web
650 4 $a Computer engineering. $3 621879
650 4 $a Information technology. $3 532993
650 4 $a Electrical engineering. $3 649834
653   $a Federated learning
653   $a Data privacy
653   $a FEDSAW
653   $a TEMPO
653   $a Training performance
655 7 $a Electronic books. $2 lcsh $3 542853
690   $a 0464
690   $a 0544
690   $a 0489
690   $a 0800
710 2 $a ProQuest Information and Learning Co. $3 783688
710 2 $a University of Toronto (Canada). $b Electrical and Computer Engineering. $3 2096349
773 0 $t Dissertations Abstracts International $g 84-12B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30250238 $z click for full text (PQDT)
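The 653 index terms above name FedSaw, which the summary describes as pruning and quantizing updates before they are sent, and Tempo, which adaptively tunes each client's number of local training epochs. Their exact rules are not given in this record; the sketch below shows only generic magnitude pruning and uniform quantization of an update vector, with the keep_ratio and num_bits parameters chosen purely for illustration, not taken from the dissertation.

import numpy as np

def prune_update(update, keep_ratio=0.05):
    # Magnitude pruning: zero out all but the largest-magnitude fraction of entries.
    k = max(1, int(keep_ratio * update.size))
    threshold = np.sort(np.abs(update))[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)

def quantize_update(update, num_bits=4):
    # Uniform quantization to 2**num_bits - 1 levels; returns the dequantized values.
    lo, hi = float(update.min()), float(update.max())
    if hi == lo:
        return update.copy()
    scale = (hi - lo) / (2 ** num_bits - 1)
    return np.round((update - lo) / scale) * scale + lo

# Compress a toy 1,000-parameter update and measure the distortion introduced.
rng = np.random.default_rng(1)
update = rng.normal(size=1000)
compressed = quantize_update(prune_update(update))
print("nonzero entries kept:", int(np.count_nonzero(prune_update(update))))
print("relative error:", np.linalg.norm(update - compressed) / np.linalg.norm(update))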
Holdings
Barcode: W9484726
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0