Mathematical Optimization Algorithms for Model Compression and Adversarial Learning in Deep Neural Networks.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Mathematical Optimization Algorithms for Model Compression and Adversarial Learning in Deep Neural Networks. / Zhang, Tianyun.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Pagination: 128 p.
Note: Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
Contained by: Dissertations Abstracts International, 83-02B.
Subject: Computer engineering.
Index terms: Adversarial learning; Deep neural networks; Mathematical optimization; Model compression
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28647368
ISBN: 9798535511436
Thesis note: Thesis (Ph.D.)--Syracuse University, 2021.
Access restriction: This item must not be sold to any third party vendors.
Abstract: Large-scale deep neural networks (DNNs) have achieved breakthroughs in a variety of tasks, such as image recognition, speech recognition, and self-driving cars. However, their large model size and computational requirements place a significant burden on state-of-the-art computing systems. Weight pruning is an effective approach to reducing the model size and computational requirements of DNNs, but prior works in this area are mainly heuristic methods; as a result, the accuracy of a DNN cannot be maintained at a high weight pruning ratio. To mitigate this limitation, we propose a systematic weight pruning framework for DNNs based on mathematical optimization. We first formulate weight pruning for DNNs as a non-convex optimization problem and then systematically solve it using the alternating direction method of multipliers (ADMM). Compared with prior works, our framework achieves a higher weight pruning ratio without accuracy loss and greater acceleration of DNN inference on CPU and GPU platforms. Beyond model size, DNNs are also vulnerable to adversarial attacks: a small, imperceptible perturbation of the input can completely mislead a DNN. Research on the robustness of DNNs generally follows two directions. The first is to enhance the robustness of DNNs, which increases the difficulty of fooling them with adversarial attacks. The second is to design adversarial attack methods to test the robustness of DNNs. These two directions reciprocally benefit each other toward hardening DNNs. In our work, we propose to generate adversarial attacks with low distortion via convex optimization, achieving a 100% attack success rate with lower distortion than prior works. We also propose a unified min-max optimization framework for adversarial attack and defense on DNNs over multiple domains. Our method outperforms prior works, which use average-based strategies to solve these problems over multiple domains.
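To make the two optimization ideas in the abstract concrete, here is a minimal sketch of the standard ADMM weight-pruning formulation from the literature this work builds on; the dissertation's exact constraint sets and hyperparameters are not given here, so the layer-wise cardinality bounds $\ell_i$ and penalty parameter $\rho$ are illustrative assumptions. Weight pruning is posed as a constrained training problem,

\[
\min_{\{W_i\}} \; f\big(\{W_i\}_{i=1}^{N}\big) \quad \text{subject to} \quad \mathrm{card}(W_i) \le \ell_i, \;\; i = 1, \dots, N,
\]

which is non-convex because of the cardinality constraints. ADMM splits the problem by introducing auxiliary variables $Z_i$ and the indicator function $g$ of the constraint set ($g(Z) = 0$ if every $\mathrm{card}(Z_i) \le \ell_i$, and $+\infty$ otherwise), then alternates:

\begin{align*}
W^{k+1} &= \arg\min_{W} \; f(W) + \tfrac{\rho}{2}\,\|W - Z^{k} + U^{k}\|_F^2 &&\text{(differentiable; solvable by SGD)}\\
Z^{k+1} &= \Pi_{\mathrm{card}}\big(W^{k+1} + U^{k}\big) &&\text{(keep the $\ell_i$ largest-magnitude weights per layer)}\\
U^{k+1} &= U^{k} + W^{k+1} - Z^{k+1} &&\text{(dual-variable update)}
\end{align*}

For the multi-domain attack/defense part, the contrast with average-based strategies can be sketched as follows (again an illustration: $F_i$ denotes the attack or defense loss on domain $i$, $\Delta_K$ the probability simplex, and the quadratic regularizer on $w$ is an assumption, not necessarily the dissertation's exact formulation):

\[
\underbrace{\min_{\theta} \; \frac{1}{K}\sum_{i=1}^{K} F_i(\theta)}_{\text{average-based}}
\qquad \text{vs.} \qquad
\underbrace{\min_{\theta} \; \max_{w \in \Delta_K} \; \sum_{i=1}^{K} w_i\,F_i(\theta) \;-\; \frac{\gamma}{2}\Big\|w - \tfrac{1}{K}\mathbf{1}\Big\|_2^2}_{\text{min-max}}
\]

The inner maximization automatically up-weights the hardest domains, so the learned attack (or defense) parameters $\theta$ are driven by worst-case rather than average-case performance.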
MARC record:
LDR    03217nmm a2200385 4500
001    2349588
005    20230509091120.5
006    m o d
007    cr#unu||||||||
008    241004s2021 ||||||||||||||||| ||eng d
020    $a 9798535511436
035    $a (MiAaPQ)AAI28647368
035    $a AAI28647368
040    $a MiAaPQ $c MiAaPQ
100 1  $a Zhang, Tianyun. $3 3688998
245 10 $a Mathematical Optimization Algorithms for Model Compression and Adversarial Learning in Deep Neural Networks.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 128 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
500    $a Advisor: Fardad, Makan.
502    $a Thesis (Ph.D.)--Syracuse University, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a Large-scale deep neural networks (DNNs) have achieved breakthroughs in a variety of tasks, such as image recognition, speech recognition, and self-driving cars. However, their large model size and computational requirements place a significant burden on state-of-the-art computing systems. Weight pruning is an effective approach to reducing the model size and computational requirements of DNNs, but prior works in this area are mainly heuristic methods; as a result, the accuracy of a DNN cannot be maintained at a high weight pruning ratio. To mitigate this limitation, we propose a systematic weight pruning framework for DNNs based on mathematical optimization. We first formulate weight pruning for DNNs as a non-convex optimization problem and then systematically solve it using the alternating direction method of multipliers (ADMM). Compared with prior works, our framework achieves a higher weight pruning ratio without accuracy loss and greater acceleration of DNN inference on CPU and GPU platforms. Beyond model size, DNNs are also vulnerable to adversarial attacks: a small, imperceptible perturbation of the input can completely mislead a DNN. Research on the robustness of DNNs generally follows two directions. The first is to enhance the robustness of DNNs, which increases the difficulty of fooling them with adversarial attacks. The second is to design adversarial attack methods to test the robustness of DNNs. These two directions reciprocally benefit each other toward hardening DNNs. In our work, we propose to generate adversarial attacks with low distortion via convex optimization, achieving a 100% attack success rate with lower distortion than prior works. We also propose a unified min-max optimization framework for adversarial attack and defense on DNNs over multiple domains. Our method outperforms prior works, which use average-based strategies to solve these problems over multiple domains.
590    $a School code: 0659.
650  4 $a Computer engineering. $3 621879
650  4 $a Electrical engineering. $3 649834
650  4 $a Computer science. $3 523869
650  4 $a Sparsity. $3 3680690
650  4 $a Accuracy. $3 3559958
650  4 $a Datasets. $3 3541416
650  4 $a Success. $3 518195
650  4 $a Experiments. $3 525909
650  4 $a Neural networks. $3 677449
650  4 $a Dissertations & theses. $3 3560115
650  4 $a Approximation. $3 3560410
650  4 $a Convex analysis. $3 3681761
650  4 $a Regularization methods. $3 3688999
650  4 $a Linear programming. $3 560448
650  4 $a Heuristic. $3 568476
650  4 $a Optimization algorithms. $3 3683616
653    $a Adversarial learning
653    $a Deep neural networks
653    $a Mathematical optimization
653    $a Model compression
690    $a 0464
690    $a 0544
690    $a 0984
710 2  $a Syracuse University. $b Electrical Engineering and Computer Science. $3 3169988
773 0  $t Dissertations Abstracts International $g 83-02B.
790    $a 0659
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28647368
Holdings:
Barcode: W9472026
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0