Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective.
Author:
Hong, Sanghyun.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021.
Description:
153 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-04, Section: B.
Contained by:
Dissertations Abstracts International, 83-04B.
Subject:
Computer science.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28712745
ISBN:
9798460476473
Hong, Sanghyun.
Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective.
- Ann Arbor : ProQuest Dissertations & Theses, 2021 - 153 p.
Source: Dissertations Abstracts International, Volume: 83-04, Section: B.
Thesis (Ph.D.)--University of Maryland, College Park, 2021.
This item must not be sold to any third party vendors.
As deep learning (DL) is becoming a key component in many business and safety-critical systems, such as self-driving cars or AI-assisted robotic surgery, adversaries have started placing these systems on their radar. To understand the potential threats, recent work has studied the worst-case behaviors of deep neural networks (DNNs), such as mispredictions caused by adversarial examples or models altered by data-poisoning attacks. However, most prior work narrowly treats DNNs as an isolated mathematical concept, and this perspective overlooks the holistic picture, leaving out the security threats that involve vulnerable interactions between DNNs and hardware or system-level components. In this dissertation, across three projects, I study how DL systems, owing to the computational properties of DNNs, become particularly vulnerable to existing, well-studied attacks. First, I study how over-parameterization hurts a system's resilience to fault-injection attacks. With even a single, carefully chosen bit-flip, an attacker can inflict an accuracy drop of up to 100%, and half of a DNN's parameters have at least one bit whose flip degrades accuracy by more than 10%. An adversary who wields Rowhammer, a fault attack that flips random or targeted bits in physical memory (DRAM), can exploit this graceless degradation in practice. Second, I study how computational regularities compromise the confidentiality of a system. Leveraging the information leaked while a DNN processes a single sample, an adversary can steal the DNN's often proprietary architecture. An attacker armed with Flush+Reload, a remote side-channel attack, can accurately perform this reconstruction against a DNN deployed in the cloud. Third, I show how input-adaptive DNNs, e.g., multi-exit networks, fail to deliver their promised computational efficiency in an adversarial setting. By adding imperceptible input perturbations, an attacker can significantly increase the computation a multi-exit network requires to produce a prediction on an input. This vulnerability can also be exploited in resource-constrained settings, such as IoT scenarios, where input-adaptive networks are gaining traction. Finally, building on the lessons learned from these projects, I conclude the dissertation by outlining future research directions for designing secure and reliable DL systems.
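The first project's single-bit-flip result is easiest to see at the level of the IEEE-754 encoding. The sketch below is a minimal illustration of the underlying arithmetic (my own, not code from the dissertation): flipping the most significant exponent bit of a typical small float32 weight inflates its magnitude by roughly 38 orders of magnitude, the kind of graceless degradation a Rowhammer-induced flip can trigger.

import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = least significant, 31 = sign) of x's IEEE-754 float32 encoding."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", x))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.5                 # a typical small DNN weight
print(flip_bit(w, 30))  # most significant exponent bit flipped: 0.5 -> ~1.7e38

A parameter blown up to ~1e38 saturates every activation it feeds, which is why a single well-placed flip can dominate the network's output.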
ISBN: 9798460476473
Subjects--Topical Terms:
Computer science.
Subjects--Index Terms:
Cybersecurity
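For the second project, the key observation is that a DNN's computational regularities leave an observable trace. The toy sketch below shows the reconstruction step in miniature; the candidate set and trace alphabet are entirely hypothetical, not the dissertation's attack. If Flush+Reload-style monitoring reveals which shared-library routines run, in order, during one inference, matching that sequence against candidate architectures identifies the victim's.

# Hypothetical candidates: each architecture implies a distinct sequence of
# library routines, which a cache side channel can observe being executed.
CANDIDATES = {
    "mlp-2":   ["gemm", "relu", "gemm", "softmax"],
    "mlp-3":   ["gemm", "relu", "gemm", "relu", "gemm", "softmax"],
    "convnet": ["conv", "relu", "pool", "gemm", "softmax"],
}

def identify(observed):
    """Return the candidate architectures consistent with the observed call trace."""
    return [name for name, trace in CANDIDATES.items() if trace == observed]

print(identify(["conv", "relu", "pool", "gemm", "softmax"]))  # ['convnet']

The real attack must cope with noisy timing measurements rather than a clean call trace, but the principle is the same: regular computation maps observations back to architecture.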
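The third project's slowdown attack targets the early-exit control flow itself. Below is a toy Python sketch of a multi-exit inference loop (the blocks, heads, and 0.9 threshold are my own illustrative assumptions): any perturbation that keeps every intermediate exit's confidence below the threshold forces the input through all blocks, turning average-case efficiency into worst-case cost.

def multi_exit_predict(x, blocks, exit_heads, threshold=0.9):
    """Run a multi-exit network: stop at the first sufficiently confident exit."""
    h = x
    cost = 0                            # number of blocks actually executed
    probs = None
    for block, head in zip(blocks, exit_heads):
        h = block(h)                    # one more block of computation
        cost += 1
        probs = head(h)                 # this exit's class probabilities
        if max(probs) >= threshold:     # confident enough: exit early
            return probs, cost
    return probs, cost                  # no early exit: paid the full cost

# Toy usage: identity blocks and heads that are never confident, i.e. the
# adversarial case, so the input pays the full cost of all 3 blocks.
blocks = [lambda h: h] * 3
heads = [lambda h: [0.5, 0.5]] * 3
print(multi_exit_predict([1.0], blocks, heads))   # ([0.5, 0.5], 3)

The attacker's objective reduces to perturbing x so that the confidence stays below the threshold at every exit, which is especially damaging on the resource-constrained IoT deployments the abstract mentions.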
LDR
:03599nmm a2200397 4500
001
2343884
005
20220513114342.5
008
241004s2021 ||||||||||||||||| ||eng d
020
$a
9798460476473
035
$a
(MiAaPQ)AAI28712745
035
$a
AAI28712745
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Hong, Sanghyun.
$3
3175321
245
1 0
$a
Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2021
300
$a
153 p.
500
$a
Source: Dissertations Abstracts International, Volume: 83-04, Section: B.
500
$a
Advisor: Dumitras, Tudor.
502
$a
Thesis (Ph.D.)--University of Maryland, College Park, 2021.
506
$a
This item must not be sold to any third party vendors.
520
$a
As deep learning (DL) is becoming a key component in many business and safety-critical systems, such as self-driving cars or AI-assisted robotic surgery, adversaries have started placing these systems on their radar. To understand the potential threats, recent work has studied the worst-case behaviors of deep neural networks (DNNs), such as mispredictions caused by adversarial examples or models altered by data-poisoning attacks. However, most prior work narrowly treats DNNs as an isolated mathematical concept, and this perspective overlooks the holistic picture, leaving out the security threats that involve vulnerable interactions between DNNs and hardware or system-level components. In this dissertation, across three projects, I study how DL systems, owing to the computational properties of DNNs, become particularly vulnerable to existing, well-studied attacks. First, I study how over-parameterization hurts a system's resilience to fault-injection attacks. With even a single, carefully chosen bit-flip, an attacker can inflict an accuracy drop of up to 100%, and half of a DNN's parameters have at least one bit whose flip degrades accuracy by more than 10%. An adversary who wields Rowhammer, a fault attack that flips random or targeted bits in physical memory (DRAM), can exploit this graceless degradation in practice. Second, I study how computational regularities compromise the confidentiality of a system. Leveraging the information leaked while a DNN processes a single sample, an adversary can steal the DNN's often proprietary architecture. An attacker armed with Flush+Reload, a remote side-channel attack, can accurately perform this reconstruction against a DNN deployed in the cloud. Third, I show how input-adaptive DNNs, e.g., multi-exit networks, fail to deliver their promised computational efficiency in an adversarial setting. By adding imperceptible input perturbations, an attacker can significantly increase the computation a multi-exit network requires to produce a prediction on an input. This vulnerability can also be exploited in resource-constrained settings, such as IoT scenarios, where input-adaptive networks are gaining traction. Finally, building on the lessons learned from these projects, I conclude the dissertation by outlining future research directions for designing secure and reliable DL systems.
590
$a
School code: 0117.
650
4
$a
Computer science.
$3
523869
650
4
$a
Systems science.
$3
3168411
650
4
$a
Artificial intelligence.
$3
516317
650
4
$a
Information science.
$3
554358
653
$a
Cybersecurity
653
$a
Deep neural networks
653
$a
Hardware attacks
653
$a
System-level components
653
$a
Over-parameterization
653
$a
Input-adaptive DNNs
690
$a
0984
690
$a
0723
690
$a
0800
690
$a
0790
710
2
$a
University of Maryland, College Park.
$b
Computer Science.
$3
1018451
773
0
$t
Dissertations Abstracts International
$g
83-04B.
790
$a
0117
791
$a
Ph.D.
792
$a
2021
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28712745
Holdings
Barcode: W9466322
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: General use (Normal)
Loan status: On shelf
Holds: 0