Measuring and Enhancing the Security of Machine Learning.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Measuring and Enhancing the Security of Machine Learning.
Author:
Tramer, Florian Simon.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Description:
196 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-05, Section: B.
Contained By:
Dissertations Abstracts International, 83-05B.
Subject:
Families & family life.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28828009
ISBN:
9798494462206
Tramer, Florian Simon. Measuring and Enhancing the Security of Machine Learning. - Ann Arbor : ProQuest Dissertations & Theses, 2021. - 196 p.
Source: Dissertations Abstracts International, Volume: 83-05, Section: B.
Thesis (Ph.D.)--Stanford University, 2021.
This item must not be sold to any third party vendors.
The surprising failure modes of machine learning systems threaten their viability in security-critical settings. For example, machine learning models are easily fooled by adversarially chosen inputs, and have the propensity to leak the sensitive data of their users. In this dissertation, we introduce new techniques to proactively measure and enhance the security of machine learning systems. We begin by formally analyzing the threat posed by adversarial examples to the integrity of machine learning models. We argue that the security implications of these attacks have been overstated for many applications, yet demonstrate one application where these attacks are indeed realistic: evading online content moderation systems. We then show that existing defense techniques operate in fundamentally limited threat models, and therefore cannot hope to prevent realistic attacks. We further introduce new techniques for protecting the privacy of users of machine learning systems, both at training and at deployment time. For training, we show how feature engineering techniques can substantially improve differentially private learning algorithms. For deployment, we design a system that combines hardware protections and cryptography to privately outsource machine learning workloads to the cloud. In both cases, we protect a user's sensitive data from other parties while achieving significantly better utility than in prior work. We hope that our results will pave the way towards a more rigorous assessment of machine learning models' vulnerability against evasion attacks, and motivate the deployment of efficient privacy-preserving learning systems.
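The abstract's claim that models are "easily fooled by adversarially chosen inputs" refers to attacks such as the fast gradient sign method (FGSM). The sketch below is a minimal, self-contained illustration of that idea on a toy logistic-regression classifier; the weights, input, and epsilon are invented for this example and are not taken from the dissertation.

```python
import numpy as np

# Minimal sketch: an FGSM-style evasion attack on a toy logistic-regression
# "content filter". All weights and data here are made up for illustration.

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # model weights (hypothetical, fixed)
b = 0.1                          # model bias
x = rng.normal(size=64)          # a benign input the model classifies
y = 1.0                          # true label: 1 = "disallowed content"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that the input is disallowed.
    return sigmoid(w @ x + b)

# For logistic loss L = -[y log p + (1-y) log(1-p)], the gradient of the
# loss with respect to the input x is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM: take one signed gradient step of size epsilon to increase the loss,
# i.e., push the model toward the wrong answer on this input.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")
```

The sketch assumes white-box access to the model's weights; much of the dissertation's threat analysis concerns whether real attackers, e.g., against deployed content moderation systems, have anything close to this access.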
ISBN: 9798494462206
Subjects--Topical Terms: Families & family life.
MARC record:
LDR    02677nmm a2200313 4500
001    2349895
005    20221010063650.5
008    241004s2021 ||||||||||||||||| ||eng d
020    $a 9798494462206
035    $a (MiAaPQ)AAI28828009
035    $a (MiAaPQ)STANFORDyz747qq9787
035    $a AAI28828009
040    $a MiAaPQ $c MiAaPQ
100 1  $a Tramer, Florian Simon. $3 3689321
245 10 $a Measuring and Enhancing the Security of Machine Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 196 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-05, Section: B.
500    $a Advisor: Boneh, Dan; Liang, Percy; Valiant, Gregory.
502    $a Thesis (Ph.D.)--Stanford University, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a The surprising failure modes of machine learning systems threaten their viability in security-critical settings. For example, machine learning models are easily fooled by adversarially chosen inputs, and have the propensity to leak the sensitive data of their users. In this dissertation, we introduce new techniques to proactively measure and enhance the security of machine learning systems. We begin by formally analyzing the threat posed by adversarial examples to the integrity of machine learning models. We argue that the security implications of these attacks have been overstated for many applications, yet demonstrate one application where these attacks are indeed realistic: evading online content moderation systems. We then show that existing defense techniques operate in fundamentally limited threat models, and therefore cannot hope to prevent realistic attacks. We further introduce new techniques for protecting the privacy of users of machine learning systems, both at training and at deployment time. For training, we show how feature engineering techniques can substantially improve differentially private learning algorithms. For deployment, we design a system that combines hardware protections and cryptography to privately outsource machine learning workloads to the cloud. In both cases, we protect a user's sensitive data from other parties while achieving significantly better utility than in prior work. We hope that our results will pave the way towards a more rigorous assessment of machine learning models' vulnerability against evasion attacks, and motivate the deployment of efficient privacy-preserving learning systems.
590    $a School code: 0212.
650  4 $a Families & family life. $3 3422406
650  4 $a Privacy. $3 528582
650  4 $a Neural networks. $3 677449
650  4 $a Artificial intelligence. $3 516317
650  4 $a Individual & family studies. $3 2122770
690    $a 0800
690    $a 0628
710 2  $a Stanford University. $3 754827
773 0  $t Dissertations Abstracts International $g 83-05B.
790    $a 0212
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28828009
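For readers unfamiliar with the layout above, each MARC line carries a three-digit tag, optional indicators, and $-prefixed subfields. The following is a minimal sketch, written for this illustration only, of how one such display line can be split into its parts; it is not part of any catalog software.

```python
# Minimal sketch: split a textual MARC display line like the ones above
# ("650  4 $a Privacy. $3 528582") into tag, indicators, and subfields.
# The parsing rules here are assumptions about this display format only.

def parse_marc_line(line: str):
    head, _, rest = line.partition("$")           # split off the subfield part
    tag = head[:3]                                # e.g. "650"
    indicators = head[3:].strip()                 # e.g. "4" (may be empty)
    subfields = []
    for chunk in ("$" + rest).split("$")[1:]:     # re-split on "$" delimiters
        code, value = chunk[0], chunk[1:].strip() # "$a Privacy." -> ("a", "Privacy.")
        subfields.append((code, value))
    return tag, indicators, subfields

print(parse_marc_line("650  4 $a Privacy. $3 528582"))
# ('650', '4', [('a', 'Privacy.'), ('3', '528582')])
```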
Holdings (1 record):
Barcode: W9472333
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0