Adversarial Machine Learning in Computer Vision: Attacks and Defenses on Machine Learning Models.

Record type: Bibliographic - electronic resource : Monograph/item
Author: Qin, Yi.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Description: 142 p.
Note: Source: Dissertations Abstracts International, Volume: 83-01, Section: B.
Contained by: Dissertations Abstracts International, 83-01B.
Subject: Computer science.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28413408
ISBN: 9798516977770
Dissertation note: Thesis (Ph.D.)--Colorado School of Mines, 2021.
Restrictions: This item must not be sold to any third party vendors.
Abstract: Machine learning models, including neural networks, have gained great popularity in recent years. Deep neural networks can learn directly from raw data and can outperform traditional machine learning models. As a result, they have been increasingly used in a variety of application domains such as image classification, natural language processing, and malware detection. However, deep neural networks have been shown to be vulnerable to adversarial examples at test time. Adversarial examples are malicious inputs generated from legitimate inputs by adding small perturbations in order to fool machine learning models into misclassifying. This thesis mainly aims to answer two research questions: How are machine learning models vulnerable to adversarial examples? How can we better defend against adversarial examples? We first improve the effectiveness of adversarial training by designing an experimental framework to study Method-Based Ensemble Adversarial Training (MBEAT) and Round Gap Of Adversarial Training (RGOAT). We then demonstrate the strong distinguishability of adversarial examples and design a simple yet effective approach called defensive distinction, formulated as multi-label classification, to protect against adversarial examples. We also propose fuzzing-based hard-label black-box attacks against machine learning models: we design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and a LocalFuzzer to explore the space around a given input to identify potential adversarial examples. Lastly, we propose a key-based input transformation defense against adversarial examples.
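The abstract defines an adversarial example as a legitimate input plus a small perturbation crafted to cause misclassification. As a hedged illustration only, here is a minimal PyTorch sketch of that idea using the classic fast gradient sign method, which is not necessarily any method proposed in this thesis; model, x, y, and epsilon are assumed placeholders for a classifier, an input batch, true labels, and a perturbation budget.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    # Craft an adversarial example: perturb input x by a small step in the
    # direction that increases the classification loss for the true label y.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true label
    loss.backward()                        # gradient of the loss w.r.t. x
    x_adv = x + epsilon * x.grad.sign()    # small sign-of-gradient perturbation
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in [0, 1]

Note that this sketch assumes white-box access to gradients; in the hard-label black-box setting targeted by the fuzzing attacks described above, gradients are unavailable, so candidate perturbations must be searched for rather than computed.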
MARC record:
LDR  02848nmm a2200361 4500
001  2283460
005  20211029101452.5
008  220723s2021 ||||||||||||||||| ||eng d
020    $a 9798516977770
035    $a (MiAaPQ)AAI28413408
035    $a AAI28413408
040    $a MiAaPQ $c MiAaPQ
100 1  $a Qin, Yi. $3 1944293
245 10 $a Adversarial Machine Learning in Computer Vision: Attacks and Defenses on Machine Learning Models.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 142 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-01, Section: B.
500    $a Advisor: Yue, Chuan.
502    $a Thesis (Ph.D.)--Colorado School of Mines, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a Machine learning models, including neural networks, have gained great popularity in recent years. Deep neural networks can learn directly from raw data and can outperform traditional machine learning models. As a result, they have been increasingly used in a variety of application domains such as image classification, natural language processing, and malware detection. However, deep neural networks have been shown to be vulnerable to adversarial examples at test time. Adversarial examples are malicious inputs generated from legitimate inputs by adding small perturbations in order to fool machine learning models into misclassifying. This thesis mainly aims to answer two research questions: How are machine learning models vulnerable to adversarial examples? How can we better defend against adversarial examples? We first improve the effectiveness of adversarial training by designing an experimental framework to study Method-Based Ensemble Adversarial Training (MBEAT) and Round Gap Of Adversarial Training (RGOAT). We then demonstrate the strong distinguishability of adversarial examples and design a simple yet effective approach called defensive distinction, formulated as multi-label classification, to protect against adversarial examples. We also propose fuzzing-based hard-label black-box attacks against machine learning models: we design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and a LocalFuzzer to explore the space around a given input to identify potential adversarial examples. Lastly, we propose a key-based input transformation defense against adversarial examples.
590    $a School code: 0052.
650  4 $a Computer science. $3 523869
650  4 $a Computer engineering. $3 621879
650  4 $a Artificial intelligence. $3 516317
650  4 $a Neural networks. $3 677449
650  4 $a Accuracy. $3 3559958
650  4 $a Methods. $3 3560391
650  4 $a Algorithms. $3 536374
650  4 $a Success. $3 518195
650  4 $a Experiments. $3 525909
653    $a Machine learning
653    $a Artificial intelligence
653    $a Neural networks
653    $a Computer science
690    $a 0984
690    $a 0464
690    $a 0800
710 2  $a Colorado School of Mines. $b Computer Science. $3 3562426
773 0  $t Dissertations Abstracts International $g 83-01B.
790    $a 0052
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28413408
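To clarify the field notation above (a three-digit tag, optional indicators, then $-prefixed subfields), here is a minimal parsing sketch. It assumes the exact line layout of this record display, not a standard MARC serialization such as ISO 2709, and it naively assumes "$" appears only as a subfield delimiter.

import re

# Tag, up to two indicator characters, then the subfield string.
FIELD_RE = re.compile(r"^(\d{3})\s+([0-9 ]{0,2})\s*(\$.*)$")

def parse_field(line):
    # Split one display line into tag, indicators, and (code, value) pairs.
    m = FIELD_RE.match(line)
    if m is None:
        return None  # control fields (LDR, 001-008) carry no subfields
    tag, indicators, rest = m.groups()
    subfields = [(s[0], s[1:].strip()) for s in rest.split("$") if s]
    return {"tag": tag, "ind": indicators.strip(), "subfields": subfields}

print(parse_field("100 1  $a Qin, Yi. $3 1944293"))
# -> {'tag': '100', 'ind': '1', 'subfields': [('a', 'Qin, Yi.'), ('3', '1944293')]}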
Holdings (1 item):
Barcode: W9435193
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0