Adversarial Machine Learning in Computer Vision: Attacks and Defenses on Machine Learning Models.
Record Type: Electronic resources : Monograph/item
Title/Author: Adversarial Machine Learning in Computer Vision: Attacks and Defenses on Machine Learning Models. / Qin, Yi.
Author: Qin, Yi.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2021
Description: 142 p.
Notes: Source: Dissertations Abstracts International, Volume: 83-01, Section: B.
Contained By: Dissertations Abstracts International, 83-01B.
Subject: Computer science.
Online resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28413408
ISBN: 9798516977770
Dissertation Note: Thesis (Ph.D.)--Colorado School of Mines, 2021.
Restrictions: This item must not be sold to any third party vendors.
Abstract: Machine learning models, including neural networks, have gained great popularity in recent years. Deep neural networks can learn directly from raw data and can outperform traditional machine learning models. As a result, they have been increasingly used in a variety of application domains such as image classification, natural language processing, and malware detection. However, deep neural networks have been demonstrated to be vulnerable to adversarial examples at test time. Adversarial examples are malicious inputs generated from legitimate inputs by adding small perturbations in order to fool machine learning models into misclassifying them. This thesis mainly aims to answer two research questions: How are machine learning models vulnerable to adversarial examples? How can we better defend against adversarial examples? We first improve the effectiveness of adversarial training by designing an experimental framework to study Method-Based Ensemble Adversarial Training (MBEAT) and Round Gap Of Adversarial Training (RGOAT). We then demonstrate the strong distinguishability of adversarial examples and design a simple yet effective approach called defensive distinction, formulated as multi-label classification, to protect against adversarial examples. We also propose fuzzing-based hard-label black-box attacks against machine learning models: we design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and a LocalFuzzer to explore the nearby space around a given input to identify potential adversarial examples. Lastly, we propose a key-based input transformation defense against adversarial examples.
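The abstract's central notion, an adversarial example crafted by adding a small perturbation that flips a model's prediction, can be illustrated with a minimal FGSM-style sketch on a toy linear classifier. The model weights `w`, input `x`, and step size `eps` below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "model" (illustrative, not the thesis's models):
# predicts class 1 when w.x > 0.
w = np.array([1.0, -2.0, 3.0])

def predict(x):
    return int(w @ x > 0.0)

# A legitimate input sitting close to the decision boundary
# (margin w.x = 0.5), correctly classified as class 1.
x = np.array([0.5, 1.5, 1.0])

# FGSM-style step: move each coordinate by eps in the direction that
# increases the loss for the true label. For logistic loss with true
# label 1, grad_x loss = (p - 1) * w, so sign(grad) = -sign(w).
p = sigmoid(w @ x)
grad = (p - 1.0) * w
eps = 0.1
x_adv = x + eps * np.sign(grad)

print(predict(x))      # 1  (legitimate input, correct class)
print(predict(x_adv))  # 0  (adversarial input, prediction flipped)
print(round(float(np.max(np.abs(x_adv - x))), 3))  # 0.1 (small L-inf perturbation)
```

The perturbation is bounded by `eps` in the L-infinity norm, so the adversarial input stays visually close to the original while crossing the decision boundary, which is exactly the vulnerability the abstract's attacks exploit and its defenses target.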
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Machine learning
LDR 02848nmm a2200361 4500
001 2283460
005 20211029101452.5
008 220723s2021 ||||||||||||||||| ||eng d
020    $a 9798516977770
035    $a (MiAaPQ)AAI28413408
035    $a AAI28413408
040    $a MiAaPQ $c MiAaPQ
100 1  $a Qin, Yi. $3 1944293
245 10 $a Adversarial Machine Learning in Computer Vision: Attacks and Defenses on Machine Learning Models.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 142 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-01, Section: B.
500    $a Advisor: Yue, Chuan.
502    $a Thesis (Ph.D.)--Colorado School of Mines, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a Machine learning models, including neural networks, have gained great popularity in recent years. Deep neural networks can learn directly from raw data and can outperform traditional machine learning models. As a result, they have been increasingly used in a variety of application domains such as image classification, natural language processing, and malware detection. However, deep neural networks have been demonstrated to be vulnerable to adversarial examples at test time. Adversarial examples are malicious inputs generated from legitimate inputs by adding small perturbations in order to fool machine learning models into misclassifying them. This thesis mainly aims to answer two research questions: How are machine learning models vulnerable to adversarial examples? How can we better defend against adversarial examples? We first improve the effectiveness of adversarial training by designing an experimental framework to study Method-Based Ensemble Adversarial Training (MBEAT) and Round Gap Of Adversarial Training (RGOAT). We then demonstrate the strong distinguishability of adversarial examples and design a simple yet effective approach called defensive distinction, formulated as multi-label classification, to protect against adversarial examples. We also propose fuzzing-based hard-label black-box attacks against machine learning models: we design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and a LocalFuzzer to explore the nearby space around a given input to identify potential adversarial examples. Lastly, we propose a key-based input transformation defense against adversarial examples.
590    $a School code: 0052.
650  4 $a Computer science. $3 523869
650  4 $a Computer engineering. $3 621879
650  4 $a Artificial intelligence. $3 516317
650  4 $a Neural networks. $3 677449
650  4 $a Accuracy. $3 3559958
650  4 $a Methods. $3 3560391
650  4 $a Algorithms. $3 536374
650  4 $a Success. $3 518195
650  4 $a Experiments. $3 525909
653    $a Machine learning
653    $a Artificial intelligence
653    $a Neural networks
653    $a Computer science
690    $a 0984
690    $a 0464
690    $a 0800
710 2  $a Colorado School of Mines. $b Computer Science. $3 3562426
773 0  $t Dissertations Abstracts International $g 83-01B.
790    $a 0052
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28413408
Items (1 record):
Inventory Number: W9435193
Location Name: Electronic resources (電子資源)
Item Class: 11.線上閱覽_V (Online reading)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0