Can We Trust AI? Towards Practical Implementation and Theoretical Analysis in Trustworthy Machine Learning.
Record Type:
Bibliographic - Electronic Resource : Monograph/item
Title/Author:
Can We Trust AI? Towards Practical Implementation and Theoretical Analysis in Trustworthy Machine Learning.
Author:
Xu, Kaidi.
Physical Description:
1 online resource (116 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
Contained By:
Dissertations Abstracts International, 83-02B.
Subject:
Computer engineering.
Electronic Resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28646717 (click for full text, PQDT)
ISBN:
9798535511139
Can We Trust AI? Towards Practical Implementation and Theoretical Analysis in Trustworthy Machine Learning. / Xu, Kaidi. - 1 online resource (116 pages)
Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
Thesis (Ph.D.)--Northeastern University, 2021.
Includes bibliographical references
Deep learning, or deep neural networks (DNNs), have achieved extraordinary performance in many application domains such as image classification, object detection and recognition, natural language processing, and medical image analysis. It is well accepted that DNNs are vulnerable to adversarial attacks, which raises concerns about using DNNs in security-critical applications and may result in disastrous consequences. Adversarial attacks are usually implemented by generating adversarial examples, i.e., adding sophisticated perturbations to benign examples so that the adversarial examples are classified by the DNN as target (wrong) labels instead of the correct labels of the benign examples. Adversarial machine learning aims to study this phenomenon and leverage it to build robust machine learning systems and to explain DNNs.

In this dissertation, we present the mechanisms of adversarial machine learning in both empirical and theoretical ways. Specifically, we first introduce a unified adversarial attack generation framework, structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images to extract key spatial structures. Second, we discuss the feasibility of adversarial attacks in the physical world and introduce a powerful framework, Expectation over Transformation (EoT). Using EoT with the Thin Plate Spline (TPS) transformation, we generate the Adversarial T-shirt, a robust physical adversarial example that evades person detectors even under the non-rigid deformation caused by a moving person's pose changes. Third, we turn to the defense side and propose the first adversarial training method based on Graph Neural Networks. Fourth, we introduce Linear Relaxation based Perturbation Analysis (LiRPA) for neural networks, which computes provable linear bounds on output neurons under a given amount of input perturbation. LiRPA studies adversarial examples theoretically and can guarantee a model's test accuracy under given perturbation constraints. Finally, by combining the efficient LiRPA with branch and bound, we speed up the conventional Linear Programming based complete verification framework by an order of magnitude.

In the future, we plan to study a novel patch transformer network to faithfully model real-world physical transformations. In addition, in the direction of formal robustness, we plan to explore complete verification in real time: given sufficient time, the verifier should efficiently give a definite "yes/no" answer for a property under verification. Our LiRPA framework, combined with GPUs, can potentially accelerate this procedure.
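To make the certified-bounds idea in the abstract concrete, the short sketch below illustrates interval bound propagation, a much coarser relative of LiRPA's linear relaxation, on a hypothetical two-layer ReLU network. The weights, the input point, and the perturbation radius eps are made-up values for illustration only; this is not code from the dissertation or from any LiRPA library.

import numpy as np

# Toy two-layer ReLU network with made-up weights (illustrative only).
W1 = np.array([[1.0, -0.5],
               [0.3,  0.8]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.7, 1.2]])
b2 = np.array([0.05])

def interval_affine(lo, hi, W, b):
    # Propagate an axis-aligned box through x -> W @ x + b.
    # Positive weights pull from the same bound, negative weights from the opposite one.
    W_pos = np.clip(W, 0, None)
    W_neg = np.clip(W, None, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certified_bounds(x, eps):
    # Sound (but loose) output bounds for every input within an
    # L-infinity ball of radius eps around x.
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return interval_affine(lo, hi, W2, b2)

x = np.array([0.5, -0.1])
lo, hi = certified_bounds(x, eps=0.05)
print(f"output guaranteed to lie in [{lo[0]:.4f}, {hi[0]:.4f}]")

Every input within the eps-ball is guaranteed to produce an output inside the printed interval. LiRPA tightens such intervals by propagating linear lower and upper bounding functions instead of constant bounds, and the branch-and-bound step mentioned in the abstract splits the input region until the bounds become conclusive.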
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
ISBN: 9798535511139
Subjects--Topical Terms: Computer engineering.
Subjects--Index Terms: Adversarial Machine Learning
Index Terms--Genre/Form: Electronic books.
Can We Trust AI? Towards Practical Implementation and Theoretical Analysis in Trustworthy Machine Learning.
LDR   04097nmm a2200397K 4500
001   2357216
005   20230622065018.5
006   m o d
007   cr mn ---uuuuu
008   241011s2021 xx obm 000 0 eng d
020   $a 9798535511139
035   $a (MiAaPQ)AAI28646717
035   $a AAI28646717
040   $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Xu, Kaidi. $3 3697746
245 10 $a Can We Trust AI? Towards Practical Implementation and Theoretical Analysis in Trustworthy Machine Learning.
264 0 $c 2021
300   $a 1 online resource (116 pages)
336   $a text $b txt $2 rdacontent
337   $a computer $b c $2 rdamedia
338   $a online resource $b cr $2 rdacarrier
500   $a Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
500   $a Advisor: Lin, Xue.
502   $a Thesis (Ph.D.)--Northeastern University, 2021.
504   $a Includes bibliographical references
520   $a Deep learning, or deep neural networks (DNNs), have achieved extraordinary performance in many application domains such as image classification, object detection and recognition, natural language processing, and medical image analysis. It is well accepted that DNNs are vulnerable to adversarial attacks, which raises concerns about using DNNs in security-critical applications and may result in disastrous consequences. Adversarial attacks are usually implemented by generating adversarial examples, i.e., adding sophisticated perturbations to benign examples so that the adversarial examples are classified by the DNN as target (wrong) labels instead of the correct labels of the benign examples. Adversarial machine learning aims to study this phenomenon and leverage it to build robust machine learning systems and to explain DNNs. In this dissertation, we present the mechanisms of adversarial machine learning in both empirical and theoretical ways. Specifically, we first introduce a unified adversarial attack generation framework, structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images to extract key spatial structures. Second, we discuss the feasibility of adversarial attacks in the physical world and introduce a powerful framework, Expectation over Transformation (EoT). Using EoT with the Thin Plate Spline (TPS) transformation, we generate the Adversarial T-shirt, a robust physical adversarial example that evades person detectors even under the non-rigid deformation caused by a moving person's pose changes. Third, we turn to the defense side and propose the first adversarial training method based on Graph Neural Networks. Fourth, we introduce Linear Relaxation based Perturbation Analysis (LiRPA) for neural networks, which computes provable linear bounds on output neurons under a given amount of input perturbation. LiRPA studies adversarial examples theoretically and can guarantee a model's test accuracy under given perturbation constraints. Finally, by combining the efficient LiRPA with branch and bound, we speed up the conventional Linear Programming based complete verification framework by an order of magnitude. In the future, we plan to study a novel patch transformer network to faithfully model real-world physical transformations. In addition, in the direction of formal robustness, we plan to explore complete verification in real time: given sufficient time, the verifier should efficiently give a definite "yes/no" answer for a property under verification. Our LiRPA framework, combined with GPUs, can potentially accelerate this procedure.
533   $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538   $a Mode of access: World Wide Web
650 4 $a Computer engineering. $3 621879
650 4 $a Computer science. $3 523869
650 4 $a Information technology. $3 532993
650 4 $a Artificial intelligence. $3 516317
650 4 $a Sparsity. $3 3680690
650 4 $a Internships. $3 3560137
650 4 $a Deep learning. $3 3554982
650 4 $a Datasets. $3 3541416
650 4 $a Success. $3 518195
650 4 $a Dissertations & theses. $3 3560115
650 4 $a Noise. $3 598816
650 4 $a Advisors. $3 3560734
650 4 $a Defense. $3 3681633
650 4 $a Performance evaluation. $3 3562292
650 4 $a COVID-19. $3 3554449
650 4 $a Power. $3 518736
650 4 $a Experiments. $3 525909
650 4 $a Neural networks. $3 677449
650 4 $a Medical research. $2 bicssc $3 1556686
650 4 $a Classification. $3 595585
650 4 $a Linear programming. $3 560448
650 4 $a Natural language processing. $3 1073412
650 4 $a Methods. $3 3560391
650 4 $a Algorithms. $3 536374
650 4 $a Ablation. $3 3562462
653   $a Adversarial Machine Learning
653   $a AI Security
653   $a Deep Learning
653   $a Trustworthy Machine Learning
655 7 $a Electronic books. $2 lcsh $3 542853
690   $a 0464
690   $a 0489
690   $a 0984
690   $a 0800
710 2 $a ProQuest Information and Learning Co. $3 783688
710 2 $a Northeastern University. $b Electrical and Computer Engineering. $3 1018491
773 0 $t Dissertations Abstracts International $g 83-02B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28646717 $z click for full text (PQDT)
Holdings (1 record)
Barcode: W9479572
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: E-book
Call Number: EB
Usage Type: Normal
Loan Status: On shelf
Hold Status: 0