Access Control of Deep Neural Networks.
Record type:
Bibliographic, electronic resource : Monograph/item
Title / Author:
Access Control of Deep Neural Networks. / Tian, Jinyu.
Author:
Tian, Jinyu.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2022
Description:
130 p.
Note:
Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
Contained By:
Dissertations Abstracts International, 84-01B.
Subject:
Computer science.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29254970
ISBN:
9798834020486
Tian, Jinyu. Access Control of Deep Neural Networks. Ann Arbor : ProQuest Dissertations & Theses, 2022. 130 p.
Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
Thesis (Ph.D.)--University of Macau, 2022.
This item must not be sold to any third party vendors.
Deep Neural Networks (DNNs) have been widely used in fields such as entertainment, medicine, and transportation. Constructing a successful CNN model is not a trivial task; it usually requires substantial investments of expertise, time, and resources. To encourage healthy business investment and competition, it is crucial to protect the intellectual property (IP) of CNN models by preventing unauthorized access to them. On the other hand, although DNNs have achieved state-of-the-art performance on a wide range of tasks, including image classification and speech recognition, their security is seriously challenged by malicious access using adversarial examples (AEs): normal inputs (such as natural images or speech signals) manipulated with imperceptible noise that can nevertheless cause severe model output errors. These two access problems considerably hinder both the healthy commercial application and the security of DNNs, which motivates us to design a framework for controlling access to DNNs.

For the first line of defense, this thesis proposes a selective encryption (SE) algorithm that protects CNN models from unauthorized access while providing hierarchical services to users. The algorithm first selects important model parameters via the proposed Probabilistic Selection Strategy (PSS). It then encrypts the most important parameters with a designed encryption method called the Distribution Preserving Random Mask (DPRM), so as to maximize performance degradation while encrypting only a very small portion of the model parameters. This work also designs a set of access permissions with which different amounts of the most important parameters can be decrypted, so different levels of model performance can be provided to users.

Even a user who has been authorized to access a DNN may be a malicious user who attempts to attack it with AEs. Therefore, this thesis also proposes the Sensitivity Inconsistency Detector (SID) as the second line of defense. The detector is derived from an important observation: normal examples (NEs) are insensitive to fluctuations occurring in highly curved regions of the decision boundary, whereas AEs, typically designed over a single domain (mostly the spatial domain), exhibit exorbitant sensitivity to such fluctuations. Along this line, we design another classifier (the dual classifier) with a transformed decision boundary, which can be used together with the original classifier (the primal classifier) to detect AEs by virtue of this sensitivity inconsistency.
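The two lines of defense summarized above can be illustrated with short sketches. The first sketch below is a rough Python illustration of the selective-encryption idea only, not the thesis's actual PSS or DPRM: the importance score (gradient magnitude), the 1% selection ratio, and the function name select_and_mask are assumptions; the "mask" here simply resamples the chosen weights from the layer's own empirical weight distribution so the protected layer remains statistically similar to the original while its behavior is disrupted.

```python
import numpy as np

def select_and_mask(weights, grads, ratio=0.01, seed=0):
    """Illustrative sketch of selective encryption (not the thesis's PSS/DPRM):
    pick a small fraction of 'important' weights and overwrite them with values
    resampled from the layer's own weight distribution, keeping the overall
    weight statistics roughly unchanged."""
    rng = np.random.default_rng(seed)
    flat_w, flat_g = weights.ravel(), np.abs(grads).ravel()
    probs = flat_g / flat_g.sum()                  # selection probability ~ |gradient|
    k = max(1, int(ratio * flat_w.size))
    idx = rng.choice(flat_w.size, size=k, replace=False, p=probs)
    masked = flat_w.copy()
    pool = np.delete(flat_w, idx)                  # unmasked weights as the sampling pool
    masked[idx] = rng.choice(pool, size=k)         # distribution-preserving overwrite
    key = {"indices": idx, "values": flat_w[idx]}  # secret needed to restore full accuracy
    return masked.reshape(weights.shape), key
```

A similarly hedged sketch of the sensitivity-inconsistency idea: given a primal classifier and a dual classifier with a transformed decision boundary, one crude detector simply scores how strongly their prediction distributions disagree. The symmetric KL score and the name sensitivity_inconsistency_score below are illustrative assumptions; the actual SID is a trained detector.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sensitivity_inconsistency_score(primal_logits, dual_logits, eps=1e-12):
    """Crude inconsistency score: symmetric KL divergence between the primal and
    dual prediction distributions; large values suggest an adversarial example
    (the threshold would be chosen on held-out data)."""
    p = softmax(primal_logits) + eps
    q = softmax(dual_logits) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```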
After the adversarial detector captures the AEs of malicious users, we further analyze them to guide the design of more robust DNNs. We observe that existing malicious users generally produce AEs from a continuous perspective, yielding continuous examples that conflict with some real scenarios; adversarial images, for instance, should be digital images in the discrete domain. Continuous AEs therefore typically have to be discretized, which inevitably degrades their attack capability. According to our analysis, this degradation stems from the obvious difference between continuous AEs and their discrete counterparts. To overcome this limitation, we propose a novel adversarial attack called the Discrete Attack (DATK), which produces continuous AEs tightly close to their discrete versions. Owing to the negligible distance between them, the resulting discrete AEs retain the same powerful attack capability as the continuous AEs without extra distortion overhead. More precisely, DATK generates AEs from a novel perspective by directly modeling adversarial perturbations (APs) as discrete random variables, so the AE generation problem reduces to estimating the distribution of the discrete APs. Since this problem is typically non-differentiable, we relax it with the proposed reparameterization tricks and obtain an approximate continuous distribution of the discrete APs. By virtue of powerful AEs that conform to real scenarios, we can potentially improve adversarial training techniques for constructing robust DNNs, because existing techniques are generally based on continuous AEs.
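The discrete-attack formulation also lends itself to a brief sketch. The code below is a hedged reconstruction, not the thesis's DATK: it assumes a perturbation alphabet of {-1/255, 0, +1/255}, images scaled to [0, 1], and a Gumbel-Softmax reparameterization (one standard relaxation for categorical variables) to make the otherwise non-differentiable distribution estimation amenable to gradient-based optimization.

```python
import torch
import torch.nn.functional as F

def datk_sketch(model, x, y, levels=(-1/255, 0.0, 1/255), steps=100, tau=0.5, lr=0.1):
    """Hedged sketch: model per-pixel adversarial perturbations as categorical
    variables over a few discrete levels, relax them with Gumbel-Softmax so the
    distribution parameters (logits) can be optimized by gradient, then round to
    the most likely discrete level at the end."""
    lv = torch.tensor(levels, dtype=x.dtype, device=x.device)   # perturbation alphabet
    logits = torch.zeros(*x.shape, len(levels), device=x.device, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        # Straight-through Gumbel-Softmax sample: approximately one-hot, differentiable
        sample = F.gumbel_softmax(logits, tau=tau, hard=True)
        delta = (sample * lv).sum(dim=-1)                       # map one-hot choices to values
        loss = -F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
        opt.zero_grad()
        loss.backward()                                         # ascend the classification loss
        opt.step()
    # Discrete perturbation: most probable level per pixel under the learned distribution
    delta = lv[logits.argmax(dim=-1)]
    return (x + delta).clamp(0.0, 1.0)
```

After optimization, taking the most probable level per pixel yields an exactly discrete perturbation, which is the point of modeling the APs as discrete random variables; a fuller treatment would also constrain the number of perturbed pixels or regularize the learned distribution.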
ISBN: 9798834020486
Subjects--Topical Terms:
Computer science.
Subjects--Index Terms:
Deep Neural Network access
Access Control of Deep Neural Networks.
LDR    05505nmm a2200373 4500
001    2351344
005    20221107085357.5
008    241004s2022 ||||||||||||||||| ||eng d
020    $a 9798834020486
035    $a (MiAaPQ)AAI29254970
035    $a AAI29254970
040    $a MiAaPQ $c MiAaPQ
100 1  $a Tian, Jinyu. $3 3690906
245 10 $a Access Control of Deep Neural Networks.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2022
300    $a 130 p.
500    $a Source: Dissertations Abstracts International, Volume: 84-01, Section: B.
500    $a Advisor: Zhou, Jiantao.
502    $a Thesis (Ph.D.)--University of Macau, 2022.
506    $a This item must not be sold to any third party vendors.
520    $a Deep Neural Networks (DNNs) have been widely used in fields such as entertainment, medicine, and transportation. Constructing a successful CNN model is not a trivial task; it usually requires substantial investments of expertise, time, and resources. To encourage healthy business investment and competition, it is crucial to protect the intellectual property (IP) of CNN models by preventing unauthorized access to them. On the other hand, although DNNs have achieved state-of-the-art performance on a wide range of tasks, including image classification and speech recognition, their security is seriously challenged by malicious access using adversarial examples (AEs): normal inputs (such as natural images or speech signals) manipulated with imperceptible noise that can nevertheless cause severe model output errors. These two access problems considerably hinder both the healthy commercial application and the security of DNNs, which motivates us to design a framework for controlling access to DNNs. For the first line of defense, this thesis proposes a selective encryption (SE) algorithm that protects CNN models from unauthorized access while providing hierarchical services to users. The algorithm first selects important model parameters via the proposed Probabilistic Selection Strategy (PSS). It then encrypts the most important parameters with a designed encryption method called the Distribution Preserving Random Mask (DPRM), so as to maximize performance degradation while encrypting only a very small portion of the model parameters. This work also designs a set of access permissions with which different amounts of the most important parameters can be decrypted, so different levels of model performance can be provided to users. Even a user who has been authorized to access a DNN may be a malicious user who attempts to attack it with AEs. Therefore, this thesis also proposes the Sensitivity Inconsistency Detector (SID) as the second line of defense. The detector is derived from an important observation: normal examples (NEs) are insensitive to fluctuations occurring in highly curved regions of the decision boundary, whereas AEs, typically designed over a single domain (mostly the spatial domain), exhibit exorbitant sensitivity to such fluctuations. Along this line, we design another classifier (the dual classifier) with a transformed decision boundary, which can be used together with the original classifier (the primal classifier) to detect AEs by virtue of this sensitivity inconsistency. After the adversarial detector captures the AEs of malicious users, we further analyze them to guide the design of more robust DNNs. We observe that existing malicious users generally produce AEs from a continuous perspective, yielding continuous examples that conflict with some real scenarios; adversarial images, for instance, should be digital images in the discrete domain. Continuous AEs therefore typically have to be discretized, which inevitably degrades their attack capability. According to our analysis, this degradation stems from the obvious difference between continuous AEs and their discrete counterparts. To overcome this limitation, we propose a novel adversarial attack called the Discrete Attack (DATK), which produces continuous AEs tightly close to their discrete versions. Owing to the negligible distance between them, the resulting discrete AEs retain the same powerful attack capability as the continuous AEs without extra distortion overhead. More precisely, DATK generates AEs from a novel perspective by directly modeling adversarial perturbations (APs) as discrete random variables, so the AE generation problem reduces to estimating the distribution of the discrete APs. Since this problem is typically non-differentiable, we relax it with the proposed reparameterization tricks and obtain an approximate continuous distribution of the discrete APs. By virtue of powerful AEs that conform to real scenarios, we can potentially improve adversarial training techniques for constructing robust DNNs, because existing techniques are generally based on continuous AEs.
590    $a School code: 1382.
650  4 $a Computer science. $3 523869
650  4 $a Intellectual property. $3 572975
650  4 $a Artificial intelligence. $3 516317
653    $a Deep Neural Network access
653    $a Cybersecurity
653    $a Selective encryption
653    $a Unauthorized access
653    $a Adversarial training
690    $a 0984
690    $a 0800
690    $a 0513
710 2  $a University of Macau. $b Computer and Information Science. $3 3546400
773 0  $t Dissertations Abstracts International $g 84-01B.
790    $a 1382
791    $a Ph.D.
792    $a 2022
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29254970
Holdings
Barcode: W9473782
Location: Electronic Resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0