Study of Deep Neural Networks on Graph Data in a Generative Learning Regime.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Study of Deep Neural Networks on Graph Data in a Generative Learning Regime.
Author:
Jiang, Chao.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2022.
Description:
90 p.
Notes:
Source: Dissertations Abstracts International, Volume: 84-05, Section: B.
Contained By:
Dissertations Abstracts International, 84-05B.
Subject:
Neurons.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29756110
ISBN:
9798352976197
Jiang, Chao.
Study of Deep Neural Networks on Graph Data in a Generative Learning Regime. - Ann Arbor : ProQuest Dissertations & Theses, 2022. - 90 p.
Source: Dissertations Abstracts International, Volume: 84-05, Section: B.
Thesis (Ph.D.)--Auburn University, 2022.
This item must not be sold to any third party vendors.
Graph-formatted data is ubiquitous across domains, from social networks and academic citation networks to drug-target interactions. Graph neural networks (GNNs) have achieved outstanding performance on tasks such as node classification, link prediction, and node clustering. However, researchers commonly ask two questions. First, how can a considerable amount of labeled, high-quality data be obtained? Data quality is crucial in training deep neural network models, yet most current work in this area has focused on improving a model's performance under the assumption that the preprocessed data are clean. Our first result improves data quality by removing noisy information. Here we build a real knowledge graph from the LitCovid and PubTator data sets. The multiple types of biomedical associations in this knowledge graph, including the COVID-19-related ones, are based on co-occurring biomedical entities retrieved from recent literature. However, applications derived from such raw graphs (e.g., association predictions among genes, drugs, and diseases) have a high probability of false-positive predictions, as co-occurrence in the literature does not always imply a true biomedical association between two entities. We propose a framework that uses generative deep neural networks to produce a graph that can distinguish the unknown associations in the raw training graph. Two generative adversarial network models, NetGAN and CELL, were adopted for edge classification (i.e., link prediction), leveraging unlabeled link information in the real knowledge graph.
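The co-occurrence graph described above is the starting point that the generative models later clean up. A minimal sketch of that first step, using hypothetical entity sets rather than actual LitCovid/PubTator output:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(docs):
    """Count entity co-occurrences per abstract.

    docs: iterable of sets of entity identifiers found in one abstract.
    Returns a Counter mapping sorted entity pairs to co-occurrence counts.
    """
    counts = Counter()
    for entities in docs:
        for a, b in combinations(sorted(entities), 2):
            counts[(a, b)] += 1
    return counts

# Toy abstracts (illustrative entities, not real extraction output):
docs = [
    {"COVID-19", "ACE2", "remdesivir"},
    {"COVID-19", "ACE2"},
    {"remdesivir", "hepatotoxicity"},
]
edges = cooccurrence_edges(docs)
# A pair seen in many abstracts is a stronger association candidate;
# pairs seen only once are the likely false positives the framework targets.
print(edges[("ACE2", "COVID-19")])  # co-occurs in 2 abstracts
```

Edges with low counts are exactly the candidates a link-prediction model such as NetGAN or CELL would be asked to confirm or reject.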
The link-prediction performance, especially in the extreme case of a 1:9 training-to-test split, demonstrated that the proposed method still achieves favorable results (AUC-ROC > 0.8 on the synthetic data set and > 0.7 on the real data set) despite the limited amount of training data available. Second, what is the decision-making process of a GNN, which often remains a black box? In addition, many of these models are vulnerable to adversarial attacks. Our second result focuses on the robustness of GNNs. Recent studies revealed that GNNs are vulnerable to adversarial attacks, where feeding a GNN poisoned data at training time can severely degrade its test accuracy. However, prior studies mainly posit that adversaries can freely access and manipulate the original graph, while obtaining such access could be too costly in practice. To fill this gap, we propose a novel attack paradigm, named Generative Adversarial Fake Node Camouflaging (GAFNC), whose crux lies in crafting a set of fake nodes in a generative-adversarial regime. These nodes carry camouflaged malicious features and can poison the victim GNN by passing harmful messages to the original graph via learned topological structures. These messages can maximize the degradation of classification accuracy (i.e., a global attack) or force the victim GNN to misclassify a targeted node set into prescribed classes (i.e., a targeted attack).
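The AUC-ROC figures quoted for link prediction can be computed directly from model scores on held-out true edges versus sampled non-edges, since AUC-ROC equals the probability that a positive edge outscores a negative one. A minimal, dependency-free sketch with illustrative scores (not values from the dissertation):

```python
def auc_roc(pos_scores, neg_scores):
    """AUC-ROC as P(pos > neg) + 0.5 * P(tie), by exhaustive pairwise comparison."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Toy edge scores (hypothetical model output):
pos = [0.9, 0.8, 0.4]  # scores assigned to held-out true edges
neg = [0.7, 0.3, 0.2]  # scores assigned to sampled non-edges
print(auc_roc(pos, neg))  # 8/9, about 0.889
```

At an extreme 1:9 train-test split the positive and negative test sets are large, so production code would use a library routine (e.g., scikit-learn's `roc_auc_score`) rather than this quadratic comparison.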
ISBN: 9798352976197
Subjects--Topical Terms: Neurons.
LDR
:04175nmm a2200337 4500
001
2393594
005
20240414211446.5
006
m o d
007
cr#unu||||||||
008
251215s2022 ||||||||||||||||| ||eng d
020
$a
9798352976197
035
$a
(MiAaPQ)AAI29756110
035
$a
(MiAaPQ)Auburn104158415
035
$a
AAI29756110
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Jiang, Chao.
$3
3344261
245
1 0
$a
Study of Deep Neural Networks on Graph Data in a Generative Learning Regime.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2022
300
$a
90 p.
500
$a
Source: Dissertations Abstracts International, Volume: 84-05, Section: B.
500
$a
Advisor: Chapman, Richard.
502
$a
Thesis (Ph.D.)--Auburn University, 2022.
506
$a
This item must not be sold to any third party vendors.
520
$a
Graph-formatted data is ubiquitous across domains, from social networks and academic citation networks to drug-target interactions. Graph neural networks (GNNs) have achieved outstanding performance on tasks such as node classification, link prediction, and node clustering. However, researchers commonly ask two questions. First, how can a considerable amount of labeled, high-quality data be obtained? Data quality is crucial in training deep neural network models, yet most current work in this area has focused on improving a model's performance under the assumption that the preprocessed data are clean. Our first result improves data quality by removing noisy information. Here we build a real knowledge graph from the LitCovid and PubTator data sets. The multiple types of biomedical associations in this knowledge graph, including the COVID-19-related ones, are based on co-occurring biomedical entities retrieved from recent literature. However, applications derived from such raw graphs (e.g., association predictions among genes, drugs, and diseases) have a high probability of false-positive predictions, as co-occurrence in the literature does not always imply a true biomedical association between two entities. We propose a framework that uses generative deep neural networks to produce a graph that can distinguish the unknown associations in the raw training graph. Two generative adversarial network models, NetGAN and CELL, were adopted for edge classification (i.e., link prediction), leveraging unlabeled link information in the real knowledge graph.
The link-prediction performance, especially in the extreme case of a 1:9 training-to-test split, demonstrated that the proposed method still achieves favorable results (AUC-ROC > 0.8 on the synthetic data set and > 0.7 on the real data set) despite the limited amount of training data available. Second, what is the decision-making process of a GNN, which often remains a black box? In addition, many of these models are vulnerable to adversarial attacks. Our second result focuses on the robustness of GNNs. Recent studies revealed that GNNs are vulnerable to adversarial attacks, where feeding a GNN poisoned data at training time can severely degrade its test accuracy. However, prior studies mainly posit that adversaries can freely access and manipulate the original graph, while obtaining such access could be too costly in practice. To fill this gap, we propose a novel attack paradigm, named Generative Adversarial Fake Node Camouflaging (GAFNC), whose crux lies in crafting a set of fake nodes in a generative-adversarial regime. These nodes carry camouflaged malicious features and can poison the victim GNN by passing harmful messages to the original graph via learned topological structures. These messages can maximize the degradation of classification accuracy (i.e., a global attack) or force the victim GNN to misclassify a targeted node set into prescribed classes (i.e., a targeted attack).
590
$a
School code: 0012.
650
4
$a
Neurons.
$3
588699
650
4
$a
Motivation.
$3
532704
650
4
$a
Deep learning.
$3
3554982
650
4
$a
Poisoning.
$3
770903
650
4
$a
Graph representations.
$3
3560730
650
4
$a
Optimization.
$3
891104
650
4
$a
Decision making.
$3
517204
650
4
$a
Neural networks.
$3
677449
650
4
$a
Design.
$3
518875
650
4
$a
Coronaviruses.
$3
894828
650
4
$a
COVID-19.
$3
3554449
690
$a
0389
690
$a
0800
710
2
$a
Auburn University.
$3
1020457
773
0
$t
Dissertations Abstracts International
$g
84-05B.
790
$a
0012
791
$a
Ph.D.
792
$a
2022
793
$a
English
856
4 0
$u
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29756110
Location: Electronic resources (電子資源)
Items: 1 record

Inventory Number: W9501914
Location Name: Electronic resources
Item Class: 11. Online reading_V
Material Type: E-book
Call Number: EB
Usage Class: Normal
Loan Status: On shelf
No. of reservations: 0