Development of Deep Learning Models for Attributed Graphs.
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
Development of Deep Learning Models for Attributed Graphs.
Author:
Li, Xiang.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2023
Pagination:
168 p.
Note:
Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
Contained by:
Dissertations Abstracts International, 85-04B.
Subject:
Computer engineering.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30788287
ISBN:
9798380595339
Dissertation note:
Thesis (Ph.D.)--The Ohio State University, 2023.
Attributed graphs, i.e., graphs with attributes associated with nodes, are popular data representations used to capture interactions between entities. In recent years, interest has been growing in developing data mining techniques on attributed graphs for learning tasks such as node classification and clustering, link prediction, and graph classification. Graph Neural Networks (GNNs) are emerging as the state-of-the-art graph mining models. Among GNNs, Graph Convolutional Networks (GCNs) have been used with great success in domains such as recommendation systems, social network analysis, and AI-powered drug discovery. However, GCN-based approaches face multiple challenges: 1) high time cost resulting from the frequent loading of data to GPUs during training; 2) limited learning ability resulting from the over-smoothing issue inherent to GCNs; 3) difficulty scaling to large graphs due to limited GPU memory; and 4) restricted access to centralized training datasets when data sharing is prohibited for privacy or commercial reasons.

This dissertation develops several efficient GNN-based approaches for attributed graph learning that address these challenges. First, we present a general framework in which multiple GCN methods (GraphSAGE, Cluster-GCN, GraphSAINT) can be accelerated by reducing the frequency of data transfer to GPUs without noticeable degradation in learning ability. Second, to relieve both the over-smoothing and scalability issues of GCNs, we describe a scalable deep clustering framework, Random-walk based Scalable Learning (RwSL), focused on the node clustering task. Previous work, such as the GCN-based DGI, SDCN, and DMoN or the graph-filtering-based AGC, SSGC, and AGE, does not scale to large graphs because it relies on non-scalable graph convolution operations. In contrast, RwSL can scale to graphs of arbitrary size by employing a parallelizable random-walk-based algorithm as a graph filter, followed by a DNN-based module for clustering-oriented mini-batch training. Third, we present a scalable GNN method, Deep Metric Learning with Multi-class Tuplet Loss (DMT), whose resulting embedding supports multiple downstream learning tasks (node classification and clustering, link prediction) with superior performance and training efficiency, further extending the application scope of scalable GNN approaches.

Finally, we consider scenarios where graph-structured data are stored in a decentralized manner and transfer of the raw data is prohibited. We propose Federated Contrastive Learning of Graph-level representation (FCLG), a framework for training a global GNN-based model that exploits data from decentralized clients. To address the non-IID (non-independent and identically distributed) data issue inherent to Federated Learning (FL), we employ a two-level contrastive learning mechanism. On each client, we contrast multiple augmented views of the input graphs to encode robust characteristics of different graphs into a local GNN model. We then contrast the global model learned on the server with the local models learned on the clients to improve the generalization of the global model.
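The abstract does not spell out RwSL's exact filter, but the general idea of replacing trainable graph convolutions with a precomputable random-walk smoothing step can be sketched as follows. This is a minimal illustration, not the dissertation's algorithm: the helper name `random_walk_filter`, the restart weight `alpha`, and the step count are all assumptions made for the sketch.

```python
import numpy as np

def random_walk_filter(adj, feats, steps=3, alpha=0.5):
    """Smooth node features over random-walk neighborhoods.

    A row-stochastic transition matrix plays the role of a graph filter:
    each step mixes a node's features with those of its random-walk
    neighbors, with a restart term keeping weight on the node itself.
    No trainable graph convolution is involved, so this step can be
    precomputed in parallel before any mini-batch training.
    """
    deg = adj.sum(axis=1, keepdims=True)
    trans = adj / np.clip(deg, 1, None)   # row-normalized transition matrix
    out = feats.copy()
    for _ in range(steps):
        # lazy walk with restart: keep alpha of the raw features each step
        out = alpha * feats + (1 - alpha) * (trans @ out)
    return out

# toy 4-node path graph, one-hot features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4)
smoothed = random_walk_filter(adj, feats)
print(smoothed.shape)  # (4, 4)
```

The smoothed features could then feed an ordinary DNN trained with mini-batches, which is what makes the overall pipeline scalable.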
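The specific DMT loss formulation is not given in the abstract; a generic (N+1)-tuplet loss of the kind used in deep metric learning looks like the sketch below. The function name, embedding dimensions, and toy data are illustrative only.

```python
import numpy as np

def tuplet_loss(anchor, positive, negatives):
    """Generic (N+1)-tuplet loss: log(1 + sum_i exp(a.n_i - a.p)).

    Pulls one positive embedding toward the anchor while pushing N
    negatives away in a single softmax-like term, rather than handling
    one negative pair at a time as in a triplet loss.
    """
    pos_sim = anchor @ positive      # similarity to the positive
    neg_sims = negatives @ anchor    # similarities to each negative
    return np.log1p(np.sum(np.exp(neg_sims - pos_sim)))

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
positive = anchor + 0.1 * rng.normal(size=16)   # a nearby embedding
negatives = rng.normal(size=(5, 16))            # unrelated embeddings

loss_good = tuplet_loss(anchor, positive, negatives)
loss_bad = tuplet_loss(anchor, negatives[0], negatives)  # wrong "positive"
```

A well-placed positive drives the loss toward zero, while a mismatched positive leaves at least one negative as similar as the positive, keeping the loss high.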
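FCLG's two-level mechanism is not detailed here, but its first level, contrasting augmented views of the same graph on each client, is commonly implemented with an NT-Xent-style objective. The sketch below assumes precomputed graph-level embeddings; the function name and temperature are illustrative assumptions, not FCLG's actual design.

```python
import numpy as np

def nt_xent(z1, z2, temp=0.5):
    """NT-Xent-style contrastive loss.

    z1[i] and z2[i] are embeddings of two augmented views of graph i;
    the other graphs in the batch serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sims = (z1 @ z2.T) / temp   # cross-view cosine similarities
    # softmax cross-entropy with the matching view as the target "class"
    log_prob = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(1)
views = rng.normal(size=(8, 32))
aligned = nt_xent(views, views + 0.05 * rng.normal(size=(8, 32)))
shuffled = nt_xent(views, rng.permutation(views))
```

Matched view pairs yield a much lower loss than randomly paired graphs, which is the signal that encodes augmentation-robust characteristics into the local model.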
Subjects--Topical Terms:
Computer engineering.
Subjects--Index Terms:
Attributed graphs
LDR  04464nmm a2200397 4500
001  2402055
005  20241028114743.5
006  m o d
007  cr#unu||||||||
008  251215s2023 ||||||||||||||||| ||eng d
020    $a 9798380595339
035    $a (MiAaPQ)AAI30788287
035    $a AAI30788287
035    $a 2402055
040    $a MiAaPQ $c MiAaPQ
100 1  $a Li, Xiang. $3 927884
245 10 $a Development of Deep Learning Models for Attributed Graphs.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300    $a 168 p.
500    $a Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
500    $a Advisor: Ramnath, Rajiv.
502    $a Thesis (Ph.D.)--The Ohio State University, 2023.
590    $a School code: 0168.
650  4 $a Computer engineering. $3 621879
650  4 $a Information technology. $3 532993
650  4 $a Computer science. $3 523869
653    $a Attributed graphs
653    $a Deep learning
653    $a Graph neural networks
653    $a Scalability
653    $a Federated learning
690    $a 0464
690    $a 0489
690    $a 0984
710 2  $a The Ohio State University. $b Computer Science and Engineering. $3 1674144
773 0  $t Dissertations Abstracts International $g 85-04B.
790    $a 0168
791    $a Ph.D.
792    $a 2023
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30788287
Holdings (1 item):
Barcode: W9510375
Location: Electronic Resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Reservations: 0