Toward Knowledge-Centric Natural Language Processing: Acquisition, Representation, Transfer, and Reasoning.
Record type: Bibliographic--electronic resource : Monograph/item
Title/Author: Toward Knowledge-Centric Natural Language Processing :
Other title: Acquisition, Representation, Transfer, and Reasoning.
Author: Wang, Zhen.
Description: 1 online resource (269 pages)
Note: Source: Dissertations Abstracts International, Volume: 84-09, Section: B.
Contained by: Dissertations Abstracts International, 84-09B.
Subject: Language.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30360090 (click for full text, PQDT)
ISBN: 9798371914057
Thesis (Ph.D.)--The Ohio State University, 2022.
Includes bibliographical references
Past decades have witnessed the great success of modern Artificial Intelligence (AI), achieved by learning statistical correlations from large-scale data. However, a knowledge gap remains between the statistical learning of AI and the human learning process. Unlike machines, humans first accumulate enormous background knowledge about how the world works and then quickly adapt it to new environments by understanding the underlying concepts. For example, given limited life experience with mammals, a child can quickly learn the new concept of a dog and infer knowledge from it: a dog is a mammal, a mammal has a heart, and thus a dog has a heart. The child can then generalize the concept to new cases, such as a golden retriever, a beagle, or a chihuahua. By contrast, an AI system trained on a large-scale mammal dataset with no focus on dogs cannot perform such learning and generalization. AI techniques will fundamentally influence our everyday lives, and bridging this knowledge gap to equip existing AI systems with more explicit human knowledge is both timely and necessary to make them more generalizable, robust, trustworthy, interpretable, and efficient.

To close this gap, we seek inspiration from how humans learn: the ability to abstract knowledge from data, generalize knowledge to new tasks, and reason to solve complex problems. Inspired by the human learning process, this dissertation presents our research efforts to address the knowledge gap between AI and human learning through a systematic study of the full life cycle of incorporating explicit human knowledge into intelligent systems. Specifically, we first need to extract high-quality knowledge from the real world (knowledge acquisition), for example from raw data or model parameters. We then transform various types of knowledge into neural representations (knowledge representation). We can also transfer existing knowledge between neural systems (knowledge transfer) or perform human-like complex reasoning to enable more transparent and generalizable inference (knowledge reasoning). All stages pose unique research challenges but are also intertwined, potentially leading to a unified framework of knowledge-centric natural language processing (NLP).

This dissertation presents our achievements along these four directions. The introduction elaborates on our motivation and research vision to construct a holistic, systematic view of knowledge-centric NLP; each subsequent chapter describes our contributions in one of the four directions. For knowledge acquisition, we study extracting structured knowledge (e.g., synonyms, relations) from text corpora to build a better knowledge space. We leverage corpus-level co-occurrence statistics to better preserve privacy and personal information, and our proposed framework fully utilizes surface-form and global-context information for strong performance. For knowledge representation, we focus on graph representation learning and propose to learn better representations of node pairs for pairwise prediction tasks on graphs, such as link prediction and relation classification. Our proposed method encourages interaction between local contexts and generates more interpretable results. For knowledge transfer, we present two works: the first transfers knowledge between structured (knowledge base) and unstructured (text corpus) knowledge sources, and the second transfers knowledge from pre-trained large language models (LLMs) to downstream tasks via multitask prompt tuning. For knowledge reasoning, we present two works. The first is a self-interpretable framework for medical relation prediction that generates human-intuitive rationales to explain neural predictions; it relies on a recall-and-recognition process inspired by human memory theory from cognitive science, and we verify the trustworthiness of the generated rationales through a human evaluation with a medical expert. The second focuses on commonsense reasoning for better word representation learning, in which an explicit reasoning module runs over a commonsense knowledge graph to perform multi-hop reasoning; the learned vector representations benefit downstream tasks and expose the reasoning steps as interpretations.

In the last chapter, we summarize our key contributions and outline future research directions toward knowledge-centric natural language processing. Ultimately, we envision that human knowledge and reasoning will be indispensable components of the next generation of AI techniques.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
ISBN: 9798371914057
Subjects--Topical Terms: Language.
Subjects--Index Terms: Natural language processing.
Index Terms--Genre/Form: Electronic books.
LDR    06249nmm a2200457K 4500
001    2362225
005    20231027103342.5
006    m o d
007    cr mn ---uuuuu
008    241011s2022 xx obm 000 0 eng d
020    $a 9798371914057
035    $a (MiAaPQ)AAI30360090
035    $a (MiAaPQ)OhioLINKosu1669945458375134
035    $a AAI30360090
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Wang, Zhen. $3 1291578
245 10 $a Toward Knowledge-Centric Natural Language Processing : $b Acquisition, Representation, Transfer, and Reasoning.
264  0 $c 2022
300    $a 1 online resource (269 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 84-09, Section: B.
500    $a Advisor: Sun, Huan.
502    $a Thesis (Ph.D.)--The Ohio State University, 2022.
504    $a Includes bibliographical references
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538    $a Mode of access: World Wide Web
650  4 $a Language. $3 643551
650  4 $a Linguistics. $3 524476
650  4 $a Computer science. $3 523869
650  4 $a Educational technology. $3 517670
650  4 $a Language arts. $3 532624
653    $a Natural language processing
653    $a Artificial intelligence
653    $a Human knowledge
653    $a Knowledge acquisition
653    $a Knowledge representation
653    $a Knowledge transfer
655  7 $a Electronic books. $2 lcsh $3 542853
690    $a 0290
690    $a 0679
690    $a 0984
690    $a 0279
690    $a 0800
690    $a 0710
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a The Ohio State University. $b Computer Science and Engineering. $3 1674144
773 0  $t Dissertations Abstracts International $g 84-09B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30360090 $z click for full text (PQDT)
Holdings: 1 item
Barcode: W9484581
Location: Electronic resources
Circulation category: 11. Online reading
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0