Generalized domain adaptation for sequence labeling in natural language processing.
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
Generalized domain adaptation for sequence labeling in natural language processing.
Author:
Xiao, Min.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2016
Pagination:
100 p.
Notes:
Source: Dissertation Abstracts International, Volume: 77-10(E), Section: B.
Contained By:
Dissertation Abstracts International, 77-10B(E).
Subject:
Computer science.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10112383
ISBN:
9781339755373
Thesis (Ph.D.)--Temple University, 2016.
Sequence labeling tasks, such as part-of-speech tagging, syntactic chunking, and dependency parsing, have been widely studied in natural language processing. Most such systems are developed via supervised learning on large amounts of labeled training data. However, manually collecting labeled training data is time-consuming and expensive. To alleviate this label scarcity, domain adaptation has been proposed: a statistical machine learning model is trained for a target domain that lacks sufficient labeled training data by exploiting freely available labeled training data from a different but related source domain. The natural language processing community has witnessed the success of domain adaptation in a variety of sequence labeling tasks.
Subjects--Topical Terms:
Computer science.
LDR  03620nmm a2200301 4500
001  2121901
005  20170830070054.5
008  180830s2016 ||||||||||||||||| ||eng d
020  $a 9781339755373
035  $a (MiAaPQ)AAI10112383
035  $a AAI10112383
040  $a MiAaPQ $c MiAaPQ
100  1  $a Xiao, Min. $3 2016333
245  10 $a Generalized domain adaptation for sequence labeling in natural language processing.
260  1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2016
300  $a 100 p.
500  $a Source: Dissertation Abstracts International, Volume: 77-10(E), Section: B.
500  $a Adviser: Yuhong Guo.
502  $a Thesis (Ph.D.)--Temple University, 2016.
520  $a Sequence labeling tasks, such as part-of-speech tagging, syntactic chunking, and dependency parsing, have been widely studied in natural language processing. Most such systems are developed via supervised learning on large amounts of labeled training data. However, manually collecting labeled training data is time-consuming and expensive. To alleviate this label scarcity, domain adaptation has been proposed: a statistical machine learning model is trained for a target domain that lacks sufficient labeled training data by exploiting freely available labeled training data from a different but related source domain. The natural language processing community has witnessed the success of domain adaptation in a variety of sequence labeling tasks.
520  $a Although the labeled training data in the source domain are freely available, they can differ substantially from the test data in the target domain. Thus, naively applying supervised machine learning algorithms without accounting for domain differences may not fulfill the purpose. In this dissertation, we develop several novel representation learning approaches to address domain adaptation for sequence labeling in natural language processing. These representation learning techniques aim to induce latent generalizable features that bridge the domain divergence and enable cross-domain prediction.
520  $a We first tackle a semi-supervised domain adaptation scenario in which the target domain has a small amount of labeled training data, and propose a distributed representation learning approach based on a probabilistic neural language model. We then relax the assumption that labeled training data are available in the target domain and study an unsupervised domain adaptation scenario in which the target domain has only unlabeled training data, for which we give a task-informative representation learning approach based on dynamic dependency networks. Both works are developed in the setting where different domains contain sentences in different genres. We then extend and generalize domain adaptation to a more challenging scenario in which different domains contain sentences in different languages, and propose two cross-lingual representation learning approaches: one based on deep neural networks with auxiliary bilingual word pairs, and the other based on annotation projection with auxiliary parallel sentences. All four learning scenarios are extensively evaluated on different sequence labeling tasks. The empirical results demonstrate the effectiveness of these generalized domain adaptation techniques for sequence labeling in natural language processing.
590  $a School code: 0225.
650  4  $a Computer science. $3 523869
690  $a 0984
710  2  $a Temple University. $b Computer and Information Science. $3 1065462
773  0  $t Dissertation Abstracts International $g 77-10B(E).
790  $a 0225
791  $a Ph.D.
792  $a 2016
793  $a English
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10112383
Holdings (1 item):
Barcode | Location | Circulation category | Material type | Call number | Use type | Loan status | Holds
W9332517 | Electronic resources | 01. Loan (book)_YB | E-book | EB | Normal | On shelf | 0