Improving Neural Language Models with Black-Box Analysis and Generalization Through Memorization.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Improving Neural Language Models with Black-Box Analysis and Generalization Through Memorization.
Author:
Khandelwal, Urvashi.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Description:
110 p.
Note:
Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained By:
Dissertations Abstracts International, 83-03B.
Subject:
Language.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28671189
ISBN:
9798538199174
Khandelwal, Urvashi.
Improving Neural Language Models with Black-Box Analysis and Generalization Through Memorization.
- Ann Arbor : ProQuest Dissertations & Theses, 2021. - 110 p.
Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Thesis (Ph.D.)--Stanford University, 2021.
This item must not be sold to any third party vendors.
Neural language models (LMs) have become the workhorse of most natural language processing tasks and systems today. Yet, they are not perfect, and the two most important challenges in improving them further are (1) their lack of interpretability, and (2) their inability to generalize consistently, both in- and out-of-distribution. In this dissertation, I first describe my work on studying these LMs via black-box analysis, in order to understand how their predictions change in response to strategic changes in inputs. This makes model predictions more transparent by highlighting the features of the input that the model relies on. Then, I describe my work on Generalization through Memorization -- exploiting the notion of similarity between examples by using data saved in an external memory and retrieving nearest neighbors from it. This approach improves existing LM and machine translation models in terms of both in- and out-of-domain generalization, without any added training costs. Beyond improving generalization, memorization also makes model predictions more interpretable.
ISBN: 9798538199174
Subjects--Topical Terms: Language.
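The "Generalization through Memorization" approach summarized in the abstract saves (context representation, next token) pairs in an external datastore, retrieves the nearest neighbors of the current context at test time, and interpolates the retrieved distribution with the parametric LM. A minimal sketch of that interpolation follows; the function names, toy vectors, and the distance-to-weight mapping are illustrative assumptions, not details taken from the dissertation.

    # Minimal sketch of nearest-neighbor LM interpolation, the idea behind
    # "Generalization through Memorization". Names and toy data are
    # illustrative assumptions, not details from the dissertation.
    import numpy as np

    def build_datastore(context_vectors, next_tokens):
        """External memory: one (context representation, next token) pair per step."""
        keys = np.asarray(context_vectors, dtype=np.float32)
        values = np.asarray(next_tokens, dtype=np.int64)
        return keys, values

    def knn_distribution(query, keys, values, vocab_size, k=4, temperature=1.0):
        """Turn the k nearest saved contexts into a next-token distribution."""
        dists = np.linalg.norm(keys - query, axis=1)     # L2 distance to every key
        nearest = np.argsort(dists)[:k]                  # indices of the k closest
        weights = np.exp(-dists[nearest] / temperature)  # closer neighbors weigh more
        weights /= weights.sum()
        p_knn = np.zeros(vocab_size)
        for idx, w in zip(nearest, weights):
            p_knn[values[idx]] += w                      # aggregate weight per token
        return p_knn

    def interpolate(p_lm, p_knn, lam=0.25):
        """Final prediction: mix the parametric LM with the retrieved memory."""
        return lam * p_knn + (1.0 - lam) * p_lm

    # Toy usage: 3-token vocabulary, 2-d context vectors.
    keys, values = build_datastore([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0]], [2, 2, 0])
    p_lm = np.array([0.5, 0.3, 0.2])
    p_knn = knn_distribution(np.array([0.0, 1.0], dtype=np.float32),
                             keys, values, vocab_size=3, k=2)
    print(interpolate(p_lm, p_knn))  # memory shifts probability toward token 2

Because the final prediction is partly driven by concrete retrieved examples, the neighbors themselves serve as an explanation of the output, which is the interpretability benefit the abstract describes.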
MARC record:
LDR 02148nmm a2200325 4500
001 2348623
005 20220912135620.5
008 241004s2021 ||||||||||||||||| ||eng d
020 $a 9798538199174
035 $a (MiAaPQ)AAI28671189
035 $a (MiAaPQ)STANFORDst056pp9441
035 $a AAI28671189
040 $a MiAaPQ $c MiAaPQ
100 1 $a Khandelwal, Urvashi. $3 3687987
245 10 $a Improving Neural Language Models with Black-Box Analysis and Generalization Through Memorization.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300 $a 110 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500 $a Advisor: Jurafsky, Dan.
502 $a Thesis (Ph.D.)--Stanford University, 2021.
506 $a This item must not be sold to any third party vendors.
520 $a Neural language models (LMs) have become the workhorse of most natural language processing tasks and systems today. Yet, they are not perfect, and the two most important challenges in improving them further are (1) their lack of interpretability, and (2) their inability to generalize consistently, both in- and out-of-distribution. In this dissertation, I first describe my work on studying these LMs via black-box analysis, in order to understand how their predictions change in response to strategic changes in inputs. This makes model predictions more transparent by highlighting the features of the input that the model relies on. Then, I describe my work on Generalization through Memorization -- exploiting the notion of similarity between examples by using data saved in an external memory and retrieving nearest neighbors from it. This approach improves existing LM and machine translation models in terms of both in- and out-of-domain generalization, without any added training costs. Beyond improving generalization, memorization also makes model predictions more interpretable.
590 $a School code: 0212.
650 4 $a Language. $3 643551
650 4 $a Internships. $3 3560137
650 4 $a Machine translation. $3 3687988
650 4 $a Collaboration. $3 3556296
650 4 $a Multilingualism. $3 598147
650 4 $a Friendship. $3 611043
650 4 $a Artificial intelligence. $3 516317
650 4 $a Individual & family studies. $3 2122770
650 4 $a Experiments. $3 525909
650 4 $a Dissertations & theses. $3 3560115
650 4 $a Learning. $3 516521
690 $a 0800
690 $a 0679
690 $a 0628
710 2 $a Stanford University. $3 754827
773 0 $t Dissertations Abstracts International $g 83-03B.
790 $a 0212
791 $a Ph.D.
792 $a 2021
793 $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28671189
Holdings (1 item):
Barcode: W9471061
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0