Embodied language learning in humans and machines.

Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Embodied language learning in humans and machines. / Yu, Chen.
Author: Yu, Chen.
Extent: 156 p.
Note: Source: Dissertation Abstracts International, Volume: 65-08, Section: B, page: 4124.
Contained by: Dissertation Abstracts International, 65-08B.
Subject: Computer Science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3142332
ISBN: 0496893130
Thesis (Ph.D.)--University of Rochester, 2004.

This thesis addresses questions of embodiment in language learning: how language is grounded in sensorimotor experience and how language development depends on complex interactions among brain, body and environment. Most studies of human language acquisition have focused on purely linguistic input. We believe, however, that non-linguistic information, such as visual attention and body movement, also serves as an important driving force in language learning. This work presents a formal model that explores the computational role of non-linguistic information through both empirical and computational studies with the hope to get a more complete picture. We first introduce a statistical learning mechanism that provides a formal account of cross-situational observation. Then a unified model is proposed which is able to make use of different kinds of social cues, such as joint attention and prosody in speech, in the statistical learning framework. In the next experiment, we use adult subjects exposed to a second language to study the role of non-linguistic information in word learning. The results show conclusively that eye gaze is a big help in both speech segmentation and word-meaning association.

Subjects--Topical Terms: Computer Science.
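The cross-situational mechanism the abstract describes can be illustrated with simple co-occurrence counting: across many scenes, a word and its true referent co-occur more consistently than spurious pairings do. The following sketch is only an illustration of that general idea, not the thesis's actual model; the scene data and word-referent pairs are invented for the example.

```python
from collections import Counter, defaultdict

def learn_lexicon(scenes):
    """Toy cross-situational learner.

    Counts how often each word co-occurs with each candidate referent
    across scenes, then maps each word to its most frequent referent.
    """
    counts = defaultdict(Counter)  # word -> Counter of referents
    for words, referents in scenes:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Hypothetical scenes: (words heard, referents visible).
# No single scene disambiguates "ball" vs. "dog"; the statistics do.
scenes = [
    ({"ball", "red"}, {"BALL", "DOG"}),
    ({"ball", "big"}, {"BALL", "CUP"}),
    ({"dog", "red"}, {"DOG", "CUP"}),
    ({"dog", "ball"}, {"DOG", "BALL"}),
]
lexicon = learn_lexicon(scenes)
```

Each scene alone is ambiguous, yet "ball" co-occurs with BALL in three of four scenes and "dog" with DOG in both of its scenes, so aggregation recovers the correct mapping.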
LDR 03209nmm 2200313 4500
001 1846761
005 20051103093544.5
008 130614s2004 eng d
020 __ $a 0496893130
035 __ $a (UnM)AAI3142332
035 __ $a AAI3142332
040 __ $a UnM $c UnM
100 1_ $a Yu, Chen. $3 1934859
245 10 $a Embodied language learning in humans and machines.
300 __ $a 156 p.
500 __ $a Source: Dissertation Abstracts International, Volume: 65-08, Section: B, page: 4124.
500 __ $a Supervisor: Dana H. Ballard.
502 __ $a Thesis (Ph.D.)--University of Rochester, 2004.
520 __ $a This thesis addresses questions of embodiment in language learning: how language is grounded in sensorimotor experience and how language development depends on complex interactions among brain, body and environment. Most studies of human language acquisition have focused on purely linguistic input. We believe, however, that non-linguistic information, such as visual attention and body movement, also serves as an important driving force in language learning. This work presents a formal model that explores the computational role of non-linguistic information through both empirical and computational studies with the hope to get a more complete picture. We first introduce a statistical learning mechanism that provides a formal account of cross-situational observation. Then a unified model is proposed which is able to make use of different kinds of social cues, such as joint attention and prosody in speech, in the statistical learning framework. In the next experiment, we use adult subjects exposed to a second language to study the role of non-linguistic information in word learning. The results show conclusively that eye gaze is a big help in both speech segmentation and word-meaning association.
520 __ $a In light of the findings of human language acquisition, we develop a multimodal embodied system that learns words from natural interactions with users. The learning system is trained in an unsupervised mode in which users perform everyday tasks while providing natural language descriptions of their behaviors. The system collects acoustic signals in concert with user-centric multisensory information from non-speech modalities, such as user's perspective video, gaze positions, head directions and hand movements. A multimodal learning algorithm uses this data to first spot words from continuous speech and then associate action verbs and object names with their perceptually grounded meanings. The central ideas are to make use of non-speech contextual information to facilitate word spotting, and utilize body movements as deictic references to associate temporally co-occurring data from different modalities and build lexical items. This advent represents the first steps of an ongoing progression toward computational systems capable of human-like sensory perception.
590 __ $a School code: 0188.
650 _4 $a Computer Science. $3 626642
650 _4 $a Psychology, Cognitive. $3 1017810
650 _4 $a Psychology, Developmental. $3 1017557
650 _4 $a Artificial Intelligence. $3 769149
690 __ $a 0984
690 __ $a 0633
690 __ $a 0620
690 __ $a 0800
710 20 $a University of Rochester. $3 515736
773 0_ $t Dissertation Abstracts International $g 65-08B.
790 1_ $a Ballard, Dana H., $e advisor
790 __ $a 0188
791 __ $a Ph.D.
792 __ $a 2004
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3142332
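A tagged, subfield-coded display like the record above can be consumed programmatically. The sketch below is only an illustration, not a real MARC-21 parser (production code would use a dedicated library such as pymarc on binary or XML records); it assumes a "TAG IND $code value" line layout, and the sample lines are drawn from the record above.

```python
def parse_display(text):
    """Parse a simplified MARC text display into {tag: [subfield dicts]}.

    Each line is assumed to be "TAG IND $code value [$code value ...]".
    Repeatable tags (e.g. 650) accumulate one dict per occurrence.
    """
    record = {}
    for line in text.strip().splitlines():
        tag, rest = line.split(None, 1)
        subfields = {}
        # Everything before the first "$" is the indicators; each later
        # chunk starts with a one-character subfield code.
        for chunk in rest.split("$")[1:]:
            code, _, value = chunk.partition(" ")
            subfields.setdefault(code, value.strip())
        record.setdefault(tag, []).append(subfields)
    return record

sample = """\
100 1_ $a Yu, Chen.
245 10 $a Embodied language learning in humans and machines.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3142332"""
rec = parse_display(sample)
```

Note the sketch would misparse a subfield whose value itself contains a "$"; real MARC uses a dedicated delimiter byte precisely to avoid this ambiguity.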
Holdings (1 item)
Barcode: W9196275
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0