An Agent Learning Dialogue Policies for Sensing, Perceiving and Learning Through Multi-Modal Communication.
Record Type:
Electronic resources : Monograph/item
Title/Author:
An Agent Learning Dialogue Policies for Sensing, Perceiving and Learning Through Multi-Modal Communication.
Author:
Zare, Maryam.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Description:
110 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained By:
Dissertations Abstracts International, 83-03B.
Subject:
Language.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28841699
ISBN:
9798460447770
Zare, Maryam.
An Agent Learning Dialogue Policies for Sensing, Perceiving and Learning Through Multi-Modal Communication.
- Ann Arbor : ProQuest Dissertations & Theses, 2021 - 110 p.
Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Thesis (Ph.D.)--The Pennsylvania State University, 2021.
This item must not be sold to any third party vendors.
Language communication is an important part of human life and a natural and intuitive way of learning new things. It is easy to imagine intelligent agents that can learn through communication to, for example, help us in rescue scenarios, surgery, or even agriculture. As natural as learning through language is to humans, developing such agents poses numerous challenges: language is ambiguous, and humans convey their intentions in different ways using different words. Tasks have different learning goals, and some are more complex than others. Additionally, humans differ in their communicative skills, particularly in how much information they share or know. Thus, the agent must be able to learn from a wide range of humans and to adapt to different knowledge goals. This work proposes SPACe, a novel dialogue policy that supports Sensing, Perceiving, and Acquiring knowledge through Communication. SPACe communicates using natural language, which is translated to an unambiguous meaning representation language (MRL). The MRL supports formulating novel, context-dependent questions (e.g., "wh-" questions). SPACe is a single adaptive policy for learning different tasks from humans who differ in informativeness. Policies are modeled as a Partially Observable Markov Decision Process (POMDP) and are trained using reinforcement learning. Adaptation to humans and to different learning goals arises from a rich state representation that goes beyond dialogue state tracking, allowing the agent to constantly sense the joint information behavior of itself and its partner and adjust accordingly; a novel reward function defined to encourage efficient questioning across all tasks and humans; and a general-purpose and extensible MRL. As the cost of training POMDP policies with humans is too high to be practical, SPACe is trained using a simulator. Experiments with human subjects show that the policies transfer well to online dialogues with humans. We use games as a testbed and store the acquired knowledge in a game tree. Games are similar to real-world tasks: families of related games vary in complexity, as do related real-world tasks, and they present a problem-solving task where the state changes unpredictably due to the actions of others. Game trees are a well-studied abstraction for representing game knowledge, reasoning over that knowledge, and acting on it during play. We have tested our agent on several board games, but the methodology applies to a wide range of other games. The agent's learning ability is tested in a single dialogue and across a sequence of two dialogues. The latter is particularly important for learning goals that are too complex to master in one dialogue. Tests of the agent on games not seen in training show the generality of its communication abilities. Human subjects found the agent easy to communicate with and provided positive feedback, remarking favorably on its ability to learn across dialogues "to pull in old information as if it has a memory".
ISBN: 9798460447770
Subjects--Topical Terms:
Language.
Subjects--Index Terms:
Game trees
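The abstract above names two concrete mechanisms: acquired game knowledge is stored in a game tree, and a reward function encourages efficient questioning. The Python sketch below is only a minimal illustration of those two ideas under assumed, hypothetical names (GameTreeNode, question_reward); it is not the SPACe implementation described in the dissertation, which models the dialogue policy as a POMDP trained with reinforcement learning in a simulator.

# A minimal, illustrative sketch only -- not the SPACe system itself.
# It shows, under hypothetical names, two ideas from the abstract:
# (1) storing acquired game knowledge in a game tree, and
# (2) a toy reward that favors questions yielding new knowledge,
#     so that questioning stays efficient.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class GameTreeNode:
    """One game state; children map a move description to the next state."""
    state: str
    children: Dict[str, "GameTreeNode"] = field(default_factory=dict)

    def add_move(self, move: str, next_state: str) -> "GameTreeNode":
        """Record a move learned from the dialogue partner."""
        if move not in self.children:
            self.children[move] = GameTreeNode(next_state)
        return self.children[move]

    def size(self) -> int:
        """Number of states currently known in this subtree."""
        return 1 + sum(child.size() for child in self.children.values())

def question_reward(known_before: int, known_after: int,
                    question_cost: float = 0.1) -> float:
    """Toy reward: knowledge gained minus a small per-question cost,
    so redundant or uninformative questions are discouraged."""
    return (known_after - known_before) - question_cost

if __name__ == "__main__":
    root = GameTreeNode("empty board")
    before = root.size()
    # Suppose the partner answers a "wh-" question such as
    # "Which moves are legal from the empty board?"
    root.add_move("place a piece in the center", "piece in center")
    root.add_move("place a piece in a corner", "piece in corner")
    after = root.size()
    print(question_reward(before, after))       # informative question: 1.9
    print(question_reward(after, root.size()))  # redundant question: -0.1

In the dissertation the state representation, reward, and meaning representation language are far richer than this; the sketch only makes the "knowledge gained per question" intuition concrete.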
LDR :04164nmm a2200349 4500
001 2283885
005 20211115071710.5
008 220723s2021 ||||||||||||||||| ||eng d
020 $a 9798460447770
035 $a (MiAaPQ)AAI28841699
035 $a (MiAaPQ)PennState_22618muz50
035 $a AAI28841699
040 $a MiAaPQ $c MiAaPQ
100 1  $a Zare, Maryam. $3 3562957
245 13 $a An Agent Learning Dialogue Policies for Sensing, Perceiving and Learning Through Multi-Modal Communication.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300 $a 110 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500 $a Advisor: Passonneau, Rebecca.
502 $a Thesis (Ph.D.)--The Pennsylvania State University, 2021.
506 $a This item must not be sold to any third party vendors.
520 $a Language communication is an important part of human life and a natural and intuitive way of learning new things. It is easy to imagine intelligent agents that can learn through communication to for example, help us in rescue scenarios, surgery, or even agriculture. As natural as learning through language is to humans, developing such agents has numerous challenges: Language is ambiguous, and humans convey their intentions in different ways using different words. Tasks have different learning goals and some are more complex than others. Additionally, humans differ in their communicative skills, particularly, in how much information they share or know. Thus, the agent must be able to learn from a wide range of humans and to adapt to different knowledge goals. This work proposes SPACe, a novel dialogue policy that supports Sensing, Perceiving, and Acquiring knowledge through Communication. SPACe communicates using natural language, which is translated to an unambiguous meaning representation language (MRL). The MRL supports formulating novel, context-dependent questions (e.g. "wh-" questions). SPACe is a single adaptive policy for learning different tasks from humans who differ in informativeness. Policies are modeled as a Partially Observable Markov Decision Process (POMDP) and are trained using reinforcement learning. Adaptation to humans and to different learning goals arises from a rich state representation that goes beyond dialogue state tracking, to allow the agent to constantly sense the joint information behavior of itself and its partner and adjust accordingly, a novel reward function that is defined to encourage efficient questioning across all tasks and humans, and a general-purpose and extensible MRL. As the cost of training POMDP policies with humans is too high to be practical, SPACe is trained using a simulator. Experiments with human subjects show that the policies transfer well to online dialogues with humans. We use games as a testbed, and store the knowledge in a game tree. Games are similar to real-world tasks: families of related games vary in complexity as do related real-world tasks, and present a problem-solving task where the state changes unpredictably due to the actions of others. Game trees are a well-studied abstraction for representing game knowledge, reasoning over knowledge, and for acting on that knowledge during play. We have tested our agent on several board games, but the methodology applies to a wide range of other games. The agent's learning ability is tested in a single dialogue and across a sequence of two dialogues. The latter is particularly important for learning goals that are too complex to master in one dialogue. Tests of the agent to learn games not seen in training show the generality of its communication abilities. Human subjects found the agent easy to communicate with, and provided positive feedback, remarking favorably on its ability to learn across dialogues "to pull in old information as if it has a memory".
590 $a School code: 0176.
650  4 $a Language. $3 643551
650  4 $a Verbal communication. $3 3560678
650  4 $a Interactive computer systems. $3 604826
650  4 $a Robots. $3 529507
650  4 $a Adaptation. $3 3562958
650  4 $a Human subjects. $3 3562959
650  4 $a Games. $3 525308
650  4 $a Learning. $3 516521
650  4 $a Markov analysis. $3 3562906
650  4 $a Natural language. $3 3562052
650  4 $a Computer science. $3 523869
650  4 $a Artificial intelligence. $3 516317
653 $a Game trees
653 $a Natural language
690 $a 0679
690 $a 0984
690 $a 0800
710 2  $a The Pennsylvania State University. $3 699896
773 0  $t Dissertations Abstracts International $g 83-03B.
790 $a 0176
791 $a Ph.D.
792 $a 2021
793 $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28841699
Items
Inventory Number: W9435618
Location Name: 電子資源 (Electronic Resources)
Item Class: 11.線上閱覽_V (Online Reading)
Material Type: 電子書 (E-book)
Call Number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0