Coordinating multiagent teams in uncertain domains using distributed POMDPs.
Record type: Bibliographic - electronic resource : Monograph/item
Title: Coordinating multiagent teams in uncertain domains using distributed POMDPs.
Author: Nair, Ranjit.
Pagination: 148 p.
Note: Source: Dissertation Abstracts International, Volume: 65-11, Section: B, page: 5837.
Contained by: Dissertation Abstracts International, 65-11B.
Subject: Computer Science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3155457
ISBN: 0496162322
Nair, Ranjit. Coordinating multiagent teams in uncertain domains using distributed POMDPs. - 148 p.
Source: Dissertation Abstracts International, Volume: 65-11, Section: B, page: 5837.
Thesis (Ph.D.)--University of Southern California, 2004.
Distributed Partially Observable Markov Decision Problems (POMDPs) have emerged as a popular decision-theoretic approach for planning for multiagent teams, where it is imperative for the agents to be able to reason about the rewards (and costs) for their actions in the presence of uncertainty. However, finding the optimal distributed POMDP policy is computationally intractable (NEXP-Complete). This dissertation presents two independent approaches which deal with this issue of intractability in distributed POMDPs. The primary focus is on the first approach, which represents a principled way to combine the two dominant paradigms for building multiagent team plans, namely the "belief-desire-intention" (BDI) approach and distributed POMDPs. In this hybrid BDI-POMDP approach, BDI team plans are exploited to improve distributed POMDP tractability and distributed POMDP-based analysis improves BDI team plan performance. Concretely, we focus on role allocation, a fundamental problem in BDI teams---which agents to allocate to the different roles in the team. The hybrid BDI-POMDP approach provides three key contributions. First, unlike prior work in multiagent role allocation, we describe a role allocation technique that takes into account future uncertainties in the domain. The second contribution is a novel decomposition technique, which exploits the structure in the BDI team plans to significantly prune the search space of combinatorially many role allocations. Our third key contribution is a significantly faster policy evaluation algorithm suited for our BDI-POMDP hybrid approach. Finally, we also present experimental results from two domains: mission rehearsal simulation and RoboCupRescue disaster rescue simulation. In the RoboCupRescue domain, we show that the role allocation technique presented in this dissertation is capable of performing at human expert levels by comparing with the allocations chosen by humans in the actual RoboCupRescue simulation environment. 
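The role-allocation problem described above (which agent to assign to which role, accounting for uncertainty) can be illustrated with a toy sketch. The agents, roles, success probabilities, and rewards below are invented for illustration and are not taken from the dissertation; this is plain exhaustive enumeration, not the decomposition technique the abstract mentions.

```python
import itertools

# Hypothetical illustration: assign agents to roles so that *expected*
# team reward is maximized, given uncertainty about whether each agent
# succeeds at each role. All numbers here are invented.

AGENTS = ["a1", "a2", "a3"]
ROLES = ["scout", "transport", "rescue"]

# P(agent succeeds at role) -- invented capabilities.
SUCCESS = {
    ("a1", "scout"): 0.9, ("a1", "transport"): 0.4, ("a1", "rescue"): 0.5,
    ("a2", "scout"): 0.6, ("a2", "transport"): 0.8, ("a2", "rescue"): 0.3,
    ("a3", "scout"): 0.2, ("a3", "transport"): 0.5, ("a3", "rescue"): 0.9,
}
ROLE_REWARD = {"scout": 10.0, "transport": 20.0, "rescue": 30.0}

def expected_reward(allocation):
    """Expected team reward: sum over roles of P(success) * role reward."""
    return sum(SUCCESS[(agent, role)] * ROLE_REWARD[role]
               for role, agent in allocation.items())

def best_allocation():
    """Enumerate all one-to-one agent-to-role assignments, keep the best."""
    best, best_val = None, float("-inf")
    for perm in itertools.permutations(AGENTS):
        alloc = dict(zip(ROLES, perm))
        val = expected_reward(alloc)
        if val > best_val:
            best, best_val = alloc, val
    return best, best_val

alloc, val = best_allocation()
print(alloc, val)  # {'scout': 'a1', 'transport': 'a2', 'rescue': 'a3'} 52.0
```

The search space here is tiny (3! allocations); the dissertation's point is that for realistic teams the space is combinatorial, which is why its decomposition-based pruning matters.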
The second approach for dealing with the intractability of distributed POMDPs is based on finding locally optimal joint policies using Nash equilibrium as a solution concept. Through the introduction of communication, we not only show improved coordination but also develop a novel compact policy representation that results in savings of both space and time which are verified empirically.
ISBN: 0496162322
Subjects--Topical Terms: Computer Science.
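The second approach's solution concept (iterate best responses until no agent can improve by deviating unilaterally) can be sketched on a toy one-shot game with an invented shared payoff table. In the dissertation this operates over full POMDP policies; here it is reduced to single actions purely to show the idea, and to show that the fixed point reached depends on the starting point, i.e. the method finds a local optimum.

```python
# Illustrative sketch (invented payoffs): alternating best response on a
# two-agent common-payoff game until neither agent can gain by changing
# its action alone -- a Nash equilibrium, which may not be globally optimal.

# JOINT_REWARD[a1_action][a2_action] -> shared team reward (invented).
JOINT_REWARD = {
    "listen": {"listen": 2.0, "act": 0.0},
    "act":    {"listen": 0.0, "act": 3.0},
}
ACTIONS = ["listen", "act"]

def best_response(other_action, who):
    """Best action for one agent, holding the other agent's action fixed."""
    if who == 1:
        return max(ACTIONS, key=lambda a: JOINT_REWARD[a][other_action])
    return max(ACTIONS, key=lambda a: JOINT_REWARD[other_action][a])

def alternating_best_response(a1="listen", a2="listen", max_iters=100):
    """Alternate unilateral improvements until a fixed point is reached."""
    for _ in range(max_iters):
        new_a1 = best_response(a2, who=1)
        new_a2 = best_response(new_a1, who=2)
        if (new_a1, new_a2) == (a1, a2):  # no unilateral improvement left
            return a1, a2                  # -> Nash equilibrium
        a1, a2 = new_a1, new_a2
    return a1, a2

print(alternating_best_response())              # ('listen', 'listen')
print(alternating_best_response("act", "act"))  # ('act', 'act')
```

Starting from ("listen", "listen") the procedure stops at the locally optimal equilibrium with reward 2.0, even though ("act", "act") with reward 3.0 exists; both are equilibria, which is exactly the local-optimality caveat the abstract raises.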
LDR    03275nmm 2200277 4500
001    1813351
005    20060503131715.5
008    130610s2004 eng d
020    $a 0496162322
035    $a (UnM)AAI3155457
035    $a AAI3155457
040    $a UnM $c UnM
100 1  $a Nair, Ranjit. $3 1902858
245 10 $a Coordinating multiagent teams in uncertain domains using distributed POMDPs.
300    $a 148 p.
500    $a Source: Dissertation Abstracts International, Volume: 65-11, Section: B, page: 5837.
500    $a Adviser: Milind Tambe.
502    $a Thesis (Ph.D.)--University of Southern California, 2004.
520    $a Distributed Partially Observable Markov Decision Problems (POMDPs) have emerged as a popular decision-theoretic approach for planning for multiagent teams, where it is imperative for the agents to be able to reason about the rewards (and costs) for their actions in the presence of uncertainty. However, finding the optimal distributed POMDP policy is computationally intractable (NEXP-Complete). This dissertation presents two independent approaches which deal with this issue of intractability in distributed POMDPs. The primary focus is on the first approach, which represents a principled way to combine the two dominant paradigms for building multiagent team plans, namely the "belief-desire-intention" (BDI) approach and distributed POMDPs. In this hybrid BDI-POMDP approach, BDI team plans are exploited to improve distributed POMDP tractability and distributed POMDP-based analysis improves BDI team plan performance. Concretely, we focus on role allocation, a fundamental problem in BDI teams---which agents to allocate to the different roles in the team. The hybrid BDI-POMDP approach provides three key contributions. First, unlike prior work in multiagent role allocation, we describe a role allocation technique that takes into account future uncertainties in the domain. The second contribution is a novel decomposition technique, which exploits the structure in the BDI team plans to significantly prune the search space of combinatorially many role allocations. Our third key contribution is a significantly faster policy evaluation algorithm suited for our BDI-POMDP hybrid approach. Finally, we also present experimental results from two domains: mission rehearsal simulation and RoboCupRescue disaster rescue simulation. In the RoboCupRescue domain, we show that the role allocation technique presented in this dissertation is capable of performing at human expert levels by comparing with the allocations chosen by humans in the actual RoboCupRescue simulation environment. The second approach for dealing with the intractability of distributed POMDPs is based on finding locally optimal joint policies using Nash equilibrium as a solution concept. Through the introduction of communication, we not only show improved coordination but also develop a novel compact policy representation that results in savings of both space and time which are verified empirically.
590    $a School code: 0208.
650  4 $a Computer Science. $3 626642
650  4 $a Artificial Intelligence. $3 769149
690    $a 0984
690    $a 0800
710 20 $a University of Southern California. $3 700129
773 0  $t Dissertation Abstracts International $g 65-11B.
790 10 $a Tambe, Milind, $e advisor
790    $a 0208
791    $a Ph.D.
792    $a 2004
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3155457
Holdings (1 item):
Barcode: W9204222
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0