Robust Methods for Influencing Strategic Behavior.
Record type: Bibliographic (electronic resource) : Monograph/item
Title/Author: Robust Methods for Influencing Strategic Behavior.
Author: Brown, Philip Nathaniel.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018
Pagination: 204 p.
Notes: Source: Dissertation Abstracts International, Volume: 80-02(E), Section: B.
Contained by: Dissertation Abstracts International, 80-02B(E).
Subject: Electrical engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10828283
ISBN: 9780438416727
Thesis (Ph.D.)--University of California, Santa Barbara, 2018.
Abstract: Today's world contains many examples of engineered systems that are tightly coupled with their users and customers. In these settings, the strategic and economic behavior of users and customers can have a significant impact on the performance of the overall system, and it may be desirable for an engineer to devise appropriate methods of incentivizing human behavior to improve system performance. This work seeks to understand the fundamental tradeoffs involved in designing behavior-influencing mechanisms for complex interconnected sociotechnical systems. We study several examples and pose them as problems of game design: a planner chooses appropriate ways to select or modify the utility functions of individual agents in order to promote desired behavior. In social systems these modifications take the form of monetary or other incentives, whereas in multiagent engineered systems the modifications may be algorithmic. Here, we ask questions of sensitivity and robustness: for example, if the quality of information available to the planner changes, how can we quantify the impact of this change on the planner's ability to influence behavior? We propose a simple overarching framework for studying this, and then apply it to three distinct domains: incentives for network routing, distributed control design for multiagent engineered systems, and impersonation attacks in networked systems. We ask the following questions:

- What features of a behavior-influencing mechanism directly confer robustness? We show weaknesses of several existing methodologies which use pricing for congestion control in transportation networks. In response to these issues, we propose a universal taxation mechanism which can incentivize optimal routing in transportation networks, requiring no information about network structure or user sensitivities, provided that it can charge sufficiently large prices. This suggests that large prices are more robust than small ones. We also directly compare flow-varying tolls to fixed tolls, and show that a great deal of robustness can be gained by using a flow-varying approach.

- How much information does a planner need to be confident that an incentive mechanism will not inadvertently induce pathological behavior? We show that for simple enough transportation networks (symmetric parallel networks are sufficient), a planner can provably avoid perverse incentives by applying a generalized marginal-cost taxation approach. On the other hand, we show that on general networks, perverse incentives are always a risk unless the incentive mechanism is given some information about network structure.

- How can robust games be designed for multiagent coordination? We investigate a setting of multiagent coordination in which autonomous agents may suffer from unplanned communication loss events; the planner's task is to program agents with a policy (analogous to an incentive mechanism) for updating their utility functions in response to such events. We show that even when the nominal game is well-behaved and the communication loss is between weakly coupled agents, there exists no utility update policy which can prevent arbitrarily poor states from emerging. We also investigate a setting in which an adversary attempts to influence a distributed system in a robust way; here, by understanding susceptibility to adversarial influence, we hope to inform the design of more robust networked systems.
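The marginal-cost tolling idea the abstract discusses can be illustrated on the classic two-route Pigou network. This sketch is not taken from the dissertation; the network, latency functions, and helper names are illustrative assumptions chosen to show why the untolled selfish equilibrium is inefficient and how a flow-varying marginal-cost toll recovers the optimum.

```python
def pigou_equilibrium(toll=lambda x: 0.0, tol=1e-9):
    """Wardrop equilibrium on a two-route Pigou network with unit demand.

    Route A has latency l_A(x) = x (congestible); route B has constant
    latency l_B(x) = 1. `toll` is an extra price charged on route A.
    Returns the equilibrium flow on route A, found by bisection.
    """
    if 1.0 + toll(1.0) <= 1.0:
        return 1.0  # route A is cheaper even when fully loaded
    lo, hi = 0.0, 1.0
    # At an interior equilibrium, perceived costs equalize: x + toll(x) = 1.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid + toll(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def total_latency(x):
    # Social cost is flow-weighted latency; tolls are transfers, not cost.
    return x * x + (1 - x) * 1.0

# Untolled: everyone takes route A, total latency 1.0 (inefficient).
untolled = pigou_equilibrium()
# Marginal-cost toll tau(x) = x * l_A'(x) = x shifts the equilibrium
# to the social optimum x = 0.5, total latency 0.75.
tolled = pigou_equilibrium(toll=lambda x: x)
print(untolled, total_latency(untolled))  # ~1.0, ~1.0
print(tolled, total_latency(tolled))      # ~0.5, ~0.75
```

Note that the toll here depends on the latency function's derivative, i.e., on network structure; the abstract's point is precisely that on general networks such structural information is needed to rule out perverse incentives.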
MARC record:
LDR    04398nmm a2200313 4500
001    2201665
005    20190429091135.5
008    201008s2018 ||||||||||||||||| ||eng d
020    $a 9780438416727
035    $a (MiAaPQ)AAI10828283
035    $a (MiAaPQ)ucsb:13908
035    $a AAI10828283
040    $a MiAaPQ $c MiAaPQ
100 1  $a Brown, Philip Nathaniel. $3 3428387
245 10 $a Robust Methods for Influencing Strategic Behavior.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 204 p.
500    $a Source: Dissertation Abstracts International, Volume: 80-02(E), Section: B.
500    $a Adviser: Jason R. Marden.
502    $a Thesis (Ph.D.)--University of California, Santa Barbara, 2018.
590    $a School code: 0035.
650  4 $a Electrical engineering. $3 649834
650  4 $a Economic theory. $3 1556984
650  4 $a Operations research. $3 547123
690    $a 0544
690    $a 0511
690    $a 0796
710 2  $a University of California, Santa Barbara. $b Electrical and Computer Engineering. $3 2095334
773 0  $t Dissertation Abstracts International $g 80-02B(E).
790    $a 0035
791    $a Ph.D.
792    $a 2018
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10828283
Holdings:
Barcode: W9378214
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Attachments: 0