Simulation-based optimization = parametric optimization techniques and reinforcement learning /
Record type:
Bibliographic - Electronic resource : Monograph/item
Title / Author:
Simulation-based optimization / by Abhijit Gosavi.
Other title:
parametric optimization techniques and reinforcement learning
Author:
Gosavi, Abhijit.
Publisher:
Boston, MA : Springer US, 2015.
Physical description:
xxvi, 508 p. : ill., digital ; 24 cm.
Contents note:
Background -- Simulation basics -- Simulation optimization: an overview -- Response surfaces and neural nets -- Parametric optimization -- Dynamic programming -- Reinforcement learning -- Stochastic search for controls -- Convergence: background material -- Convergence: parametric optimization -- Convergence: control optimization -- Case studies.
Contained by:
Springer eBooks
Subject:
Probabilities.
Electronic resource:
http://dx.doi.org/10.1007/978-1-4899-7491-4
ISBN:
9781489974914 (electronic bk.)
Simulation-based optimization [electronic resource] : parametric optimization techniques and reinforcement learning / by Abhijit Gosavi. - 2nd ed. - Boston, MA : Springer US, 2015. - xxvi, 508 p. : ill., digital ; 24 cm. - (Operations research/computer science interfaces series, 1387-666X ; v.55)
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques especially designed for those discrete-event, stochastic systems which can be simulated but whose analytical models are difficult to find in closed mathematical forms. Key features of this revised and improved Second Edition include:
- Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms)
- Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics
- An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search via API, Q-P-Learning, actor-critics, and learning automata
- A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs, via Banach fixed point theory and Ordinary Differential Equations
Themed around three areas in separate sets of chapters (Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis), this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
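For orientation only, here is a minimal illustration of two of the techniques named in the publisher's description, written in their standard textbook forms rather than as quoted from this book: the Bellman optimality equation for a discounted MDP and the model-free Q-Learning update it motivates. The symbols (states s, actions a, transition probabilities p, reward r, discount factor gamma, step size alpha) are generic notation, not the book's.

$$V^{*}(s) = \max_{a} \Big[ \bar{r}(s,a) + \gamma \sum_{s'} p(s' \mid s, a)\, V^{*}(s') \Big]$$

$$Q(s,a) \leftarrow Q(s,a) + \alpha \Big[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \Big]$$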
ISBN: 9781489974914 (electronic bk.)
Standard No.: 10.1007/978-1-4899-7491-4 (doi)
Subjects--Topical Terms: Probabilities.
LC Class. No.: TA340
Dewey Class. No.: 519.2
Simulation-based optimization = parametric optimization techniques and reinforcement learning /
LDR    03401nmm a2200349 a 4500
001    1992992
003    DE-He213
005    20150610110138.0
006    m d
007    cr nn 008maaau
008    151019s2015 mau s 0 eng d
020    $a 9781489974914 (electronic bk.)
020    $a 9781489974907 (paper)
024 7  $a 10.1007/978-1-4899-7491-4 $2 doi
035    $a 978-1-4899-7491-4
040    $a GP $c GP
041 0  $a eng
050 4  $a TA340
072 7  $a KJT $2 bicssc
072 7  $a KJMD $2 bicssc
072 7  $a BUS049000 $2 bisacsh
082 04 $a 519.2 $2 22
090    $a TA340 $b .G676 2015
100 1  $a Gosavi, Abhijit. $3 757066
245 10 $a Simulation-based optimization $h [electronic resource] : $b parametric optimization techniques and reinforcement learning / $c by Abhijit Gosavi.
250    $a 2nd ed.
260    $a Boston, MA : $b Springer US : $b Imprint: Springer, $c 2015.
300    $a xxvi, 508 p. : $b ill., digital ; $c 24 cm.
490 1  $a Operations research/computer science interfaces series, $x 1387-666X ; $v v.55
505 0  $a
Background -- Simulation basics -- Simulation optimization: an overview -- Response surfaces and neural nets -- Parametric optimization -- Dynamic programming -- Reinforcement learning -- Stochastic search for controls -- Convergence: background material -- Convergence: parametric optimization -- Convergence: control optimization -- Case studies.
520    $a
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques especially designed for those discrete-event, stochastic systems which can be simulated but whose analytical models are difficult to find in closed mathematical forms. Key features of this revised and improved Second Edition include: Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods, such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms) Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search, via API, Q-P-Learning, actor-critics, and learning automata A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs, via Banach fixed point theory and Ordinary Differential Equations Themed around three areas in separate sets of chapters Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
650  0 $a Probabilities. $3 518889
650  0 $a Mathematical optimization. $3 517763
650 14 $a Economics/Management Science. $3 890844
650 24 $a Operation Research/Decision Theory. $3 1620900
650 24 $a Operations Research, Management Science. $3 1532996
650 24 $a Simulation and Modeling. $3 890873
710 2  $a SpringerLink (Online service) $3 836513
773 0  $t Springer eBooks
830  0 $a Operations research/computer science interfaces series ; $v v.55. $3 2130911
856 40 $u http://dx.doi.org/10.1007/978-1-4899-7491-4
950    $a Business and Economics (Springer-11643)
Holdings (1 record)
Barcode: W9265701
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB TA340
Use type: General use (Normal)
Loan status: On shelf
Reservation status: 0