A derivative-free two level random search method for unconstrained optimization
Record Type:
Electronic resources : Monograph/item
Title/Author:
A derivative-free two level random search method for unconstrained optimization / by Neculai Andrei.
Author:
Andrei, Neculai.
Published:
Cham : Springer International Publishing : Imprint: Springer, 2021.
Description:
xi, 118 p. : ill., digital ; 24 cm.
Contents:
1. Introduction -- 2. A Derivative-free Two Level Random Search Method for Unconstrained Optimization -- 3. Convergence of the Algorithm -- 4. Numerical Results -- 5. Conclusions -- Annex A. List of Applications -- Annex B. List of Test Functions -- Annex C. Detailed Results for 30 Large-Scale Problems -- Annex D. Detailed Results for 140 Problems.
Contained By:
Springer Nature eBook
Subject:
Mathematical optimization.
Online resource:
https://doi.org/10.1007/978-3-030-68517-1
ISBN:
9783030685171
A derivative-free two level random search method for unconstrained optimization [electronic resource] / by Neculai Andrei. - Cham : Springer International Publishing, 2021. - xi, 118 p. : ill., digital ; 24 cm. - (SpringerBriefs in optimization, ISSN 2190-8354).
The book is intended for graduate students and researchers in mathematics, computer science, and operational research. It presents a new derivative-free optimization method in which trial points are randomly generated in specified domains and the best ones are selected at each iteration according to a number of rules. The method differs from many well-established methods in the literature and proves competitive for solving unconstrained optimization problems of different structures and complexities with a relatively large number of variables. Intensive numerical experiments on 140 unconstrained optimization problems with up to 500 variables have shown that the approach is efficient and robust. The book is structured into four chapters. Chapter 1 is introductory. Chapter 2 presents the two level derivative-free random search method for unconstrained optimization; it is assumed that the minimizing function is continuous, lower bounded, and that its minimum value is known. Chapter 3 proves the convergence of the algorithm. Chapter 4 reports the numerical performance of the algorithm on 140 unconstrained optimization problems, of which 16 are real applications, and shows that the optimization process has two phases: a reduction phase and a stalling phase. Finally, the performance of the algorithm on 30 large-scale unconstrained optimization problems with up to 500 variables is presented. These numerical results show that the two level random search approach can solve a large diversity of problems with different structures and complexities. A number of open problems remain: the selection of the number of trial points and of local trial points, the selection of the bounds of the domains in which the trial points and local trial points are randomly generated, and a criterion for initiating the line search.
ISBN: 9783030685171
Standard No.: 10.1007/978-3-030-68517-1 (doi)
Subjects--Topical Terms: Mathematical optimization.
LC Class. No.: QA402.5 / .A537 2021
Dewey Class. No.: 519.6
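The abstract above only outlines the method: trial points are generated at random over a specified domain (the first level), additional local trial points are generated around the current best point (the second level), the best candidate is kept at each iteration, and the search assumes the objective is continuous, lower bounded, and has a known minimum value. The Python sketch below is an illustrative reconstruction under those assumptions only; the function name two_level_random_search, the parameters n_trial, n_local, and shrink, and the shrinking and stopping rules are hypothetical choices, not the algorithm as published in the book.

```python
# Illustrative sketch of a generic two-level random search, in the spirit of
# the abstract above.  Names, parameters, and update rules are assumptions,
# not Andrei's published algorithm.
import numpy as np

def two_level_random_search(f, lower, upper, n_trial=50, n_local=20,
                            shrink=0.5, max_iter=1000, f_min=None, tol=1e-6,
                            rng=None):
    """Minimize f over the box [lower, upper] (elementwise bounds)."""
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x = rng.uniform(lower, upper)
    best_f = f(best_x)
    radius = (upper - lower) / 2.0        # width of the local (second-level) box

    for _ in range(max_iter):
        # Level 1: trial points drawn uniformly over the whole domain.
        trials = rng.uniform(lower, upper, size=(n_trial, lower.size))
        # Level 2: local trial points drawn around the current best point.
        local_pts = best_x + rng.uniform(-radius, radius, size=(n_local, lower.size))
        local_pts = np.clip(local_pts, lower, upper)

        candidates = np.vstack([trials, local_pts])
        values = np.array([f(x) for x in candidates])
        i = values.argmin()

        if values[i] < best_f:            # reduction phase: accept the improvement
            best_x, best_f = candidates[i], values[i]
        else:                             # stalling: shrink the local search box
            radius *= shrink

        # Stop if the (assumed known) minimum value is reached to tolerance.
        if f_min is not None and best_f - f_min <= tol:
            break
    return best_x, best_f

# Example: minimize the Rosenbrock function on [-2, 2]^2 (known minimum 0).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x_star, f_star = two_level_random_search(rosen, [-2, -2], [2, 2], f_min=0.0)
print(x_star, f_star)
```

Shrinking the local box whenever no improvement is found is one plausible way to model the reduction and stalling phases mentioned in the abstract; the book's actual rules for choosing the sampling bounds and for initiating the line search are among the open problems it lists.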
LDR    03441nmm a2200337 a 4500
001    2239325
003    DE-He213
005    20210715142529.0
006    m d
007    cr nn 008maaau
008    211111s2021 sz s 0 eng d
020    $a 9783030685171 $q (electronic bk.)
020    $a 9783030685164 $q (paper)
024 7  $a 10.1007/978-3-030-68517-1 $2 doi
035    $a 978-3-030-68517-1
040    $a GP $c GP
041 0  $a eng
050 4  $a QA402.5 $b .A537 2021
072 7  $a PBU $2 bicssc
072 7  $a MAT003000 $2 bisacsh
072 7  $a PBU $2 thema
082 04 $a 519.6 $2 23
090    $a QA402.5 $b .A559 2021
100 1  $a Andrei, Neculai. $3 3270881
245 12 $a A derivative-free two level random search method for unconstrained optimization $h [electronic resource] / $c by Neculai Andrei.
260    $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2021.
300    $a xi, 118 p. : $b ill., digital ; $c 24 cm.
490 1  $a SpringerBriefs in optimization, $x 2190-8354
505 0  $a 1. Introduction -- 2. A Derivative-free Two Level Random Search Method for Unconstrained Optimization -- 3. Convergence of the Algorithm -- 4. Numerical Results -- 5. Conclusions -- Annex A. List of Applications -- Annex B. List of Test Functions -- Annex C. Detailed Results for 30 Large-Scale Problems -- Annex D. Detailed Results for 140 Problems.
520    $a The book is intended for graduate students and researchers in mathematics, computer science, and operational research. The book presents a new derivative-free optimization method/algorithm based on randomly generated trial points in specified domains and where the best ones are selected at each iteration by using a number of rules. This method is different from many other well established methods presented in the literature and proves to be competitive for solving many unconstrained optimization problems with different structures and complexities, with a relative large number of variables. Intensive numerical experiments with 140 unconstrained optimization problems, with up to 500 variables, have shown that this approach is efficient and robust. Structured into 4 chapters, Chapter 1 is introductory. Chapter 2 is dedicated to presenting a two level derivative-free random search method for unconstrained optimization. It is assumed that the minimizing function is continuous, lower bounded and its minimum value is known. Chapter 3 proves the convergence of the algorithm. In Chapter 4, the numerical performances of the algorithm are shown for solving 140 unconstrained optimization problems, out of which 16 are real applications. This shows that the optimization process has two phases: the reduction phase and the stalling one. Finally, the performances of the algorithm for solving a number of 30 large-scale unconstrained optimization problems up to 500 variables are presented. These numerical results show that this approach based on the two level random search method for unconstrained optimization is able to solve a large diversity of problems with different structures and complexities. There are a number of open problems which refer to the following aspects: the selection of the number of trial or the number of the local trial points, the selection of the bounds of the domains where the trial points and the local trial points are randomly generated and a criterion for initiating the line search.
650 0  $a Mathematical optimization. $3 517763
650 14 $a Optimization. $3 891104
650 24 $a Operations Research, Management Science. $3 1532996
710 2  $a SpringerLink (Online service) $3 836513
773 0  $t Springer Nature eBook
830 0  $a SpringerBriefs in optimization. $3 1566137
856 40 $u https://doi.org/10.1007/978-3-030-68517-1
950    $a Mathematics and Statistics (SpringerNature-11649)
Items
Inventory Number: W9401210
Location Name: Electronic resources (電子資源)
Item Class: 11.線上閱覽_V (online reading)
Material type: E-book (電子書)
Call number: EB QA402.5 .A537 2021
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments: