Minimum divergence methods in statistical machine learning : from an information geometric viewpoint
Record Type:
Electronic resources : Monograph/item
Title/Author:
Minimum divergence methods in statistical machine learning / by Shinto Eguchi, Osamu Komori.
Remainder of title:
from an information geometric viewpoint
Author:
Eguchi, Shinto.
Other author:
Komori, Osamu.
Published:
Tokyo : Springer Japan : Imprint: Springer, 2022.
Description:
x, 221 p. : ill., digital ; 24 cm.
Contents:
Information geometry -- Information divergence -- Maximum entropy model -- Minimum divergence method -- Unsupervised learning algorithms -- Regression model -- Classification.
Contained By:
Springer Nature eBook
Subject:
Machine learning -- Statistical methods.
Online resource:
https://doi.org/10.1007/978-4-431-56922-0
ISBN:
9784431569220
Minimum divergence methods in statistical machine learning : from an information geometric viewpoint
Eguchi, Shinto.
Minimum divergence methods in statistical machine learning [electronic resource] : from an information geometric viewpoint / by Shinto Eguchi, Osamu Komori. - Tokyo : Springer Japan, 2022. - x, 221 p. : ill., digital ; 24 cm.
Information geometry -- Information divergence -- Maximum entropy model -- Minimum divergence method -- Unsupervised learning algorithms -- Regression model -- Classification.
This book explores minimum divergence methods in statistical machine learning for estimation, regression, prediction, and so forth, using information geometry to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, in which the estimator is given by minimizing the sum of squares between a response vector and a vector in the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, in which the estimator is obtained by minimizing an empirical analogue of the Kullback-Leibler (KL) divergence between the data distribution and a parametric distribution of the exponential model. We thus arrive at a geometric interpretation of such minimization procedures, in which a right triangle satisfies a Pythagorean identity in the sense of the KL divergence. This understanding reveals a dualistic interplay between statistical estimation and the statistical model, expressed by dual geodesic paths, called m-geodesic and e-geodesic paths, in the framework of information geometry. We extend this dualistic structure of the MLE and the exponential model to the minimum divergence estimator and the maximum entropy model, with applications to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth.

We consider a variety of information divergence measures, the KL divergence being the typical example, to express the departure of one probability distribution from another. An information divergence decomposes into the cross-entropy and the (diagonal) entropy: the entropy is associated with a generative model, as a family of maximum entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue on given data. Thus any statistical divergence carries an intrinsic pairing of a generative model and an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence induces a Riemannian metric and a pair of linear connections in the framework of information geometry.

We focus on a class of information divergences generated by an increasing, convex function U, called the U-divergence. Any generator function U yields a U-entropy and a U-divergence, with a dualistic structure between the minimum U-divergence method and the maximum U-entropy model. A specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is the exponential function, the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator is equivalent to the MLE. For robust supervised learning to predict a class label, the U-boosting algorithm performs well under contamination by mislabeled examples if U is appropriately selected. We present such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U, to provide flexible performance in statistical machine learning.
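As a quick companion to the abstract, the following LaTeX sketch writes out its central objects in our own notation (the book's notation may differ): the decomposition of a divergence into cross-entropy and (diagonal) entropy, the Pythagorean identity behind the "right triangle" remark, and one common form of the U-divergence family. The reference measure mu and the usual regularity conditions are assumed.

% Cross-entropy / entropy decomposition of the KL divergence (our notation).
\[
  C(p, q) = -\int p(x) \log q(x)\, d\mu(x), \qquad H(p) = C(p, p),
\]
\[
  D_{\mathrm{KL}}(p, q) = C(p, q) - H(p)
                        = \int p(x) \log \frac{p(x)}{q(x)}\, d\mu(x) \;\ge\; 0.
\]
% Pythagorean identity: if the m-geodesic from p to q meets the
% e-geodesic from q to r orthogonally, the "right triangle" satisfies
\[
  D_{\mathrm{KL}}(p, r) = D_{\mathrm{KL}}(p, q) + D_{\mathrm{KL}}(q, r).
\]
% One common form of the U-divergence for increasing, convex U with
% \xi = (U')^{-1}; the choice U(t) = e^t recovers the (extended) KL divergence.
\[
  D_U(p, q) = \int \Bigl\{ U\bigl(\xi(q)\bigr) - U\bigl(\xi(p)\bigr)
              - p\,\bigl(\xi(q) - \xi(p)\bigr) \Bigr\}\, d\mu.
\]

Substituting U(t) = e^t, so that xi = log, collapses the last display to the integral of q - p + p log(p/q), which is the extended KL divergence, matching the abstract's statement that the exponential choice of U reduces the U-divergence method to maximum likelihood.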
ISBN: 9784431569220
Standard No.: 10.1007/978-4-431-56922-0 (doi)
Subjects--Topical Terms: Machine learning -- Statistical methods.
LC Class. No.: Q325.5 / .E48 2022
Dewey Class. No.: 006.31
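The abstract's robustness claim, that a power-function choice of U downweights outliers, can be checked numerically. Below is a minimal Python sketch, our illustration rather than code from the book: it estimates a Gaussian location parameter by minimizing the density power divergence (the beta-divergence of Basu et al., a power-type member of the U-divergence family), which for known sigma reduces to a reweighted-mean fixed-point iteration. The function name and the contamination setup are ours.

# A minimal sketch (not the book's code) of a minimum density-power-divergence
# location estimate. beta = 0 recovers the MLE (the sample mean); beta > 0
# downweights outliers. sigma is assumed known for simplicity.
import numpy as np

def min_power_divergence_mean(x, beta=0.5, sigma=1.0, n_iter=100, tol=1e-10):
    """Fixed-point iteration for the stationary condition
    mu = sum(w_i * x_i) / sum(w_i), with w_i = exp(-beta*(x_i-mu)^2/(2 sigma^2))."""
    mu = np.median(x)                      # robust starting point
    for _ in range(n_iter):
        w = np.exp(-beta * (x - mu) ** 2 / (2.0 * sigma ** 2))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=95)     # inliers around 0
outliers = rng.normal(loc=8.0, scale=1.0, size=5)   # 5% contamination
x = np.concatenate([clean, outliers])

print("MLE (sample mean):        %.3f" % x.mean())
print("min power-divergence fit: %.3f" % min_power_divergence_mean(x))

With 5% of the points shifted to 8, the sample mean is pulled toward the outliers, while the minimum-divergence fit stays near the true location 0 because the exponential weights of the outlying points are essentially zero; letting beta tend to 0 makes all weights equal and recovers the MLE exactly.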
Minimum divergence methods in statistical machine learning : from an information geometric viewpoint
LDR    04696nmm a2200325 a 4500
001    2299141
003    DE-He213
005    20220314204554.0
006    m d
007    cr nn 008maaau
008    230324s2022 ja s 0 eng d
020    $a 9784431569220 $q (electronic bk.)
020    $a 9784431569206 $q (paper)
024 7  $a 10.1007/978-4-431-56922-0 $2 doi
035    $a 978-4-431-56922-0
040    $a GP $c GP
041 0  $a eng
050 4  $a Q325.5 $b .E48 2022
072 7  $a PBT $2 bicssc
072 7  $a MAT029000 $2 bisacsh
072 7  $a PBT $2 thema
082 04 $a 006.31 $2 23
090    $a Q325.5 $b .E32 2022
100 1  $a Eguchi, Shinto. $3 3503179
245 10 $a Minimum divergence methods in statistical machine learning $h [electronic resource] : $b from an information geometric viewpoint / $c by Shinto Eguchi, Osamu Komori.
260    $a Tokyo : $b Springer Japan : $b Imprint: Springer, $c 2022.
300    $a x, 221 p. : $b ill., digital ; $c 24 cm.
505 0  $a Information geometry -- Information divergence -- Maximum entropy model -- Minimum divergence method -- Unsupervised learning algorithms -- Regression model -- Classification.
520    $a This book explores minimum divergence methods in statistical machine learning for estimation, regression, prediction, and so forth, using information geometry to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, in which the estimator is given by minimizing the sum of squares between a response vector and a vector in the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, in which the estimator is obtained by minimizing an empirical analogue of the Kullback-Leibler (KL) divergence between the data distribution and a parametric distribution of the exponential model. We thus arrive at a geometric interpretation of such minimization procedures, in which a right triangle satisfies a Pythagorean identity in the sense of the KL divergence. This understanding reveals a dualistic interplay between statistical estimation and the statistical model, expressed by dual geodesic paths, called m-geodesic and e-geodesic paths, in the framework of information geometry. We extend this dualistic structure of the MLE and the exponential model to the minimum divergence estimator and the maximum entropy model, with applications to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth. We consider a variety of information divergence measures, the KL divergence being the typical example, to express the departure of one probability distribution from another. An information divergence decomposes into the cross-entropy and the (diagonal) entropy: the entropy is associated with a generative model, as a family of maximum entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue on given data. Thus any statistical divergence carries an intrinsic pairing of a generative model and an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence induces a Riemannian metric and a pair of linear connections in the framework of information geometry. We focus on a class of information divergences generated by an increasing, convex function U, called the U-divergence. Any generator function U yields a U-entropy and a U-divergence, with a dualistic structure between the minimum U-divergence method and the maximum U-entropy model. A specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is the exponential function, the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator is equivalent to the MLE. For robust supervised learning to predict a class label, the U-boosting algorithm performs well under contamination by mislabeled examples if U is appropriately selected. We present such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U, to provide flexible performance in statistical machine learning.
650  0 $a Machine learning $x Statistical methods. $3 921882
650 14 $a Statistics in Engineering, Physics, Computer Science, Chemistry and Earth Sciences. $3 3591853
650 24 $a Statistical Theory and Methods. $3 891074
650 24 $a Probability and Statistics in Computer Science. $3 891072
700 1  $a Komori, Osamu. $3 3503178
710 2  $a SpringerLink (Online service) $3 836513
773 0  $t Springer Nature eBook
856 40 $u https://doi.org/10.1007/978-4-431-56922-0
950    $a Mathematics and Statistics (SpringerNature-11649)
Items
Inventory Number: W9441033
Location Name: Electronic resources
Item Class: 11. Online reading_V
Material type: E-book
Call number: EB Q325.5 .E48 2022
Usage Class: General use (Normal)
Loan Status: On shelf
No. of reservations: 0