Newton Methods for Large Scale Problems in Machine Learning.
Record Type: Language materials, printed : Monograph/item
Title/Author: Newton Methods for Large Scale Problems in Machine Learning. / Hansen, Samantha Leigh.
Author: Hansen, Samantha Leigh.
Description: 130 p.
Notes: Source: Dissertation Abstracts International, Volume: 75-07(E), Section: B.
Contained By: Dissertation Abstracts International, 75-07B(E).
Subject: Applied Mathematics.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3615562
ISBN: 9781303815829
MARC record:
LDR     03289nam a2200313 4500
001     1968916
005     20141231071636.5
008     150210s2014 ||||||||||||||||| ||eng d
020     $a 9781303815829
035     $a (MiAaPQ)AAI3615562
035     $a AAI3615562
040     $a MiAaPQ $c MiAaPQ
100 1   $a Hansen, Samantha Leigh. $3 2106148
245 10  $a Newton Methods for Large Scale Problems in Machine Learning.
300     $a 130 p.
500     $a Source: Dissertation Abstracts International, Volume: 75-07(E), Section: B.
500     $a Adviser: Jorge Nocedal.
502     $a Thesis (Ph.D.)--Northwestern University, 2014.
520     $a The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss functions over a dataset, but respectively apply to the general scenarios of either minimizing a stochastic convex function or a convex function with an L1 regularizer. Chapter 4 discusses efficient implementations of projected Newton methods for nonnegative tensor factorization.
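For orientation, the two problem classes this abstract refers to can be written in a standard form (a conventional formulation added here, not quoted from the thesis); F denotes the finite-sum or expected loss of Chapters 2 and 3, and phi the L1-regularized convex objective of Chapter 3:

% Finite-sum / stochastic objective (Chapters 2 and 3):
\min_{w \in \mathbb{R}^d} \; F(w) = \frac{1}{n} \sum_{i=1}^{n} f_i(w)
\qquad \text{or} \qquad
\min_{w \in \mathbb{R}^d} \; \mathbb{E}_{\xi}\big[\, f(w;\xi) \,\big],
% L1-regularized convex problem (Chapter 3):
\min_{w \in \mathbb{R}^d} \; \phi(w) = f(w) + \lambda \lVert w \rVert_1 .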
520     $a Chapter 2 outlines a new stochastic quasi-Newton algorithm that incorporates second order information through the L-BFGS approximation of the Hessian. The method's novel element comes from using subsampled Hessian-vector products and averaging to define the L-BFGS curvature pairs. Numerical results on a speech and text classification problem demonstrate the effectiveness of this new algorithm over stochastic gradient descent.
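A minimal sketch of a stochastic quasi-Newton loop in the spirit of this description, assuming a synthetic logistic-regression problem; the data, function names, and constants below are illustrative and not taken from the dissertation. Curvature pairs (s, y) are formed every L steps from averaged iterates and a subsampled Hessian-vector product, and the L-BFGS two-loop recursion scales the stochastic gradient.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic logistic-regression data (illustrative only).
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true + 0.1 * rng.normal(size=n) > 0, 1.0, -1.0)  # labels in {-1, +1}

def grad(w, idx):
    # Minibatch gradient of the logistic loss (1/|S|) sum_i log(1 + exp(-y_i x_i.w)).
    z = y[idx] * (X[idx] @ w)
    return -(X[idx].T @ (y[idx] / (1.0 + np.exp(z)))) / len(idx)

def hess_vec(w, v, idx):
    # Subsampled Hessian-vector product of the same loss.
    z = y[idx] * (X[idx] @ w)
    sig = 1.0 / (1.0 + np.exp(-z))
    return (X[idx].T @ (sig * (1.0 - sig) * (X[idx] @ v))) / len(idx)

def two_loop(g, pairs):
    # Standard L-BFGS two-loop recursion: returns an approximation of H^{-1} g.
    q, alphas = g.copy(), []
    for s, yv in reversed(pairs):
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q -= a * yv
    s, yv = pairs[-1]
    q *= (yv @ s) / (yv @ yv)           # initial Hessian scaling
    for (s, yv), a in zip(pairs, reversed(alphas)):
        b = (yv @ q) / (yv @ s)
        q += (a - b) * s
    return q

w = np.zeros(d)
pairs, w_bar_prev, w_sum = [], None, np.zeros(d)
L, M, grad_batch, hess_batch, step = 10, 5, 64, 256, 0.1

for t in range(1, 501):
    g = grad(w, rng.choice(n, grad_batch, replace=False))
    direction = two_loop(g, pairs) if pairs else g
    w -= step * direction
    w_sum += w
    if t % L == 0:                      # every L steps, refresh the curvature pairs
        w_bar = w_sum / L
        w_sum = np.zeros(d)
        if w_bar_prev is not None:
            s = w_bar - w_bar_prev
            yv = hess_vec(w_bar, s, rng.choice(n, hess_batch, replace=False))
            if s @ yv > 1e-10:          # keep only safely positive curvature
                pairs.append((s, yv))
                pairs = pairs[-M:]      # limited memory of M pairs
        w_bar_prev = w_bar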
520     $a Chapter 3 presents a new active set method for minimizing a convex function with an L1 regularizer. The algorithm follows a two phase approach: an active-set prediction phase that employs first-order and second-order information, and a subspace phase that performs a Newton-like step using sub-sampled Newton-CG. The novelty of the algorithm comes from using an iterative shrinkage step in the active-set phase and a projected piece-wise linear line search in the subspace phase. The new algorithm is compared against a state-of-the-art orthant-wise limited memory algorithm on a speech classification problem.
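A simplified sketch of the two-phase idea on a small lasso-type problem f(w) = 0.5*||Aw - b||^2 + lam*||w||_1 (an assumed test objective, not from the thesis). A soft-thresholding (iterative shrinkage) step predicts the active set, and a Newton-like step restricted to the free variables refines the nonzero entries; the sub-sampled Newton-CG solver and projected piecewise-linear line search of the actual algorithm are replaced here by a dense solve and a crude orthant check.

import numpy as np

rng = np.random.default_rng(1)

# Small synthetic lasso problem (illustrative only).
m, d, lam = 100, 30, 0.5
A = rng.normal(size=(m, d))
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5)
b = A @ w_true + 0.01 * rng.normal(size=m)

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

w = np.zeros(d)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth part

for it in range(50):
    # Phase 1: iterative shrinkage step predicts the active set (the zeros of w).
    g = A.T @ (A @ w - b)
    w = soft_threshold(w - step * g, step * lam)
    free = np.flatnonzero(w)
    if free.size == 0:
        continue
    # Phase 2: Newton-like step on the free subspace, holding the signs fixed.
    Af = A[:, free]
    signs = np.sign(w[free])
    g_free = Af.T @ (Af @ w[free] - b) + lam * signs
    H_free = Af.T @ Af + 1e-8 * np.eye(free.size)
    d_free = np.linalg.solve(H_free, -g_free)
    # Crude backtracking that keeps the step inside the current orthant.
    t = 1.0
    while not np.all(np.sign(w[free] + t * d_free) * signs >= 0) and t > 1e-6:
        t *= 0.5
    w[free] += t * d_free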
520     $a The fourth chapter concerns nonnegative tensor/matrix factorization with a Kullback-Leibler objective. All presented algorithms start from an alternating block Gauss-Seidel framework and formulate each block subproblem as a sum of independent row functions that only depend on a subset of variables. Minimization of the block problem is executed by the independent minimization of each row function, which is a strictly convex function with nonnegativity constraints. The conclusion is that applying two-metric gradient projection techniques with exact or approximate Hessian information to each of the independent row functions is much more effective than applying the same algorithms directly to the block subproblem.
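A hedged sketch of the row-wise decomposition for the matrix case V ~= WH with the generalized Kullback-Leibler objective. With H fixed, the objective separates into one strictly convex row function per row of W; each row is updated here by a few projected Newton steps with a simple step-halving guard, a simplified stand-in for the two-metric gradient projection with exact or approximate Hessians that the chapter describes. All data, names, and constants are illustrative.

import numpy as np

rng = np.random.default_rng(2)

# Small synthetic nonnegative factorization problem V ~= W H (illustrative only).
m, n, r = 40, 60, 5
V = rng.poisson(rng.gamma(2.0, 1.0, size=(m, n))).astype(float) + 1e-3
W = rng.random((m, r)) + 0.1
H = rng.random((r, n)) + 0.1
EPS = 1e-10

def row_obj(w, v, Hm):
    # Generalized KL row objective: sum_j (wH)_j - v_j log (wH)_j  (constants dropped).
    pred = w @ Hm + EPS
    return np.sum(pred - v * np.log(pred))

def row_grad_hess(w, v, Hm):
    pred = w @ Hm + EPS
    grad = Hm @ (1.0 - v / pred)
    hess = (Hm * (v / pred ** 2)) @ Hm.T
    return grad, hess

def update_row(w, v, Hm, iters=5):
    # A few projected Newton steps on one strictly convex row subproblem, keeping w >= 0.
    for _ in range(iters):
        g, Hess = row_grad_hess(w, v, Hm)
        free = (w > EPS) | (g < 0)              # variables allowed to move
        d = np.zeros_like(w)
        if free.any():
            Hf = Hess[np.ix_(free, free)] + 1e-8 * np.eye(int(free.sum()))
            d[free] = np.linalg.solve(Hf, -g[free])
        # Step-halving guard so the projected step does not increase the objective.
        f0, t, w_new = row_obj(w, v, Hm), 1.0, w
        for _ in range(10):
            trial = np.maximum(w + t * d, 0.0)
            if row_obj(trial, v, Hm) <= f0:
                w_new = trial
                break
            t *= 0.5
        w = w_new
    return w

# One block Gauss-Seidel sweep: rows of W with H fixed, then columns of H with W fixed.
for i in range(m):
    W[i] = update_row(W[i], V[i], H)
for j in range(n):
    H[:, j] = update_row(H[:, j], V[:, j], W.T)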
590     $a School code: 0163.
650  4  $a Applied Mathematics. $3 1669109
650  4  $a Information Science. $3 1017528
690     $a 0364
690     $a 0723
710 2   $a Northwestern University. $b Applied Mathematics. $3 1023521
773 0   $t Dissertation Abstracts International $g 75-07B(E).
790     $a 0163
791     $a Ph.D.
792     $a 2014
793     $a English
856 40  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3615562
Items (1 record):
Inventory Number: W9263923
Location Name: Electronic Resources (電子資源)
Item Class: 11. Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note: (none)
Attachments: (none)