Newton Methods for Large Scale Problems in Machine Learning.
Record type:
Bibliographic - language material, printed : Monograph/item
Title / Author:
Newton Methods for Large Scale Problems in Machine Learning.
Author:
Hansen, Samantha Leigh.
Pagination:
130 p.
Note:
Source: Dissertation Abstracts International, Volume: 75-07(E), Section: B.
Contained by:
Dissertation Abstracts International, 75-07B(E).
Subject:
Applied Mathematics.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3615562
ISBN:
9781303815829
Dissertation note:
Thesis (Ph.D.)--Northwestern University, 2014.
Abstract:
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss functions over a dataset, but respectively apply to the general scenarios of either minimizing a stochastic convex function or a convex function with an L1 regularizer. Chapter 4 discusses efficient implementations of projected Newton methods for nonnegative tensor factorization.
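For reference, the two problem classes named in the abstract can be written out explicitly; the notation below (w for the model parameters, f for the loss on a single training pair (x_i, y_i), lambda for the regularization weight) is chosen here for illustration and is not taken from the thesis.

```latex
% Empirical risk over N training examples (the setting of Chapter 2) and its
% L1-regularized counterpart (the setting of Chapter 3); notation illustrative.
\[
  F(w) \;=\; \frac{1}{N}\sum_{i=1}^{N} f(w;\, x_i, y_i),
  \qquad
  \phi(w) \;=\; F(w) \;+\; \lambda \lVert w \rVert_1 .
\]
```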
MARC record:
LDR    03289nam a2200313 4500
001    1968916
005    20141231071636.5
008    150210s2014 ||||||||||||||||| ||eng d
020    $a 9781303815829
035    $a (MiAaPQ)AAI3615562
035    $a AAI3615562
040    $a MiAaPQ $c MiAaPQ
100 1  $a Hansen, Samantha Leigh. $3 2106148
245 10 $a Newton Methods for Large Scale Problems in Machine Learning.
300    $a 130 p.
500    $a Source: Dissertation Abstracts International, Volume: 75-07(E), Section: B.
500    $a Adviser: Jorge Nocedal.
502    $a Thesis (Ph.D.)--Northwestern University, 2014.
520    $a The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss functions over a dataset, but respectively apply to the general scenarios of either minimizing a stochastic convex function or a convex function with an L1 regularizer. Chapter 4 discusses efficient implementations of projected Newton methods for nonnegative tensor factorization.
520    $a Chapter 2 outlines a new stochastic quasi-Newton algorithm that incorporates second order information through the L-BFGS approximation of the Hessian. The method's novel element comes from using subsampled Hessian-vector products and averaging to define the L-BFGS curvature pairs. Numerical results on a speech and text classification problem demonstrate the effectiveness of this new algorithm over stochastic gradient descent.
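A minimal sketch, not the thesis implementation, of the mechanism this paragraph describes: stochastic gradient steps preconditioned by L-BFGS, with the curvature pairs (s, y) formed from averaged iterates and a subsampled Hessian-vector product rather than from gradient differences. The binary logistic regression test problem, the batch sizes, the memory size m, and the fixed step length are all illustrative assumptions.

```python
# Sketch of stochastic quasi-Newton with subsampled Hessian-vector products.
import numpy as np

rng = np.random.default_rng(0)


def logistic_grad(w, X, y):
    """Mini-batch gradient of the average logistic loss; labels y are in {-1, +1}."""
    sigma = 1.0 / (1.0 + np.exp(y * (X @ w)))      # = sigma(-y * x.w)
    return -(X.T @ (y * sigma)) / len(y)


def logistic_hess_vec(w, X, y, v):
    """Subsampled Hessian-vector product for the logistic loss on the batch (X, y)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))             # labels cancel in the Hessian
    d = p * (1.0 - p)
    return X.T @ (d * (X @ v)) / len(y)


def lbfgs_direction(grad, S, Y):
    """Standard L-BFGS two-loop recursion: approximates H^{-1} @ grad."""
    q = grad.copy()
    alphas = []
    for s, yv in zip(reversed(S), reversed(Y)):
        rho = 1.0 / (yv @ s)
        a = rho * (s @ q)
        q -= a * yv
        alphas.append((rho, a))
    q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])         # standard initial scaling
    for (rho, a), s, yv in zip(reversed(alphas), S, Y):
        b = rho * (yv @ q)
        q += (a - b) * s
    return q


def sqn(X, y, n_iters=2000, batch=32, hess_batch=256, L=20, m=10, alpha=0.05):
    """SGD steps preconditioned by L-BFGS pairs formed every L iterations."""
    n, dim = X.shape
    w = np.zeros(dim)
    S, Y = [], []
    w_bar, w_bar_prev = np.zeros(dim), None
    for t in range(1, n_iters + 1):
        idx = rng.choice(n, size=batch, replace=False)
        g = logistic_grad(w, X[idx], y[idx])
        w -= alpha * (lbfgs_direction(g, S, Y) if S else g)
        w_bar += w / L                             # average of the current block of L iterates
        if t % L == 0:
            if w_bar_prev is not None:
                s = w_bar - w_bar_prev
                h_idx = rng.choice(n, size=hess_batch, replace=False)
                yv = logistic_hess_vec(w_bar, X[h_idx], y[h_idx], s)
                if s @ yv > 1e-10:                 # keep only pairs with positive curvature
                    S.append(s)
                    Y.append(yv)
                    S, Y = S[-m:], Y[-m:]          # limited memory: keep the last m pairs
            w_bar_prev, w_bar = w_bar, np.zeros(dim)
    return w


# Tiny synthetic run, only to show the calling convention.
X = rng.standard_normal((1000, 20))
y = np.sign(X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(1000))
w_hat = sqn(X, y)
```

The subsampled Hessian-vector product lets the pair (s, y) carry genuine curvature information at the averaged iterate without ever forming a Hessian, which is the point emphasized in the abstract.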
520    $a Chapter 3 presents a new active-set method for minimizing a convex function with an L1 regularizer. The algorithm follows a two-phase approach: an active-set prediction phase that employs first-order and second-order information, and a subspace phase that performs a Newton-like step using sub-sampled Newton-CG. The novelty of the algorithm comes from using an iterative shrinkage step in the active-set phase and a projected piecewise linear line search in the subspace phase. The new algorithm is compared against a state-of-the-art orthant-wise limited memory algorithm on a speech classification problem.
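A minimal sketch of the two-phase structure this paragraph describes, reduced to the simplest instance, a quadratic loss with an L1 term (a lasso problem): an iterative-shrinkage step predicts the active set, and a Newton step restricted to the free variables is then projected back onto the orthant that the shrinkage step predicted. The dense linear solve stands in for the sub-sampled Newton-CG of the thesis, and the sign-flip projection stands in for the projected piecewise linear line search; problem sizes and the regularization weight are illustrative.

```python
# Sketch of a two-phase (shrinkage + subspace Newton) method for L1 problems.
import numpy as np


def soft_threshold(z, tau):
    """Proximal (shrinkage) operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)


def two_phase_l1(A, b, lam, n_outer=50):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with shrinkage + subspace Newton steps."""
    n, d = A.shape
    x = np.zeros(d)
    H = A.T @ A                                      # Hessian of the smooth term
    t = 1.0 / np.linalg.norm(H, 2)                   # 1/L step for the shrinkage phase
    for _ in range(n_outer):
        grad = A.T @ (A @ x - b)
        # Phase 1: shrinkage (ISTA) step; its zeros predict the active set.
        x_ista = soft_threshold(x - t * grad, t * lam)
        free = x_ista != 0.0
        if not free.any():
            x = x_ista
            continue
        # Phase 2: Newton step on the free subspace, where the L1 term reduces to
        # the smooth linear function lam * sign(x) on the predicted orthant.
        g_free = A.T[free] @ (A @ x_ista - b) + lam * np.sign(x_ista[free])
        H_free = H[np.ix_(free, free)] + 1e-10 * np.eye(int(free.sum()))
        x_new = x_ista.copy()
        x_new[free] -= np.linalg.solve(H_free, g_free)
        # Keep only the part of the step that stays in the predicted orthant.
        x_new[free] = np.where(np.sign(x_new[free]) == np.sign(x_ista[free]),
                               x_new[free], 0.0)
        x = x_new
    return x


# Tiny synthetic lasso problem, only to show the calling convention.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = two_phase_l1(A, b, lam=0.1)
```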
520    $a The fourth chapter concerns nonnegative tensor/matrix factorization with a Kullback-Leibler objective. All presented algorithms start from an alternating block Gauss-Seidel framework and formulate each block subproblem as a sum of independent row functions that only depend on a subset of variables. Minimization of the block problem is executed by the independent minimization of each row function, which is a strictly convex function with nonnegativity constraints. The conclusion is that applying two-metric gradient projection techniques with exact or approximate Hessian information to each of the independent row functions is much more effective than applying the same algorithms directly to the block subproblem.
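A minimal sketch of the scheme this paragraph describes, in the matrix case V ≈ WH with a Kullback-Leibler objective: alternating block Gauss-Seidel over the two factors, with each independent row function updated by one two-metric projected Newton step (Newton direction on the free variables, plain gradient direction on the near-active ones, then projection onto the nonnegative orthant). The rank, the unit step length, and the omission of a line search along the projection arc are simplifications.

```python
# Sketch of row-wise two-metric projected Newton steps for KL-based NMF.
import numpy as np

EPS = 1e-10


def kl_row_grad_hess(w, H, v):
    """Gradient and Hessian in w of sum_j v_j*log(v_j/(wH)_j) - v_j + (wH)_j."""
    pred = w @ H + EPS                            # model for one row of V, shape (m,)
    grad = H @ (1.0 - v / pred)                   # shape (r,)
    hess = (H * (v / pred**2)) @ H.T              # shape (r, r), positive semidefinite
    return grad, hess


def two_metric_step(w, H, v, tol=1e-8):
    """One two-metric projected Newton step for a single nonnegative row w."""
    grad, hess = kl_row_grad_hess(w, H, v)
    # Near-active variables: at the bound and pushed further out by the gradient.
    active = (w <= tol) & (grad > 0.0)
    d = grad.copy()                               # plain gradient direction there
    free = ~active
    if free.any():                                # Newton direction on the free block
        Hf = hess[np.ix_(free, free)] + 1e-8 * np.eye(int(free.sum()))
        d[free] = np.linalg.solve(Hf, grad[free])
    return np.maximum(w - d, 0.0)                 # project onto the nonnegative orthant


def kl_nmf(V, r, n_outer=50, seed=0):
    """Alternating block Gauss-Seidel over W and H, one row/column at a time."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, r))
    H = rng.uniform(0.1, 1.0, (r, m))
    for _ in range(n_outer):
        for i in range(n):                        # rows of W are independent given H
            W[i] = two_metric_step(W[i], H, V[i])
        for j in range(m):                        # columns of H are independent given W
            H[:, j] = two_metric_step(H[:, j], W.T, V[:, j])
    return W, H


# Tiny synthetic run, only to show the calling convention.
rng = np.random.default_rng(1)
V = rng.uniform(0.2, 1.0, (40, 5)) @ rng.uniform(0.2, 1.0, (5, 30))
W, H = kl_nmf(V, r=5)
```

Because each row Hessian is only r by r, solving with it directly is cheap, which is the practical reason the row-wise decomposition described above pays off.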
590    $a School code: 0163.
650  4 $a Applied Mathematics. $3 1669109
650  4 $a Information Science. $3 1017528
690    $a 0364
690    $a 0723
710 2  $a Northwestern University. $b Applied Mathematics. $3 1023521
773 0  $t Dissertation Abstracts International $g 75-07B(E).
790    $a 0163
791    $a Ph.D.
792    $a 2014
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3615562
Holdings (1 item):
Barcode: W9263923
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0