Large-scale machine learning using kernel methods.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Large-scale machine learning using kernel methods. / Wu, Gang.
Author: Wu, Gang.
Description: 168 p.
Note: Source: Dissertation Abstracts International, Volume: 67-05, Section: B, page: 2674.
Contained by: Dissertation Abstracts International, 67-05B.
Subject: Computer Science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3218834
ISBN: 9780542681530
Wu, Gang.
Large-scale machine learning using kernel methods. - 168 p.
Source: Dissertation Abstracts International, Volume: 67-05, Section: B, page: 2674.
Thesis (Ph.D.)--University of California, Santa Barbara, 2006.
Kernel methods, such as Support Vector Machines (SVMs), are a core machine learning technology. They enjoy strong theoretical foundations and excellent empirical successes in many pattern-recognition applications. However, when kernel methods are applied to many emerging large-scale applications, such as video surveillance, multimedia information retrieval, and web mining, they suffer from the challenges of ineffective and inefficient training. In this dissertation, we explore these challenges and propose strategies to solve them.
ISBN: 9780542681530
Subjects--Topical Terms: Computer Science.
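One of the training challenges the abstract names is imbalanced data, where the target class is heavily outnumbered. A common baseline remedy (distinct from the conformal-transformation approach the dissertation develops) is to weight the hinge-loss penalty per class so the boundary is not pushed into the minority class. A minimal NumPy sketch; the penalty values, learning rate, and toy data are all illustrative assumptions:

```python
import numpy as np

def train_weighted_svm(X, y, C_pos=10.0, C_neg=1.0, lr=0.01, epochs=500):
    """Linear soft-margin SVM via subgradient descent, with a larger
    hinge penalty (C_pos) on the minority/positive class. y in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    C = np.where(y > 0, C_pos, C_neg)        # per-class penalty
    for _ in range(epochs):
        active = y * (X @ w + b) < 1         # margin violators
        w -= lr * (w - X[active].T @ (C[active] * y[active]))
        b -= lr * (-np.sum(C[active] * y[active]))
    return w, b

# Imbalanced toy data: 50 negatives near (-2, -2), 5 positives near (+2, +2)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(+2, 0.3, (5, 2))])
y = np.array([-1] * 50 + [+1] * 5)
w, b = train_weighted_svm(X, y)
```

With C_pos roughly offsetting the 10:1 class ratio, the learned boundary sits between the clusters instead of being skewed toward the rare class.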
LDR    03467nmm 2200313 4500
001    1830472
005    20070430071658.5
008    130610s2006 eng d
020    $a 9780542681530
035    $a (UnM)AAI3218834
035    $a AAI3218834
040    $a UnM $c UnM
100 1  $a Wu, Gang. $3 1919301
245 10 $a Large-scale machine learning using kernel methods.
300    $a 168 p.
500    $a Source: Dissertation Abstracts International, Volume: 67-05, Section: B, page: 2674.
500    $a Advisers: Edward Y. Chang.
502    $a Thesis (Ph.D.)--University of California, Santa Barbara, 2006.
520    $a Kernel methods, such as Support Vector Machines (SVMs), are a core machine learning technology. They enjoy strong theoretical foundations and excellent empirical successes in many pattern-recognition applications. However, when kernel methods are applied to many emerging large-scale applications, such as video surveillance, multimedia information retrieval, and web mining, they suffer from the challenges of ineffective and inefficient training. In this dissertation, we explore these challenges and propose strategies to solve them.
520    $a We first investigate the imbalanced-training challenge, which causes the training of kernel methods to be ineffective. The problem occurs when the training instances of the target class are significantly outnumbered by the other training instances. In such situations, we show that the class boundary learned by SVMs can be severely skewed toward the target class. We propose using a conformal transformation of the kernel function in Reproducing Kernel Hilbert Space to tackle this challenge.
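The conformal transformation named above has the generic form k~(x, x') = D(x) k(x, x') D(x') for a positive scaling function D, which rescales the kernel-induced metric pointwise while preserving positive semi-definiteness. The sketch below uses a hypothetical D built from a set of boundary points; the dissertation's exact choice of D (built from support vectors) is not reproduced here:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def conformal_transform(K, d_row, d_col):
    # k~(x, x') = D(x) * k(x, x') * D(x'); equals diag(d) K diag(d),
    # so the result stays psd whenever K is psd and d is positive
    return d_row[:, None] * K * d_col[None, :]

def D(X, B, tau=1.0):
    # Hypothetical magnification factor: largest near the boundary set B,
    # so spatial resolution is enlarged around the class boundary
    dist2 = np.min(np.sum((X[:, None, :] - B[None, :, :])**2, -1), axis=1)
    return np.exp(-dist2 / tau) + 0.1

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))
K = rbf_kernel(X, X)
d = D(X, X[:2])                      # pretend the first two rows are boundary points
K_t = conformal_transform(K, d, d)
```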
520    $a The training performance of kernel methods depends greatly on the chosen kernel function or matrix. A kernel function or matrix defines a pairwise similarity measurement between two data instances. We therefore develop an algorithm that formulates a context-dependent distance function for measuring such similarity, and we demonstrate that the learned distance function improves performance on kernel-based clustering and classification tasks. Moreover, we also study situations where the similarity measurement used to formulate the kernel function might not induce a positive semi-definite (psd) kernel matrix, and hence cannot be used for training with kernel methods. We propose an analytical framework for evaluating several representative spectrum-transformation methods.
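Spectrum-transformation methods repair an indefinite similarity matrix by acting on its eigenvalues. Three commonly studied transforms (clip, flip, shift) are sketched below; whether these match the dissertation's exact set of representatives is an assumption:

```python
import numpy as np

def spectrum_clip(S):
    """Clip: zero out negative eigenvalues of a symmetric similarity
    matrix, giving the nearest psd matrix in Frobenius norm."""
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def spectrum_flip(S):
    """Flip: replace every eigenvalue by its absolute value."""
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.abs(w)) @ V.T

def spectrum_shift(S):
    """Shift: add |smallest eigenvalue| to the diagonal."""
    w, _ = np.linalg.eigh((S + S.T) / 2)
    return S + max(0.0, -w.min()) * np.eye(S.shape[0])

# Toy indefinite similarity matrix (eigenvalues +1 and -1)
S = np.array([[0.0, 1.0], [1.0, 0.0]])
```

All three return psd matrices; they differ in how much of the "negative" similarity structure is discarded (clip), folded back in (flip), or uniformly damped (shift).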
520    $a Finally, we address the efficiency of kernel methods, aiming at fast training on massive data; in particular, we focus on Support Vector Machines. Traditional SVM solvers suffer from the well-known scalability problem. We propose an incremental algorithm that performs approximate matrix-factorization operations to speed up SVMs. Two approximate factorization schemes, Kronecker and incomplete Cholesky, are utilized in the primal-dual interior-point method (IPM) to directly solve the quadratic optimization problem in SVMs.
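Incomplete Cholesky factorization builds a low-rank factor G with G @ G.T ≈ K, which lets an interior-point solver work with rank-r matrices instead of the full n-by-n kernel. Below is a pivoted variant as an illustrative sketch (not the dissertation's implementation; the Kronecker scheme is not shown):

```python
import numpy as np

def pivoted_cholesky(K, tol=1e-6, max_rank=None):
    """Greedy pivoted (incomplete) Cholesky: returns G (n x r) with
    G @ G.T ~= K, pivoting on the largest remaining diagonal residual."""
    n = K.shape[0]
    max_rank = max_rank or n
    d = np.diag(K).copy()            # residual diagonal
    G = np.zeros((n, max_rank))
    for j in range(max_rank):
        p = int(np.argmax(d))        # pivot: largest remaining residual
        if d[p] <= tol:              # residual negligible: stop early
            return G[:, :j]
        G[:, j] = (K[:, p] - G @ G[p, :]) / np.sqrt(d[p])
        d -= G[:, j] ** 2
    return G

# Exactly rank-3 psd matrix: the factorization should stop at 3 columns
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))
K = A @ A.T
G = pivoted_cholesky(K, tol=1e-8)
```

Because the rank r found is typically far smaller than n, downstream linear algebra in the IPM (e.g. solves against K plus a diagonal) drops from O(n^3) toward O(n r^2) via the low-rank structure.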
520    $a Through theoretical analysis and extensive empirical studies, we show that our proposed approaches perform more effectively and efficiently than traditional methods.
590    $a School code: 0035.
650  4 $a Computer Science. $3 626642
690    $a 0984
710 20 $a University of California, Santa Barbara. $3 1017586
773 0  $t Dissertation Abstracts International $g 67-05B.
790 10 $a Chang, Edward Y., $e advisor
790    $a 0035
791    $a Ph.D.
792    $a 2006
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3218834
Holdings (1 item):
Barcode: W9221335
Location: 電子資源 (Electronic resources)
Circulation category: 11.線上閱覽_V
Material type: 電子書 (E-book)
Call number: EB
Use type: 一般使用 (Normal)
Loan status: 在架 (On shelf)
Attachments: 0