Large-scale machine learning using kernel methods.
Record Type: Electronic resources : Monograph/item
Title/Author: Large-scale machine learning using kernel methods / Wu, Gang.
Author: Wu, Gang.
Description: 168 p.
Notes: Source: Dissertation Abstracts International, Volume: 67-05, Section: B, page: 2674.
Contained By: Dissertation Abstracts International, 67-05B.
Subject: Computer Science.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3218834
ISBN: 9780542681530
MARC record:
LDR    03467nmm 2200313 4500
001    1830472
005    20070430071658.5
008    130610s2006 eng d
020    $a 9780542681530
035    $a (UnM)AAI3218834
035    $a AAI3218834
040    $a UnM $c UnM
100 1  $a Wu, Gang. $3 1919301
245 10 $a Large-scale machine learning using kernel methods.
300    $a 168 p.
500    $a Source: Dissertation Abstracts International, Volume: 67-05, Section: B, page: 2674.
500    $a Advisers: Edward Y. Chang.
502    $a Thesis (Ph.D.)--University of California, Santa Barbara, 2006.
520    $a Kernel methods, such as Support Vector Machines (SVMs), are a core machine-learning technology. They enjoy strong theoretical foundations and excellent empirical success in many pattern-recognition applications. However, when kernel methods are applied to emerging large-scale applications such as video surveillance, multimedia information retrieval, and web mining, they suffer from ineffective and inefficient training. In this dissertation, we explore these challenges and propose strategies to solve them.
520    $a We first investigate the imbalanced-training challenge, which causes the training of kernel methods to be ineffective. The imbalanced-training problem occurs when the training instances of the target class are significantly outnumbered by the other training instances. In such situations, we show that the class boundary learned by an SVM can be severely skewed toward the target class. We propose applying a conformal transformation to the kernel function in Reproducing Kernel Hilbert Space to tackle this challenge.
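The conformal transformation described in the field above rescales a base kernel as K~(x, y) = D(x) K(x, y) D(y), where the positive function D magnifies spatial resolution near the class boundary. A minimal sketch of this idea follows; the RBF base kernel and the particular magnification function D (proximity to support vectors found in a first training pass) are illustrative assumptions, not the dissertation's exact formulation.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Pairwise RBF kernel: K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2).
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)

    def magnification(X, boundary_pts, tau=1.0):
        # Illustrative D(x): largest near an estimated boundary, here taken
        # to be the support vectors of an initial SVM pass (an assumption).
        d2 = ((X[:, None, :] - boundary_pts[None, :, :]) ** 2).sum(axis=-1).min(axis=1)
        return np.exp(-d2 / tau)

    def conformal_kernel(X, Y, boundary_pts, gamma=1.0, tau=1.0):
        # Conformally transformed kernel K~(x, y) = D(x) * K(x, y) * D(y);
        # scaling by a positive function D preserves positive semi-definiteness.
        dx = magnification(X, boundary_pts, tau)
        dy = magnification(Y, boundary_pts, tau)
        return dx[:, None] * rbf_kernel(X, Y, gamma) * dy[None, :]

Retraining the SVM on the transformed kernel enlarges the induced metric near the boundary, which is what counteracts the skew caused by imbalanced training data.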
520    $a The training performance of kernel methods depends greatly on the chosen kernel function or matrix, which defines a pairwise similarity measurement between two data instances. We therefore develop an algorithm that formulates a context-dependent distance function for measuring such similarity, and we demonstrate that the learned distance function improves performance on kernel-based clustering and classification tasks. We also study situations where the similarity measurement used to formulate the kernel does not induce a positive semi-definite (psd) kernel matrix and hence cannot be used directly for training with kernel methods; for these, we propose an analytical framework for evaluating several representative spectrum-transformation methods.
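Spectrum-transformation methods repair an indefinite similarity matrix by operating directly on its eigenvalues. The sketch below shows three representative schemes commonly compared in this setting (clip, flip, shift); the function name is hypothetical, and the assumption that these are the evaluated schemes is an illustrative guess, not the dissertation's exact list.

    import numpy as np

    def spectrum_transform(S, method="clip"):
        # Turn a symmetric but possibly indefinite similarity matrix S into
        # a psd kernel matrix by modifying its eigenspectrum:
        #   clip  - zero out negative eigenvalues
        #   flip  - replace each eigenvalue with its absolute value
        #   shift - add |smallest eigenvalue| to every eigenvalue
        S = (S + S.T) / 2.0                  # enforce symmetry first
        w, V = np.linalg.eigh(S)
        if method == "clip":
            w = np.maximum(w, 0.0)
        elif method == "flip":
            w = np.abs(w)
        elif method == "shift":
            w = w - min(w.min(), 0.0)
        else:
            raise ValueError(f"unknown method: {method}")
        return (V * w) @ V.T                 # V @ diag(w) @ V.T

Each scheme trades fidelity to the original similarities against validity as a kernel; comparing such trade-offs analytically is the job of the framework the abstract describes.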
520    $a Finally, we address the efficiency of kernel methods, aiming at fast training on massive data; in particular, we focus on Support Vector Machines. Traditional SVM solvers suffer from a well-known scalability problem. We propose an incremental algorithm that performs approximate matrix-factorization operations to speed up SVMs. Two approximate factorization schemes, Kronecker and incomplete Cholesky, are used within a primal-dual interior-point method (IPM) to solve the quadratic optimization problem in SVMs directly.
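Of the two factorization schemes named above, incomplete Cholesky is the easier to sketch: a greedy, pivoted factorization approximating the n-by-n kernel matrix as K ~= G G^T with G of rank r much smaller than n. The version below is a generic textbook formulation under that assumption, not the dissertation's exact algorithm.

    import numpy as np

    def incomplete_cholesky(K, rank, tol=1e-8):
        # Greedy pivoted incomplete Cholesky: K ~= G @ G.T, G of shape (n, rank).
        # A production solver evaluates kernel columns on demand; K is passed
        # as a dense matrix here for clarity.
        n = K.shape[0]
        G = np.zeros((n, rank))
        d = np.diag(K).copy()            # residual diagonal of K - G @ G.T
        for j in range(rank):
            i = int(np.argmax(d))        # pivot on largest residual variance
            if d[i] <= tol:              # residual exhausted; stop early
                return G[:, :j]
            G[:, j] = (K[:, i] - G @ G[i, :]) / np.sqrt(d[i])
            d -= G[:, j] ** 2
        return G

With K replaced by G G^T, the dense linear solves inside each interior-point Newton step can be rewritten via the Sherman-Morrison-Woodbury identity, cutting their cost from O(n^3) to roughly O(n r^2) per iteration.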
520    $a Through theoretical analysis and extensive empirical studies, we show that our proposed approaches perform more effectively and efficiently than traditional methods.
590    $a School code: 0035.
650  4 $a Computer Science. $3 626642
690    $a 0984
710 20 $a University of California, Santa Barbara. $3 1017586
773 0  $t Dissertation Abstracts International $g 67-05B.
790 10 $a Chang, Edward Y., $e advisor
790    $a 0035
791    $a Ph.D.
792    $a 2006
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3218834
Items (1 record):
Inventory Number: W9221335
Location Name: Electronic Resources (電子資源)
Item Class: 11. Online Reading_V (線上閱覽)
Material Type: E-book (電子書)
Call Number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of Reservations: 0
1
Multimedia
Reviews
Add a review
and share your thoughts with other readers
Export
pickup library
Processing
...
Change password
Login