Accelerating Machine Learning Algorithms with Adaptive Sampling.
Record Type:
Bibliography - Electronic Resource : Monograph/item
Title/Author:
Accelerating Machine Learning Algorithms with Adaptive Sampling.
Author:
Tiwari, Mohit.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2023
Pagination:
114 p.
Notes:
Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
Contained By:
Dissertations Abstracts International, 85-04B.
Subject:
Sample size.
Electronic Resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615132
ISBN:
9798380485944
LDR 04335nmm a2200337 4500
001 2395541
005 20240517104618.5
006 m o d
007 cr#unu||||||||
008 251215s2023 ||||||||||||||||| ||eng d
020 $a 9798380485944
035 $a (MiAaPQ)AAI30615132
035 $a (MiAaPQ)STANFORDvp233gb6409
035 $a AAI30615132
040 $a MiAaPQ $c MiAaPQ
100 1 $a Tiwari, Mohit. $3 3765052
245 10 $a Accelerating Machine Learning Algorithms with Adaptive Sampling.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300 $a 114 p.
500 $a Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
500 $a Advisor: Piech, Christopher; Thrun, Sebastian; Valiant, Gregory.
502 $a Thesis (Ph.D.)--Stanford University, 2023.
506 $a This item must not be sold to any third party vendors.
520 $a The era of huge data necessitates highly efficient machine learning algorithms. Many common machine learning algorithms, however, rely on computationally intensive subroutines that are prohibitively expensive on large datasets. Often, existing techniques subsample the data or use other approximations to improve computational efficiency, at the expense of incurring some approximation error. This thesis demonstrates that it is often sufficient, instead, to substitute computationally intensive subroutines with a special kind of randomized counterpart that results in almost no degradation in quality. The results in this thesis are based on techniques from the adaptive sampling literature. Chapter 1 begins with an introduction to a specific adaptive sampling problem: best-arm identification in multi-armed bandits. We first provide a formal description of the setting and the best-arm identification problem. We then present a general algorithm, called successive elimination, for solving the best-arm identification problem. The techniques developed in Chapter 1 are applied to different problems in Chapters 2, 3, and 4. In Chapter 2, we discuss how the k-medoids clustering problem can be reduced to a sequence of best-arm identification problems. We use this observation to present a new algorithm, based on successive elimination, that matches the prior state of the art in clustering quality but reaches the same solutions much faster. Our algorithm achieves an O(n/log n) reduction in sample complexity over the prior state of the art, where n is the size of the dataset, under general assumptions on the data-generating distribution. In Chapter 3, we analyze the problem of training tree-based models. The majority of the training time for such models is spent splitting each node of the tree, i.e., determining the feature and corresponding threshold at which to split each node. We show that the node-splitting subroutine can be reduced to a best-arm identification problem and present a state-of-the-art algorithm for training trees. Our algorithm depends only on the relative quality of each possible split, rather than explicitly depending on the size of the training dataset, and reduces the explicit dependence on dataset size n from O(n), for the most commonly used prior algorithm, to O(1). Our algorithm applies generally to many tree-based models, such as Random Forests and XGBoost. In Chapter 4, we study the Maximum Inner Product Search problem. We observe that, as with the k-medoids and node-splitting problems, the Maximum Inner Product Search problem can be reduced to a best-arm identification problem. Armed with this observation, we present a novel algorithm for the Maximum Inner Product Search problem in high dimensions. Our algorithm reduces the explicit scaling with d, the dimensionality of the dataset, from O(√d) to O(1) under reasonable assumptions on the data. Our algorithm has several advantages: it requires no preprocessing of the data, naturally handles the addition or removal of data points, and includes a hyperparameter to trade off accuracy and efficiency. Chapter 5 concludes this thesis with a summary of its contributions and possible directions for future work.
590 $a School code: 0212.
650 4 $a Sample size. $3 3642155
650 4 $a Computer science. $3 523869
690 $a 0984
690 $a 0800
710 2 $a Stanford University. $3 754827
773 0 $t Dissertations Abstracts International $g 85-04B.
790 $a 0212
791 $a Ph.D.
792 $a 2023
793 $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615132
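For readers skimming the record, the central technique named in the abstract (MARC field 520) can be made concrete with a short sketch. The Python below is a minimal illustration and not the author's code: it implements a generic successive-elimination loop for best-arm identification and shows how Maximum Inner Product Search can be cast as a bandit problem by treating each candidate vector as an arm whose pulls sample single coordinates. The names (successive_elimination, mips_arms), the confidence-radius constant, and the bounded-reward assumption are all illustrative assumptions.

# Hedged sketch of two ideas described in the abstract: (1) successive
# elimination for best-arm identification, and (2) casting Maximum Inner
# Product Search (MIPS) as a bandit problem, where each candidate vector
# is an "arm" and a "pull" samples one random coordinate. Names and
# constants are illustrative, not the thesis's actual implementation.
import math
import random


def successive_elimination(arms, delta=0.05, max_rounds=10_000):
    """Return the index of the best arm among `arms`.

    Each arm is a zero-argument callable returning one stochastic reward
    sample; `delta` is the allowed failure probability.
    """
    n = len(arms)
    active = set(range(n))   # arms still in contention
    means = [0.0] * n        # running sample means
    pulls = [0] * n          # samples drawn per arm

    for t in range(1, max_rounds + 1):
        # Sample every surviving arm once per round.
        for i in active:
            pulls[i] += 1
            means[i] += (arms[i]() - means[i]) / pulls[i]

        # Hoeffding-style confidence radius with a crude union bound;
        # assumes rewards are (roughly) bounded.
        radius = math.sqrt(math.log(4 * n * t * t / delta) / (2 * t))

        # Eliminate arms whose upper confidence bound falls below the
        # empirically best arm's lower confidence bound.
        best = max(means[i] for i in active)
        active = {i for i in active if means[i] + radius >= best - radius}
        if len(active) == 1:
            break

    return max(active, key=lambda i: means[i])


def mips_arms(query, vectors):
    """One arm per candidate vector: a pull samples a random coordinate j
    and returns d * query[j] * v[j], an unbiased estimate of <query, v>."""
    d = len(query)

    def make_arm(v):
        def pull():
            j = random.randrange(d)
            return d * query[j] * v[j]
        return pull

    return [make_arm(v) for v in vectors]


# Toy usage: vector 1 has the largest inner product with the query.
random.seed(0)
q = [0.2, 0.8, 0.5]
vs = [[0.1, 0.1, 0.1], [0.9, 0.9, 0.9], [0.5, 0.2, 0.4]]
print(successive_elimination(mips_arms(q, vs)))  # expected output: 1

The same elimination loop is the engine behind the abstract's other reductions: for k-medoids, each candidate medoid is an arm and a pull samples the distance to one random point; for node splitting, each (feature, threshold) pair is an arm and a pull evaluates the split criterion on one random training example.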
Holdings:
Barcode: W9503861
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: E-book
Call Number: EB
Use Type: General use (Normal)
Loan Status: On shelf
Holds: 0