Efficiency in Machine Learning with Focus on Deep Learning and Recommender Systems.
Record type: Bibliographic - electronic resource : Monograph/item
Title: Efficiency in Machine Learning with Focus on Deep Learning and Recommender Systems.
Author: Nesky, Amy.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2020
Description: 111 p.
Notes: Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Dissertation note: Thesis (Ph.D.)--University of Michigan, 2020.
Restrictions: This item must not be sold to any third party vendors.
Contained by: Dissertations Abstracts International, 82-07B.
Subject: Educational technology.
Index terms: Machine learning
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240172
ISBN: 9798684626647
Abstract: Machine learning algorithms have opened up countless doors for scientists tackling problems that had previously been inaccessible, and the applications of these algorithms are far from exhausted. However, as the complexity of the learning problem grows, so does the computational and memory cost of the appropriate learning algorithm. As a result, the training process for computationally heavy algorithms can take weeks or even months to reach a good result, which can be prohibitively expensive. The general inefficiency of machine learning algorithms is a significant bottleneck slowing progress in the application sciences. This thesis introduces three new methods for improving the efficiency of machine learning algorithms, focusing on expensive algorithms such as neural networks and recommender systems. The first method makes structured reductions of fully connected layers in neural networks, which speeds up training and decreases the amount of storage required. The second method is an accelerated gradient descent scheme called Predictor-Corrector Gradient Descent (PCGD) that combines predictor-corrector techniques with stochastic gradient descent. The final technique generates Artificial Core Users (ACUs) from the Core Users of a recommendation dataset. Core Users reduce the number of users in a recommendation dataset without significant loss of information; Artificial Core Users improve the recommendation accuracy of Core Users yet still mimic real user data.
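The abstract describes each method only at a high level. As a rough, hypothetical illustration of the first idea (the record does not give the thesis's actual construction), a structured reduction of a fully connected layer can be sketched as a low-rank factorization, which cuts both the multiply count and the parameter storage:

```python
import numpy as np

# Hypothetical illustration only: the record does not specify the thesis's
# exact structured reduction, so this sketch uses a low-rank factorization,
# one common way to shrink a fully connected layer.

rng = np.random.default_rng(0)

n_in, n_out, rank = 1024, 1024, 64    # rank << n_in, n_out

# Dense layer: n_in * n_out = ~1.05M parameters.
W_dense = rng.standard_normal((n_in, n_out)) * 0.01

# Structured (low-rank) layer: W ~= U @ V with (n_in + n_out) * rank
# parameters, here ~131K, an 8x storage reduction.
U = rng.standard_normal((n_in, rank)) * 0.1
V = rng.standard_normal((rank, n_out)) * 0.1

x = rng.standard_normal((32, n_in))   # a batch of 32 inputs

y_dense = x @ W_dense                 # O(n_in * n_out) multiplies per input
y_lowrank = (x @ U) @ V               # O((n_in + n_out) * rank) multiplies

dense_params = W_dense.size
lowrank_params = U.size + V.size
print(f"dense params:    {dense_params:,}")
print(f"low-rank params: {lowrank_params:,} "
      f"({dense_params / lowrank_params:.1f}x smaller)")
```

Any structured form with fewer degrees of freedom (block, sparse, or low-rank) trades some expressiveness for this kind of speed and storage saving.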
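For the second method, the record gives only the name PCGD and the fact that it combines predictor-corrector techniques with stochastic gradient descent. A Heun-style predictor-corrector step wrapped around SGD on a toy least-squares problem is one plausible reading, not the thesis's actual update rule:

```python
import numpy as np

# Hypothetical sketch: a generic predictor-corrector step applied to SGD on
# a toy least-squares problem. The thesis's exact PCGD update is not given
# in this record.

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)

def grad(w, idx):
    """Minibatch gradient of f(w) = 0.5 * mean((A w - b)^2)."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

w = np.zeros(10)
lr = 0.05
for step in range(500):
    idx = rng.choice(len(b), size=32, replace=False)
    g = grad(w, idx)
    w_pred = w - lr * g                 # predictor: plain SGD step
    g_corr = grad(w_pred, idx)          # gradient at the predicted point
    w = w - lr * 0.5 * (g + g_corr)     # corrector: average the two slopes

print("final loss:", 0.5 * np.mean((A @ w - b) ** 2))
```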
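For the third method, the record does not define how Core Users are selected or how Artificial Core Users (ACUs) are generated. The sketch below is purely illustrative: it takes the most active raters as "core users" and averages small groups of them into artificial profiles, only to show the condensation idea:

```python
import numpy as np

# Hypothetical sketch: selection and generation rules here are illustrative
# stand-ins, not the thesis's method. "Core users" are the most active
# raters; each ACU averages a small group of core users, so it mimics real
# rating profiles while condensing the user base.

rng = np.random.default_rng(2)
n_users, n_items = 500, 100
ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
ratings[rng.random((n_users, n_items)) < 0.8] = np.nan   # 80% unrated

# Core Users: the 50 users with the most ratings.
activity = np.sum(~np.isnan(ratings), axis=1)
core = ratings[np.argsort(activity)[-50:]]

# ACUs: mean observed rating over groups of 5 core users (a stand-in for a
# real clustering step); items unrated by a whole group stay NaN.
groups = core.reshape(10, 5, n_items)
counts = np.sum(~np.isnan(groups), axis=1)
sums = np.nansum(groups, axis=1)
acus = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

print("users:", n_users, "-> core users:", len(core), "-> ACUs:", len(acus))
```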
MARC record:
LDR    02956nmm a2200457 4500
001    2281892
005    20210927083422.5
008    220723s2020 ||||||||||||||||| ||eng d
020    $a 9798684626647
035    $a (MiAaPQ)AAI28240172
035    $a (MiAaPQ)umichrackham003090
035    $a AAI28240172
040    $a MiAaPQ $c MiAaPQ
100 1  $a Nesky, Amy. $3 3560600
245 10 $a Efficiency in Machine Learning with Focus on Deep Learning and Recommender Systems.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 111 p.
500    $a Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
500    $a Advisor: Stout, Quentin F.
502    $a Thesis (Ph.D.)--University of Michigan, 2020.
506    $a This item must not be sold to any third party vendors.
506    $a This item must not be added to any third party search indexes.
520    $a Machine learning algorithms have opened up countless doors for scientists tackling problems that had previously been inaccessible, and the applications of these algorithms are far from exhausted. However, as the complexity of the learning problem grows, so does the computational and memory cost of the appropriate learning algorithm. As a result, the training process for computationally heavy algorithms can take weeks or even months to reach a good result, which can be prohibitively expensive. The general inefficiency of machine learning algorithms is a significant bottleneck slowing progress in the application sciences. This thesis introduces three new methods for improving the efficiency of machine learning algorithms, focusing on expensive algorithms such as neural networks and recommender systems. The first method makes structured reductions of fully connected layers in neural networks, which speeds up training and decreases the amount of storage required. The second method is an accelerated gradient descent scheme called Predictor-Corrector Gradient Descent (PCGD) that combines predictor-corrector techniques with stochastic gradient descent. The final technique generates Artificial Core Users (ACUs) from the Core Users of a recommendation dataset. Core Users reduce the number of users in a recommendation dataset without significant loss of information; Artificial Core Users improve the recommendation accuracy of Core Users yet still mimic real user data.
590    $a School code: 0127.
650  4 $a Educational technology. $3 517670
650  4 $a Applied mathematics. $3 2122814
650  4 $a Computer science. $3 523869
650  4 $a Systems science. $3 3168411
650  4 $a Information technology. $3 532993
650  4 $a Artificial intelligence. $3 516317
653    $a Machine learning
653    $a Deep learning
653    $a Core users
653    $a Accelerated gradient descent
653    $a Learning algorithm
653    $a Neural networks
653    $a Real user data
690    $a 0984
690    $a 0489
690    $a 0800
690    $a 0710
690    $a 0364
690    $a 0790
710 2  $a University of Michigan. $b Computer Science & Engineering. $3 3285590
773 0  $t Dissertations Abstracts International $g 82-07B.
790    $a 0127
791    $a Ph.D.
792    $a 2020
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240172
Holdings (1 item):
Barcode: W9433625
Location: Electronic resources
Circulation category: Online reading (11.線上閱覽_V)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0