Automated Customization of ML Inference on FPGAs.
Record type:
Bibliographic – electronic resource : Monograph/item
Title/Author:
Automated Customization of ML Inference on FPGAs. / Ghasemzadeh, Mohammad.
Author:
Ghasemzadeh, Mohammad.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2018
Description:
99 p.
Notes:
Source: Masters Abstracts International, Volume: 80-01.
Contained by:
Masters Abstracts International, 80-01.
Subject:
Computer Engineering.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10814231
ISBN:
9780438115323
Ghasemzadeh, Mohammad. Automated Customization of ML Inference on FPGAs. Ann Arbor : ProQuest Dissertations & Theses, 2018. 99 p.
Source: Masters Abstracts International, Volume: 80-01.
Thesis (M.S.)--University of California, San Diego, 2018.
This item must not be sold to any third party vendors.
This thesis introduces novel frameworks for the automated customization of two classes of machine learning algorithms: deep neural networks and causal Bayesian analysis. High computational complexity often prohibits deploying ML models on resource-constrained embedded devices, where memory and energy budgets are strictly limited. FPGAs offer a flexible substrate that can be configured to maximally exploit the parallel nature of computation in different ML algorithms and deliver high-throughput, power-efficient accelerators. To make FPGAs a ubiquitous platform for ML inference, automated frameworks are needed that customize ML models to the constraints of the underlying hardware and the pertinent application requirements. My work proposes hardware-algorithm co-design approaches for customizing ML inference on FPGA platforms and provides end-to-end automated frameworks that generate optimized hardware accelerators, usable by a broad range of ML developers without any hardware design knowledge. My key contributions are: (i) an end-to-end framework that customizes the execution of deep neural networks on FPGAs using a reconfigurable encoding of the model parameters, yielding a 9-fold reduction in memory footprint and a 15-fold improvement in throughput without any loss in accuracy; (ii) CausaLearn, the first automated framework that enables real-time, scalable approximation of probability density functions in the context of causal Bayesian analysis, offering up to two orders of magnitude runtime and energy improvements over the best-known prior solution; and (iii) ReBNet, an end-to-end framework for training reconfigurable binary neural networks in software and generating efficient accelerators for execution on FPGAs.
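The third contribution, ReBNet, revolves around multi-level "residual" binarization: a real-valued weight tensor is approximated by a sum of scaled binary tensors, so inference can run on cheap binary hardware while extra levels recover accuracy. A minimal pure-Python sketch of that general idea, assuming a per-level scaling factor equal to the mean absolute residual (the function name and the scaling choice are illustrative assumptions, not necessarily the thesis's exact scheme):

```python
def residual_binarize(weights, levels=2):
    """Approximate each weight as a sum of `levels` scaled binary values.

    At every level the current residual is binarized with its sign and one
    shared scaling factor (here: the mean absolute residual); the scaled
    binary approximation is then subtracted to form the next level's residual.
    """
    residual = list(weights)
    approx = [0.0] * len(weights)
    for _ in range(levels):
        gamma = sum(abs(r) for r in residual) / len(residual)  # shared scale
        for i, r in enumerate(residual):
            b = 1.0 if r >= 0 else -1.0      # binary component (+1 / -1)
            approx[i] += gamma * b
            residual[i] = r - gamma * b      # residual for the next level
    return approx


def l2_error(w, approx):
    return sum((x - y) ** 2 for x, y in zip(w, approx)) ** 0.5


w = [0.5, -1.2, 2.0, -0.3]
# Adding residual levels monotonically shrinks the approximation error.
errors = [l2_error(w, residual_binarize(w, m)) for m in (1, 2, 3)]
```

Each additional level costs one more binary tensor plus a scale factor but re-binarizes only the remaining error, which is the kind of knob a reconfigurable design can use to trade hardware area for accuracy.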
ISBN: 9780438115323
Subjects--Topical Terms: Computer Engineering.
LDR  02819nmm a2200313 4500
001  2206257
005  20190829083217.5
008  201008s2018 ||||||||||||||||| ||eng d
020  __ $a 9780438115323
035  __ $a (MiAaPQ)AAI10814231
035  __ $a (MiAaPQ)ucsd:17397
035  __ $a AAI10814231
040  __ $a MiAaPQ $c MiAaPQ
100  1_ $a Ghasemzadeh, Mohammad. $3 3433149
245  10 $a Automated Customization of ML Inference on FPGAs.
260  1_ $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300  __ $a 99 p.
500  __ $a Source: Masters Abstracts International, Volume: 80-01.
500  __ $a Publisher info.: Dissertation/Thesis.
500  __ $a Koushanfar, Farinaz.
502  __ $a Thesis (M.S.)--University of California, San Diego, 2018.
506  __ $a This item must not be sold to any third party vendors.
520  __ $a This thesis introduces novel frameworks for automated customization of two classes of machine learning algorithms, deep neural networks and causal Bayesian analysis. The high computational complexity often prohibits the deployment of ML models on resource-constrained embedded devices where memory and energy budgets are strictly limited. FPGAs offer a flexible substrate that can be configured to maximally exploit the parallel nature of computations in different ML algorithms to deliver high-throughput and power-efficient accelerators. To make FPGAs a ubiquitous platform for ML inference, automated frameworks that can customize ML models to the constraints of the underlying hardware and pertinent application requirements are necessary. My work proposes hardware-algorithm co-design approaches to customize ML inference on FPGA platforms and provides end-to-end automated frameworks to generate optimized hardware accelerators which can be used by a broad range of ML developers without requiring any hardware design knowledge. My key contributions include: (i) proposing an end-to-end framework to customize execution of deep neural networks on FPGAs using a reconfigurable encoding approach for the parameters of model which results in 9-fold reduction in memory footprint and 15-fold improvement in throughput without any loss in accuracy, (ii) proposing CausaLearn, the first automated framework that enables real-time and scalable approximation of probability density function in the context of causal Bayesian analysis which offers up to two orders-of-magnitude runtime and energy improvements compared to the best-known prior solution, (iii) proposing ReBNet, an end-to-end framework for training reconfigurable binary neural networks on software and developing efficient accelerators for execution on FPGA.
590  __ $a School code: 0033.
650  _4 $a Computer Engineering. $3 1567821
690  __ $a 0464
710  2_ $a University of California, San Diego. $b Electrical Engineering (Computer Engineering). $3 1058698
773  0_ $t Masters Abstracts International $g 80-01.
790  __ $a 0033
791  __ $a M.S.
792  __ $a 2018
793  __ $a English
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10814231
Holdings (1 item):
Barcode: W9382806
Location: Electronic resources
Circulation category: 11. Online reading
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0