Enhancing FPGA Architecture for Efficient Deep Learning Inference.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Enhancing FPGA Architecture for Efficient Deep Learning Inference.
Author:
Boutros, Andrew Maher Mansour.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2018.
Description:
99 p.
Notes:
Source: Masters Abstracts International, Volume: 80-05.
Contained By:
Masters Abstracts International, 80-05.
Subject:
Computer Engineering.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10930958
ISBN:
9780438670839
LDR  02057nmm a2200313 4500
001  2206276
005  20190829083226.5
008  201008s2018 ||||||||||||||||| ||eng d
020  $a 9780438670839
035  $a (MiAaPQ)AAI10930958
035  $a (MiAaPQ)toronto:17930
035  $a AAI10930958
040  $a MiAaPQ $c MiAaPQ
100 1  $a Boutros, Andrew Maher Mansour. $3 3433166
245 1 0  $a Enhancing FPGA Architecture for Efficient Deep Learning Inference.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300  $a 99 p.
500  $a Source: Masters Abstracts International, Volume: 80-05.
500  $a Publisher info.: Dissertation/Thesis.
500  $a Betz, Vaughn.
502  $a Thesis (M.A.S.)--University of Toronto (Canada), 2018.
506  $a This item must not be sold to any third party vendors.
520  $a Deep Learning (DL) has become best-in-class for numerous applications but at a high computational cost that necessitates high-performance energy-efficient acceleration. FPGAs offer an appealing DL inference acceleration platform due to their flexibility and energy-efficiency. This thesis explores FPGA architectural changes to enhance the efficiency of a class of DL models, convolutional neural networks (CNNs), on FPGAs. We first build three state-of-the-art CNN computing architectures (CAs) as benchmarks representative of the DL domain and quantify the FPGA vs. ASIC efficiency gaps for these CAs to highlight the bottlenecks of current FPGA architectures. Then, we enhance the flexibility of digital signal processing (DSP) blocks on current FPGAs for low-precision DL. Our DSP block increases the performance of 8-bit and 4-bit CNN inference by 1.3x and 1.6x respectively with minimal block area overhead. Finally, we present a preliminary evaluation of logic block architectural changes, leaving their detailed evaluation for future work.
590  $a School code: 0779.
650  4  $a Computer Engineering. $3 1567821
690  $a 0464
710 2  $a University of Toronto (Canada). $b Electrical and Computer Engineering. $3 2096349
773 0  $t Masters Abstracts International $g 80-05.
790  $a 0779
791  $a M.A.S.
792  $a 2018
793  $a English
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10930958
Items
Inventory Number: W9382825
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal use (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments: