Data orchestration in deep learning accelerators
Record type: Bibliographic record - electronic resource : Monograph/item
Title / Author: Data orchestration in deep learning accelerators / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
Author: Krishna, Tushar
Added author: Kwon, Hyoukjun
Extent: 1 online resource (166 p.)
Subject: Neural networks (Computer science)
Electronic resource: https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
ISBN: 9781681738697
Data orchestration in deep learning accelerators [electronic resource] / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar. - 1 online resource (166 p.) - (Synthesis lectures on computer architecture ; 52)
Includes bibliographical references (pages 131-143).
Access restricted to authorized users and institutions.
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges with compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
Mode of access: World Wide Web.
ISBN: 9781681738697
Subjects--Topical Terms: Neural networks (Computer science)
Index Terms--Genre/Form: Electronic books.
LC Class. No.: Q342
Dewey Class. No.: 006.3
LDR  02253nmm a2200301 i 4500
001  2247683
006  m     eo  d
007  cr cn |||m|||a
008  211227t20202020cau ob 000 0 eng d
020  __ $a 9781681738697
020  __ $a 9781681738703
020  __ $a 9781681738710
035  __ $a MCPB0006576
040  __ $a iG Publishing $b eng $c iG Publishing $e rda
050  00 $a Q342
082  00 $a 006.3
100  1_ $a Krishna, Tushar, $e author. $3 3512040
245  10 $a Data orchestration in deep learning accelerators $h [electronic resource] / $c Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
264  _1 $a San Rafael, California : $b Morgan & Claypool Publishers, $c 2020.
264  _4 $c ©2020
300  __ $a 1 online resource (166 p.)
336  __ $a text $b txt $2 rdacontent
337  __ $a computer $b c $2 rdamedia
338  __ $a online resource $b cr $2 rdacarrier
490  1_ $a Synthesis lectures on computer architecture ; $v 52
504  __ $a Includes bibliographical references (pages 131-143).
506  __ $a Access restricted to authorized users and institutions.
520  3_ $a This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges with compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
538  __ $a Mode of access: World Wide Web.
650  _0 $a Neural networks (Computer science) $3 532070
650  _0 $a Machine learning. $3 533906
650  _0 $a Data flow computing. $3 1005622
655  _4 $a Electronic books. $2 lcsh $3 542853
700  1_ $a Kwon, Hyoukjun, $e author. $3 3512041
700  1_ $a Parashar, Angshuman, $e author. $3 3512042
700  1_ $a Pellauer, Michael, $e author. $3 3512043
700  1_ $a Samajdar, Ananda, $e author. $3 3512044
830  _0 $a Synthesis lectures on computer architecture ; $v 52. $3 3512045
856  40 $u https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
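As a minimal illustration of working with this kind of record, the textual MARC display can be split into tag, indicators, and subfields with standard-library Python. This is a sketch only: it assumes one field per line in the form `TAG IND $a value $b value …` (the display convention, not binary MARC 21), and the function name `parse_marc_line` is invented for this example.

```python
import re

def parse_marc_line(line):
    """Split a textual MARC display line such as
    '245 10 $a Title $h [electronic resource] / $c Authors.'
    into (tag, indicators, [(subfield_code, value), ...]).

    Assumes the one-field-per-line display convention above,
    not the MARC 21 binary/ISO 2709 format.
    """
    # Everything before the first '$' is the tag and indicators.
    head, _, rest = line.partition("$")
    parts = head.split()
    tag = parts[0]
    indicators = parts[1] if len(parts) > 1 else ""
    # Each subfield starts with '$' followed by a one-character code;
    # its value runs until the next '$' or end of line.
    subfields = [
        (code, value.strip())
        for code, value in re.findall(r"\$(\w)\s*([^$]*)", "$" + rest)
    ]
    return tag, indicators, subfields

tag, ind, subs = parse_marc_line(
    "245 10 $a Data orchestration in deep learning accelerators "
    "$h [electronic resource] / $c Tushar Krishna, Hyoukjun Kwon."
)
```

Here `tag` is `"245"`, `ind` is `"10"`, and `subs` pairs each subfield code with its trimmed value; fixed fields such as `LDR` or `008`, which have no subfields, come back with an empty subfield list.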
Holdings (1 item):
Barcode: W9407618
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB Q342
Use type: Normal
Loan status: On shelf
Hold status: 0