Data orchestration in deep learning accelerators
Record Type:
Electronic resources : Monograph/item
Title/Author:
Data orchestration in deep learning accelerators/ Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
Author:
Krishna, Tushar,
Other author:
Kwon, Hyoukjun,
Description:
1 online resource (166 p.)
Subject:
Neural networks (Computer science)
Online resource:
https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
ISBN:
9781681738697
Krishna, Tushar,
Data orchestration in deep learning accelerators
[electronic resource] / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar. - 1 online resource (166 p.) - (Synthesis lectures on computer architecture ; 52).
Includes bibliographical references (pages 131-143).
Access restricted to authorized users and institutions.
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges with compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
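The abstract's central claim (that data movement dominates cost, so accelerators reuse data on-chip to minimize external DRAM accesses) can be illustrated with a back-of-the-envelope sketch. This example is not taken from the book; the access-counting model and the chosen sizes are illustrative assumptions only.

```python
# Illustrative sketch: DRAM access counts for a matrix multiply C = A @ B,
# comparing a naive schedule (no on-chip reuse) against a tiled schedule
# that holds t x t tiles of A and B in an on-chip buffer.

def naive_accesses(n: int) -> int:
    # Every multiply-accumulate fetches one element of A and one of B
    # from DRAM: n^3 iterations x 2 operand fetches, plus n^2 writes of C.
    return 2 * n**3 + n**2

def tiled_accesses(n: int, t: int) -> int:
    # With t x t tiles buffered on chip, each pair of tiles is fetched
    # once per tile-level step: (n/t)^3 steps x 2 tiles x t^2 elements,
    # plus n^2 writes of C. Assumes t divides n and tiles fit on chip.
    steps = (n // t) ** 3
    return steps * 2 * t**2 + n**2

n, t = 1024, 32
ratio = naive_accesses(n) / tiled_accesses(n, t)
print(f"DRAM traffic reduction: {ratio:.1f}x")  # roughly the tile size t
```

Under this simplified model the reduction factor approaches the tile size, which is why buffer hierarchies and dataflow choice (what to keep stationary on chip) are the book's central topics.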
Mode of access: World Wide Web.
ISBN: 9781681738697
Subjects--Topical Terms: Neural networks (Computer science)
Index Terms--Genre/Form: Electronic books.
LC Class. No.: Q342
Dewey Class. No.: 006.3
LDR    02253nmm a2200301 i 4500
001    2247683
006    m eo d
007    cr cn |||m|||a
008    211227t20202020cau ob 000 0 eng d
020    $a 9781681738697
020    $a 9781681738703
020    $a 9781681738710
035    $a MCPB0006576
040    $a iG Publishing $b eng $c iG Publishing $e rda
050 00 $a Q342
082 00 $a 006.3
100 1  $a Krishna, Tushar, $e author. $3 3512040
245 10 $a Data orchestration in deep learning accelerators $h [electronic resource] / $c Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
264  1 $a San Rafael, California : $b Morgan & Claypool Publishers, $c 2020.
264  4 $c ©2020
300    $a 1 online resource (166 p.)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
490 1  $a Synthesis lectures on computer architecture ; $v 52
504    $a Includes bibliographical references (pages 131-143).
506    $a Access restricted to authorized users and institutions.
520 3  $a This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges with compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
538    $a Mode of access: World Wide Web.
650  0 $a Neural networks (Computer science) $3 532070
650  0 $a Machine learning. $3 533906
650  0 $a Data flow computing. $3 1005622
655  4 $a Electronic books. $2 lcsh $3 542853
700 1  $a Kwon, Hyoukjun, $e author. $3 3512041
700 1  $a Parashar, Angshuman, $e author. $3 3512042
700 1  $a Pellauer, Michael, $e author. $3 3512043
700 1  $a Samajdar, Ananda, $e author. $3 3512044
830  0 $a Synthesis lectures on computer architecture ; $v 52. $3 3512045
856 40 $u https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
Items (1 record):
Inventory Number: W9407618
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB Q342
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0