Energy Efficient Hardware Design of Neural Networks.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Energy Efficient Hardware Design of Neural Networks. / Venkataramanaiah, Shreyas Kolala.
Author: Venkataramanaiah, Shreyas Kolala.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018
Pagination: 53 p.
Notes: Source: Masters Abstracts International, Volume: 80-06.
Contained by: Masters Abstracts International, 80-06.
Subject: Computer Engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10977775
ISBN: 9780438713376
Thesis (M.S.)--Arizona State University, 2018.
This item must not be sold to any third party vendors.
Hardware implementation of deep neural networks has gained significant importance in recent years. Deep neural networks are mathematical models that use learning algorithms inspired by the brain. Numerous deep learning algorithms, such as multi-layer perceptrons (MLP), have demonstrated human-level recognition accuracy in image and speech classification tasks. These networks are built from multiple layers of processing elements, called neurons, with many connections between them, called synapses. Hence, they involve operations that exhibit a high degree of parallelism, making them computationally and memory intensive. Constrained by computing resources and memory, most applications require neural networks that consume less energy. Energy-efficient implementation of these computationally intensive algorithms on neuromorphic hardware demands many architectural optimizations. One such optimization is reducing network size through compression, and several studies have investigated compression by introducing element-wise or row-/column-/block-wise sparsity via pruning and regularization. Additionally, numerous recent works have concentrated on reducing the precision of activations and weights, in some cases down to a single bit. However, combining various sparsity structures with binarized or very-low-precision (2-3 bit) neural networks has not been comprehensively explored. Output activations in these deep neural networks are typically non-binary, making it difficult to exploit sparsity. On the other hand, biologically realistic models such as spiking neural networks (SNN) closely mimic the operations in biological nervous systems and explore new avenues for brain-like cognitive computing. These networks deal with binary spikes and can exploit input-dependent sparsity or redundancy to dynamically scale the amount of computation, in turn leading to energy-efficient hardware implementations. This work discusses a configurable spiking neuromorphic architecture that supports multiple hidden layers and exploits hardware reuse. It also presents design techniques for minimum-area/-energy DNN hardware with minimal degradation in accuracy. Area, performance, and energy results of the DNN and SNN hardware are reported for the MNIST dataset. The neuromorphic hardware designed for the SNN algorithm in 28nm CMOS demonstrates high classification accuracy (>98% on MNIST) and low energy (51.4-773 nJ per classification). The optimized DNN hardware designed in 40nm CMOS, which combines 8X structured compression and 3-bit weight precision, achieved 98.4% accuracy at 33 nJ per classification.
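The energy claims in the abstract above rest on two mechanisms: weights stored at very low precision (3-bit in the optimized DNN design) and event-driven spiking computation whose cost scales with input-dependent spike sparsity. The Python sketch below is illustrative only and is not taken from the thesis; the quantization scheme, layer sizes, firing threshold, and function names are assumptions chosen to show these two ideas.

import numpy as np

# 3-bit weight quantization (illustrative assumption; the thesis's exact scheme may differ).
def quantize_weights_3bit(w):
    # Uniform symmetric quantization to the integer range [-3, 3] (7 levels).
    max_abs = np.abs(w).max()
    scale = max_abs / 3.0 if max_abs > 0 else 1.0
    return np.clip(np.round(w / scale), -3, 3) * scale

# One timestep of an integrate-and-fire spiking layer (illustrative sketch).
def if_layer_step(weights, v_mem, in_spikes, threshold=1.0):
    # Event-driven accumulation: only the columns of inputs that spiked are read,
    # so weight fetches and additions scale with the input spike count.
    active = np.flatnonzero(in_spikes)
    if active.size:
        v_mem = v_mem + weights[:, active].sum(axis=1)
    # Fire, then reset by subtraction where the membrane potential crossed threshold.
    out_spikes = (v_mem >= threshold).astype(np.uint8)
    v_mem = np.where(out_spikes == 1, v_mem - threshold, v_mem)
    return out_spikes, v_mem

# Example with assumed sizes (784 inputs, 256 hidden neurons) and ~10% input activity.
rng = np.random.default_rng(0)
w = quantize_weights_3bit(rng.normal(scale=0.05, size=(256, 784)))
v = np.zeros(256)
spikes_in = (rng.random(784) < 0.1).astype(np.uint8)
spikes_out, v = if_layer_step(w, v, spikes_in)
print("active inputs:", int(spikes_in.sum()), "output spikes:", int(spikes_out.sum()))

In hardware, the event-driven loop corresponds to fetching weight rows only for asserted spike events, which is why the reported SNN energy varies with input activity (51.4-773 nJ per classification in the 28nm design).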
ISBN: 9780438713376
Subjects--Topical Terms: Computer Engineering.
LDR 03643nmm a2200325 4500
001 2206284
005 20190829083229.5
008 201008s2018 ||||||||||||||||| ||eng d
020 $a 9780438713376
035 $a (MiAaPQ)AAI10977775
035 $a (MiAaPQ)asu:18335
035 $a AAI10977775
040 $a MiAaPQ $c MiAaPQ
100 1 $a Venkataramanaiah, Shreyas Kolala. $3 3433174
245 1 0 $a Energy Efficient Hardware Design of Neural Networks.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300 $a 53 p.
500 $a Source: Masters Abstracts International, Volume: 80-06.
500 $a Publisher info.: Dissertation/Thesis.
500 $a Seo, Jae-sun.
502 $a Thesis (M.S.)--Arizona State University, 2018.
506 $a This item must not be sold to any third party vendors.
590 $a School code: 0010.
650 4 $a Computer Engineering. $3 1567821
650 4 $a Engineering. $3 586835
690 $a 0464
690 $a 0537
710 2 $a Arizona State University. $b Electrical Engineering. $3 1671741
773 0 $t Masters Abstracts International $g 80-06.
790 $a 0010
791 $a M.S.
792 $a 2018
793 $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10977775
Holdings (1 item):
Barcode: W9382833
Location: Electronic resources (電子資源)
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0