Reduced Order Models and Approximations for Hardware Acceleration of Neural Networks.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Reduced Order Models and Approximations for Hardware Acceleration of Neural Networks. / Azari, Elham.
Author: Azari, Elham.
Description: 1 online resource (171 pages)
Notes: Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
Contained by: Dissertations Abstracts International, 83-02B.
Subject: Computer engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28651949 (click for full text, PQDT)
ISBN: 9798535548562
Thesis (Ph.D.)--Arizona State University, 2021.
Includes bibliographical references.
Many real-world engineering problems require simulations to evaluate design objectives and constraints. Often, due to the complexity of the system model, these simulations are prohibitively expensive in computation time. One approach to overcoming this issue is to construct a surrogate model that approximates the original model. The focus of this work is on data-driven surrogate models, in which the output is approximated empirically from the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs achieve excellent accuracy and are widely used, they pose their own challenges. This work addresses two common ones: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability.

The high demand for deep-NN inference on cloud servers and edge devices calls for low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then to boost performance with architectural-level techniques. The proposed design is synthesized, placed, and routed as an application-specific integrated circuit (ASIC). The results demonstrate that the accelerator is 1.2x more energy-efficient and 3.6x more area-efficient than the baseline LSTM.

In the second part of this work, a robust framework for UQ is developed based on an alternative data-driven surrogate model, the polynomial chaos expansion (PCE). In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients. Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning terms or changing the order of the model. The framework is evaluated on several real-world applications from different domains and is also extended to classification tasks.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
Subjects--Topical Terms: Computer engineering.
Subjects--Index Terms: Hardware acceleration
Index Terms--Genre/Form: Electronic books.
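Two of the abstract's claims above invite concrete illustration; the dissertation's exact designs are not reproduced in this record, so everything in the sketches that follow is illustrative. First, the LSTM accelerator rests on approximate computing: replacing exact gate arithmetic with cheaper, hardware-friendly approximations. A minimal Python sketch of one common such substitution, piecewise-linear "hard" activations in a single LSTM step (all names here are illustrative, not drawn from the thesis):

import numpy as np

def hard_sigmoid(x):
    # Clipped line approximating 1 / (1 + exp(-x)):
    # slope 0.25 near the origin, saturating at 0 and 1.
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def hard_tanh(x):
    # Identity clipped to [-1, 1], approximating tanh(x).
    return np.clip(x, -1.0, 1.0)

def lstm_step_approx(x, h, c, W, U, b):
    # One LSTM step with approximate activations. W: (4H, D),
    # U: (4H, H), b: (4H,) hold the stacked input/forget/cell/output
    # gate parameters; no exponentials are needed in hardware.
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = hard_sigmoid(i), hard_sigmoid(f), hard_sigmoid(o)
    c_next = f * c + i * hard_tanh(g)
    h_next = o * hard_tanh(c_next)
    return h_next, c_next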
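Second, the abstract states that UQ "is a function of the expansion coefficients" and that sensitivities follow analytically from them. In the standard PCE formulation with an orthonormal basis (the thesis may differ in details), the mean, variance, and first-order Sobol sensitivity of input X_i read directly off the coefficients c_alpha:

Y \approx \sum_{\alpha \in \mathcal{A}} c_\alpha \Psi_\alpha(X), \qquad \mathbb{E}[\Psi_\alpha \Psi_\beta] = \delta_{\alpha\beta}

\mathbb{E}[Y] = c_{\mathbf{0}}, \qquad \mathrm{Var}[Y] = \sum_{\alpha \neq \mathbf{0}} c_\alpha^2

S_i = \frac{1}{\mathrm{Var}[Y]} \sum_{\alpha \in \mathcal{A}_i} c_\alpha^2, \qquad \mathcal{A}_i = \{\alpha : \alpha_i > 0,\ \alpha_j = 0 \text{ for all } j \neq i\}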
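On the same assumption, the "systematic and incremental" pruning the abstract mentions can be read as ranking expansion terms by their variance contribution c_alpha^2. A hedged sketch (hypothetical function names and data layout, not the thesis's API):

import numpy as np

def total_sobol_indices(coeffs, multi_indices):
    # coeffs: PCE coefficients, constant (mean) term first.
    # multi_indices: (n_terms, n_inputs) integer array; entry (k, i)
    # is the degree of input i in basis term k. The total index of
    # input i sums the variance shares of every term where X_i appears.
    coeffs = np.asarray(coeffs, dtype=float)
    var = np.sum(coeffs[1:] ** 2)
    return np.array([
        np.sum(coeffs[multi_indices[:, i] > 0] ** 2) / var
        for i in range(multi_indices.shape[1])
    ])

def prune_by_variance(coeffs, multi_indices, keep=0.99):
    # Drop the lowest-contribution terms while retaining `keep` of
    # the output variance; the mean term is always kept.
    coeffs = np.asarray(coeffs, dtype=float)
    contrib = coeffs ** 2
    var = np.sum(contrib[1:])
    order = 1 + np.argsort(contrib[1:])[::-1]  # non-mean terms, largest first
    acc = np.cumsum(contrib[order])
    n_keep = int(np.searchsorted(acc, keep * var) + 1)
    kept = np.sort(np.concatenate(([0], order[:n_keep])))
    return coeffs[kept], multi_indices[kept]

Dropping low-contribution terms this way, or capping the maximum degree recorded in the multi-indices, changes the order of the model incrementally without refitting from scratch.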
MARC record:
LDR 03703nmm a2200409K 4500
001 2357226
005 20230622065021.5
006 m o d
007 cr mn ---uuuuu
008 241011s2021 xx obm 000 0 eng d
020 $a 9798535548562
035 $a (MiAaPQ)AAI28651949
035 $a AAI28651949
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Azari, Elham. $3 3697756
245 10 $a Reduced Order Models and Approximations for Hardware Acceleration of Neural Networks.
264 0 $c 2021
300 $a 1 online resource (171 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
500 $a Advisor: Vrudhula, Sarma.
502 $a Thesis (Ph.D.)--Arizona State University, 2021.
504 $a Includes bibliographical references
520 $a Many real-world engineering problems require simulations to evaluate design objectives and constraints. Often, due to the complexity of the system model, these simulations are prohibitively expensive in computation time. One approach to overcoming this issue is to construct a surrogate model that approximates the original model. The focus of this work is on data-driven surrogate models, in which the output is approximated empirically from the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs achieve excellent accuracy and are widely used, they pose their own challenges. This work addresses two common ones: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability. The high demand for deep-NN inference on cloud servers and edge devices calls for low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then to boost performance with architectural-level techniques. The proposed design is synthesized, placed, and routed as an application-specific integrated circuit (ASIC). The results demonstrate that the accelerator is 1.2x more energy-efficient and 3.6x more area-efficient than the baseline LSTM. In the second part of this work, a robust framework for UQ is developed based on an alternative data-driven surrogate model, the polynomial chaos expansion (PCE). In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients. Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning terms or changing the order of the model. The framework is evaluated on several real-world applications from different domains and is also extended to classification tasks.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538 $a Mode of access: World Wide Web
650 4 $a Computer engineering. $3 621879
650 4 $a Computer science. $3 523869
650 4 $a Design. $3 518875
650 4 $a Energy efficiency. $3 3555643
650 4 $a Accuracy. $3 3559958
650 4 $a Random variables. $3 646291
650 4 $a Sensitivity analysis. $3 3560752
650 4 $a Classification. $3 595585
653 $a Hardware acceleration
653 $a LSTM
653 $a Machine learning
653 $a Neural networks
653 $a Polynomial chaos
653 $a Uncertainty quantification
655 7 $a Electronic books. $2 lcsh $3 542853
690 $a 0464
690 $a 0984
690 $a 0389
710 2 $a ProQuest Information and Learning Co. $3 783688
710 2 $a Arizona State University. $b Computer Engineering. $3 3289092
773 0 $t Dissertations Abstracts International $g 83-02B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28651949 $z click for full text (PQDT)
Holdings:
Barcode: W9479582
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal (general use)
Loan status: On shelf
Hold status: 0