An Energy Efficient EdgeAI Autoencoder for Reinforcement Learning.
Record type: Bibliographic, electronic resource : Monograph/item
Title / Author: An Energy Efficient EdgeAI Autoencoder for Reinforcement Learning.
Author: Manjunath, Nitheesh Kumar.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Pagination: 103 p.
Notes: Source: Masters Abstracts International, Volume: 83-03.
Contained by: Masters Abstracts International, 83-03.
Subject: Engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28494398
ISBN: 9798535574264
Thesis (M.S.)--University of Maryland, Baltimore County, 2021.
This item must not be sold to any third party vendors.
In EdgeAI embedded devices that exploit reinforcement learning (RL), it is essential to reduce the number of actions the agent takes in the real world and to minimize the compute-intensive policy-learning process. Convolutional autoencoders (AEs), when attached to an RL agent, have been shown to speed up policy learning considerably by compressing the high-dimensional input data into a small latent representation that feeds the RL agent. Despite reducing the policy-learning time, however, the AE adds significant computational and memory complexity to the model, increasing the total computation and the model size. In this work, we propose a model that speeds up the policy-learning process of an RL agent using AE neural networks with binary and ternary precision, which addresses the high complexity overhead without deteriorating the policy the RL agent learns. Binary Neural Networks (BNNs) and Ternary Neural Networks (TNNs) compress weights into 1- and 2-bit representations, which significantly reduces the model size and memory footprint and simplifies multiply-accumulate (MAC) operations. We evaluate the model in three RL environments, DonkeyCar, Miniworld Sidewalk, and Miniworld Object Pickup, which emulate real-world applications at different levels of complexity. With proper hyperparameter optimization and architecture exploration, TNN models achieve nearly the same average reward, Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE) as the full-precision model, while reducing the model size by 10x compared to full precision and 3x compared to BNNs. In BNN models, however, the average reward drops by 12%-25% compared to full precision, even after increasing the model size by 4x.
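The 2-bit ternary weight compression described above can be sketched as follows. This is a generic TNN quantization heuristic (threshold at 0.7 × mean |w|, a common choice in the literature) and is an assumption for illustration, not necessarily the exact quantizer used in the thesis.

```python
import numpy as np

def ternarize(w: np.ndarray, threshold_factor: float = 0.7) -> np.ndarray:
    """Quantize float weights to {-1, 0, +1}, storable in 2 bits each.

    The threshold heuristic (0.7 * mean|w|) is a common TNN choice and an
    assumption here; the thesis's exact scheme may differ.
    """
    delta = threshold_factor * np.abs(w).mean()
    q = np.zeros(w.shape, dtype=np.int8)
    q[w > delta] = 1
    q[w < -delta] = -1
    return q

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)).astype(np.float32)
q = ternarize(w)

# 32-bit floats -> 2-bit ternary weights: 16x smaller weight storage.
# (The abstract's 10x overall model-size figure also accounts for the
# rest of the model, not just raw weight storage.)
print(32 / 2)  # 16.0
```

Since MAC operations against {-1, 0, +1} weights reduce to additions, subtractions, and skips, this quantization also removes the multipliers that dominate full-precision hardware.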
We designed and implemented a scalable hardware accelerator that is configurable in the number of processing elements (PEs) and the memory data width, to achieve the best power, performance, and energy-efficiency trade-off for EdgeAI embedded devices. Implemented on an Artix-7 FPGA, the proposed hardware dissipates 250 μJ per frame while meeting a 30 frames-per-second (FPS) throughput requirement, and can be configured to reach an efficiency of over 1 TOP/J. Synthesized and placed-and-routed in 14nm FinFET ASIC technology, the accelerator reduces the energy dissipation to 3.9 μJ per frame and reaches a maximum throughput of 1,250 FPS. Compared to state-of-the-art TNN implementations on the same target platforms, our hardware is 5x and 4.4x (2.2x when technology-scaled) more energy efficient on FPGA and ASIC, respectively.
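As a sanity check on the reported figures, energy per frame times frame rate gives average power. The inputs below are the abstract's numbers; the derived power values are back-of-envelope arithmetic, not results stated in the thesis.

```python
# Reported figures from the abstract.
fpga_energy_per_frame = 250e-6   # J (250 uJ per frame on Artix-7 FPGA)
fpga_fps = 30
asic_energy_per_frame = 3.9e-6   # J (3.9 uJ per frame in 14nm FinFET ASIC)
asic_fps = 1250

# Average power = energy per frame x frames per second.
fpga_power = fpga_energy_per_frame * fpga_fps    # 7.5 mW
asic_power = asic_energy_per_frame * asic_fps    # ~4.9 mW

# Per-frame energy improvement of the ASIC over the FPGA (derived, not quoted).
energy_ratio = fpga_energy_per_frame / asic_energy_per_frame

print(f"{fpga_power * 1e3:.1f} mW")   # prints "7.5 mW"
print(f"{energy_ratio:.0f}x")         # prints "64x"
```

At a few milliwatts of average power on either platform, the accelerator sits comfortably within typical EdgeAI power envelopes.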
ISBN: 9798535574264
Subjects--Topical Terms: Engineering.
Subjects--Index Terms: Energy efficient EdgeAI autoencode
LDR    03758nmm a2200325 4500
001    2349300
005    20220920133714.5
008    241004s2021 ||||||||||||||||| ||eng d
020    $a 9798535574264
035    $a (MiAaPQ)AAI28494398
035    $a AAI28494398
040    $a MiAaPQ $c MiAaPQ
100 1  $a Manjunath, Nitheesh Kumar. $3 3688707
245 13 $a An Energy Efficient EdgeAI Autoencoder for Reinforcement Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 103 p.
500    $a Source: Masters Abstracts International, Volume: 83-03.
500    $a Advisor: Mohsenin, Tinoosh.
502    $a Thesis (M.S.)--University of Maryland, Baltimore County, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a In EdgeAI embedded devices that exploit reinforcement learning (RL), it is essential to reduce the number of actions the agent takes in the real world and to minimize the compute-intensive policy-learning process. Convolutional autoencoders (AEs), when attached to an RL agent, have been shown to speed up policy learning considerably by compressing the high-dimensional input data into a small latent representation that feeds the RL agent. Despite reducing the policy-learning time, however, the AE adds significant computational and memory complexity to the model, increasing the total computation and the model size. In this work, we propose a model that speeds up the policy-learning process of an RL agent using AE neural networks with binary and ternary precision, which addresses the high complexity overhead without deteriorating the policy the RL agent learns. Binary Neural Networks (BNNs) and Ternary Neural Networks (TNNs) compress weights into 1- and 2-bit representations, which significantly reduces the model size and memory footprint and simplifies multiply-accumulate (MAC) operations. We evaluate the model in three RL environments, DonkeyCar, Miniworld Sidewalk, and Miniworld Object Pickup, which emulate real-world applications at different levels of complexity. With proper hyperparameter optimization and architecture exploration, TNN models achieve nearly the same average reward, Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE) as the full-precision model, while reducing the model size by 10x compared to full precision and 3x compared to BNNs. In BNN models, however, the average reward drops by 12%-25% compared to full precision, even after increasing the model size by 4x. We designed and implemented a scalable hardware accelerator that is configurable in the number of processing elements (PEs) and the memory data width, to achieve the best power, performance, and energy-efficiency trade-off for EdgeAI embedded devices. Implemented on an Artix-7 FPGA, the proposed hardware dissipates 250 μJ per frame while meeting a 30 frames-per-second (FPS) throughput requirement, and can be configured to reach an efficiency of over 1 TOP/J. Synthesized and placed-and-routed in 14nm FinFET ASIC technology, the accelerator reduces the energy dissipation to 3.9 μJ per frame and reaches a maximum throughput of 1,250 FPS. Compared to state-of-the-art TNN implementations on the same target platforms, our hardware is 5x and 4.4x (2.2x when technology-scaled) more energy efficient on FPGA and ASIC, respectively.
590    $a School code: 0434.
650  4 $a Engineering. $3 586835
650  4 $a Design optimization. $3 3681984
650  4 $a Random access memory. $3 623617
650  4 $a Deep learning. $3 3554982
650  4 $a Neural networks. $3 677449
650  4 $a Medical research. $2 bicssc $3 1556686
650  4 $a Energy efficiency. $3 3555643
650  4 $a Algorithms. $3 536374
650  4 $a Energy consumption. $3 631630
650  4 $a Data compression. $3 3681696
650  4 $a Field programmable gate arrays. $3 666370
650  4 $a Artificial intelligence. $3 516317
653    $a Energy efficient EdgeAI autoencode
653    $a Reinforcement learning
690    $a 0537
690    $a 0800
710 2  $a University of Maryland, Baltimore County. $b Engineering, Computer. $3 1672903
773 0  $t Masters Abstracts International $g 83-03.
790    $a 0434
791    $a M.S.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28494398
Holdings:
Barcode: W9471738
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0