Learning for and with Efficient Computing Systems.
Record type: Bibliographic - electronic resource : Monograph/item
Title: Learning for and with Efficient Computing Systems.
Author: Chen, Zhuo.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2019
Description: 131 p.
Notes: Source: Dissertation Abstracts International, Volume: 80-07(E), Section: B.
Contained by: Dissertation Abstracts International, 80-07B(E).
Subject: Computer engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13807043
ISBN: 9780438971530
Chen, Zhuo.
Learning for and with Efficient Computing Systems.
- Ann Arbor : ProQuest Dissertations & Theses, 2019 - 131 p.
Source: Dissertation Abstracts International, Volume: 80-07(E), Section: B.
Thesis (Ph.D.)--Carnegie Mellon University, 2019.
Machine learning approaches have been widely adopted in recent years due to their capability of learning from data rather than hand-tuning features manually. We investigate two important aspects of machine learning methods, i.e., (i) applying machine learning in computing system optimization and (ii) optimizing machine learning algorithms, especially deep convolutional neural networks, so they can train and infer efficiently.
ISBN: 9780438971530
Subjects--Topical Terms: Computer engineering.
LDR    03815nmm a2200337 4500
001    2204657
005    20190716101637.5
008    201008s2019 ||||||||||||||||| ||eng d
020    $a 9780438971530
035    $a (MiAaPQ)AAI13807043
035    $a (MiAaPQ)cmu:10360
035    $a AAI13807043
040    $a MiAaPQ $c MiAaPQ
100 1  $a Chen, Zhuo. $3 1681217
245 10 $a Learning for and with Efficient Computing Systems.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300    $a 131 p.
500    $a Source: Dissertation Abstracts International, Volume: 80-07(E), Section: B.
500    $a Adviser: Diana Marculescu.
502    $a Thesis (Ph.D.)--Carnegie Mellon University, 2019.
520    $a Machine learning approaches have been widely adopted in recent years due to their capability of learning from data rather than hand-tuning features manually. We investigate two important aspects of machine learning methods: (i) applying machine learning to computing system optimization, and (ii) optimizing machine learning algorithms, especially deep convolutional neural networks, so they can train and infer efficiently.
520    $a As power emerges as the main constraint for computing systems, controlling power consumption under a given Thermal Design Power (TDP) while maximizing performance becomes increasingly critical. Meanwhile, systems have certain performance constraints that applications should satisfy to ensure Quality of Service (QoS). Learning approaches have drawn significant attention recently due to their ability to adapt to the ever-increasing complexity of systems and applications. In this thesis, we propose On-line Distributed Reinforcement Learning (OD-RL) based algorithms for many-core system performance improvement under both power and performance constraints. The experiments show that, compared to the state-of-the-art algorithms, our approach 1) produces up to 98% less budget overshoot, 2) achieves up to 23% higher energy efficiency, and 3) delivers two orders of magnitude speedup for systems with hundreds of cores, while an improved version can better satisfy performance constraints. To further improve the sample efficiency of RL algorithms, we propose a novel Bayesian Optimization approach that speeds up reinforcement learning-based DVFS control by 37.4x while maintaining the performance of the best rule-based DVFS algorithm.
520    $a Convolutional Neural Networks (CNNs) have shown unprecedented capability in visual learning tasks. While accuracy-wise CNNs provide unprecedented performance, they are also known to be computationally intensive and energy demanding for modern computer systems. We propose Virtual Pooling (ViP), a model-level approach to improve inference speed and energy consumption of CNN-based image classification and object detection tasks, with provable error bound. We show the efficacy of ViP through extensive experiments. For example, ViP delivers 2.1x speedup with less than 1.5% accuracy degradation in ImageNet classification on VGG-16, and 1.8x speedup with 0.025 mAP degradation in PASCAL VOC object detection with Faster-RCNN. ViP also reduces mobile GPU and CPU energy consumption by up to 55% and 70%, respectively. We further propose to train CNNs with fine-grain labels, which improves not only testing accuracy but also training data efficiency. For example, a CNN trained with fine-grain labels and only 40% of the total training data can achieve higher accuracy than a CNN trained with the full training dataset and coarse-grain labels.
590    $a School code: 0041.
650  4 $a Computer engineering. $3 621879
650  4 $a Computer science. $3 523869
650  4 $a Electrical engineering. $3 649834
690    $a 0464
690    $a 0984
690    $a 0544
710 2  $a Carnegie Mellon University. $b Electrical and Computer Engineering. $3 2094139
773 0  $t Dissertation Abstracts International $g 80-07B(E).
790    $a 0041
791    $a Ph.D.
792    $a 2019
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13807043
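The abstract mentions reinforcement learning-based DVFS control under a power budget. As a rough illustration of that general idea only (not the thesis's OD-RL or Bayesian Optimization method), the sketch below runs tabular Q-learning over a few discrete frequency levels against a toy power/performance model; the frequency levels, power budget, and `simulate_step` model are all hypothetical.

```python
import random

# Illustrative tabular Q-learning controller for DVFS (dynamic voltage and
# frequency scaling). Generic sketch with made-up constants, not the
# algorithm from the dissertation.

FREQ_LEVELS = [0.8, 1.2, 1.6, 2.0]  # GHz: the discrete actions
POWER_BUDGET = 2.0                  # W: hypothetical TDP-like cap

def simulate_step(freq):
    """Toy system model: performance grows linearly with frequency,
    power grows roughly quadratically."""
    perf = freq
    power = 0.6 * freq ** 2
    return perf, power

def reward(perf, power):
    """Reward performance, but heavily penalize budget overshoot."""
    penalty = 10.0 * max(0.0, power - POWER_BUDGET)
    return perf - penalty

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Single-state problem for brevity: Q maps action index -> value.
    q = [0.0] * len(FREQ_LEVELS)
    for _ in range(episodes):
        if rng.random() < eps:  # epsilon-greedy exploration
            a = rng.randrange(len(FREQ_LEVELS))
        else:
            a = max(range(len(FREQ_LEVELS)), key=q.__getitem__)
        perf, power = simulate_step(FREQ_LEVELS[a])
        r = reward(perf, power)
        # Q-learning update (single state, so the bootstrap term is max(q)).
        q[a] += alpha * (r + gamma * max(q) - q[a])
    return q

q = train()
best = FREQ_LEVELS[max(range(len(FREQ_LEVELS)), key=q.__getitem__)]
```

Under this toy model the learned policy settles on the highest frequency whose quadratic power estimate still fits the budget (2.0 GHz draws 2.4 W and is penalized, so 1.6 GHz wins), which is the qualitative behavior a budget-constrained DVFS controller is after.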
Holdings (1 item)
Barcode: W9381206
Location: Electronic resources (電子資源)
Circulation category: 11. Online reading (線上閱覽_V)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0