On efficiency improving and energy saving in data caches.
Record type: Bibliographic, language material, print : Monograph/item
Title/Author: On efficiency improving and energy saving in data caches.
Author: Subha, Srinivasan.
Description: 209 p.
Note: Source: Dissertation Abstracts International, Volume: 71-06, Section: B, page: 3831.
Contained by: Dissertation Abstracts International, 71-06B.
Subject: Engineering, Computer.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3411003
ISBN: 9781124034386
Subha, Srinivasan. On efficiency improving and energy saving in data caches. 209 p.
Source: Dissertation Abstracts International, Volume: 71-06, Section: B, page: 3831.
Thesis (Ph.D.)--Santa Clara University, 2010.
This work presents a new hybrid cache consisting of a direct mapped cache and a fully associative cache. A data array of any dimension is transformed into a one-dimensional array in which the data are rearranged in the order they will be referenced. Reference patterns of data arrays in a loop are used to minimize cache misses by labeling each block in the cache with a parameter, block_max: the maximum iteration number at which that block is accessed. A new replacement policy based on block_max is proposed for this hybrid cache that prevents preemption of blocks to a certain extent. A performance improvement of 89% in average memory access time was observed over a conventional direct mapped cache of the same size.
ISBN: 9781124034386
Subjects--Topical Terms: Engineering, Computer.
LDR    05142nam 2200337 4500
001    1401076
005    20111013150242.5
008    130515s2010 ||||||||||||||||| ||eng d
020    $a 9781124034386
035    $a (UMI)AAI3411003
035    $a AAI3411003
040    $a UMI $c UMI
100 1  $a Subha, Srinivasan. $3 1680186
245 10 $a On efficiency improving and energy saving in data caches.
300    $a 209 p.
500    $a Source: Dissertation Abstracts International, Volume: 71-06, Section: B, page: 3831.
500    $a Adviser: Weijia Shang.
502    $a Thesis (Ph.D.)--Santa Clara University, 2010.
520    $a This work presents a new hybrid cache consisting of a direct mapped cache and a fully associative cache. A data array of any dimension is transformed into a one-dimensional array in which the data are rearranged in the order they will be referenced. Reference patterns of data arrays in a loop are used to minimize cache misses by labeling each block in the cache with a parameter, block_max: the maximum iteration number at which that block is accessed. A new replacement policy based on block_max is proposed for this hybrid cache that prevents preemption of blocks to a certain extent. A performance improvement of 89% in average memory access time was observed over a conventional direct mapped cache of the same size.
520    $a This work proposes an algorithm to determine a variable block size for the variables in a program at predetermined points, called decision points, based on their access pattern. The program is divided into segments by the decision points, and rules for choosing the decision points are developed. The algorithm identifies the decision points and formulates an optimization function for the average memory access time of the variables involved at these points; solving the optimization function under constraints then gives the optimal block size. A performance improvement of 64% is observed for matrices of size six. The proposed model is compared with pre-fetching and shows better results.
520    $a This work proposes a method to save energy in set associative caches. The method collects the access time of each memory address by profiling, and additional information about the next access to a way is maintained in the cache ways. All ways of the cache are put into either disable mode or a low-energy mode, as supported by the cache. At each time unit, the cache ways are searched, enabling the way that will be accessed next. If no way will be accessed in the next time unit, the generated address is placed in the cache according to the replacement algorithm, using the address mapping function; during this mapping, all ways of the mapped set are enabled, as in a traditional set associative cache. Average energy savings of 63% and a performance improvement of 14% over a way-prediction cache were observed.
520    $a A fully associative cache with modified address translation using XOR functions is proposed next. This work compares the performance of this model with direct mapped, set associative, and fully associative caches of the same size, and analyzes the energy consumption of the proposed model. Expressions for its average memory access time are stated. Its energy consumption is compared with direct mapped, set associative, and fully associative caches of the same size, and conditions for outperforming them are derived. Simulations are done with the SPEC 2000 benchmarks. For the chosen parameters, the average memory access time is found to be equal to that of direct mapped, set associative, and fully associative caches of the same size. The energy consumption is comparable to a set associative cache of the same size and number of ways, and an improvement in energy consumption of 99% is seen relative to a fully associative cache of the same size.
520    $a This dissertation proposes an algorithm for buffer cache management with pre-fetching. The proposed algorithm is compared with the Waiting Room and Weighing Room (W2R) algorithm for sequential and random input. For sequential input the performance is comparable to the W2R algorithm; for random input the proposed algorithm performs better by 9%.
520    $a This dissertation proposes a new cache type that combines both kinds of caches. Initially the entire cache system behaves as an exclusive cache, but on reuse of a cache block/way the reused block/way switches to inclusive behavior. When a new block is fetched into the cache, the corresponding way is reset to exclusive; on reuse of a block in a level-one cache, it is made inclusive. Conditions under which this model outperforms a traditional inclusive cache are derived. Performance improvements of 66% over the inclusive cache are observed. (Abstract shortened by UMI.)
590    $a School code: 0196.
650  4 $a Engineering, Computer. $3 1669061
650  4 $a Engineering, Electronics and Electrical. $3 626636
690    $a 0464
690    $a 0544
710 2  $a Santa Clara University. $3 1258362
773 0  $t Dissertation Abstracts International $g 71-06B.
790 10 $a Shang, Weijia, $e advisor
790    $a 0196
791    $a Ph.D.
792    $a 2010
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3411003
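The first abstract describes labeling each block with block_max (the last loop iteration at which it will be referenced) and replacing based on it. A minimal Python sketch of that replacement idea for the fully associative part of the hybrid cache; all names are illustrative, and block_max is assumed precomputed from the loop's access pattern (this is not the dissertation's implementation):

```python
class FullyAssocCache:
    """Toy fully associative cache whose blocks carry block_max:
    the last loop iteration at which each block will be referenced."""

    def __init__(self, ways):
        self.ways = ways
        self.blocks = {}  # tag -> block_max

    def access(self, tag, block_max):
        """Return True on a hit; on a miss, install the block, evicting
        the resident block with the smallest block_max (its useful
        lifetime ends soonest, so preempting it costs the least)."""
        if tag in self.blocks:
            return True
        if len(self.blocks) >= self.ways:
            victim = min(self.blocks, key=self.blocks.get)
            del self.blocks[victim]
        self.blocks[tag] = block_max
        return False
```

With two ways, a block whose block_max has passed is the first candidate evicted, so blocks still in use are protected from preemption.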
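The second abstract formulates block-size selection at decision points as an optimization over average memory access time. A hedged sketch of the selection step, with a purely illustrative AMAT cost model (the dissertation's actual optimization function and constraints are not given in the abstract):

```python
def best_block_size(candidates, miss_rate, miss_penalty, hit_time=1.0):
    """Pick the candidate block size minimizing an assumed AMAT model:
    hit_time + miss_rate(b) * miss_penalty(b). The two callables stand
    in for whatever model the access pattern at a decision point yields."""
    return min(candidates,
               key=lambda b: hit_time + miss_rate(b) * miss_penalty(b))
```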
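The third abstract keeps only the way that will be accessed next powered on, using next-access information gathered by profiling. A minimal sketch of just the way-selection step (illustrative; the profiling itself and the hardware disable/low-energy modes are outside this sketch):

```python
def enabled_ways(next_access_time, now):
    """Given each way's profiled next-access time, return the indices of
    ways that must be powered on at time `now`; every other way can stay
    in disable or low-energy mode until its turn."""
    return {w for w, t in enumerate(next_access_time) if t == now}
```

When the returned set is empty, the abstract's fallback applies: the mapped set is fully enabled for the placement, as in a traditional set associative cache.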
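The fourth abstract modifies address translation with XOR functions. One common XOR scheme folds the next group of address bits onto the index bits; this sketch assumes that form, since the dissertation's exact function is not given in the abstract:

```python
def xor_index(address, index_bits):
    """Fold the bits above the index field onto the low index bits with
    XOR, so addresses that share low bits can still map apart."""
    mask = (1 << index_bits) - 1
    return (address & mask) ^ ((address >> index_bits) & mask)
```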
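The fifth abstract compares a buffer cache management algorithm with pre-fetching against W2R, but details neither algorithm. Purely as a point of reference, a one-block-ahead sequential prefetch over an LRU buffer cache can be sketched as:

```python
def access_with_prefetch(cache, capacity, block):
    """LRU buffer cache that also prefetches block+1 on every access.
    `cache` is a list ordered from least to most recently used."""
    hit = block in cache
    for b in (block, block + 1):
        if b in cache:
            cache.remove(b)    # refresh recency
        elif len(cache) >= capacity:
            cache.pop(0)       # evict the LRU entry
        cache.append(b)
    return hit
```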
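The sixth abstract switches each way between exclusive and inclusive behavior based on reuse. A minimal sketch of the per-way state machine it describes (names illustrative; the multi-level data movement itself is omitted):

```python
class AdaptiveWay:
    """A cache way that starts exclusive and turns inclusive once its
    resident block is reused in the level-one cache."""

    def __init__(self):
        self.tag = None
        self.inclusive = False

    def fill(self, tag):
        """Fetching a new block resets the way to exclusive behavior."""
        self.tag = tag
        self.inclusive = False

    def reuse(self, tag):
        """Reuse of the resident block switches the way to inclusive."""
        if self.tag == tag:
            self.inclusive = True
            return True
        return False
```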
Holdings (1 item):
Barcode: W9164215
Location: Electronic Resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0