Finding and Exploiting Parallelism With Data-Structure-Aware Static and Dynamic Analysis.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Finding and Exploiting Parallelism With Data-Structure-Aware Static and Dynamic Analysis.
Author: Fallin, Christopher Ian.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2019
Description: 172 p.
Note: Source: Dissertations Abstracts International, Volume: 80-09, Section: B.
Contained by: Dissertations Abstracts International 80-09B.
Subject: Computer Engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13807114
ISBN: 9780438963986
Thesis (Ph.D.)--Carnegie Mellon University, 2019.
This item must not be sold to any third party vendors.
Maximizing performance on modern multicore hardware demands aggressive optimizations. Large amounts of legacy code are written for sequential hardware, and parallelization of this code is an important goal. Some programs are written for one parallel platform, but must be periodically updated for other platforms, or updated with the existing platform's changing characteristics - for example, by splitting work at a different granularity or tiling work to fit in a cache. A programmer tasked with this work will likely refactor the code in ways that diverge from its original implementation's step-by-step operation but nevertheless compute correct results. Unfortunately, because modern compilers are unaware of the higher-level structure of a program, they are largely unable to perform this work automatically.

First, they generally preserve the operation of the original program at the language-semantics level, down to implementation details. Thus parallelization transforms are often hindered by false dependencies between data-structure operations that are semantically commutative. For example, reordering two tree insertions would produce a different program heap even though the tree elements are identical. Second, by analyzing the program at this low level, they are generally unable to derive invariants at a higher level, such as that no two pointers in a particular list alias each other. Both of these shortcomings hinder modern parallelization analyses from reaching the performance that a human programmer can achieve by refactoring.

In this thesis, we introduce an enhanced compiler and runtime system that can parallelize sequential loops in a program by reasoning about a program's high-level semantics. We make three contributions. First, we enhance the compiler to reason about data structures as first-class values: operations on maps and lists, and traversals over them, are encoded directly.
Semantic models map library data types that implement these standard containers to the IR intrinsics. This enables the compiler to reason about commutativity of various operations, and provides a basis for deriving data-structure invariants.

Second, we present distinctness analysis, a type of static alias analysis that is specially designed to derive results for parallelization that are just as precise as needed, while remaining simple and widely applicable. This analysis discovers non-aliasing across loop iterations, which is a form of flow-sensitivity that nevertheless is much simpler than past work's closed-form analysis of linear indexing of data structures.

Finally, for cases when infrequent occurrences rule out a loop parallelization, or when static analysis cannot prove the parallelization to be safe, we leverage dynamic checks. Our hybrid static-dynamic approach extends the logic of our static alias analysis to find a set of checks to perform at runtime, and then uses the implications of these checks in its static reasoning. With the proper runtime techniques, these dynamic checks can be used to parallelize many loops without any speculation or rollback overhead. This system, which we have proven sound, performs significantly better than prior standard loop-parallelization techniques. We argue that a compiler and runtime system with built-in data-structure primitives, simple but effective alias-analysis extensions, and some carefully placed dynamic checks is a promising platform for macro-scale program transforms such as loop parallelization, and potentially many other lines of future work.
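The commutativity point in the abstract can be made concrete with a toy sketch (ours, not the thesis's compiler IR): two insertion orders into an unbalanced binary search tree build different heap shapes, yet denote the same abstract set, which is why a compiler that compares heaps at the implementation level sees a false dependency.

```python
# Toy sketch: semantically commutative insertions produce different
# program heaps (tree shapes) but identical abstract contents.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert a key into an unbalanced binary search tree."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def shape(root):
    """Structural fingerprint of the tree: the 'program heap' view."""
    if root is None:
        return None
    return (root.key, shape(root.left), shape(root.right))

def contents(root):
    """In-order traversal: the semantic value of the container."""
    if root is None:
        return []
    return contents(root.left) + [root.key] + contents(root.right)

a = None
for k in [2, 1, 3]:        # one insertion order
    a = insert(a, k)
b = None
for k in [1, 2, 3]:        # the same insertions, reordered
    b = insert(b, k)

print(shape(a) == shape(b))        # False: the heaps differ
print(contents(a) == contents(b))  # True: the same set of elements
```

A compiler that instead treats the container as a first-class map or set, as the thesis proposes, can observe that the two orders are interchangeable.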
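The hybrid static-dynamic idea can be sketched in miniature (all names here are ours, not the thesis's system): a cheap runtime distinctness check verifies that no two loop iterations touch the same object, and only then is a parallel schedule chosen, with the original sequential loop as the fallback - so no speculation or rollback machinery is needed.

```python
# Hypothetical sketch of a dynamic distinctness check guarding a
# loop parallelization, with a sequential fallback on failure.

from concurrent.futures import ThreadPoolExecutor

def all_distinct(items):
    """Runtime check: no two references in the worklist alias the same object."""
    return len({id(x) for x in items}) == len(items)

def bump(counter):
    """The loop body: one iteration touches exactly one counter cell."""
    counter[0] += 1

def increment_all(counters):
    if all_distinct(counters):
        # Check passed: iterations proven independent at runtime,
        # so they may run in parallel with no rollback support.
        with ThreadPoolExecutor() as pool:
            list(pool.map(bump, counters))
    else:
        # Check failed: fall back to the original sequential loop.
        for c in counters:
            bump(c)

shared = [0]
aliased = [[0], shared, shared]   # two iterations would touch the same cell
increment_all(aliased)
print([c[0] for c in aliased])    # [1, 2, 2] via the sequential fallback

distinct = [[0], [0], [0]]
increment_all(distinct)
print([c[0] for c in distinct])   # [1, 1, 1] via the parallel path
```

The thesis's contribution is deciding statically where such checks pay off and feeding their implications back into the alias analysis; this sketch shows only the runtime half of that bargain.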
LDR 04663nmm a2200325 4500
001 2210722
005 20191121124302.5
008 201008s2019 ||||||||||||||||| ||eng d
020    $a 9780438963986
035    $a (MiAaPQ)AAI13807114
035    $a (MiAaPQ)cmu:10361
035    $a AAI13807114
040    $a MiAaPQ $c MiAaPQ
100 1  $a Fallin, Christopher Ian. $3 3437858
245 10 $a Finding and Exploiting Parallelism With Data-Structure-Aware Static and Dynamic Analysis.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300    $a 172 p.
500    $a Source: Dissertations Abstracts International, Volume: 80-09, Section: B.
500    $a Publisher info.: Dissertation/Thesis.
500    $a Advisor: Mowry, Todd C.; Gibbons, Philip B.
502    $a Thesis (Ph.D.)--Carnegie Mellon University, 2019.
506    $a This item must not be sold to any third party vendors.
590    $a School code: 0041.
650  4 $a Computer Engineering. $3 1567821
650  4 $a Computer science. $3 523869
690    $a 0464
690    $a 0984
710 2  $a Carnegie Mellon University. $b Electrical and Computer Engineering. $3 2094139
773 0  $t Dissertations Abstracts International $g 80-09B.
790    $a 0041
791    $a Ph.D.
792    $a 2019
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13807114
Holdings (1 item):
Barcode: W9387271
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0