Effective scheduling techniques for high-level parallel programming languages.
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Effective scheduling techniques for high-level parallel programming languages / Rainey, Michael Alan.
Author:
Rainey, Michael Alan.
Description:
143 p.
Notes:
Source: Dissertation Abstracts International, Volume: 71-10, Section: B, page: 6235.
Contained By:
Dissertation Abstracts International 71-10B.
Subject:
Engineering, Computer.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3419774
ISBN:
9781124199160
MARC Record:
LDR    04403nam 2200337 4500
001    1403165
005    20111111141818.5
008    130515s2010 ||||||||||||||||| ||eng d
020    $a 9781124199160
035    $a (UMI)AAI3419774
035    $a AAI3419774
040    $a UMI $c UMI
100 1  $a Rainey, Michael Alan. $3 1682414
245 10 $a Effective scheduling techniques for high-level parallel programming languages.
300    $a 143 p.
500    $a Source: Dissertation Abstracts International, Volume: 71-10, Section: B, page: 6235.
500    $a Adviser: John H. Reppy.
502    $a Thesis (Ph.D.)--The University of Chicago, 2010.
520    $a In the not-so-distant past, parallel programming was mostly the concern of programmers specializing in high-performance computing. Nowadays, on the other hand, many of today's desktop and laptop computers come equipped with a species of shared-memory multiprocessor called a multicore processor, making parallel programming a concern for a much broader range of programmers. High-level parallel languages, such as Parallel ML (PML) and Haskell, seek to reduce the complexity of programming multicore processors by giving programmers abstract execution models, such as implicit threading, where programmers annotate their programs to suggest the parallel decomposition. Implicitly-threaded programs, however, do not specify the actual decomposition of computations or mapping from computations to processors. The annotations act simply as hints that can be ignored and safely replaced with sequential counterparts. The parallel decomposition itself is the responsibility of the language implementation and, more specifically, of the scheduling system.
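The annotation-as-hint contract described in this abstract is the same one GHC Haskell exposes through Control.Parallel: `par` merely suggests evaluating its first argument in parallel, and the program stays correct if the hint is ignored. A minimal sketch (an illustration in Haskell assuming GHC with the parallel package, not code from the dissertation):

import Control.Parallel (par, pseq)

-- Naive Fibonacci with an implicit-threading annotation. `par` merely
-- suggests evaluating `a` in parallel; the runtime may ignore the hint,
-- in which case the program still computes the same result sequentially.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = a `par` (b `pseq` (a + b))
  where
    a = pfib (n - 1)
    b = pfib (n - 2)

main :: IO ()
main = print (pfib 30)

Built without GHC's -threaded runtime, the spark is never run in parallel and pfib degrades gracefully to the sequential program, which is precisely the "hints can be ignored and safely replaced with sequential counterparts" property the abstract describes.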
520    $a Threads can take arbitrarily different amounts of time to execute, and these times are difficult to predict. Implicit threading encourages the programmer to divide the program into threads that are as small as possible, because doing so increases the flexibility of the scheduler in its duty to distribute work evenly across processors. The downside of such fine-grain parallelism is that if the total scheduling cost is too large, then parallelism is not worthwhile. This problem is the focus of this dissertation.
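A standard way to keep total scheduling cost bounded is a sequential cutoff below which annotations are dropped. The sketch below illustrates the trade-off this paragraph describes; the cutoff value of 20 is an arbitrary assumption for illustration, not a figure from the dissertation:

import Control.Parallel (par, pseq)

-- Hypothetical sequential cutoff (the value is an arbitrary choice for
-- illustration): below the threshold we stop sparking, so each spark
-- carries enough work to outweigh the cost of scheduling it.
cutoff :: Int
cutoff = 20

pfib :: Int -> Int
pfib n
  | n < cutoff = fib n                      -- too small: plain recursion
  | otherwise  = a `par` (b `pseq` (a + b)) -- big enough: spark a subtask
  where
    a = pfib (n - 1)
    b = pfib (n - 2)

fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (pfib 35)

Picking such a threshold by hand is exactly the kind of per-application tuning burden the techniques in this dissertation aim to remove.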
520    $a The starting point of this dissertation is work stealing, a scheduling policy well known for its scalable parallel performance, and the work-first principle, which serves as a guide for building efficient implementations of work stealing. In this dissertation, I present two techniques, Lazy Promotion and Lazy Tree Splitting, for implementing work stealing. Both techniques derive their efficiency from adhering to the work-first principle. Lazy Promotion is a strategy that improves the performance, in terms of execution time, of a work-stealing scheduler by reducing the amount of load the scheduler places on the garbage collector. Lazy Tree Splitting is a technique for automatically scheduling the execution of parallel operations over trees to yield scalable performance and eliminate the need for per-application tuning. I use Manticore, PML's compiler and runtime system, and a sixteen-core NUMA machine as a testbed for these techniques.
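As background for the two techniques: a work-stealing scheduler gives each processor a deque of tasks, and the work-first principle says to optimize the owner's push/pop path, which runs on every fork, at the expense of the rare steal path. The toy deque below sketches that discipline in plain Haskell over a single atomic IORef; it illustrates the policy only and is not Manticore's implementation (a real scheduler makes the owner path far cheaper than this):

import Data.IORef
import Data.Sequence (Seq, ViewL (..), ViewR (..), viewl, viewr, (|>))
import qualified Data.Sequence as Seq

-- A toy work-stealing deque. The owning worker pushes and pops at the
-- back; idle workers steal from the front, taking the oldest
-- (typically largest) task.
type Deque a = IORef (Seq a)

newDeque :: IO (Deque a)
newDeque = newIORef Seq.empty

-- Owner path: under the work-first principle these operations run on
-- every fork/join, so they must stay as cheap as possible.
pushBottom :: Deque a -> a -> IO ()
pushBottom dq x = atomicModifyIORef' dq (\s -> (s |> x, ()))

popBottom :: Deque a -> IO (Maybe a)
popBottom dq = atomicModifyIORef' dq $ \s ->
  case viewr s of
    EmptyR  -> (s, Nothing)
    s' :> x -> (s', Just x)

-- Steal path: rare, so it may pay the synchronization cost.
steal :: Deque a -> IO (Maybe a)
steal dq = atomicModifyIORef' dq $ \s ->
  case viewl s of
    EmptyL  -> (s, Nothing)
    x :< s' -> (s', Just x)

main :: IO ()
main = do
  dq <- newDeque
  mapM_ (pushBottom dq) [1 .. 5 :: Int]
  popBottom dq >>= print  -- Just 5: the owner takes the newest task
  steal dq     >>= print  -- Just 1: a thief takes the oldest task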
520    $a In addition, I present two empirical studies. In the first study, I evaluate Lazy Promotion over six PML benchmarks. The results demonstrate that Lazy Promotion either outperforms or performs the same as an alternative scheme based on Eager Promotion. This study also evaluates the design of the Manticore runtime system, in particular, the split-heap memory manager, by comparing the system to an alternative system based on a unified-heap memory manager, and showing that the unified version has limited scalability due to poor locality. In the second study, I evaluate Lazy Tree Splitting over seven PML benchmarks by comparing Lazy Tree Splitting to its alternative, Eager Tree Splitting. The results show that, although the two techniques offer similar scalability, only Lazy Tree Splitting is suitable for building an effective language implementation.
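For concreteness, the Eager Tree Splitting baseline in the second study can be pictured as a tree map that forks at every interior node whether or not any processor is idle, as sketched below (illustrative Haskell, not the PML code):

import Control.Parallel (par, pseq)

data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Eager tree splitting, sketched: spark at every interior node. With
-- small subtrees the per-node scheduling cost dominates, which is why
-- this baseline needs per-application grain-size tuning. (`par` only
-- forces the subtree to its root constructor here; a real
-- implementation would force the whole subtree.)
pmapEager :: (a -> b) -> Tree a -> Tree b
pmapEager f (Leaf x)   = Leaf (f x)
pmapEager f (Node l r) = l' `par` (r' `pseq` Node l' r')
  where
    l' = pmapEager f l
    r' = pmapEager f r

main :: IO ()
main = print (sumTree (pmapEager (* 2) (Node (Leaf 1) (Node (Leaf 2) (Leaf 3)))))
  where
    sumTree (Leaf x)   = x
    sumTree (Node l r) = sumTree l + sumTree r

A lazy scheme, by contrast, splits the remaining tree only when other processors are actually idle, which is what lets it avoid the tuning step.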
590    $a School code: 0330.
650  4 $a Engineering, Computer. $3 1669061
650  4 $a Computer Science. $3 626642
690    $a 0464
690    $a 0984
710 2  $a The University of Chicago. $b Computer Science. $3 1674744
773 0  $t Dissertation Abstracts International $g 71-10B.
790 10 $a Reppy, John H., $e advisor
790 10 $a Rogers, Anne $e committee member
790 10 $a Fluet, Matthew $e committee member
790    $a 0330
791    $a Ph.D.
792    $a 2010
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3419774
Items
Inventory Number: W9166304
Location Name: Electronic Resources (電子資源)
Item Class: 11. Online Reading_V
Material Type: E-book
Call Number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of Reservations: 0