Abstract
The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening execution strategy, comes at the price of potentially prohibitive space usage in the common case of computations with an excess of available parallelism, such as dense-matrix multiplication.
We present a simple nested data-parallel functional language and associated cost semantics that retains NESL's intuitive work-depth model for time complexity, but also allows highly parallel computations to be expressed in a space-efficient way, in the sense that memory usage on a single (or a few) processors is of the same order as for a sequential formulation of the algorithm, and in general scales smoothly with the actually realized degree of parallelism, not the potential parallelism.
The refined semantics is based on distinguishing formally between fully materialized (i.e., explicitly allocated in memory all at once) "vectors" and potentially ephemeral "sequences" of values, with the latter being bulk-processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level.
The language definition and implementation are still very much work in progress, but we do present some preliminary examples and timings, suggesting that the streaming model has practical potential.
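The paper's distinction between fully materialized "vectors" and ephemeral, streamable "sequences" can be loosely illustrated in ordinary Python (not the paper's language, which is an unnamed NESL-like research language): a list comprehension allocates all elements at once, while a generator expression produces them on demand, so a bulk reduction over it runs in constant space.

```python
import sys

n = 1_000_000

# "Vector": fully materialized -- all n squares are allocated in memory at once.
vec = [i * i for i in range(n)]

# "Sequence": ephemeral -- elements are produced on demand and consumed in a
# streaming fashion, so only O(1) of them are live at any time.
seq = (i * i for i in range(n))

# Both describe the same bulk computation and give the same result...
print(sum(vec) == sum(seq))

# ...but the streamed form has a tiny, size-independent memory footprint.
print(sys.getsizeof((i * i for i in range(n))) < sys.getsizeof(vec))
```

This mirrors the abstract's point only in spirit: in the paper the choice between the two representations is tracked formally in the cost semantics, so space usage can be reasoned about at the source level rather than observed after the fact.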
Original language | English |
---|---|
Title of host publication | FHPC '13: Proceedings of the 2nd ACM SIGPLAN Workshop on Functional High-Performance Computing |
Number of pages | 12 |
Publisher | Association for Computing Machinery |
Publication date | 2013 |
Pages | 13-24 |
ISBN (Electronic) | 978-1-4503-2381-9 |
DOIs | |
Publication status | Published - 2013 |
Event | 2nd ACM SIGPLAN Workshop on Functional High-Performance Computing, Boston, United States. Duration: 23 Sept 2013 → 23 Sept 2013. Conference number: 2 |
Conference
Conference | 2nd ACM SIGPLAN Workshop on Functional High-Performance Computing |
---|---|
Number | 2 |
Country/Territory | United States |
City | Boston |
Period | 23/09/2013 → 23/09/2013 |