Foreword
This manual documents version 1.4.1 of StarPU.
Its contents were last updated on 2023-05-24.
1.1 Organization
This part explains the advanced concepts of StarPU. It is intended for users whose applications need more than basic task submission.
- Tools to help debug applications are presented in Chapter Debugging Tools.
- Chapter Configuration and Initialization gives a brief overview of how to configure and tune StarPU (a short initialization sketch is given after this list).
- You can learn more about some important core concepts in StarPU:
- Other chapters cover some further usages of StarPU:
- If you need to store more data than the main memory (RAM) can hold, Chapter Out Of Core presents how to add a new memory node on a disk and how to use it (see the disk registration sketch after this list).
- StarPU integrates MPI transfers within task parallelism. Chapter MPI Support may be useful for users who need to run MPI processes in their applications (see the MPI sketch after this list).
- Chapter TCP/IP Support explains the TCP/IP master-slave mechanism, which can execute an application across many remote cores without requiring you to think about data distribution.
- Chapter Transactions shows how to cancel a sequence of already submitted tasks based on a just-in-time decision.
- StarPU provides some support for task failures, and even for failures of complete nodes, in Chapter Fault Tolerance.
- The usage of libstarpufft is described in Chapter FFT Support. Its design is very similar to both fftw and cufft, but this library provided by StarPU takes advantage of both CPUs and GPUs (a sketch of its fftw-like interface is given after this list).
- StarPU supports Field Programmable Gate Array (FPGA) applications exploiting DFE configurations; related usage can be found in Chapter Maxeler FPGA Support.
- If you want your applications to share entities such as Events, Contexts or Command Queues between several OpenCL implementations, Chapter SOCL OpenCL Extensions describes an OpenCL implementation based on StarPU which allows this.
- We propose a hierarchical task model in Chapter Hierarchical DAGS, which enables task subgraphs at runtime for a more dynamic task graph.
- You can find how to partition a machine into parallel workers in Chapter Creating Parallel Workers On A Machine.
- If you need StarPU to coexist with other parallel software elements without oversubscribing or undersubscribing computing cores, Chapter Interoperability Support explains how to dynamically manage the computing resources allocated to StarPU.
- You can learn how to define a StarPU task scheduling policy, either in a basic monolithic way or in a modular way, in Chapter How To Define A New Scheduling Policy (a minimal monolithic sketch is given after this list).
- Chapter SimGrid Support shows you how to simulate execution on an arbitrary platform.
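As an illustration of the configuration chapter, here is a minimal sketch of tuning StarPU at initialization time through the starpu_conf structure; the values chosen (4 CPU workers, the dmda scheduler) are arbitrary examples, not recommendations:

    #include <starpu.h>

    int main(void)
    {
        struct starpu_conf conf;

        /* Fill the configuration structure with default values,
         * then override a few fields before initializing StarPU. */
        starpu_conf_init(&conf);
        conf.ncpus = 4;                   /* use at most 4 CPU workers (arbitrary) */
        conf.sched_policy_name = "dmda";  /* select a built-in scheduler by name */

        if (starpu_init(&conf) != 0)
            return 1;

        /* ... register data and submit tasks here ... */

        starpu_shutdown();
        return 0;
    }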
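For the out-of-core chapter, registering a disk as an additional memory node is a single call. Below is a minimal sketch using the unistd (plain read/write) disk backend; the /tmp path and the 200 MB size are arbitrary, and in practice this is combined with a bound on RAM usage such as the STARPU_LIMIT_CPU_MEM environment variable:

    #include <starpu.h>

    int main(void)
    {
        if (starpu_init(NULL) != 0)
            return 1;

        /* Add a disk-backed memory node; StarPU can then evict data
         * to this node when main memory gets full. */
        int disk_node = starpu_disk_register(&starpu_disk_unistd_ops,
                                             (void *) "/tmp",
                                             200 * 1024 * 1024);
        if (disk_node < 0)
        {
            starpu_shutdown();
            return 1;
        }

        /* ... register data and submit tasks as usual ... */

        starpu_shutdown();
        return 0;
    }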
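For the MPI chapter, the key idea is that every rank registers the same logical piece of data, declares its owner, and submits the same task graph; starpu_mpi_task_insert() then infers the required MPI transfers. A sketch, with an arbitrary MPI tag (42) and vector size (16), where rank 0 owns the vector:

    #include <starpu.h>
    #include <starpu_mpi.h>

    static void scale_cpu(void *buffers[], void *arg)
    {
        (void) arg;
        float *x = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
        unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
        for (unsigned i = 0; i < n; i++)
            x[i] *= 2.0f;
    }

    static struct starpu_codelet cl =
    {
        .cpu_funcs = { scale_cpu },
        .nbuffers = 1,
        .modes = { STARPU_RW },
    };

    int main(int argc, char **argv)
    {
        float vec[16] = { 0 };
        starpu_data_handle_t handle;
        int rank;

        if (starpu_init(NULL) != 0)
            return 1;
        if (starpu_mpi_init(&argc, &argv, 1) != 0)
            return 1;
        starpu_mpi_comm_rank(MPI_COMM_WORLD, &rank);

        /* Only the owner registers a real buffer; the other ranks
         * register a placeholder for the same logical data. */
        if (rank == 0)
            starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
                                        (uintptr_t) vec, 16, sizeof(float));
        else
            starpu_vector_data_register(&handle, -1, 0, 16, sizeof(float));
        starpu_mpi_data_register(handle, 42, 0); /* tag 42, owned by rank 0 */

        /* Every rank submits the task; StarPU-MPI decides where it
         * runs and generates the needed communications. */
        starpu_mpi_task_insert(MPI_COMM_WORLD, &cl, STARPU_RW, handle, 0);

        starpu_task_wait_for_all();
        starpu_data_unregister(handle);
        starpu_mpi_shutdown();
        starpu_shutdown();
        return 0;
    }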
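For the FFT chapter, the interface of libstarpufft deliberately mimics fftw. The entry points used below (starpufft_plan_dft_1d, starpufft_execute, starpufft_malloc and the STARPUFFT_FORWARD constant) are assumed from that fftw-like design; check the chapter for the exact prototypes of your version:

    #include <complex.h>
    #include <starpu.h>
    #include <starpufft.h>

    int main(void)
    {
        int n = 1024;

        if (starpu_init(NULL) != 0)
            return 1;

        /* Buffers allocated through the library are usable by both
         * CPU and GPU workers. */
        starpufft_complex *in  = starpufft_malloc(n * sizeof(*in));
        starpufft_complex *out = starpufft_malloc(n * sizeof(*out));
        for (int i = 0; i < n; i++)
            in[i] = i;

        starpufft_plan plan = starpufft_plan_dft_1d(n, STARPUFFT_FORWARD, 0);
        starpufft_execute(plan, in, out); /* may run on CPUs and/or GPUs */
        starpufft_destroy_plan(plan);

        starpufft_free(in);
        starpufft_free(out);
        starpu_shutdown();
        return 0;
    }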
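Finally, for the scheduling chapter, a monolithic policy is a struct starpu_sched_policy filled with callbacks and selected through the sched_policy field of starpu_conf. The naive global FIFO below is an illustration only: a production policy must also take care of waking idle workers, of per-context state and of performance-model queries, as detailed in the chapter:

    #include <pthread.h>
    #include <starpu.h>

    /* One global FIFO protected by a single mutex (naive on purpose). */
    static struct starpu_task_list queue;
    static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void fifo_init(unsigned sched_ctx_id)
    {
        (void) sched_ctx_id;
        starpu_task_list_init(&queue);
    }

    static void fifo_deinit(unsigned sched_ctx_id)
    {
        (void) sched_ctx_id;
    }

    /* Called when a task becomes ready: store it. */
    static int fifo_push(struct starpu_task *task)
    {
        pthread_mutex_lock(&queue_mutex);
        starpu_task_list_push_back(&queue, task);
        pthread_mutex_unlock(&queue_mutex);
        return 0;
    }

    /* Called by idle workers: hand out the oldest task, if any. */
    static struct starpu_task *fifo_pop(unsigned sched_ctx_id)
    {
        (void) sched_ctx_id;
        struct starpu_task *task = NULL;
        pthread_mutex_lock(&queue_mutex);
        if (!starpu_task_list_empty(&queue))
            task = starpu_task_list_pop_front(&queue);
        pthread_mutex_unlock(&queue_mutex);
        return task;
    }

    static struct starpu_sched_policy fifo_policy =
    {
        .init_sched = fifo_init,
        .deinit_sched = fifo_deinit,
        .push_task = fifo_push,
        .pop_task = fifo_pop,
        .policy_name = "toy-fifo",
        .policy_description = "naive global FIFO (illustration only)",
    };

    int main(void)
    {
        struct starpu_conf conf;

        starpu_conf_init(&conf);
        conf.sched_policy = &fifo_policy; /* select the custom policy */
        if (starpu_init(&conf) != 0)
            return 1;

        /* ... submit tasks ... */

        starpu_shutdown();
        return 0;
    }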