StarPU Handbook - StarPU Installation
3. Execution Configuration Through Environment Variables

The behavior of the StarPU library and tools may be tuned thanks to the following environment variables. The function starpu_getenv() allows you to retrieve the value of an environment variable used by StarPU. The function starpu_get_env_string_var_default() allows you to retrieve the value of an environment variable used by StarPU as a string, or a default value if the environment variable is not set. The function starpu_get_env_size_default() allows you to retrieve the value of an environment variable used by StarPU as a size in bytes, or a default value if the environment variable is not set.
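
For instance, a program can query one of these variables itself. A minimal sketch (the variable chosen and the fallback handling are illustrative):

    #include <stdio.h>
    #include <starpu.h>

    int main(void)
    {
        /* starpu_getenv() returns the raw value of the variable,
           or NULL when it is not set. */
        char *sched = starpu_getenv("STARPU_SCHED");
        printf("STARPU_SCHED = %s\n", sched ? sched : "(unset)");
        return 0;
    }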

3.1 Configuring Workers

3.1.1 General Configuration

STARPU_WORKERS_NOBIND

Setting it to non-zero will prevent StarPU from binding its threads to CPUs. This is for instance useful when running the test suite in parallel.

STARPU_WORKERS_GETBIND

By default StarPU uses the OS-provided CPU binding to determine how many and which CPU cores it should use. This is notably useful when running several StarPU-MPI processes on the same host, to let the MPI launcher set the CPUs to be used. The default value is 1.

If that binding is erroneous (e.g. because the job scheduler binds to just one core of the allocated cores), you can set STARPU_WORKERS_GETBIND to 0 to make StarPU use all cores of the machine.

STARPU_WORKERS_CPUID

Passing an array of integers in STARPU_WORKERS_CPUID specifies on which logical CPU the different workers should be bound. For instance, if STARPU_WORKERS_CPUID = "0 1 4 5", the first worker will be bound to logical CPU #0, the second CPU worker will be bound to logical CPU #1 and so on. Note that the logical ordering of the CPUs is either determined by the OS, or provided by the library hwloc in case it is available. Ranges can be provided: for instance, STARPU_WORKERS_CPUID = "1-3 5" will bind the first three workers on logical CPUs #1, #2, and #3, and the fourth worker on logical CPU #5. Unbound ranges can also be provided: STARPU_WORKERS_CPUID = "1-" will bind the workers starting from logical CPU #1 up to last CPU.

Note that the first workers correspond to the CUDA workers, then come the OpenCL workers, and finally the CPU workers. For example, if we have STARPU_NCUDA=1, STARPU_NOPENCL=1, STARPU_NCPU=2 and STARPU_WORKERS_CPUID = "0 2 1 3", the CUDA device will be controlled by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and the logical CPUs #1 and #3 will be used by the CPU workers.

If the number of workers is larger than the array given in STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a round-robin fashion: if STARPU_WORKERS_CPUID = "0 1", the first and the third (resp. second and fourth) workers will be put on CPU #0 (resp. CPU #1).

This variable is ignored if the field starpu_conf::use_explicit_workers_bindid passed to starpu_init() is set.
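
A minimal sketch of that programmatic alternative, assuming four workers (the chosen CPU numbers are arbitrary):

    #include <starpu.h>

    int main(void)
    {
        struct starpu_conf conf;
        starpu_conf_init(&conf);
        /* Explicit binding from the application:
           STARPU_WORKERS_CPUID is then ignored. */
        conf.use_explicit_workers_bindid = 1;
        conf.workers_bindid[0] = 0; /* first worker on logical CPU #0 */
        conf.workers_bindid[1] = 2;
        conf.workers_bindid[2] = 4;
        conf.workers_bindid[3] = 6;
        if (starpu_init(&conf) != 0)
            return 1;
        starpu_shutdown();
        return 0;
    }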

Setting STARPU_WORKERS_CPUID or STARPU_WORKERS_COREID overrides the binding provided by the job scheduler, as described for STARPU_WORKERS_GETBIND.

STARPU_WORKERS_COREID

Same as STARPU_WORKERS_CPUID, but bind the workers to cores instead of PUs (hyperthreads).

STARPU_NTHREADS_PER_CORE

Specify how many threads StarPU should run on each core. The default is 1, because kernels are usually already optimized for using a full core. Setting this to e.g. 2 instead allows exploiting hyperthreading.

STARPU_MAIN_THREAD_BIND

When defined, this makes StarPU bind the thread that calls starpu_initialize() to a reserved CPU, subtracted from the CPU workers.

STARPU_MAIN_THREAD_CPUID

When defined, this makes StarPU bind the thread that calls starpu_initialize() to the given CPU ID (using logical numbering).

STARPU_MAIN_THREAD_COREID

Same as STARPU_MAIN_THREAD_CPUID, but bind the thread that calls starpu_initialize() to the given core (using logical numbering), instead of the PU (hyperthread).

STARPU_WORKER_TREE

Define to 1 to enable the tree iterator in schedulers.

STARPU_SINGLE_COMBINED_WORKER

If set, StarPU will create several workers which won't be able to work concurrently. It will by default create combined workers, whose size goes from 1 to the total number of CPU workers in the system. STARPU_MIN_WORKERSIZE and STARPU_MAX_WORKERSIZE can be used to change this default.

STARPU_MIN_WORKERSIZE

Specify the minimum size of the combined workers. Default value is 2.

STARPU_MAX_WORKERSIZE

Specify the maximum size of the combined workers. Default value is the number of CPU workers in the system.

STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER

Specify how many elements are allowed between combined workers created from hwloc information. For instance, in the case of sockets with 6 cores without shared L2 caches, if STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is set to 6, no combined worker will be synthesized beyond one for the socket and one per core. If it is set to 3, 3 intermediate combined workers will be synthesized, dividing the socket cores into 3 chunks of 2 cores. If it is set to 2, 2 intermediate combined workers will be synthesized, dividing the socket cores into 2 chunks of 3 cores, and then 3 additional combined workers will be synthesized, dividing the former synthesized workers into bunches of 2 cores plus the remaining core (for which no combined worker is synthesized, since there is already a normal worker for it).

The default, 2, thus makes StarPU tend to build binary trees of combined workers.

STARPU_DISABLE_ASYNCHRONOUS_COPY

Disable asynchronous copies between CPU and GPU devices. The AMD implementation of OpenCL is known to fail when copying data asynchronously. When using this implementation, it is therefore necessary to disable asynchronous data transfers. One can call starpu_asynchronous_copy_disabled() to check whether asynchronous data transfers between CPU and accelerators are disabled.

See also STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY and STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY.
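
A minimal sketch of the runtime check mentioned above:

    #include <stdio.h>
    #include <starpu.h>

    int main(void)
    {
        if (starpu_init(NULL) != 0)
            return 1;
        /* Non-zero when asynchronous CPU<->accelerator transfers were
           disabled, e.g. through STARPU_DISABLE_ASYNCHRONOUS_COPY. */
        if (starpu_asynchronous_copy_disabled())
            fprintf(stderr, "asynchronous copies are disabled\n");
        starpu_shutdown();
        return 0;
    }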

STARPU_EXPECTED_TRANSFER_TIME_WRITEBACK

Setting this to 1 makes task transfer time estimations artificially include the time that will be needed to write back data to the main memory.

STARPU_DISABLE_PINNING

Disable (1) or Enable (0) pinning host memory allocated through starpu_malloc(), starpu_memory_pin() and friends. The default is Enabled. This permits testing the performance effect of memory pinning.

STARPU_BACKOFF_MIN

Set the minimum exponential backoff, in number of cycles to pause, when spinning. Default value is 1.

STARPU_BACKOFF_MAX

Set the maximum exponential backoff, in number of cycles to pause, when spinning. Default value is 32.

STARPU_SINK

Defined internally by StarPU when running in master-slave mode.

STARPU_ENABLE_MAP

Disable (0) or Enable (1) support for memory mapping between memory nodes. The default is Disabled. One can call starpu_map_enabled() to check whether memory mapping support between memory nodes is enabled.

STARPU_DATA_LOCALITY_ENFORCE

Enable (1) or Disable (0) data locality enforcement when picking a worker to execute a task. The default is Disabled.

3.1.2 CPU Workers

STARPU_NCPU

Specify the number of CPU workers (thus not including workers dedicated to control accelerators). Note that by default, StarPU will not allocate more CPU workers than there are physical CPUs, and that some CPUs are used to control the accelerators.

STARPU_RESERVE_NCPU

Specify the number of CPU cores that should not be used by StarPU, so the application can use starpu_get_next_bindid() and starpu_bind_thread_on() to bind its own threads.

This option is ignored if STARPU_NCPU or starpu_conf::ncpus is set.
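
A minimal sketch of this mechanism, assuming STARPU_RESERVE_NCPU=1 was set in the environment (the flag arguments are left at 0 for simplicity):

    #include <starpu.h>

    int main(void)
    {
        if (starpu_init(NULL) != 0)
            return 1;
        /* Pick one of the CPUs reserved through STARPU_RESERVE_NCPU... */
        unsigned cpuid = starpu_get_next_bindid(0, NULL, 0);
        /* ...and bind the calling thread on it. */
        starpu_bind_thread_on(cpuid, 0, "application-thread");
        /* ... application work on the bound thread ... */
        starpu_shutdown();
        return 0;
    }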

STARPU_NCPUS

This variable is deprecated. You should use STARPU_NCPU.

3.1.3 CUDA Workers

STARPU_NCUDA

Specify the number of CUDA devices that StarPU can use. If STARPU_NCUDA is lower than the number of physical devices, it is possible to select which GPU devices should be used by the means of the environment variable STARPU_WORKERS_CUDAID. By default, StarPU will create as many CUDA workers as there are GPU devices.

STARPU_NWORKER_PER_CUDA

Specify the number of workers per CUDA device, and thus the number of kernels which will be concurrently running on the devices, i.e. the number of CUDA streams. The default value is 1.

STARPU_CUDA_THREAD_PER_WORKER

Specify whether the CUDA driver should use one thread per stream (1), or a single thread to drive all the streams of a device or of all devices (0); STARPU_CUDA_THREAD_PER_DEV then determines whether it is one thread per device or one thread for all devices. The default value is 0. Setting it to 1 is contradictory with setting STARPU_CUDA_THREAD_PER_DEV.

STARPU_CUDA_THREAD_PER_DEV

Specify whether the CUDA driver should use one thread per device (1), or a single thread to drive all the devices (0). The default value is 1. It does not make sense to set this variable if STARPU_CUDA_THREAD_PER_WORKER is set to 1 (STARPU_CUDA_THREAD_PER_DEV is then meaningless).

STARPU_CUDA_PIPELINE

Specify how many asynchronous tasks are submitted in advance on CUDA devices. This for instance permits overlapping task management with the execution of previous tasks, but it also allows concurrent execution on Fermi cards, which otherwise bring spurious synchronizations. The default is 2. Setting the value to 0 forces a synchronous execution of all tasks.
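
These variables can also be set by the program itself, as in this sketch, provided this happens before starpu_init() reads them (the chosen values are arbitrary):

    #include <stdlib.h>
    #include <starpu.h>

    int main(void)
    {
        setenv("STARPU_NWORKER_PER_CUDA", "2", 1); /* two streams per device */
        setenv("STARPU_CUDA_PIPELINE", "4", 1);    /* submit 4 tasks in advance */
        if (starpu_init(NULL) != 0)
            return 1;
        starpu_shutdown();
        return 0;
    }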

STARPU_WORKERS_CUDAID

Similarly to the STARPU_WORKERS_CPUID environment variable, it is possible to select which CUDA devices should be used by StarPU. On a machine equipped with 4 GPUs, setting STARPU_WORKERS_CUDAID = "1 3" and STARPU_NCUDA=2 specifies that 2 CUDA workers should be created, and that they should use CUDA devices #1 and #3 (the logical ordering of the devices is the one reported by CUDA).

This variable is ignored if the field starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init() is set.

STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY

Disable asynchronous copies between CPU and CUDA devices. One can call starpu_asynchronous_cuda_copy_disabled() to check whether asynchronous data transfers between CPU and CUDA accelerators are disabled.

See also STARPU_DISABLE_ASYNCHRONOUS_COPY and STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY.

STARPU_ENABLE_CUDA_GPU_GPU_DIRECT

Enable (1) or Disable (0) direct CUDA transfers from GPU to GPU, without copying through RAM. The default is Enabled. This permits testing the performance effect of GPU-Direct.

STARPU_CUDA_ONLY_FAST_ALLOC_OTHER_MEMNODES

Specify if CUDA workers should do only fast allocations when running the datawizard progress of other memory nodes. This will pass the internal value _STARPU_DATAWIZARD_ONLY_FAST_ALLOC to allocation methods. Default value is 0, allowing CUDA workers to do slow allocations.

This can also be specified with starpu_conf::cuda_only_fast_alloc_other_memnodes.

3.1.4 OpenCL Workers

STARPU_NOPENCL

Specify the number of OpenCL devices that StarPU can use. If STARPU_NOPENCL is lower than the number of physical devices, it is possible to select which GPU devices should be used by the means of the environment variable STARPU_WORKERS_OPENCLID. By default, StarPU will create as many OpenCL workers as there are GPU devices.

Note that by default StarPU will launch CUDA workers on GPU devices. You need to disable CUDA to allow the creation of OpenCL workers.

STARPU_WORKERS_OPENCLID

Similarly to the STARPU_WORKERS_CPUID environment variable, it is possible to select which GPU devices should be used by StarPU. On a machine equipped with 4 GPUs, setting STARPU_WORKERS_OPENCLID = "1 3" and STARPU_NOPENCL=2 specifies that 2 OpenCL workers should be created, and that they should use GPU devices #1 and #3.

This variable is ignored if the field starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init() is set.

STARPU_OPENCL_PIPELINE

Specify how many asynchronous tasks are submitted in advance on OpenCL devices. This for instance permits overlapping task management with the execution of previous tasks, but it also allows concurrent execution on Fermi cards, which otherwise bring spurious synchronizations. The default is 2. Setting the value to 0 forces a synchronous execution of all tasks.

STARPU_OPENCL_ON_CPUS

By default, the OpenCL driver only enables GPU and accelerator devices. By setting the environment variable STARPU_OPENCL_ON_CPUS to 1, the OpenCL driver will also enable CPU devices.

STARPU_OPENCL_ONLY_ON_CPUS

By default, the OpenCL driver enables GPU and accelerator devices. By setting the environment variable STARPU_OPENCL_ONLY_ON_CPUS to 1, the OpenCL driver will ONLY enable CPU devices.

STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY

Disable asynchronous copies between CPU and OpenCL devices. The AMD implementation of OpenCL is known to fail when copying data asynchronously. When using this implementation, it is therefore necessary to disable asynchronous data transfers. One can call starpu_asynchronous_opencl_copy_disabled() to check whether asynchronous data transfers between CPU and OpenCL accelerators are disabled.

See also STARPU_DISABLE_ASYNCHRONOUS_COPY and STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY.

3.1.5 Maxeler FPGA Workers

STARPU_NMAX_FPGA

Specify the number of Maxeler FPGA devices that StarPU can use. If STARPU_NMAX_FPGA is lower than the number of physical devices, it is possible to select which Maxeler FPGA devices should be used by the means of the environment variable STARPU_WORKERS_MAX_FPGAID. By default, StarPU will create as many Maxeler FPGA workers as there are Maxeler FPGA devices.

STARPU_WORKERS_MAX_FPGAID

Similarly to the STARPU_WORKERS_CPUID environment variable, it is possible to select which Maxeler FPGA devices should be used by StarPU. On a machine equipped with 4 Maxeler FPGAs, setting STARPU_WORKERS_MAX_FPGAID = "1 3" and STARPU_NMAX_FPGA=2 specifies that 2 Maxeler FPGA workers should be created, and that they should use Maxeler FPGA devices #1 and #3 (the logical ordering of the devices is the one reported by the Maxeler stack).

STARPU_DISABLE_ASYNCHRONOUS_MAX_FPGA_COPY

Disable asynchronous copies between CPU and Maxeler FPGA devices. One can call starpu_asynchronous_max_fpga_copy_disabled() to check whether asynchronous data transfers between CPU and Maxeler FPGA devices are disabled.

3.1.6 MPI Master Slave Workers

STARPU_NMPI_MS

Specify the number of MPI master slave devices that StarPU can use.

STARPU_NMPIMSTHREADS

Number of threads to use on the MPI Slave devices.

STARPU_MPI_MS_MULTIPLE_THREAD

Specify whether the master should use one thread per slave, or one thread to drive all slaves. The default is 0.

STARPU_MPI_MASTER_NODE

This variable allows choosing which MPI node (by its MPI ID) will be the master.

STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY

Disable asynchronous copies between CPU and MPI Slave devices. One can call starpu_asynchronous_mpi_ms_copy_disabled() to check whether asynchronous data transfers between CPU and MPI Slave devices are disabled.

3.1.7 TCP/IP Master Slave Workers

STARPU_NTCPIP_MS

Specify the number of TCP/IP master slave devices that StarPU can use.

STARPU_TCPIP_MS_SLAVES

Specify the number of TCP/IP master slave processes that are expected to be run. This should be provided both to the master and to the slaves.

STARPU_TCPIP_MS_MASTER

Specify (for slaves) the IP address of the master so they can connect to it. They will then automatically connect to each other.

STARPU_TCPIP_MS_PORT

Specify the port of the master, for connections between slaves and the master. The default if unspecified is 1234.

STARPU_NTCPIPMSTHREADS

Number of threads to use on the TCP/IP Slave devices.

STARPU_TCPIP_MS_MULTIPLE_THREAD

Specify whether the master should use one thread per slave, or one thread to drive all slaves. The default is 0.

STARPU_DISABLE_ASYNCHRONOUS_TCPIP_MS_COPY

Disable asynchronous copies between CPU and TCP/IP Slave devices. One can call starpu_asynchronous_tcpip_ms_copy_disabled() to check whether asynchronous data transfers between CPU and TCP/IP Slave devices are disabled.

3.1.8 HIP Workers

STARPU_NHIP

Specify the number of HIP devices that StarPU can use. If STARPU_NHIP is lower than the number of physical devices, it is possible to select which HIP devices should be used by the means of the environment variable STARPU_WORKERS_HIPID. By default, StarPU will create as many HIP workers as there are HIP devices.

STARPU_WORKERS_HIPID

Similarly to the STARPU_WORKERS_CPUID environment variable, it is possible to select which HIP devices should be used by StarPU. On a machine equipped with 4 HIP devices, setting STARPU_WORKERS_HIPID = "1 3" and STARPU_NHIP=2 specifies that 2 HIP workers should be created, and that they should use HIP devices #1 and #3.

This variable is ignored if the field starpu_conf::use_explicit_workers_hip_gpuid passed to starpu_init() is set.

STARPU_DISABLE_ASYNCHRONOUS_HIP_COPY

Disable asynchronous copies between CPU and HIP devices. One can call starpu_asynchronous_hip_copy_disabled() to check whether asynchronous data transfers between CPU and HIP accelerators are disabled.

3.1.9 MPI Configuration

STARPU_MPI_THREAD_CPUID

When defined, this makes StarPU bind its MPI thread to the given CPU ID, subtracted from the CPU workers (unless STARPU_NCPU is defined).

Setting it to -1 (the default value) will let StarPU allocate a CPU.

STARPU_MPI_THREAD_COREID

Same as STARPU_MPI_THREAD_CPUID, but bind the MPI thread to the given core ID, instead of the PU (hyperthread).

STARPU_MPI_NOBIND

Setting it to non-zero will prevent StarPU from binding the MPI thread to a separate core. This is for instance useful when running the test suite on a single system.

STARPU_MPI_GPUDIRECT

Enable (1) or disable (0) MPI GPUDirect support. The default (-1) is to enable it if available. If STARPU_MPI_GPUDIRECT is explicitly set to 1, StarPU-MPI will warn if MPI does not provide GPUDirect support.

STARPU_MPI_REDUX_ARITY_THRESHOLD

The arity of the automatically-detected reduction trees follows this rule: when the data to be reduced is small, a flat tree is unrolled, i.e. all the contributing nodes send their contribution directly to the root of the reduction; when the data is large, a binary tree is used instead. The default threshold between flat and binary trees is 1024 bytes: for instance, a 512-byte piece of data is reduced with a flat tree, while a 4096-byte one is reduced with a binary tree. Setting this environment variable to a negative value makes all the automatically detected reduction trees flat; setting it to 0 makes binary trees always be selected. Any other value replaces the default 1024.

3.2 Configuring The Scheduling Engine

STARPU_SCHED

Choose between the different scheduling policies proposed by StarPU: random, work stealing, greedy, with performance models, etc.

Use STARPU_SCHED=help to get the list of available schedulers.

STARPU_SCHED_LIB

Specify the location of a dynamic library providing a user-defined scheduling policy. See UsingaNewSchedulingPolicy for more information.

STARPU_MIN_PRIO

Set the minimum priority used by priority-aware schedulers. The flag can also be set through the field starpu_conf::global_sched_ctx_min_priority.

STARPU_MAX_PRIO

Set the maximum priority used by priority-aware schedulers. The flag can also be set through the field starpu_conf::global_sched_ctx_max_priority.
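
A sketch of the starpu_conf equivalent of STARPU_SCHED, STARPU_MIN_PRIO and STARPU_MAX_PRIO (the policy name and priority bounds are arbitrary examples):

    #include <starpu.h>

    int main(void)
    {
        struct starpu_conf conf;
        starpu_conf_init(&conf);
        conf.sched_policy_name = "prio";         /* same role as STARPU_SCHED */
        conf.global_sched_ctx_min_priority = -5; /* same role as STARPU_MIN_PRIO */
        conf.global_sched_ctx_max_priority = 5;  /* same role as STARPU_MAX_PRIO */
        if (starpu_init(&conf) != 0)
            return 1;
        starpu_shutdown();
        return 0;
    }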

STARPU_CALIBRATE

If this variable is set to 1, the performance models are calibrated during the execution. If it is set to 2, the previous values are dropped to restart calibration from scratch. Setting this variable to 0 disables calibration; this is the default behaviour.

Note: this currently only applies to dm and dmda scheduling policies.

STARPU_CALIBRATE_MINIMUM

Define the minimum number of calibration measurements that will be made before considering that the performance model is calibrated. The default value is 10.

STARPU_BUS_CALIBRATE

If this variable is set to 1, the bus is recalibrated during initialization.

STARPU_PREFETCH

Indicate whether data prefetching should be enabled (0 means that it is disabled). If prefetching is enabled, when a task is scheduled to be executed e.g. on a GPU, StarPU will request an asynchronous transfer in advance, so that data is already present on the GPU when the task starts. As a result, computation and data transfers are overlapped. Note that prefetching is enabled by default in StarPU.

STARPU_SCHED_ALPHA

To estimate the cost of a task StarPU takes into account the estimated computation time (obtained thanks to performance models). The alpha factor is the coefficient to be applied to it before adding it to the communication part.

STARPU_SCHED_BETA

To estimate the cost of a task StarPU takes into account the estimated data transfer time (obtained thanks to performance models). The beta factor is the coefficient to be applied to it before adding it to the computation part.

STARPU_SCHED_GAMMA

Define the execution time penalty of a joule (Energy-basedScheduling).

STARPU_SCHED_READY

For a modular scheduler with sorted queues below the decision component, workers pick up a task which has most of its data already available. Setting this to 0 disables this.

STARPU_SCHED_SORTED_ABOVE

For a modular scheduler with queues above the decision component, these are usually sorted by priority. Setting this to 0 disables this.

STARPU_SCHED_SORTED_BELOW

For a modular scheduler with queues below the decision component, they are usually sorted by priority. Setting this to 0 disables this.

STARPU_IDLE_POWER

Define the idle power of the machine (Energy-basedScheduling).

STARPU_PROFILING

Enable on-line performance monitoring (EnablingOn-linePerformanceMonitoring).

STARPU_CODELET_PROFILING

Enable on-line performance monitoring of codelets (Per-codeletFeedback). (enabled by default)

STARPU_PROF_PAPI_EVENTS

Specify which PAPI events should be recorded in the trace (PapiCounters).

3.3 Configuring The Heteroprio Scheduler

3.3.1 Configuring LAHeteroprio

STARPU_HETEROPRIO_USE_LA

Enable the locality aware mode of Heteroprio which guides the distribution of tasks to workers in order to reduce the data transfers between memory nodes.

STARPU_LAHETEROPRIO_PUSH

Choose between the different push strategies for locality-aware Heteroprio: WORKER, LcS, LS_SDH, LS_SDH2, LS_SDHB, LC_SMWB, AUTO (by default: AUTO). These are detailed in LAHeteroprio.

STARPU_LAHETEROPRIO_S_[ARCH]

Specify the number of memory nodes contained in an affinity group. An affinity group is composed of the memory nodes closest to a worker of a given architecture, and this worker will look for tasks available in these memory nodes before considering stealing tasks outside this group. ARCH can be CPU, CUDA, OPENCL, MICC, SCC, MPI_MS, etc.

STARPU_LAHETEROPRIO_PRIO_STEP_[ARCH]

Specify the number of buckets in the local memory node in which a worker will look for available tasks before it starts looking for tasks in other memory nodes' buckets. ARCH indicates that this number is specific to a given architecture, which can be: CPU, CUDA, OPENCL, MICC, SCC, MPI_MS, etc.

3.3.2 Configuring AutoHeteroprio

STARPU_HETEROPRIO_USE_AUTO_CALIBRATION

Enable the auto-calibration mode of Heteroprio, which assigns priorities to tasks automatically.

STARPU_HETEROPRIO_DATA_DIR

Specify the path of the directory where Heteroprio stores data about program executions. By default, these are stored in the same directory used by perfmodel.

STARPU_HETEROPRIO_DATA_FILE

Specify the filename where Heteroprio will save data about the current program's execution.

STARPU_HETEROPRIO_CODELET_GROUPING_STRATEGY

Choose how Heteroprio groups similar tasks. Set it to 0 to group tasks with the same perfmodel, or the same codelet name if no perfmodel was assigned; set it to 1 to group tasks by codelet name only.

STARPU_AUTOHETEROPRIO_PRINT_DATA_ON_UPDATE

Enable the printing of priorities' data every time they get updated.

STARPU_AUTOHETEROPRIO_PRINT_AFTER_ORDERING

Enable the printing of priorities' order for each architecture every time there's a reordering.

STARPU_AUTOHETEROPRIO_PRIORITY_ORDERING_POLICY

Specify the heuristic which will be used to assign priorities automatically. It should be an integer between 0 and 27.

STARPU_AUTOHETEROPRIO_ORDERING_INTERVAL

Specify the period (in number of tasks pushed) between priority reordering operations.

STARPU_AUTOHETEROPRIO_FREEZE_GATHERING

Disable data gathering from task executions.

3.4 Extensions

SOCL_OCL_LIB_OPENCL

The SOCL test suite is only run when the environment variable SOCL_OCL_LIB_OPENCL is defined. It should contain the location of the file libOpenCL.so of the OCL ICD implementation.

OCL_ICD_VENDORS

When using SOCL with OpenCL ICD (https://forge.imag.fr/projects/ocl-icd/), this variable may be used to point to the directory where ICD files are installed. The default directory is /etc/OpenCL/vendors. StarPU installs ICD files in the directory $prefix/share/starpu/opencl/vendors.

STARPU_COMM_STATS

This variable is deprecated. You should use STARPU_MPI_STATS.

STARPU_MPI_STATS

Communication statistics for starpumpi (MPIDebug) will be enabled when the environment variable STARPU_MPI_STATS is defined to a value other than 0.

STARPU_MPI_CACHE

Communication cache for starpumpi (MPISupport) will be disabled when the environment variable STARPU_MPI_CACHE is set to 0. It is enabled by default, and for any other value of the variable.

STARPU_MPI_COMM

Communication trace for starpumpi (MPISupport) will be enabled when the environment variable STARPU_MPI_COMM is set to 1, and StarPU has been configured with the option --enable-verbose.

STARPU_MPI_CACHE_STATS

When set to 1, statistics are enabled for the communication cache (MPISupport). For now, it prints messages on the standard output when data are added or removed from the received communication cache.

STARPU_MPI_PRIORITIES

When set to 0, the use of priorities to order MPI communications is disabled (MPISupport).

STARPU_MPI_NDETACHED_SEND

This sets the number of send requests that StarPU-MPI will emit concurrently. The default is 10. Setting it to 0 removes the limit of concurrent send requests.

STARPU_MPI_NREADY_PROCESS

This sets the number of requests that StarPU-MPI will submit to MPI before polling for termination of existing requests. The default is 10. Setting it to 0 removes the limit: all requests to submit to MPI will be submitted before polling for termination of existing ones.

STARPU_MPI_FAKE_SIZE

Setting it to a number makes StarPU believe that there are that many MPI nodes, even if it was run on only one MPI node. This allows e.g. simulating the execution of one of the nodes of a big cluster without actually running the rest. It of course does not provide computation results or timings.

STARPU_MPI_FAKE_RANK

Setting it to a number makes StarPU believe that it is running on the given MPI node, even if it was run on only one MPI node. This allows e.g. simulating the execution of one of the nodes of a big cluster without actually running the rest. It of course does not provide computation results or timings.

STARPU_MPI_COOP_SENDS

Setting it to 0 disables dynamic collective operations, i.e. grouping identical requests to different nodes until the data becomes available, and then using a broadcast tree to execute them.
For now, this is only supported with the NewMadeleine library (see Nmad).

STARPU_MPI_RECV_WAIT_FINALIZE

Setting it to 1 disables releasing the write acquisition of receiving handles when data is received but the communication library still needs the data. It is set to 0 by default, to unlock as soon as possible tasks which only require read access on the handle; write access becomes possible for tasks once the communication library no longer needs the data.
For now, this is only supported with the NewMadeleine library (see Nmad).

STARPU_MPI_TRACE_SYNC_CLOCKS

When mpi_sync_clocks is available, this library will be used to have more precise clock synchronization in traces coming from different nodes. However, the clock synchronization process can take some time (several seconds) and can be disabled by setting this variable to 0. In that case, a less precise but faster synchronization will be used. See TraceMpi for more details.

STARPU_MPI_DRIVER_CALL_FREQUENCY

When set to a positive value, activates the interleaving of the execution of tasks with the progression of MPI communications (MPISupport). The starpu_mpi_init_conf() function must have been called by the application for that environment variable to be used. When set to 0, the MPI progression thread does not use the driver given by users at all, and only focuses on making MPI communications progress.

STARPU_MPI_DRIVER_TASK_FREQUENCY

When set to a positive value, the mechanism interleaving the execution of tasks with the progression of MPI communications will execute this many tasks before checking communication requests again (MPISupport). The starpu_mpi_init_conf() function must have been called by the application for this environment variable to be used, and the STARPU_MPI_DRIVER_CALL_FREQUENCY environment variable must be set to a positive value.

STARPU_MPI_MEM_THROTTLE

When set to a positive value, this makes the starpu_mpi_*recv* functions block when the memory allocation required for network reception overflows the available main memory (as typically set by STARPU_LIMIT_CPU_MEM).

STARPU_MPI_EARLYDATA_ALLOCATE

When set to 1, the MPI driver will immediately allocate the data for early requests instead of issuing a data request and blocking. The default value is 0, issuing a data request. Because it is an early request and we do not know its real priority, the data request will assume STARPU_DEFAULT_PRIO. In cases where there are many data requests with priorities greater than STARPU_DEFAULT_PRIO, the MPI driver could be blocked for long periods.

STARPU_SIMGRID

When set to 1 (the default is 0), this makes StarPU check that it was really built with simulation support. This is convenient in scripts, to avoid using a native version that would try to update performance models.

STARPU_SIMGRID_TRANSFER_COST

When set to 1 (which is the default), data transfers (over PCI bus, typically) are taken into account in SimGrid mode.

STARPU_SIMGRID_CUDA_MALLOC_COST

When set to 1 (which is the default), CUDA malloc costs are taken into account in SimGrid mode.

STARPU_SIMGRID_CUDA_QUEUE_COST

When set to 1 (which is the default), CUDA task and transfer queueing costs are taken into account in SimGrid mode.

STARPU_PCI_FLAT

When unset or set to 0, the platform file created for SimGrid will contain PCI bandwidths and routes.

STARPU_MALLOC_SIMULATION_FOLD

Define the size of the file used for folding virtual allocation, in MiB. The default is 1, thus allowing 64GiB virtual memory when Linux's sysctl vm.max_map_count value is the default 65535.

STARPU_SIMGRID_TASK_SUBMIT_COST

When set to 1 (which is the default), task submission costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially for the beginning of the execution.

STARPU_SIMGRID_TASK_PUSH_COST

When set to 1 (which is the default), task push costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially with large dependency arities.

STARPU_SIMGRID_FETCHING_INPUT_COST

When set to 1 (which is the default), fetching input costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially regarding data transfers.

STARPU_SIMGRID_SCHED_COST

When set to 1 (0 is the default), scheduling costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, and allows studying scheduling overhead of the runtime system. However, it also makes simulation non-deterministic.

STARPUPY_MULTI_INTERPRETER

When set to 1 (the default is 0), multiple interpreters are enabled in the StarPU Python interface (MultipleInterpreters).

3.5 Miscellaneous And Debug

STARPU_HOME

Specify the main directory in which StarPU stores its configuration files. The default is $HOME on Unix environments, and $USERPROFILE on Windows environments.

STARPU_PATH

Only used on Windows environments. Specify the main directory in which StarPU is installed (RunningABasicStarPUApplicationOnMicrosoft)

STARPU_PERF_MODEL_DIR

Specify the main directory in which StarPU stores its performance model files. The default is $STARPU_HOME/.starpu/sampling.

STARPU_PERF_MODEL_PATH

Specify a list of directories separated with ':' in which StarPU stores its performance model files.

STARPU_PERF_MODEL_HOMOGENEOUS_CPU

When set to 0, StarPU will assume that CPU devices do not have the same performance, and thus use a different performance model for each of them, making kernel calibration much longer, since measurements have to be made for each CPU core.

STARPU_PERF_MODEL_HOMOGENEOUS_CUDA

When set to 1, StarPU will assume that all CUDA devices have the same performance, and thus share performance models between them, allowing kernel calibration to be much faster, since measurements only have to be made once for all CUDA GPUs.

STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL

When set to 1, StarPU will assume that all OpenCL devices have the same performance, and thus share performance models between them, allowing kernel calibration to be much faster, since measurements only have to be made once for all OpenCL GPUs.

STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS

When set to 1, StarPU will assume that all MPI Slave devices have the same performance, and thus share performance models between them, allowing kernel calibration to be much faster, since measurements only have to be made once for all MPI Slaves.

STARPU_HOSTNAME

When set, force the hostname to be used when dealing with performance model files. Models are indexed by machine name. When running for example on a homogeneous cluster, it is possible to share the models between machines by setting export STARPU_HOSTNAME=some_global_name.

STARPU_MPI_HOSTNAMES

Similar to STARPU_HOSTNAME, but to define multiple nodes on a heterogeneous cluster. The variable is a list of hostnames that will be assigned to each StarPU-MPI rank considering their position and the value of starpu_mpi_world_rank on each rank. When running, for example, on a heterogeneous cluster, it is possible to set individual models for each machine by setting export STARPU_MPI_HOSTNAMES="name0 name1 name2", where rank 0 will receive name0, rank 1 will receive name1, and so on. This variable has precedence over STARPU_HOSTNAME.

STARPU_OPENCL_PROGRAM_DIR

Specify the directory where the OpenCL codelet source files are located. The function starpu_opencl_load_program_source() looks for the codelet in the current directory, in the directory specified by the environment variable STARPU_OPENCL_PROGRAM_DIR, in the directory share/starpu/opencl of the installation directory of StarPU, and finally in the source directory of StarPU.
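
A sketch of loading and building such a source file through that search path ("vector_scal.cl" is a hypothetical file name):

    #include <starpu.h>
    #include <starpu_opencl.h>

    static struct starpu_opencl_program programs;

    void load_kernels(void)
    {
        /* The file is looked up in the current directory,
           $STARPU_OPENCL_PROGRAM_DIR, and the StarPU directories. */
        int ret = starpu_opencl_load_opencl_from_file("vector_scal.cl",
                                                      &programs, NULL);
        STARPU_CHECK_RETURN_VALUE(ret, "starpu_opencl_load_opencl_from_file");
    }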

STARPU_SILENT

Allow disabling verbose mode at runtime when StarPU has been configured with the option --enable-verbose. This also disables the display of StarPU information and warning messages.

STARPU_MPI_DEBUG_LEVEL_MIN

Set the minimum level of debug when StarPU has been configured with the option --enable-mpi-verbose.

STARPU_MPI_DEBUG_LEVEL_MAX

Set the maximum level of debug when StarPU has been configured with the option --enable-mpi-verbose.

STARPU_LOGFILENAME

Specify in which file the debugging output should be saved to.

STARPU_FXT_PREFIX

Specify in which directory to save the generated trace if FxT is enabled.

STARPU_FXT_SUFFIX

Specify in which file to save the generated trace if FxT is enabled.

STARPU_FXT_TRACE

Specify whether to generate (1) or not (0) the FxT trace in /tmp/prof_file_XXX_YYY (the directory and file name can be changed with STARPU_FXT_PREFIX and STARPU_FXT_SUFFIX). The default is 0 (do not generate it).

STARPU_FXT_EVENTS

Specify which events will be recorded in traces. By default, all events (but VERBOSE_EXTRA ones) are recorded. One can set this variable to a comma- or pipe-separated list of the following categories, to record only events belonging to the selected categories:

  • USER
  • TASK
  • TASK_VERBOSE
  • TASK_VERBOSE_EXTRA
  • DATA
  • DATA_VERBOSE
  • WORKER
  • WORKER_VERBOSE
  • DSM
  • DSM_VERBOSE
  • SCHED
  • SCHED_VERBOSE
  • LOCK
  • LOCK_VERBOSE
  • EVENT
  • EVENT_VERBOSE
  • MPI
  • MPI_VERBOSE
  • MPI_VERBOSE_EXTRA
  • HYP
  • HYP_VERBOSE

The choice of which categories have to be recorded is a tradeoff between the information required for offline analysis and the acceptable overhead introduced by tracing. For instance, to inspect with ViTE which tasks workers execute, one has to at least select the TASK category.

Events in VERBOSE_EXTRA categories are very costly to record and can have an important impact on application performance. This is why they are disabled by default, and one has to explicitly select their categories using this variable to record them.
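
A sketch enabling tracing from the program and restricting it to a few categories (the category list is an arbitrary example; this assumes StarPU was built with FxT support):

    #include <stdlib.h>
    #include <starpu.h>

    int main(void)
    {
        setenv("STARPU_FXT_TRACE", "1", 1);                 /* generate the trace */
        setenv("STARPU_FXT_EVENTS", "TASK|DATA|WORKER", 1); /* selected categories */
        if (starpu_init(NULL) != 0)
            return 1;
        /* ... submit tasks; the trace file is written at shutdown ... */
        starpu_shutdown();
        return 0;
    }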

STARPU_LIMIT_CUDA_devid_MEM

Specify the maximum number of megabytes that should be available to the application on the CUDA device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable STARPU_LIMIT_CUDA_MEM.

STARPU_LIMIT_CUDA_MEM

Specify the maximum number of megabytes that should be available to the application on each CUDA device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.

STARPU_LIMIT_OPENCL_devid_MEM

Specify the maximum number of megabytes that should be available to the application on the OpenCL device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable STARPU_LIMIT_OPENCL_MEM.

STARPU_LIMIT_OPENCL_MEM

Specify the maximum number of megabytes that should be available to the application on each OpenCL device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.

STARPU_LIMIT_HIP_devid_MEM

Specify the maximum number of megabytes that should be available to the application on the HIP device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable STARPU_LIMIT_HIP_MEM.

STARPU_LIMIT_HIP_MEM

Specify the maximum number of megabytes that should be available to the application on each HIP device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.

STARPU_LIMIT_CPU_MEM

Specify the maximum number of megabytes that should be available to the application in the main CPU memory. Setting it enables allocation cache in main memory. Setting it to zero lets StarPU overflow memory.

Note: for now, not all StarPU allocations get throttled by this parameter. Notably, MPI receptions are not throttled unless STARPU_MPI_MEM_THROTTLE is set to 1.
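
A minimal sketch: the limit is set before initialization and then observed through the memory query functions (the 1024 MB figure is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <starpu.h>

    int main(void)
    {
        /* Pretend the machine only has 1024 MB of main memory. */
        setenv("STARPU_LIMIT_CPU_MEM", "1024", 1);
        if (starpu_init(NULL) != 0)
            return 1;
        printf("total: %ld bytes, available: %ld bytes\n",
               (long) starpu_memory_get_total(STARPU_MAIN_RAM),
               (long) starpu_memory_get_available(STARPU_MAIN_RAM));
        starpu_shutdown();
        return 0;
    }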

STARPU_LIMIT_CPU_NUMA_devid_MEM

Specify the maximum number of megabytes that should be available to the application on the NUMA node with the OS identifier devid. Setting it overrides the value of STARPU_LIMIT_CPU_MEM.

STARPU_LIMIT_CPU_NUMA_MEM

Specify the maximum number of megabytes that should be available to the application on each NUMA node. This is the same as specifying that same amount with STARPU_LIMIT_CPU_NUMA_devid_MEM for each NUMA node number. The total memory available to StarPU will thus be this amount multiplied by the number of NUMA nodes used by StarPU. Any STARPU_LIMIT_CPU_NUMA_devid_MEM additionally specified will take over STARPU_LIMIT_CPU_NUMA_MEM.

STARPU_LIMIT_BANDWIDTH

Specify the maximum available PCI bandwidth of the system in MB/s. This can only be effective with SimGrid simulation. This allows easily overriding the bandwidths stored in the platform file generated from measurements on the native system. This can be used e.g. for convenient scalability studies.

STARPU_SUBALLOCATOR

Specify whether to enable (1) or disable (0) the StarPU suballocator. The default is to enable it, to amortize the cost of GPU and pinned RAM allocations for small allocations: StarPU allocates large chunks of memory at a time, and suballocates the small buffers within them.

STARPU_MINIMUM_AVAILABLE_MEM

Specify the minimum percentage of memory that should be available in GPUs, i.e. not used at all by StarPU (or in main memory, when using out of core), below which an eviction pass is performed. The default is 0%.

STARPU_TARGET_AVAILABLE_MEM

Specify the target percentage of memory that should be available in GPUs, i.e. not used at all by StarPU (or in main memory, when using out of core), when performing a periodic eviction pass. The default is 0%.

STARPU_MINIMUM_CLEAN_BUFFERS

Specify the minimum percentage of buffers that should be clean in GPUs (or in main memory, when using out of core), i.e. used by StarPU, but for which a copy is available in main memory (or on disk, when using out of core), below which asynchronous writebacks will be issued. The default is 5%.

STARPU_TARGET_CLEAN_BUFFERS

Specify the target percentage of clean buffers to be reached in GPUs (or in main memory, when using out of core), i.e. buffers used by StarPU but for which a copy is available in main memory (or on disk, when using out of core), when performing an asynchronous writeback pass. The default is 10%.

STARPU_DISK_SWAP

Specify a path where StarPU can push data when the main memory is getting full.

STARPU_DISK_SWAP_BACKEND

Specify the backend to be used by StarPU to push data when the main memory is getting full. The default is unistd (i.e. using read/write functions), other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using read/write with O_DIRECT), leveldb (i.e. using a leveldb database), and hdf5 (i.e. using HDF5 library).

STARPU_DISK_SWAP_SIZE

Specify the maximum size in MiB to be used by StarPU to push data when the main memory is getting full. The default is unlimited.

STARPU_LIMIT_MAX_SUBMITTED_TASKS

Allow users to control the task submission flow by specifying to StarPU a maximum number of submitted tasks allowed at a given time, i.e. when this limit is reached, task submission becomes blocking until enough tasks have completed, as specified by STARPU_LIMIT_MIN_SUBMITTED_TASKS. Setting it enables allocation cache buffer reuse in main memory. See HowToReduceTheMemoryFootprintOfInternalDataStructures.

STARPU_LIMIT_MIN_SUBMITTED_TASKS

Allow users to control the task submission flow by specifying to StarPU a submitted task threshold to wait before unblocking task submission. This variable has to be used in conjunction with STARPU_LIMIT_MAX_SUBMITTED_TASKS which puts the task submission thread to sleep. Setting it enables allocation cache buffer reuse in main memory. See HowToReduceTheMemoryFootprintOfInternalDataStructures.
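
A sketch combining both variables (the thresholds are arbitrary):

    #include <stdlib.h>
    #include <starpu.h>

    int main(void)
    {
        /* Submission blocks at 1000 pending tasks and resumes at 100. */
        setenv("STARPU_LIMIT_MAX_SUBMITTED_TASKS", "1000", 1);
        setenv("STARPU_LIMIT_MIN_SUBMITTED_TASKS", "100", 1);
        if (starpu_init(NULL) != 0)
            return 1;
        /* ... starpu_task_submit() now throttles the submission flow ... */
        starpu_task_wait_for_all();
        starpu_shutdown();
        return 0;
    }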

STARPU_TRACE_BUFFER_SIZE

Set the buffer size for recording trace events, in MiB. Setting it to a large size helps avoid pauses in the trace while it is recorded to disk. This however also consumes memory, of course. The default value is 64.

STARPU_GENERATE_TRACE

When set to 1, indicate that StarPU should automatically generate a Paje trace when starpu_shutdown() is called.

STARPU_GENERATE_TRACE_OPTIONS

When the variable STARPU_GENERATE_TRACE is set to 1 to generate a Paje trace, this variable can be set to specify options (see starpu_fxt_tool --help).

STARPU_ENABLE_STATS

When defined, enable gathering various data statistics (DataStatistics).

STARPU_MEMORY_STATS

When set to 0, disable the display of memory statistics on data which have not been unregistered at the end of the execution (MemoryFeedback).

STARPU_MAX_MEMORY_USE

When set to 1, display at the end of the execution the maximum memory used by StarPU for internal data structures during execution.

STARPU_BUS_STATS

When defined, statistics about data transfers will be displayed when calling starpu_shutdown() (Profiling). By default, statistics are printed on the standard error stream, use the environment variable STARPU_BUS_STATS_FILE to define another filename.

STARPU_BUS_STATS_FILE

Define the name of the file where to display data transfers statistics, see STARPU_BUS_STATS.

STARPU_WORKER_STATS

When defined, statistics about the workers will be displayed when calling starpu_shutdown() (Profiling). When combined with the environment variable STARPU_PROFILING, it displays the energy consumption (Energy-basedScheduling). By default, statistics are printed on the standard error stream, use the environment variable STARPU_WORKER_STATS_FILE to define another filename.

STARPU_WORKER_STATS_FILE

Define the name of the file where to display workers statistics, see STARPU_WORKER_STATS.

STARPU_STATS

When set to 0, data statistics will not be displayed at the end of the execution of an application (DataStatistics).

STARPU_WATCHDOG_TIMEOUT

When set to a value other than 0, makes StarPU print an error message whenever it has not terminated any task for the given time (in µs), while letting the application continue normally. Should be used in combination with STARPU_WATCHDOG_CRASH (see DetectionStuckConditions).

STARPU_WATCHDOG_CRASH

When set to a value other than 0, trigger a crash when the watchdog timeout is reached, so that the situation can be caught in gdb, etc. (see DetectionStuckConditions)

STARPU_WATCHDOG_DELAY

Delay the activation of the watchdog by the given time (in µs). This can be convenient for letting the application initialize data etc. before starting to look for idle time.

STARPU_TASK_PROGRESS

Print the progression of tasks. This is convenient to determine whether a program is making progress in task execution, or is just stuck.

STARPU_TASK_BREAK_ON_PUSH

When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being pushed to the scheduler, which will be nicely caught by debuggers (see DebuggingScheduling)

STARPU_TASK_BREAK_ON_SCHED

When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being scheduled by the scheduler (at a scheduler-specific point), which will be nicely caught by debuggers. This only works for schedulers which have such a scheduling point defined (see DebuggingScheduling)

STARPU_TASK_BREAK_ON_POP

When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being popped from the scheduler, which will be nicely caught by debuggers (see DebuggingScheduling)

STARPU_TASK_BREAK_ON_EXEC

When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being executed, which will be nicely caught by debuggers (see DebuggingScheduling)

STARPU_DISABLE_KERNELS

When set to a value other than 1, it disables actually calling the kernel functions, allowing you to quickly check that the task scheme is working properly, without performing the actual application-provided computation.

STARPU_HISTORY_MAX_ERROR

History-based performance models will drop measurements which are really far from the measured average. This specifies the allowed variation. The default is 50 (%), i.e. a measurement is allowed to be 1.5 times faster or 1.5 times slower than the average.

STARPU_RAND_SEED

The random scheduler and some examples use random numbers for their own working. Depending on the examples, the seed is by default just always 0 or the current time() (unless SimGrid mode is enabled, in which case it is always 0). STARPU_RAND_SEED allows setting the seed to a specific value.

STARPU_GLOBAL_ARBITER

When set to a positive value, StarPU will create an arbiter, which implements an advanced but centralized management of concurrent data accesses (see ConcurrentDataAccess).

STARPU_USE_NUMA

When defined to 1, NUMA nodes are taken into account by StarPU, i.e. StarPU will expose one StarPU memory node per NUMA node, and will thus schedule tasks according to data locality, migrate data when appropriate, etc.

STARPU_MAIN_RAM is then associated with the NUMA node of the first CPU worker if it exists, or otherwise with the NUMA node of the first discovered GPU. If StarPU doesn't find any NUMA node after these steps, STARPU_MAIN_RAM is the first NUMA node discovered by StarPU.

Applications should thus rather pass a NULL pointer and a -1 memory node to starpu_data_*_register functions, so that StarPU can manage memory as it wishes.

If the application wants to control memory allocation on NUMA nodes for some data, it can use starpu_malloc_on_node and pass the memory node to the starpu_data_*_register functions to tell StarPU where the allocation was made. starpu_memory_nodes_get_count_by_kind() and starpu_memory_node_get_ids_by_type() can be used to get the memory nodes numbers of the CPU memory nodes.

starpu_memory_nodes_numa_id_to_devid() and starpu_memory_nodes_numa_devid_to_id() are also available to convert between OS NUMA id and StarPU memory node number.

If this variable is unset, or set to 0, CPU memory is considered as only one memory node (STARPU_MAIN_RAM) and it will be up to the OS to manage migration etc. and the StarPU scheduler will not know about it.
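
A sketch of the recommended registration, letting StarPU allocate a vector on whichever NUMA node it sees fit:

    #include <stdint.h>
    #include <starpu.h>

    void register_vector(starpu_data_handle_t *handle, unsigned n)
    {
        /* NULL pointer and -1 home node: StarPU chooses the memory node
           and allocates the buffer lazily. */
        starpu_vector_data_register(handle, -1, (uintptr_t) NULL,
                                    n, sizeof(float));
    }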

STARPU_IDLE_FILE

When defined, a file named after its contents will be created at the end of the execution. This file will contain the sum of the idle times of all the workers.

STARPU_HWLOC_INPUT

When defined to the path of an XML file, hwloc will use this file as input instead of detecting the current platform topology, which can save significant initialization time.

To produce this XML file, use lstopo file.xml

STARPU_CATCH_SIGNALS

By default, StarPU catches the signals SIGINT, SIGSEGV and SIGTRAP to perform final actions such as dumping FxT trace files even when the application crashes. Setting this variable to a value other than 1 will disable this behaviour. This should be done on JVM systems, which may use these signals for their own needs. The flag can also be set through the field starpu_conf::catch_signals.

STARPU_DISPLAY_BINDINGS
Display the binding of all processes and threads running on the machine. If MPI is enabled, display the binding of each node.
Users can manually display the binding by calling starpu_display_bindings().

3.6 Configuring The Hypervisor

SC_HYPERVISOR_POLICY

Choose between the different resizing policies proposed by StarPU for the hypervisor: idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp etc.

Use SC_HYPERVISOR_POLICY=help to get the list of available policies for the hypervisor

SC_HYPERVISOR_TRIGGER_RESIZE

Choose how the hypervisor should be triggered: speed if the resizing algorithm should be called whenever the speed of the context does not correspond to an optimal precomputed value, idle if the resizing algorithm should be called whenever the workers are idle for a period longer than the value indicated when configuring the hypervisor.

SC_HYPERVISOR_START_RESIZE

Indicate the moment when the resizing should become available. The value corresponds to the percentage of the total execution time of the application. The default value is the resizing frame.

SC_HYPERVISOR_MAX_SPEED_GAP

Indicate the ratio of speed difference between contexts that should trigger the hypervisor. This situation may occur only when a theoretical speed could not be computed and the hypervisor has no value to compare the speed to. Otherwise, the resizing of a context is not influenced by the speed of the other contexts, but only by the value that a context should have.

SC_HYPERVISOR_STOP_PRINT

By default, the speed of the workers is printed during the execution of the application. If this environment variable is set to 1, this printing is not done.

SC_HYPERVISOR_LAZY_RESIZE

By default, the hypervisor resizes the contexts in a lazy way: workers are first added to the new context before being removed from the previous one. Once these workers are clearly taken into account in the new context (a task was popped there), they are removed from the previous one. If the application wants the change in the distribution of workers to take effect right away, this variable should be set to 0.

SC_HYPERVISOR_SAMPLE_CRITERIA

By default, the hypervisor uses a sample of flops when computing the speed of the contexts and of the workers. If this variable is set to time, the hypervisor uses a sample of time (10% of an approximation of the total execution time of the application).