
Parallelization

This page describes the parallelization options implemented in Cuby itself. This parallelization is available for specific types of calculations that require multiple calls of an external program which can be executed simultaneously.

How it works

To run a large number of calculations on limited resources, Cuby uses an internal queue system that ensures that, at any time, only one calculation runs on each available CPU. The calculations are sorted by an estimate of the time needed to finish them, and the largest ones are run first. This ensures the most efficient use of the allocated CPUs.

Keywords

The keyword distributed switches on this parallelization scheme. The keyword nproc specifies how many processors are available (when run on a cluster, this number is per node). Cluster-specific keywords are listed below.
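
For illustration, a minimal fragment of a Cuby input file (YAML format) enabling this scheme might look as follows. Only the keywords distributed and nproc come from this page; the values shown are illustrative assumptions, not documented defaults.

  distributed: yes    # switch on Cuby's internal parallelization (value assumed)
  nproc: 4            # number of available processors; per node on a cluster (value assumed)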

Supported infrastructure

  1. Multiprocessor machine
    Calculations are run locally in separate threads.
  2. Cluster
    Use of multiple nodes of a cluster is activated by the distributed_remote keyword. A list of available nodes must be provided to Cuby using the keyword nodes; usually, this list is supplied by the cluster environment. Cuby has to be accessible from the nodes, and the nodes must be reachable via ssh without the need to type a password. Shared disk space is optional (keyword distributed_protocol); without it, files are copied to the nodes using scp and the calculation is run in a temporary directory created in distributed_calcdir. Such a calculation is usually submitted through the cluster's queue system; the master process runs on one of the nodes but consumes virtually no resources, so a calculation is run on that node as well. An example input fragment is sketched after this list.
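
A cluster run could then be sketched along the following lines. The keyword names are those documented above, while the values (node names, protocol, directory path) are purely illustrative assumptions; in practice they would typically come from the cluster environment.

  distributed: yes
  nproc: 8                            # processors per node (value assumed)
  distributed_remote: yes             # run the calculations on multiple nodes (value assumed)
  nodes:                              # list of available nodes, usually provided by the cluster environment
    - node01
    - node02
  distributed_protocol: scp           # assumed value: copy files via scp when there is no shared disk space
  distributed_calcdir: /scratch/cuby  # temporary calculation directory on the nodes (placeholder path)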