Cluster nodes
Getting an overview of the different nodes on the Gbar cluster
The HPC cluster is a system with lots of users with varying needs and requirements. Therefore, one needs to run programs from different nodes depending on those needs. This makes the process of launching programs more ‘tricky’ than just double-clicking a shortcut icon. The different types of nodes are listed here and explained individually in the following sections.
- Standard interactive node (normal)
- Interactive node with lots of RAM (bigmem)
- Interactive node for graphical applications (graphic)
- Interactive node with GPUs
- Compute nodes with/without GPUs
There are a couple more special-case nodes, e.g. the login node, from which programs should not be run, but they are not described here.
1. Normal nodes:
These are the default interactive nodes that you run from when
- Launching a terminal in ThinLinc
- Launching a program through the ThinLinc menu bar, e.g. Applications -> DTU -> Mathematics -> Matlab (default)
Intuitively, one can think of a normal node as roughly equivalent to a good desktop with respect to hardware specifications. Run e.g. htop to get a picture of the available resources.
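For instance, the following standard Linux commands (nothing Gbar-specific) give a quick picture of the resources on the node:

# – Example: inspect node resources – #
nproc     # number of CPU cores on the node
free -h   # total and available RAM, human-readable
htop      # interactive overview of CPU, memory, and processes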
There are plenty of things you can do from the default nodes, and a good rule of thumb is: “If it can be done from a default node, do it from there.”
❗ NB: Despite the seemingly large amount of memory available, there is a per-user limit of approx. 20 GB of RAM. This is quite restrictive for 3DIM/QIM users, as volume datasets quickly hog that much memory during processing.
2. “Bigmem” nodes:
These nodes are designed for cases where one needs large amounts of memory AND interactive processing (if you just need lots of RAM, see Compute nodes).
# – Launch from menu – #
Applications -> DTU -> xterm (application node (bigmem))
# – Launch from terminal – #
- Start a terminal (on a normal node)
- Launch a bigmem node: bigmemlinuxsh -X
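A minimal session might look as follows; free -h is just a standard Linux command used here to confirm the memory on the node:

# – Example session – #
bigmemlinuxsh -X   # request an interactive bigmem node (run from a normal node)
free -h            # confirm the amount of RAM available on the new node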
3. “Graphic” nodes:
These nodes are designed for programs with heavy graphical and interactive elements, which typically perform poorly when run on normal cluster nodes.
# – Launch from menu – #
Applications -> DTU -> xterm (VirtualGL-application-node)
Launching programs on this node works a little differently than on other nodes. The process of loading relevant Linux modules, initializing conda installations, and so on is exactly the same. However, the actual terminal call to launch an application should be prefixed with the vglrun command.
For example, to start ParaView:
- Start a graphic node (see above)
- Launch the program: vglrun paraview
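As a sketch, a full session could look like the following; the module name is an assumption for illustration, so use module avail to find the actual one on the cluster:

# – Example: run ParaView through VirtualGL – #
module avail paraview   # list the ParaView modules actually installed
module load paraview    # hypothetical module name; adjust to the output above
vglrun paraview         # prefix the normal launch command with vglrun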
4. Interactive GPU nodes:
These nodes are designed for cases where one needs GPUs for solving computationally heavy problems AND interactive elements. The cluster hosts a couple of such interactive nodes with GPUs. The node specifications can be seen on this HPC page.
# – Launch from terminal – #
- Start a terminal (on a normal node)
- Call either voltash -X, sxm2sh -X, or a100sh -X
Useful commands:
- gpustat
- nvidia-smi
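For example, to check which GPUs are free before starting work (standard nvidia-smi query flags):

# – Example: check GPU availability – #
gpustat   # compact per-GPU utilization and memory overview
nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv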
NB: The GPU memory on the Volta nodes is often too limited (16 GB) for QIM/3DIM users, so prioritize using either the SXM2 or A100 nodes.
Code of Conduct:
GPU resources are quite limited, so use them sensibly:
- Don’t leave programs hanging; make sure to close them after completion.
- We kindly ask you to use the interactive nodes mainly for development, profiling, and short test jobs.
- Please submit ‘heavy’ jobs to the GPU queue and avoid using the interactive nodes for heavy workloads when possible (see the sketch below).
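As a minimal sketch, a batch submission could look like the following, assuming the cluster’s LSF scheduler (bsub); the queue name gpuv100, the resource values, and the script train.py are assumptions, so check the HPC documentation for the actual queue names and limits:

# – Example: jobscript.sh for the GPU queue (LSF; values are assumptions) – #
#!/bin/bash
#BSUB -q gpuv100                            # queue name is an assumption; check the HPC docs
#BSUB -J gpu_job                            # job name
#BSUB -n 4                                  # number of CPU cores
#BSUB -gpu "num=1:mode=exclusive_process"   # request one GPU
#BSUB -W 2:00                               # wall-clock limit (hh:mm)
#BSUB -R "rusage[mem=8GB]"                  # requested memory per core
python train.py                             # hypothetical workload

# Submit with: bsub < jobscript.sh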