Overview of the public partitions
Partition | Max Walltime | Comment |
---|---|---|
short | 02:00:00 | - |
med | 08:00:00 | - |
long | 48:00:00 | - |
ultralong | 672:00:00 | No GPU or non-blocking nodes usable |
Furthermore, certain node requirements (constraints or features) can be specified when submitting the job.
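A job script might select a partition and a node feature as in the following minimal sketch (standard Slurm syntax; the program name and the concrete values are placeholders):

```bash
#!/bin/bash
#SBATCH --partition=med        # public partition from the table above
#SBATCH --constraint=cstd01    # node feature from the table below
#SBATCH --ntasks=20            # all 20 cores of a cstd01 node
#SBATCH --time=04:00:00        # must stay within the partition's max walltime

srun ./my_program              # placeholder for the actual application
```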
Overview of the most important node features. Note that by default only 512 MB of RAM are available per core, and that some gigabytes of main memory are usually already occupied by the operating system and therefore cannot be allocated by Slurm. A typical resource request should therefore always ask for a few gigabytes less than are physically installed in the machine (a sketch follows the table):
Constraint | CPU type | Accelerator type | Max Cores | Max RAM/Node | Comment |
---|---|---|---|---|---|
cstd01 | 2x Intel Xeon E5-2640v4 | - | 20 | 64 GB | |
cstd10 | 2x Intel Xeon E5-2640v4 | - | 20 | 256 GB | upgraded cstd02 with more RAM |
cstd02 or ib_1to1 | 2x Intel Xeon E5-2640v4 | - | 20 | 64 GB | non-blocking |
cquad01 | 4x Intel Xeon E5-4640v4 | - | 48 | 256 GB | |
cquad02 | 4x Intel Xeon E5-4640v4 | - | 48 | 1024 GB | |
cgpu01 or tesla_k40 | 2x Intel Xeon E5-2640v4 | 2x NVIDIA Tesla K40 (12 GB) | 20 | 64 GB | |
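To illustrate the memory advice above, here is a sketch of a request on a 64 GB node that leaves headroom for the operating system (the values are placeholders):

```bash
#!/bin/bash
#SBATCH --partition=short
#SBATCH --constraint=cstd01    # 64 GB physically installed (see table above)
#SBATCH --ntasks=20
#SBATCH --mem=60G              # request a few GB less than the physical RAM
#SBATCH --time=01:00:00

srun ./my_program              # placeholder for the actual application
```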
The following nodes were acquired later by third parties and are available to the general public only to a limited extent, i.e. the job runtime is restricted to a maximum of two hours.
In particular, these nodes are not reached via the usual partitions short/med/long; instead they have their own partitions and require no additional constraint such as cstd01 (a job script sketch follows the table).
Partition | CPU type | Accelerator type | Max Cores | Max RAM/Node | Comment |
---|---|---|---|---|---|
ext_vwl_norm | 2x Intel Xeon E5 2690v4 | 1x NVIDIA Tesla P100 (12 GB) | 28 | 256 GB | cgpu02 |
ext_phy_norm | 1x Intel Xeon Phi KNL 7210 | - | 64 | 256 GB | cknl01 |
ext_iom_norm | 2x Intel Xeon E5 2690v4 | - | 28 | 256 GB | cstd03 |
ext_iom_norm | 2x Intel Xeon Gold 6134 | - | 16 | 192 GB | cstd04 |
ext_iom_norm | 4x Intel Xeon Gold 6230 | - | 80 | 512 GB | cquad03 |
ext_iom_norm | 2x AMD EPYC 7313 16-Core Processor | - | 32 | 768 GB | cstd09 |
ext_math_norm | 2x AMD EPYC 7542 32-Core Processor | - | 64 | 1024 GB | cstd05 |
ext_chem_norm | 2x Intel Xeon Gold 6242R | - | 40 | 96 GB | cstd06 |
ext_biochem_norm | 2x AMD EPYC 7542 32-Core Processor | - | 64 | 256 GB | cstd07 |
ext_biochem_norm | 2x AMD EPYC 7542 32-Core Processor | 1x NVIDIA Tesla V100 (32 GB) or A100 (40 GB) | 64 | 512 GB | cgpu03 |
ext_chem2_norm | 1x AMD EPYC 7252 8-Core Processor | 1x NVIDIA RTX A6000 (48 GB) | 8 | 256 GB | cgpu04 |
ext_ace_prio | 2x AMD EPYC 7453 28-Core Processor | - | 56 | 512 GB | cstd08 |
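A sketch of a job on one of these external nodes: only the partition is specified, no constraint, and the walltime stays within the two-hour cap (the partition choice and the values are examples):

```bash
#!/bin/bash
#SBATCH --partition=ext_iom_norm   # external nodes have their own partition
#SBATCH --ntasks=8
#SBATCH --time=02:00:00            # hard limit for the general public

srun ./my_program                  # placeholder for the actual application
```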
New improvised GPU partitions
As the original GPU nodes (cgpu01) contain only rather old GPUs, we have come up with an intermediate solution:
The partitions gpu_short, gpu_med and gpu_long contain the compute nodes cgpu02-001, cgpu02-002 and cgpu03-002 and thus provide access to NVIDIA Tesla P100, NVIDIA Tesla V100 and NVIDIA Tesla A100 GPUs (see the tables above for details).
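A sketch of a job script for these partitions, assuming the GPUs are exposed as Slurm generic resources (gres), which is the usual setup; the program name is a placeholder:

```bash
#!/bin/bash
#SBATCH --partition=gpu_short      # or gpu_med / gpu_long
#SBATCH --gres=gpu:1               # request one GPU of the node
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

srun ./my_gpu_program              # placeholder for the actual application
```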