
Qlustar Cluster OS 10.1

Release Notes


The 10.1 release extends Qlustar's reach significantly by adding CentOS as a fully supported edge-platform, together with a tight OpenHPC integration. A second major milestone is the revamped node boot process, which now supports fault-tolerant multicast.
To prepare for exascale support, we now ship pmix as the default MPI startup method within slurm, and ucx, a new communication library implementing a high-performance messaging layer for MPI, PGAS, and RPC frameworks.
The highlights among the numerous component updates and bug fixes are: Kernel 4.14.x, Slurm 17.11.x, CUDA 9.1, OpenMPI 3.1.2, Lustre 2.11.
1. Basic Info
2. New features
2.1. CentOS edge-platform
2.2. Fault-tolerant multicast booting of OS images
3. Major component updates
4. Other notable package version updates
5. General changes/improvements
6. Update instructions
7. Changelogs

1. Basic Info

The Qlustar 10.1 release is based on Ubuntu 16.04.5. It includes all security fixes and other package updates published before Oct 12th 2018. Security updates relevant to Qlustar 10.1 that appear after this date will be announced on the Qlustar website and in the Qlustar security newsletter. Supported edge-platforms are Ubuntu 16.04 (Xenial) and the newly available CentOS 7.5 with integration of OpenHPC 1.3.5.

2. New features

2.1. CentOS edge-platform

CentOS image modules
The following Qlustar image modules are provided to create CentOS based OS images with exactly the same functionality as their Ubuntu counterparts:
  • Core, Slurm, OFED, Nvidia, Lustre client, BeeGFS client, and Samba
OpenHPC integration
The OpenHPC 1.3.5 software repository is automatically available in Qlustar CentOS chroots. If selected at install time, a basic set of OpenHPC packages is available in the CentOS chroot that is created for netboot nodes. Additional desired packages can be installed in a standard fashion using yum.
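As an illustration, a session to pull additional OpenHPC packages into the CentOS chroot might look like the following. The chroot path is hypothetical and depends on your installation (adjust accordingly); ohpc-base and lmod-ohpc are typical candidates from the OpenHPC repository:
    0 root@cl-head ~ # chroot /srv/apps/chroots/centos-7 yum search ohpc
    0 root@cl-head ~ # chroot /srv/apps/chroots/centos-7 yum install ohpc-base lmod-ohpc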


Since OpenHPC provides a full-blown development stack for HPC applications, Qlustar relies on these packages rather than providing its own stack as for Ubuntu.

2.2. Fault-tolerant multicast booting of OS images

Extending the revamp of the netboot process for Qlustar nodes introduced in the 10.0 release, we made two major improvements:
  • QluMan now supports downloading the OS image via a fast and fault-tolerant multicast mechanism. This can reduce boot time dramatically and allows a virtually unlimited number of cluster nodes to boot simultaneously without increasing the overall boot time.
  • Qlustar OS images are now created using squashfs with compression. This reduces the memory footprint of the OS by roughly 66%, so that a standard compute node image with slurm and IB support now consumes a mere 160MB of RAM.
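The compression of a generated image can be inspected with the standard squashfs tools, e.g. (the image path below is hypothetical; use the path of one of your generated images):
    0 root@cl-head ~ # unsquashfs -s /path/to/your-image.squashfs
This prints the squashfs superblock, including the compression algorithm and block size in use.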

3. Major component updates

QluMan 10.1
QluMan 10.1 adds a much improved calculation of the node-specific memory available to slurm jobs. This ensures that nodes are not oversubscribed with respect to RAM allocation and prevents the unjustified killing of jobs by the kernel OOM killer. QluMan now also automatically detects and configures CPU hardware threads for use in slurm. Additionally, a large number of minor improvements were made and several bugs were fixed.
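As an illustration (all names and values below are hypothetical), the resulting slurm node definition could look like this, with RealMemory set below the physically installed RAM to leave room for the OS image and system services, and ThreadsPerCore reflecting the detected hardware threads:
    NodeName=beo-[01-16] Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 RealMemory=63000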
Kernel 4.14
Qlustar 10.1 is based on the 4.14 LTS kernel series.
Slurm 17.11
Qlustar 10.1 introduces the Slurm 17.11 series, with the current version being 17.11.x.
OpenMPI 3.1
Qlustar 10.1 upgrades to OpenMPI 3.1.2, now including support for libfabric.
Nvidia CUDA
Qlustar 10.1 provides optimal support for Nvidia GPU hardware by supplying pre-compiled and up-to-date kernel drivers as well as CUDA 9.1.
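On a booted GPU node, the driver and CUDA versions can be verified with the standard Nvidia tools (the node name in the prompt is just an example; nvcc is only available where the CUDA toolkit is installed):
    0 root@beo-01 ~ # nvidia-smi
    0 root@beo-01 ~ # nvcc --version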
Lustre 2.11
Qlustar 10.1 has integrated the most recent Lustre release 2.11 for clients and servers, with ready-to-use image modules. This is the first release with the long-awaited Data on MDT (DoM) feature, which significantly improves small-file performance.
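DoM is activated per directory or file via a composite file layout whose first component is placed on the MDT. A sketch (the mount point and component size are illustrative):
    0 root@cl-head ~ # lfs setstripe -E 64K -L mdt -E -1 /mnt/lustre/small-files
New files below this directory then store their first 64K directly on the MDT, avoiding an OST round-trip for small files.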

4. Other notable package version updates

  • rdma-core: 18.1 (Ubuntu only; on CentOS, the original RHEL OFED stack is used).
  • Intel/PGI Compiler support: The Qlustar wrapper packages have been updated to support the new version of the PGI community edition 18.4 (package qlustar-pgi-dev-tools). Corresponding OpenMPI package variants for this compiler are also provided (both Ubuntu only).
  • BeeGFS: 6.19
  • ZFS: 0.7.10
  • singularity: 2.6.0
  • openblas: 0.3.2
  • hwloc: 1.11.7

5. General changes/improvements

  • Removed support for BLCR checkpoint/restart due to lack of upstream support for newer kernels.

6. Update instructions

  1. Preparations

    Upgrading to Qlustar 10.1 is only supported from a 10.0.x release.


    Make sure that you have no unwritten changes in the QluMan database. If you do, write them to disk as described in the QluMan Guide before proceeding with the update.
  2. Optionally clone chroots

    Clone existing Ubuntu 16.04 chroots based on 10.0 and then upgrade the clones to 10.1. This allows for an easy rollback.
  3. Update to 10.1 package sources list

    The Qlustar apt sources list needs to be changed as follows, both on the head-node(s) and in all existing chroots that should be updated:
    0 root@cl-head ~ # apt-get update
    0 root@cl-head ~ # apt-get install qlustar-sources-list-10.1
  4. Update packages

    Now proceed as explained in the Qlustar Administration Manual.
  5. Reboot head-node(s)

    Initially only reboot the head-node(s).
  6. Change default MPI startup for slurm

    To make use of pmix as the default MPI startup method, edit the slurm config header in QluMan and set MpiDefault=pmix. Then write the slurm config.


    After this change, if you want to run MPI programs compiled before the update, you need to add the srun option --mpi=pmi2 to your submit scripts.
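    Taken together, the two settings described above amount to the following sketch (the binary name is just a placeholder):
    # Slurm config header, written via QluMan:
    MpiDefault=pmix
    # In submit scripts for MPI binaries built before the update:
    srun --mpi=pmi2 ./my_mpi_app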
  7. Regenerating Qlustar images

    Regenerate your Qlustar images with the 10.1 image modules. To accomplish this, you'll have to select Version 10.1 in the QluMan Qlustar Images dialog. If you have new cloned chroots, select those as well.
  8. Write config file changes

    To activate all changes in the QluMan database that were introduced by the update, they need to be written to disk now. Check the QluMan Guide about how to write such changes.
  9. Reboot all netboot nodes

    After the regeneration of the images is complete, and all the above steps have been done, you can reboot all other nodes in the cluster, including virtual FE nodes. This completes the update procedure.

7. Changelogs

A detailed log of changes in the image modules can be found in the directories /usr/share/doc/qlustar-module-<module-name>-*-amd64-10.1.0. As an example, the directory /usr/share/doc/qlustar-module-core-xenial-amd64-10.1.0 contains a summary changelog in core.changelog, a complete list of packages with version numbers entering the current core module in core.packages.version.gz, a complete changelog of the core module's package versions in core.packages.changelog.gz, and finally a complete log of changed files in core.contents.changelog.gz.