Qlustar 13 introduced Spack as the new package manager for HPC-related software beyond the HPC core stack. Spack supports a huge list of software packages and provides hardware-optimized versions of them by design. Another big advantage of Spack: multiple versions of the same software can easily co-exist on the same cluster.
The Qlustar version of Spack is provided as an OS package (deb/rpm) and defines the packages of the Qlustar HPC core stack as so-called external packages, thus ensuring flawless integration. The same versions of these packages are provided for all Qlustar edge-platforms, hence guaranteeing that Spack HPC applications work the same on all of them.
The Qlustar Spack setup has a few particularities that need explanation.
- Provided as an OS package (deb/rpm)
If you follow the Spack upstream documentation, it will tell you to clone the Spack git repository for installation. This step is unnecessary on Qlustar, since the Spack installation is provided as an OS package for all Qlustar edge-platforms. This package is automatically installed when you create a chroot via QluMan. Using these packages ensures fully functional integration of Spack packages into Qlustar, which would not be the case with a git-cloned version of Spack.
- Definition of external packages
As already mentioned, the packages of the Qlustar HPC core stack are defined as external packages, together with a number of other packages, mainly tools that are not performance-critical and unrelated to HPC. The definitions are in /etc/spack/defaults/packages.yaml, which is part of the Qlustar Spack package; a quick way to inspect them is sketched below.
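To check which packages are currently treated as external (and will therefore never be rebuilt by Spack), you can inspect the merged package configuration. The following commands are a minimal sketch and assume the default file location mentioned above:
0 cl-fe:~ $ spack config get packages | less    # merged view of all packages.yaml scopes
0 cl-fe:~ $ grep -B 2 -A 3 'externals:' /etc/spack/defaults/packages.yaml    # raw Qlustar definitions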
- Important directory paths / configs
The base directory of the Qlustar Spack instance on a cluster is always at /apps/local/spack. This directory is automatically created at install time and must be used as the container for the Spack root. If for some reason you want to have this base directory at a different location, you may copy the original /apps/local/spack directory there and then link /apps/local/spack to the new location, as sketched below.
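Such a relocation could look like the following sketch; the target path /srv/spack and the backup name are purely illustrative, and the commands have to be run as root on the node serving /apps:
cp -a /apps/local/spack /srv/spack              # copy the base directory to the new location
mv /apps/local/spack /apps/local/spack.orig     # keep the original until the new setup is verified
ln -s /srv/spack /apps/local/spack              # make the expected path point to the new location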
Root of Qlustar Spack instance
The root of the Qlustar Spack instance is /usr/share/spack/root and is part of the provided deb/rpm. In many places in Spack configuration files you can use $spack to refer to this path.
In some places of the upstream Spack documentation (e.g. about configuration scopes), $(prefix) is also used instead of $spack to indicate this root. In the Qlustar docs, we only use $spack.
To allow for write access by non-root admins, many sub-directories of $spack are symbolic links to directories underneath the base directory /apps/local/spack.
Root of the Spack install tree
The root of the Spack install tree is $spack/opt/spack, which is a link to /apps/local/spack/spack. Installed packages are located in sub-directories of this path (see the example below).
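Once packages have been installed, their exact locations below the install tree can be displayed directly with Spack (replace <package> by any installed spec):
0 cl-fe:~ $ spack find --paths           # list installed packages with their install prefixes
0 cl-fe:~ $ spack location -i <package>  # print the prefix of a single installed package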
If you need to define local custom Spack repositories, you can do so in /etc/spack/repos.yaml. This file doesn’t exist by default and needs to be created in the corresponding chroot. Consult the upstream Spack documentation about the structure of this file; a minimal sketch follows below.
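Such a setup might look as follows; the repository path and the namespace site are just examples:
0 cl-fe:~ $ spack repo create /apps/local/spack/site-repo site   # create an empty custom repository
# then register it in /etc/spack/repos.yaml (created as root in the chroot):
# repos:
# - /apps/local/spack/site-repo
0 cl-fe:~ $ spack repo list                                      # verify that the repository is picked up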
Spack compiler definitions
Spack compiler definitions are in the file /usr/share/spack/root/opt/.user/config/linux/compilers.yaml, i.e. $spack/opt/.user/config/linux/compilers.yaml, as also visible in the output of the spack compiler commands below.
- Multi-admin setup
To allow a group of non-root admins to work on the Qlustar Spack instance, correct access permissions must be set on the Spack root $spack. For this purpose, a special user and group softadm are created at Qlustar installation time. The base directory of the Qlustar Spack instance /apps/local/spack is owned by this user, but access permissions are set up such that any member of the group softadm can manage Spack packages (see the sketch below).
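For illustration, granting an existing admin account access and checking the setup could look like this; the user name alice is hypothetical, and where exactly the account is managed depends on your cluster setup:
0 cl-fe:~ $ sudo usermod -aG softadm alice    # add the admin account to the softadm group
0 cl-fe:~ $ id -nG alice | grep -w softadm    # verify the group membership
0 cl-fe:~ $ ls -ld /apps/local/spack          # owner/group and permissions of the base directory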
- Compile all packages
Rather than installing pre-compiled Spack packages, Qlustar Spack is set up such that all packages are configured and compiled from source on the cluster. This adds an additional layer of security and at the same time serves as a test-bed for the whole Spack development stack.
All commands shown below must be executed as a user who is a member of the softadm group, either on the cluster FE node or on any other node (not the head-nodes themselves) with a unionFS chroot containing Spack. Since all Spack packages need to be compiled, the first step on a new Spack instance is to add a system compiler.
To add the system gcc compiler to the instance, execute (example for Qlustar 13/jammy)
0 cl-fe:~ $ spack compiler find
==> Added 1 new compiler to /usr/share/spack/root/opt/.user/config/linux/compilers.yaml
    gcc@11.3.0
==> Compilers are defined in the following files:
    /usr/share/spack/root/opt/.user/config/linux/compilers.yaml
After this we can compile/install the newest gcc compiler (version 12.2.0 in this example) with all its dependencies using the system gcc as follows:
0 cl-fe:~ $ spack install gcc@12.2.0 target=x86_64
[+] /usr (external diffutils-3.8-ptwf25tneglryigainabw5n3newdmp6e)
[+] /usr (external gawk-5.1.0-hzdvttiw75b4jpecd2ipfz7sxx34qk7a)
[+] /usr (external m4-1.4.18-buskmfvwfb5tiadj6koxcwcvjac7elmm)
[+] /usr (external perl-5.34.0-xbzlntwxjvembfgz72yf6ugmo72jshqw)
.....
The installation can take more than an hour depending on your hardware.
We added the target=x86_64 option here to make sure that this gcc version will work on any node in the cluster. Using this target also makes sense for binary packages such as CUDA or the Intel oneAPI compilers.
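If you want to check beforehand what exactly would be built, or verify the result afterwards, the concretized spec and the installed packages can be inspected:
0 cl-fe:~ $ spack spec -I gcc@12.2.0 target=x86_64   # full concretized spec, [+] marks installed parts
0 cl-fe:~ $ spack find -l -v gcc                     # installed gcc packages with hashes and variants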
Finally, add the new gcc to the compilers available to Spack:
0 cl-fe:~ $ spack compiler add $(spack location -i gcc@12.2.0)
==> Added 1 new compiler to /usr/share/spack/root/opt/.user/config/linux/compilers.yaml
    gcc@12.2.0
==> Compilers are defined in the following files:
    /usr/share/spack/root/opt/.user/config/linux/compilers.yaml
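A quick check that both the system gcc and the newly built one are now registered:
0 cl-fe:~ $ spack compiler list    # should list the system gcc as well as gcc@12.2.0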
Now we’re ready to install packages using the new compiler. If you also want to use the Intel oneAPI compiler family, proceed as follows:
0 cl-fe:~ $ spack install intel-oneapi-compilers %gcc@11.3.0 target=x86_64
0 cl-fe:~ $ spack compiler add $(spack location -i intel-oneapi-compilers)/compiler/latest/linux/bin/intel64
0 cl-fe:~ $ spack compiler add $(spack location -i intel-oneapi-compilers)/compiler/latest/linux/bin
We explicitly specified the system gcc compiler here, which we always advise to do for binary packages like this. If you don’t, most likely the newest Spack-installed gcc (here gcc@12.2.0) will be used instead. This can cause problems when using the Intel compilers later, e.g. while searching for system header files. Also note that we needed two separate commands here to add the classic and the new oneAPI variants of the compiler package.
There are a number of other compilers available on Spack. You can install and create a tool-chain based on any of them, if the need arises.
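If you consider another compiler, its versions and variants can be inspected first; llvm serves here only as an example:
0 cl-fe:~ $ spack info llvm        # versions, variants and dependencies of the LLVM/Clang package
0 cl-fe:~ $ spack versions llvm    # all versions of llvm known to Spack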
The most common MPI variant is OpenMPI, which we can now install using the new gcc compiler:
0 cl-fe:~ $ spack install openmpi +pmi+legacylaunchers schedulers=slurm fabrics=ucx %gcc@12.2.0
If needed, an OpenMPI variant based on the Intel oneAPI compilers may be created as follows:
0 cl-fe:~ $ spack install openmpi +pmi+legacylaunchers schedulers=slurm fabrics=ucx %oneapi
For the classical Intel Compilers (icc, ifort, etc.) do
0 cl-fe:~ $ spack install openmpi +pmi+legacylaunchers schedulers=slurm fabrics=ucx %intel
If you need special features, add clauses like +lustre. Sometimes ucx causes problems; in that case, you can add libfabric support by using fabrics=ofi instead. For OPA networks, the psm2 fabric (fabrics=psm2) is usually the right choice.
There are a number of other MPI variants available on Spack. You can install and create a tool-chain based on any of them, if the need arises.
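Once an OpenMPI variant is installed, it can be loaded into the current shell (provided the Spack shell integration is active) and used right away; hello.c is a placeholder for your own MPI source file:
0 cl-fe:~ $ spack load openmpi %gcc@12.2.0     # put mpicc/mpirun of the gcc-based OpenMPI into the PATH
0 cl-fe:~ $ mpicc -o hello hello.c             # compile a small test program
0 cl-fe:~ $ mpirun -np 2 ./hello               # quick local smoke test; real jobs go through Slurm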
As an example for an application, let’s build Linpack. Linpack needs to be compiled against an implementation of a BLAS library. The standard open-source high-performance BLAS library is openblas, which we first build using the new gcc (specified by %gcc@12.2.0):
0 cl-fe:~ $ spack install openblas threads=openmp %gcc@12.2.0
0 cl-fe:~ $ spack install hpl %gcc@12.2.0
By specifying %gcc@12.2.0 for the hpl installation, Spack automatically knows that the gcc-based OpenMPI we built previously should be used. After successful installation, you can proceed as described in the First Steps Guide and start a Linpack run on the cluster.
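To locate the resulting binary before following the First Steps Guide, you can load the package and check where xhpl ended up; remember that HPL also expects a tuned HPL.dat input file in the working directory:
0 cl-fe:~ $ spack load hpl %gcc@12.2.0    # make the gcc-based Linpack available
0 cl-fe:~ $ which xhpl                    # the binary lives below the hpl install prefix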
If you need to squeeze the maximum performance out of your hardware, a Linpack version based on the Intel MKL will be the best choice in most circumstances. To build it, install the MKL and use the Intel-based MPI you just installed above:
0 cl-fe:~ $ spack install intel-oneapi-mkl target=x86_64 %oneapi
0 cl-fe:~ $ spack install hpl %oneapi
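To confirm that this hpl build really picked up the MKL and the oneAPI-based OpenMPI, its dependency tree can be displayed:
0 cl-fe:~ $ spack find -l -d hpl %oneapi    # installed hpl with its complete dependency tree and hashes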
This should have given you a rough idea of how Spack may be used on Qlustar. For more details, consult the official Spack documentation.