Basic Services
This section describes the basic services running on a typical Qlustar cluster.
Disk Partitions and File-systems
Typically, a head-node has two mirrored boot disks. Sometimes it also holds additional data disks, which are set up either as a mirror, a RAID 5/6, or as part of an external storage system. The boot disk (or the RAID device in case of a RAID boot setup) is used as a physical volume for the basic system LVM volume group (default name vgroot). See Logical Volume Management for more details on LVM.
The system volume group is the container of the following logical volumes: root, var, tmp, swap, apps and data (the latter can also be chosen to be located on a separate volume group made from additionally available disks during installation). Each of these logical volumes is used as the underlying block device for the correspondingly named file-system. Hence, the whole head-node setup, including the / (root) file-system, is typically under the control of LVM, adding great flexibility to storage management.
All additional disks or RAID sets are partitioned with a single partition of type LVM and used as LVM physical devices. Static mount configuration for file-systems is entered in /etc/fstab. All file-systems are of type ext4 unless requested otherwise.
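As a sketch, a static mount for one of the logical volumes named above could look like this in /etc/fstab (the device path follows LVM's /dev/&lt;vg&gt;/&lt;lv&gt; convention with the default volume group name vgroot; the mount options are assumptions):

```
# Hypothetical /etc/fstab entry for the apps logical volume (ext4, default options)
/dev/vgroot/apps  /srv/apps  ext4  defaults  0  2
```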
Qlustar installations also have the option to use ZFS pools (see ZPool Administration) to set up additional data disks.
NIS
NIS (Network Information System) is used as the default cluster-wide name service database for user account (passwd and shadow maps) and group information (group map). The head-node is configured as a NIS master server when running qlustar-initial-config during installation. In case of an HA head-node setup, the second head-node becomes a NIS slave server.
The generated NIS databases are located on the NIS master server under /var/yp and the corresponding source files in the directory /etc/qlustar/yp. The passwd and shadow tables are updated automatically by the script adduser.sh when users are added (see Adding User Accounts). Apart from that, usually nothing needs to be changed in the provided NIS configuration.
For security reasons, the file /etc/qlustar/yp/shadow should be readable and writable only by root. In case NIS source files have been changed manually, the command make -C /var/yp must be executed to regenerate the maps and activate the changes. For more detailed information about NIS, you may also consult the NIS package HowTo at /usr/share/doc/nis/nis.debian.howto.gz.
Another important security aspect is the access restriction to the NIS server. Usually, only the cluster nodes should be allowed to contact the NIS server. In case the head-node is also used as a work-group NIS server, additional access can be allowed for the corresponding subnet to which the work-group workstations are connected. The access settings are configured in /etc/ypserv.securenets (see man ypserv).
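A minimal /etc/ypserv.securenets might look like the following sketch, permitting localhost plus one trusted cluster network (the subnet is an assumption taken from the NFS exports example later in this section):

```
# /etc/ypserv.securenets – allow NIS queries only from trusted networks
# netmask        network
255.0.0.0        127.0.0.0
255.255.255.0    192.168.52.0
```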
The master NIS server is also its own client. The corresponding configuration for the NIS client (ypbind) process is set in /etc/yp.conf. The NIS domain name is set in /etc/defaultdomain and usually defined as qlustar. On cluster nodes booting over the network, these settings are all configured automatically by DHCP (see also DHCP).
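As an illustration, the two client-side files could contain entries like these (the server hostname cl-head is an assumption taken from the shell prompt shown later in this section):

```
# /etc/defaultdomain
qlustar

# /etc/yp.conf – bind to the NIS master explicitly
domain qlustar server cl-head
```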
NFS
To ensure a cluster-wide homogeneous directory structure, the head-node provides NFS (Network File System) services to the compute-nodes. The kernel NFS server with protocol version 3 is used for accomplishing this goal. The typical Qlustar directory structure consists of three file-systems that are exported by the head-node via NFS to all other nodes: /srv/apps, /srv/data and /srv/ql-common.
In NFS version 4 (normally not used in Qlustar), one directory serves as the root path for all exported file-systems, and all exported directories must be sub-directories of this path. To achieve compatibility with NFS 4 in Qlustar, the directory /srv serves as this export root (it is exported with the parameter fsid=0, see below).
While /srv/apps and /srv/data are typically separate file-systems on the head-node, the entry /srv/ql-common is a bind mount of the global Qlustar configuration directory /etc/qlustar/common. This mount is created as a result of the following entry in /etc/fstab:
/etc/qlustar/common /srv/ql-common none bind 0 0
File-systems to be shared via NFS need an entry in the file /etc/exports. Execute man exports for a detailed explanation of the corresponding syntax. For security reasons, access to shared file-systems should be limited to trusted networks. The directory /srv is exported with a special parameter fsid. An export entry with the parameter no_root_squash for a host will enable full write access for the root user on that host (without that parameter, root is mapped to the user nobody on NFS mounts). In the following example, root on the host login-c (default name of the FrontEnd node) will have full write access to all exported file-systems:
/srv login-c(async,rw,no_subtree_check,fsid=0,insecure,no_root_squash)\
  192.168.52.0/24(async,rw,no_subtree_check,fsid=0,insecure)
/srv/data login-c(async,rw,no_subtree_check,insecure,nohide,no_root_squash)\
  192.168.52.0/24(async,rw,no_subtree_check,insecure,nohide)
/srv/apps login-c(async,rw,no_subtree_check,insecure,nohide,no_root_squash)\
  192.168.52.0/24(async,rw,no_subtree_check,insecure,nohide)
/srv/ql-common login-c(async,rw,subtree_check,insecure,nohide,no_root_squash)\
  192.168.52.0/24(async,ro,subtree_check,insecure,nohide)
After changing the exports information, the NFS server needs to reload its configuration to activate it. This is achieved by executing the command
0 root@cl-head ~ # service nfs-kernel-server reload
SSH - Secure Shell
Remote shell access from the LAN to the head-node and from the head-node to the compute-nodes is only allowed using the OpenSSH secure shell (ssh). A correct configuration of the ssh daemon is of crucial importance for the security of the whole cluster. Most important is to allow only ssh protocol version 2 connections.
The Qlustar default configuration allows for AgentForwarding and X11Forwarding. This way, X11 programs can be executed without any further hassle from any compute-node, with the X display appearing on a user's workstation in the LAN. The relevant ssh configuration files are:
/etc/ssh/sshd_config
/etc/ssh/ssh_config
/etc/ssh/sshd_config.d/qlustar.config
To allow password-less root access from the head to the other cluster nodes, the root ssh public key that is generated on the head-node is automatically put into the QluMan database during installation. Its content is then copied into the file /root/.ssh/authorized_keys on any netboot node during its boot process.
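The key material involved can be illustrated with a throwaway key pair (a sketch only; on a real head-node the installer generates root's key and QluMan distributes the public part):

```shell
# Create a temporary directory and an ed25519 key pair without a passphrase
tmpdir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/id_ed25519"
# The public half is the line that would be appended to
# /root/.ssh/authorized_keys on a netboot node
cat "$tmpdir/id_ed25519.pub"
```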
One last step is required in order to prevent interactive questions when using ssh logins between nodes: a file named ssh_known_hosts containing all host keys in the cluster must exist. It is automatically generated by QluMan, placed into the directory /etc/qlustar/common/image-files/ssh and linked to /etc/ssh/ssh_known_hosts on netboot nodes.
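The file follows the standard ssh known_hosts format, one line per host; the hostname, address and key data below are placeholders:

```
# /etc/ssh/ssh_known_hosts – format sketch (all values are placeholders)
node-01,192.168.52.101 ssh-ed25519 AAAA<base64-encoded-key>
```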
- Host-based authentication
  To enable host-based authentication, the parameter HostbasedAuthentication must be set to yes in /etc/ssh/sshd_config on the clients. This is the default in Qlustar. Furthermore, the file /etc/ssh/shosts.equiv must contain the hostnames of all hosts from where login should be allowed. This file is also automatically generated by QluMan. Note that this mechanism works for ordinary users but not for the root user.
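Putting the pieces together, a minimal sketch of the two files involved (hostnames other than login-c are assumptions for illustration):

```
# /etc/ssh/sshd_config – enable host-based authentication
HostbasedAuthentication yes

# /etc/ssh/shosts.equiv – hosts allowed to log in, one per line
login-c
cl-head
```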
Mail server - Postfix
Mostly for the purpose of sending alert and other informational messages, the mail server postfix is set up on the head-node. Typically it is configured to simply transfer all mail to a central mail relay, whose name can be entered during installation. The main postfix configuration file is /etc/postfix/main.cf. Mail aliases can be added in /etc/aliases (initial aliases were configured during installation). A change in this file requires execution of the command postalias /etc/aliases to activate the changes. Have a look at Transport Agent to find out how to configure mail on the compute-nodes.
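For illustration, a relay setup in /etc/postfix/main.cf and a root alias might look like this (the relay hostname and mail address are placeholders):

```
# /etc/postfix/main.cf – forward all outgoing mail to a central relay
relayhost = mailrelay.example.com

# /etc/aliases – send root's mail to a real mailbox
# (run postalias /etc/aliases after editing)
root: admin@example.com
```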