Charmed HPC supports automatic integration with shared filesystems through the
filesystem-client charm. This how-to guide shows you how to
deploy filesystem-client to integrate with externally managed shared filesystems.
Note
If you plan to use Terraform to handle your deployment, we also provide Terraform modules to set up a
cloud-managed NFS server in the charmed-hpc-terraform repository, along with
examples on how to deploy the modules.
External servers that provide a shared filesystem cannot be integrated directly. Instead,
you can use a proxy charm to expose
the required information to applications managed by Juju.
To integrate with an external NFS server, you will require:
An externally managed NFS server.
The server’s hostname.
The exported path.
(Optional) The server's port.
Each public cloud has its own procedure for deploying a public NFS server. Provided here are links to
the setup procedures for a few well-known public clouds.
You can verify that the NFS server is exporting the desired directories
by running showmount -e localhost while inside the LXD virtual machine.
Grab the network address of the LXD virtual machine and exit the current shell session:
hostname -I
exit
After gathering all the required information, you can deploy the nfs-server-proxy charm to
expose the externally managed server inside a Juju model.
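For example, a deployment could look like the following sketch. The configuration option names (hostname, path, port) and the channel are assumptions based on the requirements listed above, not confirmed values from the charm's documentation:
# Deploy the proxy and point it at the externally managed NFS server
juju deploy nfs-server-proxy \
  --channel latest/edge \
  --config hostname=<server hostname> \
  --config path=<exported path> \
  --config port=<port>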
Inside the LXD virtual machine, set up MicroCeph to export a Ceph filesystem.
# Setup environment
ln -s /bin/true /usr/local/bin/udevadm
apt-get -y update
apt-get -y install ceph-common jq
snap install microceph
# Bootstrap MicroCeph
microceph cluster bootstrap
# Add a storage disk to MicroCeph
microceph disk add loop,2G,3
We will create two new storage pools, then
assign them to a new filesystem named cephfs.
# Create a new data pool for our filesystem
microceph.ceph osd pool create cephfs_data
# and a metadata pool for the same filesystem
microceph.ceph osd pool create cephfs_metadata
# Create a new filesystem that uses the two created pools
microceph.ceph fs new cephfs cephfs_metadata cephfs_data
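You can optionally confirm that the new filesystem exists before continuing; this verification step is an extra suggestion rather than part of the original procedure:
# List the filesystems known to the cluster
microceph.ceph fs ls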
We will also use fs-client as the username for the
clients, and expose the whole directory tree (/) in read-write mode (rw).
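The following sketch shows how the client could be authorized and how the values referenced below could be captured into environment variables; the extraction commands here are illustrative assumptions:
# Authorize the fs-client user with read-write access to the filesystem root
microceph.ceph fs authorize cephfs client.fs-client / rw
# Capture the values required by the proxy charm (illustrative)
export HOST=$(hostname -I | awk '{print $1}')
export FSID=$(microceph.ceph fsid)
export CLIENT_KEY=$(microceph.ceph auth print-key client.fs-client)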
Print the required information for reference and then exit the current shell session:
echo $HOST
echo $FSID
echo $CLIENT_KEY
exit
Having collected all the required information, you can deploy the cephfs-server-proxy charm to
expose the externally managed Ceph filesystem inside a Juju model.
juju deploy cephfs-server-proxy \
  --channel latest/edge \
  --config fsid=<value of $FSID> \
  --config sharepoint=cephfs:/ \
  --config monitor-hosts="<value of $HOST>" \
  --config auth-info=fs-client:<value of $CLIENT_KEY>
The mountpoint configuration option specifies the path where the filesystem will be mounted.
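For example, a minimal deployment could look like the following sketch; the /data mount point and the channel are assumed values, not taken from the charm's documentation:
# Deploy the client and choose where the shared filesystem will be mounted
juju deploy filesystem-client \
  --channel latest/edge \
  --config mountpoint=/data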
filesystem-client is a subordinate charm
that automatically mounts any shared filesystem for the application it is related to.
In this case, we will relate it to the slurmd application so that all the
compute nodes in the cluster share the same storage:
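The integrations could be set up as in the sketch below, assuming Juju can resolve the relation endpoints automatically (the exact endpoint names are not confirmed here):
# Mount the shared filesystem on every compute node managed by slurmd
juju integrate filesystem-client slurmd
# Connect the client to the proxy exposing the external filesystem
juju integrate filesystem-client cephfs-server-proxy   # or nfs-server-proxy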