How to deploy GLAuth and SSSD¶
This how-to guide shows you how to set up an IAM stack for your Charmed HPC cluster by deploying GLAuth as an LDAP server, and SSSD as the service for enrolling your cluster’s machines with GLAuth. The deployment, management, and operations of GLAuth and SSSD are controlled by the GLAuth and SSSD charms, respectively.
Hint
If you’re unfamiliar with operating GLAuth, see the GLAuth quick start guide for a high-level introduction to GLAuth. If you’re unfamiliar with integrating SSSD with an LDAP server, see the SSSD with LDAP how-to guide for a high-level introduction to integrating SSSD with an LDAP server such as GLAuth.
Prerequisites¶
An active Slurm deployment in your charmed-hpc machine cloud.
An initialized charmed-hpc-k8s Kubernetes cloud.
The Juju CLI client installed on your machine.
Deploy GLAuth and SSSD¶
You have two options for deploying GLAuth and SSSD:
Using the Juju CLI client.
Using the Juju Terraform client.
If you want to use Terraform to deploy GLAuth and SSSD, see the Manage terraform-provider-juju how-to guide for additional requirements.
Deploy GLAuth¶
First, use juju add-model to create the iam model in your charmed-hpc-k8s Kubernetes cloud:
juju add-model iam charmed-hpc-k8s
Now, with the iam model created, use juju deploy to deploy GLAuth with Postgres as GLAuth’s database back-end:
juju deploy glauth-k8s --channel "edge" \
--config anonymousdse_enabled=true \
--trust
juju deploy postgresql-k8s --channel "14/stable" --trust
juju deploy self-signed-certificates
juju deploy traefik-k8s --trust
Important
GLAuth must have the anonymousdse_enabled configuration option set to true so that SSSD can anonymously inspect the GLAuth server’s root directory server agent service entry (RootDSE) before binding to the GLAuth server. If anonymousdse_enabled is not set to true, SSSD will fail to bind to the GLAuth server, as GLAuth will disallow unauthenticated clients from inspecting its RootDSE.
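If you want to confirm that the option took effect after deployment, one way (a sketch, assuming the glauth-k8s application name used above) is to query the application configuration with juju config:

```shell
# Print the current value of anonymousdse_enabled on the glauth-k8s
# application; it should report "true" so that SSSD can bind later.
juju config glauth-k8s anonymousdse_enabled
```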
Now run the following set of commands to integrate GLAuth and the other deployed applications together with juju integrate:
juju integrate glauth-k8s postgresql-k8s
juju integrate glauth-k8s self-signed-certificates
juju integrate glauth-k8s:ingress traefik-k8s
After a few minutes, your GLAuth deployment will become active. The output of juju status will be similar to the following:
~$
juju status
Model Controller Cloud/Region Version SLA Timestamp
iam charmed-hpc-controller charmed-hpc-k8s/default 3.6.4 unsupported 14:24:50-04:00
App Version Status Scale Charm Channel Rev Address Exposed Message
glauth-k8s active 1 glauth-k8s latest/edge 52 10.152.183.159 no
postgresql-k8s 14.15 active 1 postgresql-k8s 14/stable 495 10.152.183.236 no
self-signed-certificates active 1 self-signed-certificates latest/stable 264 10.152.183.57 no
traefik-k8s 2.11.0 active 1 traefik-k8s latest/stable 232 10.152.183.122 no Serving at 10.175.90.230
Unit Workload Agent Address Ports Message
glauth-k8s/0* active idle 10.1.0.165
postgresql-k8s/0* active idle 10.1.0.45 Primary
self-signed-certificates/0* active idle 10.1.0.128
traefik-k8s/0* active idle 10.1.0.73 Serving at 10.175.90.230
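As an optional check, you can anonymously read the RootDSE from any machine that can reach the ingress address shown in juju status, mirroring what SSSD does before binding. This is a sketch that assumes the ldap-utils package is installed and that GLAuth is reachable on its default LDAP port 3893; substitute your own ingress address and port:

```shell
# Query the RootDSE with an anonymous (unauthenticated) simple bind.
# Replace 10.175.90.230 with the ingress address from `juju status`;
# the port may differ depending on your Traefik routing.
ldapsearch -x -H ldap://10.175.90.230:3893 -s base -b "" "(objectClass=*)" +
```

If this query fails, SSSD will also be unable to bind to GLAuth.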
With GLAuth successfully deployed, you’ll now need to deploy SSSD in your slurm model to enroll your cluster’s machines with GLAuth.
First, create the Terraform deployment plan file glauth/main.tf using the following set of commands:
mkdir glauth
touch glauth/main.tf
Now, editing glauth/main.tf, configure your plan to use the Juju Terraform provider:
terraform {
required_providers {
juju = {
source = "juju/juju"
version = ">= 0.17.0"
}
}
}
Now, using the juju_model resource, direct Juju to create the iam model on your charmed-hpc-k8s Kubernetes cloud:
resource "juju_model" "iam" {
name = "iam"
credential = "charmed-hpc-k8s"
cloud {
name = "charmed-hpc-k8s"
}
}
Now declare the following modules in your deployment plan to load in GLAuth. These Terraform modules will direct Juju to deploy GLAuth with Postgres as GLAuth’s database back-end:
module "glauth-k8s" {
source = "git::https://github.com/canonical/glauth-k8s-operator//terraform"
model_name = juju_model.iam.name
config = {
anonymousdse_enabled = true
}
channel = "latest/edge"
}
module "postgresql-k8s" {
source = "git::https://github.com/canonical/postgresql-k8s-operator//terraform"
juju_model_name = juju_model.iam.name
}
module "self-signed-certificates" {
source = "git::https://github.com/canonical/self-signed-certificates-operator//terraform"
model = juju_model.iam.name
channel = "latest/stable"
base = "[email protected]"
}
module "traefik-k8s" {
source = "git::https://github.com/canonical/traefik-k8s-operator//terraform"
model_name = juju_model.iam.name
app_name = "traefik-k8s"
}
Important
GLAuth must have the anonymousdse_enabled configuration option set to true so that SSSD can anonymously inspect the GLAuth server’s root directory server agent service entry (RootDSE) before binding to the GLAuth server. If anonymousdse_enabled is not set to true, SSSD will fail to bind to the GLAuth server, as GLAuth will disallow unauthenticated clients from inspecting its RootDSE.
Now, using the juju_integration resource, direct Juju to integrate GLAuth and the other deployed applications together:
resource "juju_integration" "glauth-k8s-to-postgresql-k8s" {
model = juju_model.iam.name
application {
name = module.glauth-k8s.app_name
}
application {
name = module.postgresql-k8s.application_name
}
}
resource "juju_integration" "glauth-k8s-to-self-signed-certificates" {
model = juju_model.iam.name
application {
name = module.glauth-k8s.app_name
}
application {
name = module.self-signed-certificates.app_name
}
}
resource "juju_integration" "glauth-k8s-to-traefik-k8s" {
model = juju_model.iam.name
application {
name = module.glauth-k8s.app_name
endpoint = module.glauth-k8s.requires.ingress
}
application {
name = module.traefik-k8s.app_name
endpoint = module.traefik-k8s.endpoints.ingress_per_unit
}
}
With all the juju_model and juju_integration resources declared, and all the charm modules loaded, you are now ready to deploy GLAuth using your glauth/main.tf deployment plan. You can expand the dropdown below to see the full plan:
Full glauth/main.tf deployment plan
terraform {
  required_providers {
    juju = {
      source  = "juju/juju"
      version = ">= 0.17.0"
    }
  }
}

resource "juju_model" "iam" {
  name       = "iam"
  credential = "charmed-hpc-k8s"

  cloud {
    name = "charmed-hpc-k8s"
  }
}

module "glauth-k8s" {
  source     = "git::https://github.com/canonical/glauth-k8s-operator//terraform"
  model_name = juju_model.iam.name
  config = {
    anonymousdse_enabled = true
  }
  channel = "latest/edge"
}

module "postgresql-k8s" {
  source          = "git::https://github.com/canonical/postgresql-k8s-operator//terraform"
  juju_model_name = juju_model.iam.name
}

module "self-signed-certificates" {
  source  = "git::https://github.com/canonical/self-signed-certificates-operator//terraform"
  model   = juju_model.iam.name
  channel = "latest/stable"
  base    = "[email protected]"
}

module "traefik-k8s" {
  source     = "git::https://github.com/canonical/traefik-k8s-operator//terraform"
  model_name = juju_model.iam.name
  app_name   = "traefik-k8s"
}

resource "juju_integration" "glauth-k8s-to-postgresql-k8s" {
  model = juju_model.iam.name

  application {
    name = module.glauth-k8s.app_name
  }

  application {
    name = module.postgresql-k8s.application_name
  }
}

resource "juju_integration" "glauth-k8s-to-self-signed-certificates" {
  model = juju_model.iam.name

  application {
    name = module.glauth-k8s.app_name
  }

  application {
    name = module.self-signed-certificates.app_name
  }
}

resource "juju_integration" "glauth-k8s-to-traefik-k8s" {
  model = juju_model.iam.name

  application {
    name     = module.glauth-k8s.app_name
    endpoint = module.glauth-k8s.requires.ingress
  }

  application {
    name     = module.traefik-k8s.app_name
    endpoint = module.traefik-k8s.endpoints.ingress_per_unit
  }
}
To deploy GLAuth using your deployment plan, run the following terraform commands:
terraform -chdir=glauth init
terraform -chdir=glauth apply -auto-approve
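You can also preview Terraform’s planned actions at any point without applying them, which is a common way to double-check a deployment plan:

```shell
# Show the actions Terraform would take, without making any changes.
terraform -chdir=glauth plan
```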
After a few minutes, your GLAuth deployment will become active. The output of juju status will be similar to the following:
~$
juju status
Model Controller Cloud/Region Version SLA Timestamp
iam charmed-hpc-controller charmed-hpc-k8s/default 3.6.4 unsupported 14:24:50-04:00
App Version Status Scale Charm Channel Rev Address Exposed Message
glauth-k8s active 1 glauth-k8s latest/edge 52 10.152.183.159 no
postgresql-k8s 14.15 active 1 postgresql-k8s 14/stable 495 10.152.183.236 no
self-signed-certificates active 1 self-signed-certificates latest/stable 264 10.152.183.57 no
traefik-k8s 2.11.0 active 1 traefik-k8s latest/stable 232 10.152.183.122 no Serving at 10.175.90.230
Unit Workload Agent Address Ports Message
glauth-k8s/0* active idle 10.1.0.165
postgresql-k8s/0* active idle 10.1.0.45 Primary
self-signed-certificates/0* active idle 10.1.0.128
traefik-k8s/0* active idle 10.1.0.73 Serving at 10.175.90.230
With GLAuth successfully deployed, you’ll now need to deploy SSSD in your slurm model to enroll your cluster’s machines with GLAuth.
Deploy SSSD¶
First, use juju switch to switch from the iam model in your charmed-hpc-k8s Kubernetes cloud to the slurm model in your charmed-hpc machine cloud:
juju switch slurm
Now use juju deploy to deploy SSSD:
juju deploy sssd --base "[email protected]" --channel "edge"
Now use juju integrate to integrate SSSD with the Slurm services sackd and slurmd:
juju integrate sssd sackd
juju integrate sssd slurmd
After a few minutes, your SSSD deployment will reach waiting status. The output of juju status will be similar to the following:
~$
juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.4 unsupported 16:17:13-04:00
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... waiting 2 sssd latest/edge 6 no Waiting for integrations: [`ldap`]
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 waiting idle 10.175.90.64 Waiting for integrations: [`ldap`]
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* waiting idle 10.175.90.107 Waiting for integrations: [`ldap`]
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
For SSSD to reach active status, you’ll now need to integrate SSSD with GLAuth in your iam model so that SSSD can enroll your machines with GLAuth.
First, create the Terraform deployment plan file sssd/main.tf using the following set of commands:
mkdir sssd
touch sssd/main.tf
Now, editing sssd/main.tf, configure your plan to use the Juju Terraform provider:
terraform {
required_providers {
juju = {
source = "juju/juju"
version = ">= 0.17.0"
}
}
}
Now declare the following external data sources in your deployment plan. These data sources make Terraform aware of your pre-existing slurm model, as well as the sackd and slurmd applications:
data "juju_model" "slurm" {
name = "slurm"
}
data "juju_application" "sackd" {
model = data.juju_model.slurm.name
name = "sackd"
}
data "juju_application" "slurmd" {
model = data.juju_model.slurm.name
name = "slurmd"
}
Now declare the following module in your deployment plan to load in SSSD. This Terraform module will direct Juju to deploy SSSD in your slurm model:
module "sssd" {
source = "git::https://github.com/canonical/sssd-operator//terraform"
model_name = data.juju_model.slurm.name
}
Now, using the juju_integration resource, direct Juju to integrate SSSD with the sackd and slurmd applications:
resource "juju_integration" "sssd-to-sackd" {
model = data.juju_model.slurm.name
application {
name = module.sssd.app_name
}
application {
name = data.juju_application.sackd.name
}
}
resource "juju_integration" "sssd-to-slurmd" {
model = data.juju_model.slurm.name
application {
name = module.sssd.app_name
}
application {
name = data.juju_application.slurmd.name
}
}
With the juju_integration resources declared, and modules and external data sources loaded, you are now ready to deploy SSSD using your sssd/main.tf deployment plan. You can expand the dropdown below to see the full plan:
Full sssd/main.tf deployment plan
terraform {
  required_providers {
    juju = {
      source  = "juju/juju"
      version = ">= 0.17.0"
    }
  }
}

data "juju_model" "slurm" {
  name = "slurm"
}

data "juju_application" "sackd" {
  model = data.juju_model.slurm.name
  name  = "sackd"
}

data "juju_application" "slurmd" {
  model = data.juju_model.slurm.name
  name  = "slurmd"
}

module "sssd" {
  source     = "git::https://github.com/canonical/sssd-operator//terraform"
  model_name = data.juju_model.slurm.name
}

resource "juju_integration" "sssd-to-sackd" {
  model = data.juju_model.slurm.name

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.sackd.name
  }
}

resource "juju_integration" "sssd-to-slurmd" {
  model = data.juju_model.slurm.name

  application {
    name = module.sssd.app_name
  }

  application {
    name = data.juju_application.slurmd.name
  }
}
To deploy SSSD using your deployment plan, run the following terraform commands:
terraform -chdir=sssd init
terraform -chdir=sssd apply -auto-approve
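As with the GLAuth plan, you can preview the planned actions without applying them:

```shell
# Show the actions Terraform would take, without making any changes.
terraform -chdir=sssd plan
```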
After a few minutes, your SSSD deployment will reach waiting status. The output of juju status will be similar to the following:
~$
juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.4 unsupported 16:17:13-04:00
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... waiting 2 sssd latest/edge 6 no Waiting for integrations: [`ldap`]
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 waiting idle 10.175.90.64 Waiting for integrations: [`ldap`]
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* waiting idle 10.175.90.107 Waiting for integrations: [`ldap`]
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
For SSSD to reach active status, you’ll now need to integrate SSSD with GLAuth in your iam model so that SSSD can enroll your machines with GLAuth.
Connect SSSD to GLAuth¶
First, create offers for GLAuth in your iam model using juju offer:
juju offer iam.glauth-k8s:ldap ldap
juju offer iam.glauth-k8s:send-ca-cert ldap-certs
After creating the offers in your iam model, use juju consume to consume the offers in your slurm model:
juju consume iam.ldap
juju consume iam.ldap-certs
Now use juju integrate to connect SSSD to the GLAuth endpoints:
juju integrate ldap sssd
juju integrate ldap-certs sssd
After a few minutes, SSSD will become active. The output of juju status will be similar to the following:
~$
juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.4 unsupported 16:17:13-04:00
SAAS Status Store URL
ldap active local admin/iam.ldap
ldap-certs active local admin/iam.ldap-certs
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... active 2 sssd latest/edge 6 no
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 active idle 10.175.90.64
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* active idle 10.175.90.107
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
You have successfully deployed and integrated an IAM stack for your Charmed HPC cluster!
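To spot-check that enrolled machines can now resolve directory identities through SSSD, you can run an NSS lookup on one of the cluster machines with juju exec. This is a sketch; alice is a placeholder user, so substitute a user that actually exists in your GLAuth directory:

```shell
# Resolve a directory user through SSSD on a slurmd machine.
# `alice` is a hypothetical example user; use one defined in GLAuth.
juju exec --unit slurmd/0 -- getent passwd alice
juju exec --unit slurmd/0 -- id alice
```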
First, create the Terraform deployment plan file connect-sssd-to-glauth/main.tf using the following set of commands:
mkdir connect-sssd-to-glauth
touch connect-sssd-to-glauth/main.tf
Now, editing connect-sssd-to-glauth/main.tf, configure your plan to use the Juju Terraform provider:
terraform {
required_providers {
juju = {
source = "juju/juju"
version = ">= 0.17.0"
}
}
}
Now declare the following external data sources in your plan. These data sources make Terraform aware of your pre-existing iam and slurm models, as well as the glauth-k8s and sssd applications:
data "juju_model" "iam" {
name = "iam"
}
data "juju_model" "slurm" {
name = "slurm"
}
data "juju_application" "glauth-k8s" {
model = data.juju_model.iam.name
name = "glauth-k8s"
}
data "juju_application" "sssd" {
model = data.juju_model.slurm.name
name = "sssd"
}
Now, using the juju_offer resource, direct Juju to create offers for GLAuth in the iam model:
resource "juju_offer" "ldap" {
model = data.juju_model.iam.name
application_name = data.juju_application.glauth-k8s.name
endpoint = "ldap"
name = "ldap"
}
resource "juju_offer" "ldap-certs" {
model = data.juju_model.iam.name
application_name = data.juju_application.glauth-k8s.name
endpoint = "send-ca-cert"
name = "ldap-certs"
}
After declaring the offers, use the juju_integration resource to direct Juju to consume and integrate SSSD with the GLAuth offers in the slurm model:
resource "juju_integration" "sssd-to-ldap" {
model = data.juju_model.slurm.name
application {
name = data.juju_application.sssd.name
}
application {
offer_url = juju_offer.ldap.url
}
}
resource "juju_integration" "sssd-to-ldap-certs" {
model = data.juju_model.slurm.name
application {
name = data.juju_application.sssd.name
}
application {
offer_url = juju_offer.ldap-certs.url
}
}
With the juju_offer and juju_integration resources declared, and external data sources loaded, you are now ready to connect SSSD to GLAuth using your connect-sssd-to-glauth/main.tf plan. You can expand the dropdown below to see the full plan:
Full connect-sssd-to-glauth/main.tf plan
terraform {
  required_providers {
    juju = {
      source  = "juju/juju"
      version = ">= 0.17.0"
    }
  }
}

data "juju_model" "iam" {
  name = "iam"
}

data "juju_model" "slurm" {
  name = "slurm"
}

data "juju_application" "glauth-k8s" {
  model = data.juju_model.iam.name
  name  = "glauth-k8s"
}

data "juju_application" "sssd" {
  model = data.juju_model.slurm.name
  name  = "sssd"
}

resource "juju_offer" "ldap" {
  model            = data.juju_model.iam.name
  application_name = data.juju_application.glauth-k8s.name
  endpoint         = "ldap"
  name             = "ldap"
}

resource "juju_offer" "ldap-certs" {
  model            = data.juju_model.iam.name
  application_name = data.juju_application.glauth-k8s.name
  endpoint         = "send-ca-cert"
  name             = "ldap-certs"
}

resource "juju_integration" "sssd-to-ldap" {
  model = data.juju_model.slurm.name

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.ldap.url
  }
}

resource "juju_integration" "sssd-to-ldap-certs" {
  model = data.juju_model.slurm.name

  application {
    name = data.juju_application.sssd.name
  }

  application {
    offer_url = juju_offer.ldap-certs.url
  }
}
To use your plan to connect SSSD to GLAuth, run the following terraform commands:
terraform -chdir=connect-sssd-to-glauth init
terraform -chdir=connect-sssd-to-glauth apply -auto-approve
After a few minutes, SSSD will become active. The output of juju status will be similar to the following:
~$
juju status
Model Controller Cloud/Region Version SLA Timestamp
slurm charmed-hpc-controller charmed-hpc/default 3.6.4 unsupported 16:17:13-04:00
SAAS Status Store URL
ldap active local admin/iam.ldap
ldap-certs active local admin/iam.ldap-certs
App Version Status Scale Charm Channel Rev Exposed Message
mysql 8.0.39-0ubun... active 1 mysql 8.0/stable 313 no
sackd 23.11.4-1.2u... active 1 sackd latest/edge 13 no
slurmctld 23.11.4-1.2u... active 1 slurmctld latest/edge 95 no
slurmd 23.11.4-1.2u... active 1 slurmd latest/edge 116 no
slurmdbd 23.11.4-1.2u... active 1 slurmdbd latest/edge 87 no
slurmrestd 23.11.4-1.2u... active 1 slurmrestd latest/edge 89 no
sssd 2.9.4-1.1ubu... active 2 sssd latest/edge 6 no
Unit Workload Agent Machine Public address Ports Message
mysql/0* active idle 3 10.175.90.111 3306,33060/tcp Primary
sackd/0* active idle 0 10.175.90.64
sssd/1 active idle 10.175.90.64
slurmctld/0* active idle 4 10.175.90.100
slurmd/0* active idle 5 10.175.90.107
sssd/0* active idle 10.175.90.107
slurmdbd/0* active idle 2 10.175.90.105
slurmrestd/0* active idle 1 10.175.90.215
Machine State Address Inst id Base AZ Message
0 started 10.175.90.64 juju-0f356d-0 ubuntu@24.04 Running
1 started 10.175.90.215 juju-0f356d-1 ubuntu@24.04 Running
2 started 10.175.90.105 juju-0f356d-2 ubuntu@24.04 Running
3 started 10.175.90.111 juju-0f356d-3 ubuntu@22.04 Running
4 started 10.175.90.100 juju-0f356d-4 ubuntu@24.04 Running
5 started 10.175.90.107 juju-0f356d-5 ubuntu@24.04 Running
You have successfully deployed and integrated an IAM stack for your Charmed HPC cluster!
Next steps¶
You can now use GLAuth and SSSD as the IAM stack to manage users and groups on your Charmed HPC cluster. See the Access Postgres tutorial for how to access your deployed Postgres database, and GLAuth’s documentation for how to manage users and groups on your cluster using SQL queries.
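As a starting point for inspecting directory contents from the command line, here is a hedged example search. It assumes ldap-utils is installed, a placeholder ingress address, GLAuth’s default LDAP port 3893, and a base DN of dc=glauth,dc=com; all of these may differ in your deployment:

```shell
# List POSIX accounts in the directory; adjust -H for your ingress
# address and port, and -b for your configured base DN.
ldapsearch -x -H ldap://10.175.90.230:3893 -b "dc=glauth,dc=com" "(objectClass=posixAccount)"
```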