kubernetes adapter¶
Adapter to manage resources on Kubernetes.
How to use the adapter¶
This section illustrates how the kubernetes Inmanta module is used in this project to deploy the open5gs mobile core on Kubernetes.
Module Imports¶
Every model file that creates Kubernetes resources imports two modules:
import kubernetes::infra # Cluster, KubeConfig
import kubernetes::resources # Namespace, Secret, ConfigMap, Deployment,
# StatefulSet, Service, ServiceAccount,
# ClusterRole, ClusterRoleBinding, Rule, Subject
1. Connecting to a Cluster¶
Before creating any resource, declare the target cluster and its kubeconfig.
cluster = kubernetes::infra::Cluster(
name="minikube",
config=kubernetes::infra::KubeConfig(
config="", # raw kubeconfig YAML, or empty string for in-cluster
context="minikube", # kubectl context name
)
)
Every resource below receives cluster=cluster to bind it to this cluster.
2. Namespace¶
namespace = kubernetes::resources::Namespace(
name="open5gs-c001",
labels={"open5gs": "c001"},
cluster=cluster,
)
All namespaced resources (ConfigMap, Secret, Deployment, …) reference this namespace. Resources are indexed by namespace so that only one instance of a given workload can exist per namespace.
3. Secret¶
Used here as an image pull secret for a private container registry:
pull_secret = kubernetes::resources::Secret(
name="image-pull-secret",
namespace=namespace,
cluster=cluster,
type="kubernetes.io/dockerconfigjson",
data={
".dockerconfigjson": "<base64-encoded-docker-config>",
},
)
The secret is passed down to every pod template via imagePullSecrets.
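The `.dockerconfigjson` value is a base64-encoded Docker client config. A minimal Python sketch of how such a value could be produced, assuming placeholder credentials (the `docker_config_json` helper and the token values are illustrative, not part of the module):

```python
import base64
import json

def docker_config_json(registry: str, username: str, password: str) -> str:
    """Build the base64-encoded value for a kubernetes.io/dockerconfigjson secret."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    config = {"auths": {registry: {"username": username,
                                   "password": password,
                                   "auth": auth}}}
    return base64.b64encode(json.dumps(config).encode()).decode()

# Placeholder credentials for illustration only
encoded = docker_config_json("code.inmanta.com:4567", "deploy-token", "s3cret")
```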
4. ConfigMap¶
ConfigMaps carry configuration files that are volume-mounted into pods.
config_map = kubernetes::resources::ConfigMap(
name="open5gs-nrf-config-map",
namespace=self.namespace,
cluster=self.cluster,
labels={"open5gs": "nrf"},
data={
"nrf.yaml": std::template("open5gs/mc_5g/nrf.yaml.j2"),
},
purged=self.purged,
requires=self.requires,
provides=[self.provides, deployment],
)
Jinja2 templates or other plugins can be used to generate the content of configuration files. The example above renders a YAML file from a template.
5. Deployment¶
Stateless network functions (NRF, AUSF, PCF, NSSF, UDM, UDR) use a Deployment.
The pod spec is a dict. In the example below it is rendered from a Jinja2 template and parsed into a dict with the yaml_to_dict plugin.
# Set template variables in local scope
name_tmpl = "nrf"
namespace_tmpl = self.namespace.name
image_pull_secret_name_tmpl = self.image_pull_secret.name
config_map_name_tmpl = config_map.name
config_map_hash_tmpl = hash_dict(config_map.data) # for change detection
deployment_dict = yaml_to_dict(std::template("open5gs/mc_5g/nrf-deployment.yaml.j2"))
deployment = kubernetes::resources::Deployment(
name="open5gs-nrf-deployment",
namespace=self.namespace,
cluster=self.cluster,
labels={"open5gs": "nrf"},
spec=deployment_dict["spec"], # only the spec section
purged=self.purged,
requires=[
config_map, # ConfigMap must exist first
self.requires,
self.image_pull_secret,
],
provides=self.provides,
)
The corresponding Jinja2 template:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ name_tmpl }}
namespace: {{ namespace_tmpl }}
spec:
replicas: 1
selector:
matchLabels:
open5gs: nrf
template:
metadata:
annotations:
"cni.projectcalico.org/ipAddrs": "[\"{{ ip_address_tmpl }}\"]"
configHash: {{ config_map_hash_tmpl }}
labels:
open5gs: nrf
spec:
containers:
- name: nrf
image: code.inmanta.com:4567/demo/open5gs-k8s/open5gs:2.2.2
command: ["open5gs-nrfd", "-c", "/open5gs/config-map/nrf.yaml"]
volumeMounts:
- name: open5gs-nrf-config
mountPath: /open5gs/config-map/nrf.yaml
subPath: nrf.yaml
imagePullSecrets:
- name: {{ image_pull_secret_name_tmpl }}
volumes:
- name: open5gs-nrf-config
configMap:
name: {{ config_map_name_tmpl }}
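The yaml_to_dict plugin itself is not shown in this section. A minimal sketch of what such a plugin does, assuming PyYAML is available in the plugin environment (the Inmanta @plugin decorator is omitted so the sketch is self-contained):

```python
import yaml  # PyYAML; assumed available where plugins run

def yaml_to_dict(yaml_text: str) -> dict:
    """Parse rendered template output into a plain dict (sketch of the plugin)."""
    return yaml.safe_load(yaml_text)

manifest = yaml_to_dict("""
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nrf
spec:
  replicas: 1
""")
spec = manifest["spec"]  # only the spec section is passed to the Deployment entity
```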
6. StatefulSet¶
Stateful workloads (AMF, SMF, UPF, WebUI, MongoDB) use a StatefulSet.
The pattern is identical to Deployment: render from a Jinja2 template, extract the spec:
stateful_set_dict = yaml_to_dict(std::template("open5gs/mc_5g/amf-stateful-set.yaml.j2"))
stateful_set = kubernetes::resources::StatefulSet(
name="open5gs-amf-stateful-set",
namespace=self.namespace,
cluster=self.cluster,
labels={"open5gs": "amf"},
spec=stateful_set_dict["spec"],
purged=self.purged,
requires=[
service, # Service must exist before the StatefulSet
config_map,
self.requires,
self.image_pull_secret,
],
provides=self.provides,
)
7. Service¶
ClusterIP with multiple ports¶
service = kubernetes::resources::Service(
name="open5gs-smf-svc-pool",
namespace=self.namespace,
cluster=self.cluster,
labels={"open5gs": "smf"},
spec={
"type": "ClusterIP",
"ports": [
{"name": "gtpc", "port": 2123, "protocol": "UDP"},
{"name": "gtpu", "port": 2152, "protocol": "UDP"},
{"name": "gx", "port": 3868, "protocol": "TCP"},
],
"selector": {"open5gs": "smf"},
},
purged=self.purged,
requires=self.requires,
provides=[self.provides, stateful_set],
)
NodePort (external exposure)¶
service = kubernetes::resources::Service(
name="open5gs-amf-svc-pool",
namespace=self.namespace,
cluster=self.cluster,
labels={"open5gs": "amf"},
spec={
"type": "NodePort",
"ports": [{
"name": "n2",
"port": 38412,
"protocol": "SCTP",
"nodePort": self.node_port, # dynamic node port from model
}],
"selector": {"open5gs": "amf"},
},
purged=self.purged,
requires=self.requires,
provides=[stateful_set, self.provides],
)
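Note that 38412 (the SCTP N2 port) is the service port; the nodePort assigned from the model must fall within the cluster's service node port range, which defaults to 30000-32767. A small hypothetical validator (the `validate_node_port` helper is illustrative, not part of the module):

```python
def validate_node_port(port: int, lo: int = 30000, hi: int = 32767) -> int:
    """Reject node ports outside the default service-node-port-range."""
    if not lo <= port <= hi:
        raise ValueError(f"nodePort {port} outside allowed range {lo}-{hi}")
    return port

validate_node_port(31412)  # within the default range
```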
Headless Service (for StatefulSet DNS)¶
Setting clusterIp: "None" creates a headless service, used here by MongoDB:
service = kubernetes::resources::Service(
name="open5gs-db-svc",
namespace=self.namespace,
cluster=self.cluster,
labels={"name": "mongo"},
spec={
"ports": [{"port": 27017, "targetPort": 27017}],
"clusterIp": "None",
"selector": {"open5gs": "db"},
},
purged=self.purged,
requires=self.requires,
provides=[stateful_set, self.provides],
)
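With a headless service, each StatefulSet pod gets a stable per-pod DNS record of the form `<pod>.<service>.<namespace>.svc.<cluster-domain>`, which is why MongoDB uses one. A small sketch of that naming scheme (the pod name below is illustrative; the DNS format is standard Kubernetes behavior):

```python
def stateful_pod_dns(pod: str, service: str, namespace: str,
                     cluster_domain: str = "cluster.local") -> str:
    """Stable per-pod DNS name provided by a headless service."""
    return f"{pod}.{service}.{namespace}.svc.{cluster_domain}"

# Hypothetical first MongoDB replica behind the headless service above
stateful_pod_dns("open5gs-db-stateful-set-0", "open5gs-db-svc", "open5gs-c001")
```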
8. ServiceAccount, ClusterRole, and ClusterRoleBinding¶
MongoDB needs to read pod/service/endpoint resources from the API server. This requires a ServiceAccount, a ClusterRole with the needed permissions, and a ClusterRoleBinding that ties them together.
service_account = kubernetes::resources::ServiceAccount(
name="db",
namespace=self.namespace,
cluster=self.cluster,
purged=self.purged,
requires=self.requires,
provides=[read_role_binding, self.provides],
)
read_role = kubernetes::resources::ClusterRole(
name="pod-service-endpoint-reader",
rules=[
kubernetes::resources::Rule(
api_groups=[""],
resources=["pods", "services", "endpoints"],
verbs=["get", "list", "watch"],
)
],
cluster=self.cluster,
purged=self.purged,
requires=self.requires,
provides=self.provides,
)
read_role_binding = kubernetes::resources::ClusterRoleBinding(
name="system:serviceaccount:open5g:db",
role_ref=read_role,
subjects=[
kubernetes::resources::Subject(
kind="ServiceAccount",
name=service_account.name,
namespace=service_account.namespace.name,
)
],
cluster=self.cluster,
purged=self.purged,
requires=[service_account, self.requires],
provides=[stateful_set, self.provides],
)
9. Dependency Ordering¶
Resources are ordered using requires and provides.
This ensures that Inmanta deploys resources in the correct sequence. For the open5gs application this is:
- ConfigMap before Deployment / StatefulSet
- Service before StatefulSet
- ServiceAccount before ClusterRoleBinding
- NRF (network repository) before all other 5G network functions
- MongoDB before PCF, UDR, and WebUI
# ConfigMap must be created before the Deployment
config_map = kubernetes::resources::ConfigMap(
...
provides=[deployment], # ConfigMap is a prerequisite for the deployment
)
deployment = kubernetes::resources::Deployment(
...
requires=[config_map], # explicitly wait for ConfigMap
)
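The requires/provides relations form a directed acyclic graph, and the deploy order listed above is a topological order of that graph. A conceptual Python sketch using the stdlib graphlib (this illustrates the ordering semantics only, not Inmanta internals; the resource names are illustrative):

```python
from graphlib import TopologicalSorter

# requires-edges from this section: resource -> set of its prerequisites
requires = {
    "deployment":           {"config_map", "image_pull_secret"},
    "stateful_set":         {"service", "config_map", "image_pull_secret"},
    "cluster_role_binding": {"service_account", "cluster_role"},
    "config_map":           {"namespace"},
    "service":              {"namespace"},
    "service_account":      {"namespace"},
}

# static_order() yields every prerequisite before its dependents
order = list(TopologicalSorter(requires).static_order())
assert order.index("config_map") < order.index("deployment")
```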
10. Config Change Detection (Config Hash Annotation)¶
Kubernetes does not automatically restart pods when a mounted ConfigMap changes. To trigger a rollout on every config change, the SHA256 hash of the ConfigMap data is set as a pod template annotation. When the config changes, the hash changes, which changes the pod spec, which causes a rollout.
config_map_hash_tmpl = hash_dict(config_map.data) # SHA256 of the config data dict
With the plugin defined as:
import hashlib

from inmanta.plugins import plugin


@plugin
def hash_dict(dict_to_hash: "dict") -> "string":
    return hashlib.sha256(str(dict_to_hash).encode("utf-8")).hexdigest()
In the pod template this is used as:
template:
metadata:
annotations:
configHash: {{ config_map_hash_tmpl }}
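The effect can be checked directly: any change to the config data yields a different annotation value, which changes the pod template and triggers a rollout. A quick demonstration using the same hashing expression as the plugin above (note that str() of a dict is insertion-order sensitive, so key order matters):

```python
import hashlib

def hash_dict(dict_to_hash: dict) -> str:
    # Same body as the hash_dict plugin above
    return hashlib.sha256(str(dict_to_hash).encode("utf-8")).hexdigest()

old = hash_dict({"nrf.yaml": "logger:\n  level: info\n"})
new = hash_dict({"nrf.yaml": "logger:\n  level: debug\n"})
# old != new: the configHash annotation changes, so the pods roll
```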
11. Static Pod IPs via Calico¶
All pods are pinned to a specific IP address using the Calico CNI annotation. This makes it possible to refer to pods by a well-known IP address rather than relying on DNS:
# In every pod template
metadata:
annotations:
"cni.projectcalico.org/ipAddrs": "[\"{{ ip_address_tmpl }}\"]"
The IP address is an explicit attribute on each Inmanta entity and passed as a template variable.
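The annotation value is a JSON list of IP addresses serialized into a string, which is why the template escapes the quotes. A small sketch of building that value (the `calico_ip_annotation` helper is illustrative):

```python
import json

def calico_ip_annotation(ip: str) -> dict:
    """Build the cni.projectcalico.org/ipAddrs pod annotation for one pinned IP."""
    return {"cni.projectcalico.org/ipAddrs": json.dumps([ip])}

ann = calico_ip_annotation("10.244.0.10")
# ann == {'cni.projectcalico.org/ipAddrs': '["10.244.0.10"]'}
```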
Resource Summary¶
| Inmanta Entity | K8s Kind | Module |
|---|---|---|
| Cluster | (connection target) | kubernetes::infra |
| KubeConfig | (kubeconfig value) | kubernetes::infra |
| Namespace | Namespace | kubernetes::resources |
| Secret | Secret | kubernetes::resources |
| ConfigMap | ConfigMap | kubernetes::resources |
| Deployment | Deployment | kubernetes::resources |
| StatefulSet | StatefulSet | kubernetes::resources |
| Service | Service | kubernetes::resources |
| ServiceAccount | ServiceAccount | kubernetes::resources |
| ClusterRole | ClusterRole | kubernetes::resources |
| ClusterRoleBinding | ClusterRoleBinding | kubernetes::resources |
| Rule | (inline PolicyRule value) | kubernetes::resources |
| Subject | (inline Subject value) | kubernetes::resources |