Developing South Bound Integrations

The Inmanta orchestrator comes with a set of integrations for different platforms (see: Inmanta modules), but it is also possible to develop your own south bound integrations.

To integrate a new platform into the orchestrator, you must take the following steps:

  1. Create a new module to contain the integration (see: Understanding Modules).

  2. Model the target platform as a set of entities.

  3. Create resources and handlers, as described below.

Overview

A South Bound integration always consists of three parts:
  • one or more entities in the model

  • a resource that serializes the entities and captures all information required to enforce the desired state.

  • a handler: the Python code required to enforce the desired state.

[Figure: handler flow — entities in the compiler are exported as resources, sent to the server, and enforced by handlers on the agent]
  • In the compiler, a model is constructed that consists of entities. The entities can be related to each other.

  • The exporter will search for all entities that can be directly deployed by a handler. These are the resources. Resources are self-contained and cannot refer to any other entity or resource.

  • The resources will be sent to the server in JSON-serialized form.

  • The agent will present the resources to a handler in order to have the desired state enforced on the managed infrastructure.

Resource

A resource is represented by a Python class that is registered with Inmanta using the @resource decorator. This decorator decorates a class that inherits from the Resource class.

The fields of the resource are indicated with a fields field in the class. This field is a tuple or list of strings containing the names of the desired fields of the resource. The orchestrator uses these fields to determine which attributes of the matching entity need to be included in the resource.

Fields of a resource cannot refer to an instance in the orchestration model or to fields of other resources. By default, each field maps one-to-one to the entity attribute with the same name (e.g. path in std::File maps directly to path in the resource). The resource serializer also allows field values to be mapped: adding a static method named get_$(field_name) to the resource class overrides the default mapping for that field. This static method takes two arguments: a reference to the exporter and the instance of the entity being serialized.

import hashlib

from inmanta.resources import resource, Resource


def md5sum(content: str) -> str:
    # Helper that hashes the file content to obtain a content identifier
    return hashlib.md5(content.encode()).hexdigest()


@resource("std::File", agent="host.name", id_attribute="path")
class File(Resource):
    fields = ("path", "owner", "hash", "group", "permissions", "purged", "reload")

    @staticmethod
    def get_hash(exporter, obj):
        hash_id = md5sum(obj.content)
        exporter.upload_file(hash_id, obj.content)
        return hash_id

    @staticmethod
    def get_permissions(_, obj):
        return int(obj.mode)

Classes decorated with @resource do not have to inherit directly from Resource. The orchestrator already offers two additional base classes with fields and mappings defined: PurgeableResource and ManagedResource. This mechanism is useful for resources that have fields in common.
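For instance, a minimal sketch of a resource that builds on both base classes; the entity name my_module::Service and its fields are illustrative, not part of an existing module:

from inmanta.resources import resource, ManagedResource, PurgeableResource


@resource("my_module::Service", agent="host.name", id_attribute="name")
class ServiceResource(PurgeableResource, ManagedResource):
    # "purged", "purge_on_delete" and "managed" come from the base classes
    fields = ("name", "enabled")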

A resource can also indicate that it has to be ignored by raising the IgnoreResourceException exception.
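For example, a sketch of a mapping method that ignores the resource under an illustrative condition:

from inmanta.resources import resource, IgnoreResourceException, PurgeableResource


@resource("my_module::Route", agent="host.name", id_attribute="name")
class RouteResource(PurgeableResource):
    fields = ("name", "gateway")

    @staticmethod
    def get_gateway(_, obj):
        # Skip this resource entirely when no gateway is known yet
        if obj.gateway is None:
            raise IgnoreResourceException()
        return obj.gateway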

Handler

Handlers interface the orchestrator with resources in the infrastructure. Handlers take care of changing the current state of a resource to the desired state expressed in the orchestration model.

The compiler collects all Python modules from Inmanta modules that provide handlers and uploads them to the server. When a new orchestration model version is deployed, the handler code is pushed to all agents and imported there.

Handlers should inherit from the CRUDHandler class. The @provider decorator registers the class with the orchestrator.

Each handler should override four methods of the CRUDHandler:

  1. read_resource() to read the current state of the system.

  2. create_resource() to create the resource if it doesn’t exist.

  3. update_resource() to update the resource when required.

  4. delete_resource() to delete the resource when required.

The context (See HandlerContext) passed to most methods is used to report results, changes and logs to the handler and the server.
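The sketch below shows the shape of such a handler for the File resource defined earlier, assuming a plain local file system as the managed infrastructure. It is a minimal illustration of the four methods, not the actual std module implementation:

import os

from inmanta.agent.handler import CRUDHandler, HandlerContext, ResourcePurged, provider


@provider("std::File", name="sketch")
class FileHandler(CRUDHandler):
    def read_resource(self, ctx: HandlerContext, resource: File) -> None:
        # Read the current state onto the resource; raising ResourcePurged
        # tells the orchestrator that the resource does not exist yet
        if not os.path.exists(resource.path):
            raise ResourcePurged()
        resource.permissions = int(oct(os.stat(resource.path).st_mode)[-3:])

    def create_resource(self, ctx: HandlerContext, resource: File) -> None:
        with open(resource.path, "w") as fh:
            fh.write("")
        os.chmod(resource.path, int(str(resource.permissions), 8))
        ctx.set_created()

    def update_resource(self, ctx: HandlerContext, changes: dict, resource: File) -> None:
        # changes contains the fields that differ; the desired state is on resource
        if "permissions" in changes:
            os.chmod(resource.path, int(str(resource.permissions), 8))
        ctx.set_updated()

    def delete_resource(self, ctx: HandlerContext, resource: File) -> None:
        os.remove(resource.path)
        ctx.set_purged()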

Using facts

Facts are properties of the environment whose values are not managed by the orchestrator. Facts are either used as input in a model (e.g. a virtual machine provider supplies an IP address, which the model then uses to configure a service) or used for reporting purposes.

Retrieving a fact in the model is done with the std::getfact() function.

Example taken from the openstack Inmanta module:

implementation fipAddr for FloatingIP:
    self.address = std::getfact(self, "ip_address")
end

Facts can be pushed or pulled through the handler.


Pushing a fact is done in the handler with the set_fact() method during resource deployment (in read_resource and/or create_resource). e.g.:

 1@provider("openstack::FloatingIP", name="openstack")
 2class FloatingIPHandler(OpenStackHandler):
 3    def read_resource(self, ctx: handler.HandlerContext, resource: FloatingIP) -> None:
 4        ...
 5
 6    def create_resource(self, ctx: handler.HandlerContext, resource: FloatingIP) -> None:
 7        ...
 8        # Setting fact manually
 9        for key, value in ...:
10            ctx.set_fact(fact_id=key, value=value, expires=True)

By default, facts expire when they have not been refreshed or updated for a certain time, controlled by the server.fact-expire config option. Querying for an expired fact will force the agent to refresh it first.

When reporting a fact, setting the expires parameter to False will ensure that this fact never expires. This is useful to take some load off the agent when working with facts whose values never change. On the other hand, when working with facts whose values are subject to change, setting the expires parameter to True will ensure they are periodically refreshed.
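For example, a sketch of reporting a value that never changes, the serial number of a hypothetical Switch resource, as a non-expiring fact:

def read_resource(self, ctx: handler.HandlerContext, resource: Switch) -> None:
    serial = ...  # query the device for its serial number
    # The serial number never changes, so report it as a non-expiring fact
    ctx.set_fact(fact_id="serial_number", value=serial, expires=False)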


Facts are automatically pulled periodically (this time interval is controlled by the server.fact-renew config option) when they are about to expire or if a model requested them and they were not present. The server periodically asks the agent to call into the handler’s facts() method. e.g.:

1@provider("openstack::FloatingIP", name="openstack")
2class FloatingIPHandler(OpenStackHandler):
3    ...
4
5    def facts(self, ctx, resource):
6        port_id = self.get_port_id(resource.port)
7        fip = self._neutron.list_floatingips(port_id=port_id)["floatingips"]
8        if len(fip) > 0:
9            ctx.set_fact("ip_address", fip[0]["floating_ip_address"])

Warning

If you ever push a fact that does expire, make sure it is also returned by the handler's facts() method. If you don't, then when the fact eventually expires, the agent will keep trying to refresh it without success.

Note

Facts should not be used for things that change rapidly (e.g. cpu usage), as they are not intended to refresh very quickly.

Built-in Handler utilities

The Inmanta agent, which is responsible for executing handlers, has built-in utilities to help with handler development. This section describes the most important ones.

Logging

The agent has a built-in logging facility, similar to the standard Python logger. All logs written to this logger will be sent to the server and are available via the web-console and the API. Additionally, the logs go into the agent's logfile and into the resource-action log on the server.

To use this logger, use one of the methods: ctx.debug, ctx.info, ctx.warning, ctx.error, ctx.critical or ctx.exception.

This logger implements the inmanta.agent.handler.LoggerABC logging interface and supports kwargs. The kwargs have to be JSON serializable. They will be available via the API in their JSON structured form.

For example:

def create_resource(self, ctx: HandlerContext, resource: ELB) -> None:
    # ...
    ctx.debug("Creating loadbalancer with security group %(sg)s", sg=sg_id)

An alternative implementation of the inmanta.agent.handler.LoggerABC logging interface that just logs to the Python logger is provided in inmanta.agent.handler.PythonLogger. This logger is not meant to be used in actual handlers, but it can be used for the automated testing of helper methods that accept a LoggerABC instance. In production, these helpers would receive the actual HandlerContext and log appropriately, while for testing the PythonLogger can be passed.
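For instance, a sketch of such a test; configure_interface is a hypothetical helper under test that accepts any LoggerABC:

import logging

from inmanta.agent.handler import LoggerABC, PythonLogger


def configure_interface(logger: LoggerABC, name: str) -> None:
    # Hypothetical helper; real handlers would pass in the HandlerContext
    logger.debug("Configuring interface %(name)s", name=name)


def test_configure_interface() -> None:
    configure_interface(PythonLogger(logging.getLogger(__name__)), name="eth0")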

Caching

The agent maintains a cache that is kept across handler invocations. It can, for example, be used to cache a connection, so that multiple resources on the same device can share it.

The cache can be invalidated based on either a timeout or a version. A timeout-based cache entry is kept for a specific time. A version-based cache entry is used for all resources in a specific model version and is dropped when the deployment of that version is ready.

The cache can be used through the @cache decorator. Any method decorated with it will be cached, similar to the way lru_cache works: the arguments to the method form the cache key, and the return value is cached. When the method is called a second time with the same arguments, it is not executed again; the cached result is returned instead. To exclude specific arguments from the cache key, use the ignore parameter.

For example, to cache the connection to a specific device for 120 seconds:

@cache(timeout=120, ignore=["ctx"])
def get_client_connection(self, ctx, device_id):
    # ...
    return connection

To do the same, but additionally expire the cache when the next version is deployed, the method must have a parameter called version. Since for_version is True by default, the cache is version-bound whenever a version parameter is present.

@cache(timeout=120, ignore=["ctx"], for_version=True)
def get_client_connection(self, ctx, device_id, version):
    # ...
    return connection

To also ensure the connection is properly closed, a call_on_delete function can be attached. This function is called when the cache entry expires, with the cached item as its argument.

@cache(timeout=120, ignore=["ctx"], for_version=True,
       call_on_delete=lambda connection: connection.close())
def get_client_connection(self, ctx, device_id, version):
    # ...
    return connection