Module Developers Guide¶
In Inmanta, all orchestration model code and related files (templates, plugins and resource handlers) are packaged in a module.
Inmanta expects that each module is a git repository with a specific layout:
The name of the module is determined by the top-level directory. Within this module directory, a module.yml file has to be specified.
The only mandatory subdirectory is the model directory, containing a file called _init.cf. What is defined in the _init.cf file is available in the namespace linked with the name of the module. Other files in the model directory create subnamespaces.
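As an illustration of this namespace mapping (the module name example is hypothetical): definitions in model/_init.cf live in the example namespace, while definitions in model/services.cf live in the example::services subnamespace:

```
import example            # makes model/_init.cf definitions available
import example::services  # makes model/services.cf definitions available
```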
The plugins directory contains Python files that are loaded by the platform and can extend it using the Inmanta API. This Python code can provide plugins or resource handlers.
The template, file and source plugins from the std module expect the following directories as well:
The files directory contains files that are deployed verbatim to managed machines.
The templates directory contains templates that use parameters from the orchestration model to generate configuration files.
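A minimal sketch of what a template does: substitute parameters from the orchestration model into a configuration file. Inmanta itself renders templates with Jinja2; the stdlib string.Template is used here only as a stand-in, and the template text and parameter names are invented for illustration.

```python
# Sketch of template rendering: model parameters -> configuration file.
# string.Template is a stdlib stand-in for the Jinja2 engine Inmanta uses.
from string import Template

# In a real module this text would live in templates/conf_file.conf.tmpl
conf_template = Template("listen_port = ${port}\nserver_name = ${name}\n")

# Parameters would normally come from the orchestration model
rendered = conf_template.substitute(port=8080, name="web1")
print(rendered)
```

In a real module, the std::template plugin reads the template from the templates directory and resolves the parameters against the model.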
A complete module might contain the following files:
```
module
|
|__ module.yml
|
|__ files
|    |__ file1.txt
|
|__ model
|    |__ _init.cf
|    |__ services.cf
|
|__ plugins
|    |__ functions.py
|
|__ templates
     |__ conf_file.conf.tmpl
```
To quickly initialize a module use cookiecutter:
```
pip install cookiecutter
cookiecutter gh:inmanta/inmanta-module-template
```
The module.yml file provides metadata about the module. This file is a YAML file in which the following three keys are mandatory:
name: The name of the module. This name should also match the name of the module directory.
license: The license under which the module is distributed.
version: The version of this module. For a new module a starting version could be 0.1dev0. These versions are parsed using the same version parser as Python setuptools.
For example, the following module.yml from the Quickstart:
```yaml
name: lamp
license: Apache 2.0
version: 0.1
```
Module dependencies are indicated by importing a module in a model file. However, these imports do not have a specific version identifier. The version of a module import can be constrained in the module.yml file. The requires key accepts a list of version specs. These version specs use PEP 440 syntax.
To specify that specific versions are required, constraints can be added to the requires list:
```yaml
license: Apache 2.0
name: ip
source: firstname.lastname@example.org:inmanta/ip
version: 0.1.15
requires:
  net: net ~= 0.2.4
  std: std > 1.0 < 2.5
```
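The behavior of these PEP 440 version specs can be sketched with the packaging library, which implements the same semantics pip uses (the version numbers below are invented for illustration):

```python
# Sketch of PEP 440 version spec matching, as used in module requires.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

net_spec = SpecifierSet("~=0.2.4")   # compatible release: >=0.2.4, ==0.2.*
std_spec = SpecifierSet(">1.0,<2.5")

print(Version("0.2.7") in net_spec)  # a 0.2.x patch release satisfies ~=0.2.4
print(Version("0.3.0") in net_spec)  # a new minor version does not
print(Version("2.4.9") in std_spec)
```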
A module can also indicate a minimal compiler version with the compiler_version key.
The source key indicates the authoritative repository where the module is maintained.
To automatically freeze all versions to the currently checked out versions:

```
inmanta module freeze --recursive --operator ==
```
Or for the current project:

```
inmanta project freeze --recursive --operator ==
```
Inmanta modules should be versioned. The current version is reflected in the module.yml file and the commit should be tagged in the git repository as well. To ease this, Inmanta provides a command (inmanta module commit) to modify module versions, commit to git and place the correct tag.
To make changes to a module, first create a new git branch:

```
git checkout -b mywork
```
When done, first use git to add files:

```
git add *
```
To commit, use the module tool. This will create a new dev release:

```
inmanta module commit -m "First commit"
```
For dev releases, no tags are created by default. If a tag is required for a dev release, use the --tag option:

```
inmanta module commit -m "First commit" --tag
```
To make an actual release, use the -r option. This will automatically set the right tag on the module:

```
inmanta module commit -r -m "First Release"
```
If a release shouldn't be tagged, the --no-tag option should be specified:

```
inmanta module commit -r -m "First Release" --no-tag
```
To set a specific version:

```
inmanta module commit -r -m "First Release" -v 1.0.1
```
The module tool also supports semantic versioning instead of setting versions directly. Use one of --major, --minor or --patch to update version numbers: <major>.<minor>.<patch>
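What such a bump does to a <major>.<minor>.<patch> version can be sketched as follows. This is a simplified illustration: it only handles plain numeric versions, while the real tool also deals with dev suffixes and tagging.

```python
# Simplified sketch of semantic version bumping (--major/--minor/--patch).
def bump(version: str, part: str) -> str:
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("1.2.3", "patch"))  # 1.2.4
print(bump("1.2.3", "minor"))  # 1.3.0
print(bump("1.2.3", "major"))  # 2.0.0
```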
Inmanta offers module developers an orchestration platform with many extension possibilities. When modelling with existing modules is not sufficient, a module developer can use the Python SDK of Inmanta to extend the platform. Python code that extends Inmanta is stored in the plugins directory of a module. All Python modules in the plugins subdirectory will be loaded by the compiler when an __init__.py file exists, exactly like any other Python package.
The Inmanta Python SDK offers several extension mechanisms:
Only the compiler and agents load code included in modules (see Architecture). A module can include a requirements.txt file with all external dependencies. Both the compiler and the agent will install these dependencies with pip install in a virtual environment dedicated to the compiler or agent. By default this is in .env of the project for the compiler and in /var/lib/inmanta/agent/env for the agent.
Inmanta uses a special format of requirements that was defined in PEP 440 but never fully implemented in all Python tools (setuptools and pip). Inmanta rewrites this to the syntax pip requires. This format allows module developers to specify a Python dependency in a repo on a dedicated branch, and it allows Inmanta to resolve the requirements of all modules into a single set of requirements, because the name of the module is unambiguously defined in the requirement. The format for requires in requirements.txt is the following:
Either the name of the module and an optional version constraint;
Or a repository location such as git+https://github.com/project/python-foo. The correct syntax to use is then eggname@git+https://../repository#branch, with the branch part being optional.
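Putting both forms together, a module's requirements.txt might look like this (the module and repository names are made up for illustration):

```
# plain name with an optional PEP 440 constraint
net ~= 0.2.4
# dependency hosted in a git repository, pinned to a branch
othermodule@git+https://github.com/example/othermodule#dev
```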
Plugins provide functions that can be called from the DSL. This is the primary mechanism to interface Python code with the orchestration model at compile time. For example, this mechanism is also used for std::template and std::file. In addition to this, Inmanta also registers all plugins with the template engine (Jinja2) to use as filters.
A plugin is a Python function, registered with the platform with the @plugin decorator. This plugin accepts arguments when called from the DSL and can return a value. Both the arguments and the return value must be annotated with the allowed types from the orchestration model. Type annotations are provided as a string (Python 3 style argument annotations). any is a special type that effectively disables type validation.
Through the arguments of the function, the Python code in the plugin can navigate the orchestration model. The compiler takes care of scheduling the execution at the correct point in the model evaluation.
A simple plugin that accepts no arguments, prints out “hello world” and returns no value requires the following code:
```python
from inmanta.plugins import plugin


@plugin
def hello():
    print("Hello world!")
```
If the code above is placed in the plugins directory of the example module (examples/plugins/__init__.py), the plugin can be invoked from the orchestration model as follows:
```
import example

example::hello()
```
The plugin decorator accepts an argument name. This can be used to change the name of the plugin in the DSL, for example to create plugins that use Python reserved names such as print:
```python
from inmanta.plugins import plugin


@plugin("print")
def printf():
    """
        Prints inmanta
    """
    print("inmanta")
```
A more complex plugin accepts arguments and returns a value. The following example creates a plugin that converts a string to uppercase:
```python
from inmanta.plugins import plugin


@plugin
def upper(value: "string") -> "string":
    return value.upper()
```
This plugin can be tested with:
```
import example

std::print(example::upper("hello world"))
```
Argument type annotations are strings that refer to Inmanta primitive types or to entities. If an entity is passed to a plugin, the python code of the plugin can navigate relations throughout the orchestration model to access attributes of other entities.
A base exception for plugins is provided in
inmanta.plugins.PluginException. Exceptions raised
from a plugin should be of a subtype of this base exception.
```python
from inmanta.plugins import plugin, PluginException


@plugin
def raise_exception(message: "string"):
    raise PluginException(message)
```
If your plugin requires external libraries, include a requirements.txt in the module. The libraries listed in this file are automatically installed by the compiler and agents.
South Bound Integration¶
The inmanta orchestrator comes with a set of integrations with different platforms (see: Inmanta modules). But it is also possible to develop your own south bound integrations.
To integrate a new platform into the orchestrator, you must take the following steps:
Create a new module to contain the integration (see: Module Developers Guide).
Model the target platform as a set of entities.
A resource defines how to serialize an entity so that it can be sent over to the server and the agent. A handler is the python code required by the agent to enforce the desired state expressed by a resource.
The fields of the resource are indicated with a fields attribute in the class. This attribute is a tuple or list of strings with the names of the desired fields of the resource. The orchestrator uses these fields to determine which attributes of the matching entity need to be included in the resource.
Fields of a resource cannot refer to an instance in the orchestration model or to fields of other resources. The resource serializer also allows field values to be mapped, instead of referring directly to an attribute of the entity it serializes (path in std::File and path in the resource map one to one). This mapping is done by adding a static method to the resource class with get_$(field_name) as name. This static method has two arguments: a reference to the exporter and the instance of the entity it is serializing.
```python
from inmanta.resources import resource, Resource


@resource("std::File", agent="host.name", id_attribute="path")
class File(Resource):
    fields = ("path", "owner", "hash", "group", "permissions", "purged", "reload")

    @staticmethod
    def get_hash(exporter, obj):
        hash_id = md5sum(obj.content)
        exporter.upload_file(hash_id, obj.content)
        return hash_id

    @staticmethod
    def get_permissions(_, obj):
        return int(obj.mode)
```
Classes decorated with @resource do not have to inherit directly from Resource. The orchestrator already offers two additional base classes with fields and mappings defined: PurgeableResource and ManagedResource. This mechanism is useful for resources that have fields in common.
A resource can also indicate that it has to be ignored by raising the IgnoreResourceException exception.
Handlers interface the orchestrator with resources in the infrastructure. Handlers take care of changing the current state of a resource to the desired state expressed in the orchestration model.
The compiler collects all python modules from Inmanta modules that provide handlers and uploads them to the server. When a new orchestration model version is deployed, the handler code is pushed to all agents and imported there.
Handlers should inherit the class ResourceHandler. The @provider decorator registers the class with the orchestrator. When the agent needs a handler for a resource it will load all handler classes registered for that resource and call the available() method. This method should check if all conditions are fulfilled to use this handler. The agent will only select a handler when a single handler is available, so the available() methods of all handlers of a resource need to be mutually exclusive. If no handler is available, the resource will be marked unavailable.
ResourceHandler is the handler base class.
CRUDHandler provides a more recent base class that is better suited
for resources that are manipulated with Create, Delete or Update operations. These operations often
match managed APIs very well. The CRUDHandler is recommended for new handlers unless the resource
has special resource states that do not match CRUD operations.
Each handler basically needs to support two things: reading the current state and changing the state of the resource to the desired state in the orchestration model. Reading the state is used for dry runs and reporting. The CRUDHandler handler also uses the result to determine whether create, delete or update needs to be invoked.
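The decision a CRUDHandler derives from the read result can be sketched as a stand-alone function. The dict-based "states" below are invented for illustration; the real handler API works on Resource objects and reports the outcome via the HandlerContext.

```python
# Sketch of the CRUD decision: compare current state (from a read) with
# the desired state from the orchestration model.
from typing import Optional


def decide(current: Optional[dict], desired: Optional[dict]) -> str:
    if desired is None:               # resource is purged in the model
        return "delete" if current is not None else "nothing"
    if current is None:               # resource does not exist yet
        return "create"
    return "update" if current != desired else "nothing"


print(decide(None, {"port": 80}))           # create
print(decide({"port": 22}, {"port": 80}))   # update
print(decide({"port": 80}, None))           # delete
print(decide({"port": 80}, {"port": 80}))   # nothing
```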
The context (See
HandlerContext) passed to most methods is used to
report results, changes and logs to the handler and the server.
Built-in Handler utilities¶
The Inmanta agent, responsible for executing handlers, has built-in utilities to help handler development. This section describes the most important ones.
The agent has a built-in logging facility, similar to the standard python logger. All logs written to this logger will be sent to the server and are available via the dashboard and the API. Additionally, the logs go into the agent’s logfile and into the resource-action log on the server.
This logger supports kwargs. The kwargs have to be json serializable. They will be available via the API in their json structured form.
```python
def create_resource(self, ctx: HandlerContext, resource: ELB) -> None:
    # ...
    ctx.debug("Creating loadbalancer with security group %(sg)s", sg=sg_id)
```
The agent maintains a cache that is kept across handler invocations. It can, for example, be used to cache a connection, so that multiple resources on the same device can share that connection. The cache can be invalidated either based on a timeout or on a version. A timeout-based cache is kept for a specific time. A version-based cache is used for all resources in a specific version. The cache will be dropped when the deployment for this version is ready.
The cache can be used through the @cache decorator. Any method annotated with this decorator will be cached, similar to the way lru_cache works. The arguments to the method form the cache key and the return value is cached. When the method is called a second time with the same arguments, it will not be executed again; the cached result is returned instead. To exclude specific arguments from the cache key, use the ignore parameter.
For example, to cache the connection to a specific device for 120 seconds:
```python
@cache(timeout=120, ignore=["ctx"])
def get_client_connection(self, ctx, device_id):
    # ...
    return connection
```
To do the same, but additionally expire the cache when the next version is deployed, the method must have a parameter called version. for_version is True by default, so when a version parameter is present, the cache is version-bound by default.
```python
@cache(timeout=120, ignore=["ctx"], for_version=True)
def get_client_connection(self, ctx, device_id, version):
    # ...
    return connection
```
To also ensure the connection is properly closed, a call_on_delete function can be attached. This function is called when the cached item expires, and it receives the cached item as its argument.
```python
@cache(timeout=120, ignore=["ctx"], for_version=True,
       call_on_delete=lambda connection: connection.close())
def get_client_connection(self, ctx, device_id, version):
    # ...
    return connection
```
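The cache semantics described above (non-ignored arguments form the key, entries expire after a timeout, a callback runs when an entry is dropped) can be sketched with a minimal stand-alone decorator. The real agent cache is more elaborate (version binding, background cleanup); this only illustrates the principle.

```python
# Minimal stand-alone sketch of a timeout-based cache decorator with
# `ignore` and `call_on_delete` semantics. Not the real Inmanta cache.
import functools
import inspect
import time


def cache(timeout=60, ignore=(), call_on_delete=None):
    def decorator(func):
        store = {}  # key -> (expiry time, cached value)
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            # non-ignored arguments form the cache key
            key = tuple(sorted(
                (name, value) for name, value in bound.arguments.items()
                if name not in ignore
            ))
            now = time.monotonic()
            # drop expired entries, invoking the delete callback
            for k in [k for k, (exp, _) in store.items() if exp <= now]:
                _, value = store.pop(k)
                if call_on_delete is not None:
                    call_on_delete(value)
            if key not in store:
                store[key] = (now + timeout, func(*args, **kwargs))
            return store[key][1]

        return wrapper
    return decorator
```

With this sketch, a function decorated with @cache(timeout=60, ignore=["ctx"]) executes once per distinct device_id, regardless of which ctx is passed.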