Luna daemon plugins

The luna daemon was designed with plugins in mind. The abstraction is handled by the 'core' of the luna daemon, while the plugins act as the arms and legs that 'communicate' with the world outside the daemon.

The idea is to allow administrators to modify the plugins to (further) tailor them to their environment without changing the functioning of the daemon itself.

List of plugins and templates

  • plugins end with .py
  • templates end with .templ

At the time of writing, the set of plugins provided out of the box is as follows:

  • osuser/obol.py
  • boot/provision/http.py
  • boot/provision/kickstart.py
  • boot/provision/torrent.py
  • boot/detection/cloud.py
  • boot/detection/switchport.py
  • boot/roles/default.py
  • boot/roles/bond.py
  • boot/bmc/default.py
  • boot/scripts/default.py
  • boot/scripts/diskfull.py
  • boot/scripts/nodhcp.py
  • boot/scripts/raid1.py
  • boot/network/default.py
  • boot/network/redhat9.py
  • boot/network/redhat8.py
  • boot/network/ubuntu.py
  • hooks/config/node.py
  • hooks/config/dns.py
  • hooks/config/group.py
  • hooks/control/default.py
  • hooks/monitor/node.py
  • osimage/operations/osgrab/default.py
  • osimage/operations/image/default.py
  • osimage/operations/image/ubuntu.py
  • osimage/operations/ospush/default.py
  • osimage/other/cleanup.py
  • osimage/filesystem/default.py
  • control/default.py
  • import/prometheus_rules_settings.py
  • import/prometheus_rules.py
  • import/prometheus_hw_rules.py
  • export/prometheus.py
  • export/prometheus_rules_settings.py
  • export/prometheus_rules.py
  • export/prometheus_hw_rules.py

The following templates (besides the templates in the templates directory) are provided:

  • boot/network/redhat9.templ
  • boot/network/redhat8.templ

Plugins and templates can be added or modified to one's needs. After adding or modifying them, it is highly recommended to restart the luna2-daemon.

Philosophy

Most, but not all, plugins allow for an OS distribution, group or node override of the default. An example:

For configuring compute node interfaces during luna provisioning, the plugin loader of the daemon searches for what's available. The default is default.py in boot/network/. However, when a node's assigned OS is defined as 'redhat' and also has a configured release, in this case '8', the loader will try redhat8.py first, then redhat.py, and finally default.py, using the first one it finds.

In short:

# luna osimage show compute
+----------------------------------------------------------------------------------------------+
|                                         << Osimage >>                                        |
+---+---------+---------------------------+-------------------------+--------------+-----------+
| # |   name  |       kernelversion       |           path          | distribution | osrelease |
+---+---------+---------------------------+-------------------------+--------------+-----------+
| 1 | compute | 4.18.0-425.3.1.el8.x86_64 | /trinity/images/compute |    redhat    |     8     |
+---+---------+---------------------------+-------------------------+--------------+-----------+

Note the 'distribution' and 'osrelease' columns.

During boot, the network plugins are searched and loaded as follows:

In boot/network/, do we have:

  • redhat8.py? if yes, use it, exit search
  • redhat.py? if yes, use it, exit search
  • default.py? if yes, use it, exit search
  • return an error.

In each plugin directory, a small README is placed where relevant, documenting the search path for that type of plugin.

README example:

# NETWORK TEMPLATE
#
# contains a 'live' config file which is being parsed and rendered during boot
# Takes precedence over a plugin with the same name

# NETWORK PLUGIN
#
#        three defined variables are mandatory, each containing a snippet of bash code:
#        - gateway
#        - interface
#        - hostname
#
#        these snippets are included and run during node installation time

# Plugin/Template selection based on the following search path and priority:
#
# 1 distribution + osrelease, e.g. plugins/network/redhat/el9.py or redhat9.py
# 2 distribution              e.g. plugins/network/redhat/default.py  or  plugins/osimage/redhat.py
# 3 default                   e.g. plugins/network/default.py
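Following the README above, a minimal network plugin could look like the sketch below. The structure mirrors the provided default.py plugins: the three mandatory variables hold bash snippets that are included and run during node installation. The snippet contents here are placeholders, not working configuration:

```python
class Plugin():
    """
    Minimal network plugin sketch, e.g. boot/network/default.py.
    The three variables below are mandatory and contain bash snippets
    that are included and run during node installation time.
    """

    gateway = """
        # bash snippet that configures the default gateway (placeholder)
    """

    interface = """
        # bash snippet that configures an interface (placeholder)
    """

    hostname = """
        # bash snippet that sets the hostname (placeholder)
    """
```

The daemon only needs the class and its three string variables; what the snippets actually do is up to the administrator.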

Templates

For networking, templates are provided besides plugins; they follow the same search path as described earlier. In principle, a template is searched for first. When no suitable template is available, the plugin search described above is followed.

In short, the complete search for when templates and plugins are available:

In boot/network/, do we have:

  • redhat8.templ? if yes, use it, exit search
  • redhat.templ? if yes, use it, exit search
  • default.templ? if yes, use it, exit search
  • redhat8.py? if yes, use it, exit search
  • redhat.py? if yes, use it, exit search
  • default.py? if yes, use it, exit search
  • return an error.
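The combined search above can be sketched as follows. This is an illustrative reconstruction of the order (templates before plugins, most specific name first), not the daemon's actual loader code; the function name and signature are hypothetical:

```python
import os

def resolve(plugin_dir, distribution, osrelease):
    """Return the first matching template or plugin in e.g. boot/network/."""
    candidates = []
    for ext in ('.templ', '.py'):                  # templates take precedence
        candidates += [
            f"{distribution}{osrelease}{ext}",     # e.g. redhat8.templ
            f"{distribution}{ext}",                # e.g. redhat.templ
            f"default{ext}",                       # fallback
        ]
    for name in candidates:
        path = os.path.join(plugin_dir, name)
        if os.path.exists(path):
            return path
    raise FileNotFoundError(f"no suitable template or plugin in {plugin_dir}")
```

For a node with distribution 'redhat' and osrelease '8', this would try redhat8.templ, redhat.templ, default.templ, redhat8.py, redhat.py and default.py in that order.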

Scripts and Roles plugins

Scripts and roles are powerful plugins used during the provisioning of nodes. These can be configured on a group or node level. Refer to Luna group and Luna node for information on how to set scripts and roles.

Example group configuration:

+-------------------------------------------------------------------------------+
|                                Group => compute                               |
+---------------------+---------------------------------------------------------+
| info                | Config differs from parent - local overrides            |
+---------------------+---------------------------------------------------------+
| name                | compute                                                 |
| domain              | cluster                                                 |
| osimage             | compute                                                 |
| osimagetag          | default (default)                                       |
| kerneloptions       | net.ifnames=0 biosdevname=0 (compute)                   |
| interfaces          | interface = BOOTIF                                      |
|                     |   network = cluster                                     |
|                     |   dhcp = True                                           |
|                     | interface = BMC                                         |
|                     |   network = ipmi                                        |
|                     |   dhcp = False                                          |
| setupbmc            | True                                                    |
| bmcsetupname        | compute                                                 |
| unmanaged_bmc_users | None                                                    |
| netboot             | True                                                    |
| bootmenu            | False (default)                                         |
| roles               | security-hardening, beegfs-server    <-- Default is None, see below
| scripts             | diskfull                             <-- Default is None, see below
+---------------------+---------------------------------------------------------+
| prescript           | <empty> (default)                                       |
+---------------------+---------------------------------------------------------+
| partscript          | mount -t tmpfs tmpfs /sysroot                           |
+---------------------+---------------------------------------------------------+
| postscript          | echo 'tmpfs / tmpfs defaults 0 0' >> /sysroot/etc/fstab |
+---------------------+---------------------------------------------------------+
| provision_interface | BOOTIF (default)                                        |
| provision_method    | http                                                    |
| provision_fallback  | http (cluster)                                          |
| comment             | None                                                    |
+---------------------+---------------------------------------------------------+

The defaults are None, meaning no scripts or roles are loaded. In the above example, one script and two roles are defined:

  • script: diskfull, a plugin provided by us (see above) that allows a node to perform a diskfull installation without having to change the 'pre', 'part' and 'postscript' sections.
  • roles: security-hardening and beegfs-server. These are just examples and are not provided by default.

How 'scripts' work

Scripts are loaded and executed in addition to what's in the group's or node's respective 'pre', 'part' and 'postscript' sections. The configuration in 'pre', 'part' and 'postscript' is run prior to what's defined under 'scripts' in the group or node. In the above example, the partscript mount -t tmpfs tmpfs /sysroot is executed prior to what's configured in the 'part' section of the diskfull plugin.

An example, boot/scripts/default.py:

class Plugin():
    """
    This is the default class for pre, part, and post plugins.
    """

    def __init__(self):
        """
        prescript = runs before mounting sysroot during install
        partscript = runs before mounting sysroot during install
        postscript = runs before OS pivot
        """

    prescript = """
         # <--- bash commands go here....
    """

    partscript = """
         # or here....
    """

    postscript = """
         # and maybe here?
    """


How 'roles' work

A role is meant to extend an image or 'function' to fulfil a certain 'role'. This could be serving files, as in becoming a file server, or deploying additional configuration or packages provided by another department. In short, items that cannot or should not really be inside an image, or that require a system to be up and running before they can be executed or deployed. As such, roles run after a node has booted up; they are invoked through systemd.

Roles are a perfect way to keep images clean, or to re-use images across different types of nodes/servers, while also allowing for delegation. E.g. department A is responsible for booting nodes, while department B has a recipe for deploying software, configuration, etc. on top of that node.

An example, boot/roles/default.py:

class Plugin():
    """
    Class for a role.

    This plugin needs the following mandatory variables set for template functionality:
    -- script   --> This code will be called through the unit/systemd
                    The content will be written to /usr/local/roles/<role name>
    -- unit     --> This part provides the systemd unit or service file
    """


    script = """
#!/bin/bash
echo "Default example role" | logger
    """


    unit = """
[Unit]
Description=Luna Default example
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/local/roles/__LUNA_ROLE__

[Install]
WantedBy=multi-user.target
    """

The example will add the line 'Default example role' to the node's system log.
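The mechanism behind a role can be illustrated with the sketch below: the 'script' content is written to /usr/local/roles/<role name>, and the __LUNA_ROLE__ placeholder in the unit is replaced with the actual role name. This is a hypothetical reconstruction of what ends up on the node, not the daemon's actual implementation (the function name, signature and the root parameter are illustrative):

```python
import os

def deploy_role(name, script, unit, root='/'):
    """Sketch: write a role's script and systemd unit onto the node."""
    roles_dir = os.path.join(root, 'usr/local/roles')
    unit_dir = os.path.join(root, 'etc/systemd/system')
    os.makedirs(roles_dir, exist_ok=True)
    os.makedirs(unit_dir, exist_ok=True)

    # the role script lands in /usr/local/roles/<role name>
    script_path = os.path.join(roles_dir, name)
    with open(script_path, 'w') as fh:
        fh.write(script)
    os.chmod(script_path, 0o755)  # must be executable for ExecStart

    # the __LUNA_ROLE__ placeholder is replaced with the role name
    unit_path = os.path.join(unit_dir, f'{name}.service')
    with open(unit_path, 'w') as fh:
        fh.write(unit.replace('__LUNA_ROLE__', name))
    return script_path, unit_path
```

After this, enabling the generated service makes systemd run the role once the node reaches multi-user.target, matching the unit in the example above.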

Adding custom plugins and templates

The general rule is to consult the README in the respective directory where the plugin is to be added. The README provides information about the naming convention, i.e. the load order or search path, and about which functions and variables are required.

It is also recommended to have a look at a default, e.g. default.py, as it provides additional information on what the plugin could look like.

Note that each type of plugin comes with different requirements. As such, plugins for network configuration are inherently different from plugins loaded for osimage packing.
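As an illustration, a custom scripts plugin, say boot/scripts/mysite.py (the name is hypothetical), would follow the same structure as the default.py shown earlier; the bash snippets below reuse the partscript and postscript values from the example group configuration:

```python
class Plugin():
    """
    Hypothetical custom scripts plugin, modelled on boot/scripts/default.py.
    """

    prescript = """
        # runs early during install (placeholder)
        echo "mysite prescript" | logger
    """

    partscript = """
        # partitioning / filesystem setup for the node
        mount -t tmpfs tmpfs /sysroot
    """

    postscript = """
        # runs before OS pivot
        echo 'tmpfs / tmpfs defaults 0 0' >> /sysroot/etc/fstab
    """
```

After dropping such a file into boot/scripts/ and restarting the luna2-daemon, it can be referenced in the 'scripts' field of a group or node, just like diskfull in the example above.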