Software and configuration
Installation
The installation is started from the to-be-controller node. Note that:

- The controller is installed using a base/Minimal installation (see supported distributions).
- The steps previously mentioned (hostname, network configuration and partitioning) are assumed to be completed.
- SELinux is set to permissive (enforcing is not yet supported).
- The installation is carried out as root.
- Any proxy or firewall has been configured correctly; the installation will install software from other repositories.
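The checklist above can be verified with a quick pre-flight sketch before starting (commands are standard on RHEL-family systems; adjust to your environment):

```shell
#!/bin/sh
# Pre-flight checks before starting a TrinityX installation (sketch).

# The installation must be carried out as root.
[ "$(id -u)" -eq 0 ] || echo "WARNING: not running as root"

# SELinux must be permissive (enforcing is not yet supported).
mode=$(getenforce 2>/dev/null || echo unknown)
[ "$mode" = "Permissive" ] || echo "WARNING: SELinux mode is '$mode', expected Permissive"

# The hostname should already be configured.
hostname 2>/dev/null || echo "WARNING: hostname not set"
```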
Cloning the code
Note that git defaults to the main branch, which contains all the latest stable code. If you want to stick with a point release, clone a tagged release instead:
# git clone --branch {tag} http://github.com/clustervision/trinityX
Where {tag} is any of the release numbers listed at https://github.com/clustervision/trinityX/releases.
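As a sketch of the tag mechanics (using a throwaway local repository with a made-up tag v1.0, since cloning a GitHub tag works the same way):

```shell
# Create a small local repository with one tagged release, then clone
# that tag with --branch, exactly as you would for the GitHub URL above.
git init -q tag-demo-origin
git -C tag-demo-origin -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "point release"
git -C tag-demo-origin tag v1.0
git clone -q --branch v1.0 tag-demo-origin tag-demo-clone
git -C tag-demo-clone describe --tags   # prints: v1.0
```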
Clone the TrinityX repository into your working directory, then run prepare.sh to install all the prerequisites:
# git clone http://github.com/clustervision/trinityX
Cloning into 'trinityX'...
warning: redirecting to https://github.com/clustervision/trinityX/
[...]
Receiving objects: 100% (16396/16396), 63.00 MiB | 19.31 MiB/s, done.
Resolving deltas: 100% (9240/9240), done.
TrinityX installation method
Go to the directory containing the Ansible code. This could be the place where you cloned or unpacked the code. We will assume it is /root/trinityX.
The TrinityX configuration tool will install and configure all packages required to set up a working TrinityX controller. First, make sure that all prerequisites are present by running the prepare.sh script:
# cd trinityX
# bash prepare.sh
[...]
Complete!
Please modify the site/hosts.example and save it as site/hosts
Please modify the site/group_vars/all.yml.example and save it as site/group_vars/all.yml
#### Please configure the network before starting Ansible ####
Note the last warning: the network must be configured before starting Ansible, since many cluster services run on the internal cluster network interface; see Pre-installation.
The configuration for a default controller installation is described in the file controller.yml and in the files located in the group_vars/ subdirectory of the TrinityX tree. The list of machines to which the configuration is applied is described in the file called hosts. Copy the .example files to their production versions:
# cd ~/trinityX/site
# cp hosts.example hosts
# cp group_vars/all.yml.example group_vars/all.yml
You will now have to edit two files:
/root/trinityX/site/group_vars/all.yml
/root/trinityX/site/hosts
Ansible configuration
These files can be edited to reflect the user’s own installation choices. For a full list of configuration options supported by TrinityX, refer to the TrinityX Ansible configuration variables.
The group_vars/all.yml file needs at least the following adjustments; note that the 10.14* addresses below are the defaults. The controller hostname trix_ctrl1_hostname must be set correctly.
# -----------------------------------------------------------------------
# Default hostname and IP for the controller
# In an HA pair, those are the hostname and IP for the first controller.
# Those variables are required, with or without HA.
trix_ctrl1_ip: 10.141.255.254
trix_ctrl1_bmcip: 10.148.255.254
trix_ctrl1_heartbeat_ip: 10.146.255.254
trix_ctrl1_hostname: controller1
The firewall is configured on the controller, and the interface names must match. Run nmcli con show to list the interface names.
# -----------------------------------------------------------------------
# Default firewalld configuration
# Only public tcp/udp ports are allowed on the public interfaces
# whereas everything is allowed on the trusted interfaces
firewalld_public_interfaces: [ens3]
firewalld_trusted_interfaces: [ens6]
firewalld_public_tcp_ports: [22, 443]
firewalld_public_udp_ports: []
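The interface names referenced by these variables must exist on the controller. Besides nmcli con show, a quick way to list the kernel interface names is, as a sketch:

```shell
# Kernel network interface names; the firewalld_public_interfaces and
# firewalld_trusted_interfaces lists must use names from this set.
ls /sys/class/net
```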
Further tailoring may be required, depending on the cluster requirements.
The hosts file (site/hosts) is used to install the controller node. The hostname in it must match trix_ctrl1_hostname:
[controllers]
controller1 ansible_host=127.0.0.1 ansible_connection=local
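A quick consistency check can be sketched as follows: the hostname in the [controllers] inventory group must match trix_ctrl1_hostname in group_vars/all.yml. The files below are recreated in a throwaway demo-site/ directory with the defaults from this guide:

```shell
# Recreate minimal copies of the two files, then compare the hostnames.
mkdir -p demo-site/group_vars
cat > demo-site/hosts <<'EOF'
[controllers]
controller1 ansible_host=127.0.0.1 ansible_connection=local
EOF
cat > demo-site/group_vars/all.yml <<'EOF'
trix_ctrl1_hostname: controller1
EOF
# First word on the line after [controllers] is the inventory hostname.
inv=$(awk '/^\[controllers\]/ {getline; print $1}' demo-site/hosts)
cfg=$(awk '/^trix_ctrl1_hostname:/ {print $2}' demo-site/group_vars/all.yml)
[ "$inv" = "$cfg" ] && echo "hostnames match: $inv"
```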
Using Ansible to install the controller
Once the configuration files are ready, the controller.yml Ansible playbook can be run to apply the configuration to the controller(s):
# pwd
/root/trinityX/site/
# ansible-playbook controller.yml
PLAY [controllers] ***********************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************
ok: [controller1]
[...]
PLAY RECAP *******************************************************************************************************************************************
controller1 : ok=404 changed=304 unreachable=0 failed=0 skipped=67 rescued=0 ignored=0
This must complete without any errors. If there are any, please review the message and correct the problem. The installation will not work reliably if the playbook has not successfully completed. Note that you can run ansible-playbook multiple times without any harm.
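Success can be judged from the PLAY RECAP line: failed and unreachable must both be 0. A minimal check of such a line can be sketched as (the recap text below is an example copied from a successful run):

```shell
# Extract the failed= and unreachable= counters from a PLAY RECAP line.
recap='controller1 : ok=404 changed=304 unreachable=0 failed=0 skipped=67'
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s\n' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
[ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ] && echo "playbook completed without failures"
```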
Once the controller.yml playbook has completed, you will need to configure a software image. This can be done by running the compute* playbook.
Using Ansible to install the compute image
The creation and configuration of an OS image for the compute nodes uses the same tool and a similar configuration file as for the controller. While the controller configuration applies its settings to the machine on which it runs, the image configuration does so in a directory that will contain the whole image of the compute node.
# pwd
/root/trinityX/site/
# ansible-playbook compute-redhat.yml
[...]
PLAY [compute.osimages.luna] *************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************
ok: [compute.osimages.luna]
TASK [trinity/wrapup-images : Cleanup the image] *****************************************************************************************************
changed: [compute.osimages.luna]
TASK [trinity/wrapup-images : Cleanup the image] *****************************************************************************************************
skipping: [compute.osimages.luna]
TASK [trinity/wrapup-images : Cleanup /tmp] **********************************************************************************************************
changed: [compute.osimages.luna]
PLAY [controllers] ***********************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************
ok: [dev-l2controller-001]
TASK [trinity/pack-images : Pack the image] **********************************************************************************************************
changed: [dev-l2controller-001]
PLAY RECAP *******************************************************************************************************************************************
compute.osimages.luna : ok=112 changed=69 unreachable=0 failed=0 skipped=109 rescued=0 ignored=1
controller1 : ok=41 changed=17 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
This will set up a RedHat image and configure it in Luna. This can be verified with:
# luna osimage list
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| << Osimage >> |
+---+---------+------------------------------+---------------------------------------------------------+----------------------------+-------------------------+--------------+-----------+
| # | name | kernelversion | kernelfile | imagefile | path | distribution | osrelease |
+---+---------+------------------------------+---------------------------------------------------------+----------------------------+-------------------------+--------------+-----------+
| 1 | compute | 4.18.0-477.27.1.el8_8.x86_64 | compute-1697629733-vmlinuz-4.18.0-477.27.1.el8_8.x86_64 | compute-1697629768.tar.bz2 | /trinity/images/compute | redhat | None |
+---+---------+------------------------------+---------------------------------------------------------+----------------------------+-------------------------+--------------+-----------+
Any newly created image will reside in the directory defined by the configuration variable trix_image, which points to /trinity/images/ by default.
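For example, to keep images on a different filesystem, this default can be overridden in group_vars/all.yml (the path below is purely illustrative, not a TrinityX default):

```yaml
# site/group_vars/all.yml
trix_image: /data/trinity/images    # illustrative non-default location
```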