Distributed-CI and InfraRed

Introduction

The Red Hat OpenStack QE team maintains a tool to deploy and test OpenStack. This tool can deploy different types of topologies and is very modular: you can extend it to cover new use cases. This tool is called InfraRed; it is free software and is available on GitHub.

The purpose of Distributed-CI (or DCI) is to help OpenStack partners test new Red Hat OpenStack Platform (RHOSP) releases before they are published. This allows them to train on new releases, identify regressions or prepare new drivers ahead of time. In this article, we will explain how to integrate InfraRed with DCI.

InfraRed

InfraRed has been designed to be flexible and it can address numerous different use cases. In this article, we will use it to prepare a virtual environment and drive a regular Red Hat OpenStack Platform 13 (OSP13) deployment on it.

InfraRed is covered by complete documentation that we won't copy-paste here. To summarize, once it's installed, InfraRed exposes a CLI. This CLI gives the user the ability to create a workspace that will track the state of the environment. The user can then trigger all the required steps to ultimately get a running OpenStack. In addition, InfraRed offers extra features through a plug-in system.

Distributed-CI

Global diagram of DCI

Partners use DCI to validate OpenStack in their labs. It's a way to validate that they will still be able to use their gear with the next release. A DCI agent runs the deployment and is in charge of the communication with Red Hat. The partners have to provide a set of scripts to deploy OpenStack on their environment automatically; these scripts will be used by the agent during the deployment.

DCI can be summarized with the following list of actions:

  1. Red Hat exposes the latest internal snapshot of the product on DCI.
  2. The partner's DCI agent pulls the latest snapshot and deploys it internally using the local configuration and deployment scripts.
  3. The partner's DCI agent runs the tests and sends the final result back to DCI.

Deployment of the lab

For this article, we will use a libvirt hypervisor to virtualize our lab. The hypervisor can be based on either RHEL 7 or CentOS 7.

The network configuration

In this tutorial, we will rely on libvirt's 'default' network. This network uses the 192.168.122.0/24 range and 192.168.122.1 is our hypervisor. The IP addresses of the other VMs will be dynamic, and InfraRed will create some additional networks for you. We also use the hypervisor's public IP, which is 192.168.1.40.

Installation of the Distributed-CI agent for OpenStack

The installation of the DCI agent is covered by its own documentation. All the steps are rather simple as soon as the partner has a host to run the agent that matches the DCI requirements. This host is called the jumpbox in DCI jargon. In this document, the jumpbox is also the hypervisor host.

In the rest of this document, we will assume you have admin access to a DCI project, that you created the remoteci on http://www.distributed-ci.io and that you have deployed the agent on your jumpbox with the help of its installation guide. To validate everything, you should be able to list the remoteci of your tenant with the following command.
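
A minimal check with the dcictl CLI could look like this (the dcirc.sh path is an assumption; it is the credentials file created during the agent installation):

    # Load the DCI API credentials created during the agent installation
    # (path is an assumption, adjust to your environment)
    source /etc/dci-ansible-agent/dcirc.sh
    # List the remotecis of the tenant
    dcictl remoteci-list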

So far so good, we can now start the agent for the very first time with:
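
For instance, assuming the agent package installed a systemd unit named dci-ansible-agent:

    systemctl start dci-ansible-agent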

The agent pulls the bits from Red Hat and uses the jumpbox to expose them. Technically speaking, it's a Yum repository in /var/www/html and an image registry on port 5000. These resources need to be consumed during the deployment. Since we don't have any configuration yet, the run will fail. It's time to fix that and prepare our integration with InfraRed.

One of the crucial requirements is the set of scripts that will be used to deploy OpenStack. Those scripts are maintained by the user and will be called by the agent through a couple of Ansible playbooks:

  • hooks/pre-run.yml: This playbook is the very first one to be called on the jumpbox. It's the place where the partner can, for instance, fetch the latest copy of the configuration.
  • hooks/running.yml: This is the place where the automation will be called. Most of the time, it’s a couple of extra Ansible tasks that will call a script or include another playbook.

Preliminary configuration

Security, firewall and SSH keypair

Some services like Apache will be exposed without any restriction. This is why we assume the hypervisor is on a trusted network.

We take the liberty of disabling firewalld to simplify the whole process. Please do:
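
    # Stop firewalld and make sure it does not come back at the next reboot
    systemctl stop firewalld
    systemctl disable firewalld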

InfraRed interacts with the hypervisor using SSH. Just a reminder: in our case, the hypervisor is the local machine. To keep the whole setup simple, we share the same SSH key between the root and dci-ansible-agent users:
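
A minimal sketch, assuming /var/lib/dci-ansible-agent is the home directory of the dci-ansible-agent user:

    # Generate a key for root (skip if one already exists)
    ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
    # Allow root to SSH into the hypervisor (InfraRed connects as root)
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
    # Reuse the same keypair for the dci-ansible-agent user
    # (home directory path is an assumption)
    mkdir -p /var/lib/dci-ansible-agent/.ssh
    cp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /var/lib/dci-ansible-agent/.ssh/
    chown -R dci-ansible-agent:dci-ansible-agent /var/lib/dci-ansible-agent/.ssh
    chmod 700 /var/lib/dci-ansible-agent/.ssh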

You can validate everything works fine with:
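
    # Run as the dci-ansible-agent user: the connection should not ask for a password
    su - dci-ansible-agent -s /bin/bash -c 'ssh root@192.168.122.1 hostname'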

Libvirt

We will deploy OpenStack on our libvirt hypervisor with the Virsh provisioner.
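
Before going further, make sure the virtualization stack is present and running on the hypervisor (a minimal sketch; the package names are the usual RHEL 7/CentOS 7 ones):

    yum install -y libvirt qemu-kvm virt-install
    systemctl enable libvirtd
    systemctl start libvirtd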

Red Hat Subscription Manager configuration (RHSM)

InfraRed uses RHSM during the deployment to register the nodes and pull the latest RHEL updates. It loads the credentials from a small YAML file that you can store in the /etc/dci-ansible-agent directory with the other files:
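
A plausible minimal file, assuming a username/password registration (the file path and the exact field names should be checked against your InfraRed version):

    # /etc/dci-ansible-agent/cdn_creds.yml (path and field names are assumptions)
    server_hostname: subscription.rhsm.redhat.com
    username: your-rhsm-login
    password: your-rhsm-password
    autosubscribe: yes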

RHEL guest image

InfraRed needs a RHEL guest image to prepare the nodes. It tries hard to download it by itself, thanks InfraRed… But the default location is https://url.corp.redhat.com/rhel-guest-image-7-5-146-x86-64-qcow2, which is unlikely to match your environment. Go to https://access.redhat.com/downloads and download the latest RHEL guest image. The file should be stored on your hypervisor as /var/lib/libvirt/images/rhel-guest-image-7-5-146-x86-64-qcow2. The default image name will probably change in the future; you can list the default values for the driver with the infrared (or ir) command:
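
For instance (a sketch; the relevant option of the virsh plugin is assumed to be --image-url):

    # Show the virsh provisioner defaults, including the expected guest image location
    infrared virsh --help | grep -A 2 image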

Configure the agent for InfraRed

All the configuration files of this example are available on GitHub.

Run bootstrap (pre-run.yml)

First, we want to install InfraRed dependencies and prepare a virtual environment. These steps will be done with the pre-run.yml.

We pull InfraRed directly from its Git repository using Ansible’s git module:
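
A task sketch for hooks/pre-run.yml; the destination directory assumes that /var/lib/dci-ansible-agent is the home directory of the dci-ansible-agent user:

    # hooks/pre-run.yml (sketch)
    - name: Pull InfraRed from its Git repository
      git:
        repo: https://github.com/redhat-openstack/infrared.git
        dest: /var/lib/dci-ansible-agent/infrared
        version: master    # branch name is an assumption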

Finally, we prepare a Python virtual environment to preserve the integrity of the system and we install InfraRed in it.

As mentioned above, the agent is called by the dci-ansible-agent user, so we have to ensure everything is done in its home directory.
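
A possible follow-up task, assuming the same home directory and a virtualenv named venv_infrared (both names are illustrative):

    # Still in hooks/pre-run.yml (sketch)
    - name: Install InfraRed in a dedicated virtualenv
      pip:
        chdir: /var/lib/dci-ansible-agent/infrared
        name: "."
        editable: yes
        virtualenv: /var/lib/dci-ansible-agent/venv_infrared

Depending on the InfraRed version, you may also need to register its plugins (infrared plugin add all) once the installation is done.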

Before we start anything, we do a cleanup of the environment. For that, we rely on InfraRed: its virsh plugin can remove all the existing resources thanks to the --cleanup argument:
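
A cleanup task sketch for the end of pre-run.yml; the SSH key and virtualenv paths are the ones assumed above:

    - name: Clean up the hypervisor with InfraRed
      shell: |
        . /var/lib/dci-ansible-agent/venv_infrared/bin/activate
        infrared virsh \
            --host-address 192.168.122.1 \
            --host-key /var/lib/dci-ansible-agent/.ssh/id_rsa \
            --cleanup yes
      args:
        chdir: /var/lib/dci-ansible-agent/infrared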

Be warned: InfraRed removes all the existing VMs, networks and storage volumes from your hypervisor.

Hosts deployment (running.yml)

As mentioned before, running.yml is the place where the deployment is actually done. We ask InfraRed to prepare our hosts:
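
Called from hooks/running.yml (for instance through a shell task or a small script), the provisioning step could look like this sketch; the topology, key path and image location are assumptions to adapt to your environment:

    infrared virsh \
        --host-address 192.168.122.1 \
        --host-key /var/lib/dci-ansible-agent/.ssh/id_rsa \
        --topology-nodes undercloud:1,controller:1,compute:1 \
        --image-url /var/lib/libvirt/images/rhel-guest-image-7-5-146-x86-64-qcow2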

Undercloud deployment (running.yml)

We can now deploy the Undercloud:
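
A sketch of the undercloud step, assuming the RHSM credentials file created earlier and the Yum repository exposed by the agent on the jumpbox (the repo file URL and the option values are assumptions to adapt):

    infrared tripleo-undercloud \
        --version 13 \
        --images-task rpm \
        --cdn /etc/dci-ansible-agent/cdn_creds.yml \
        --repos-skip-release yes \
        --repos-url http://192.168.1.40/dci_repo/dci_repo.repo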

At this stage, our libvirt virtual machines are ready and one of them hosts the undercloud. All these machines have a floating IP. InfraRed keeps the machine names up to date in /etc/hosts. We rely on that to get the undercloud IP address:
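
For instance, assuming InfraRed's default node naming (undercloud-0):

    getent hosts undercloud-0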

You can also use InfraRed to interact with all these hosts with a dynamic IP:
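
For example, to open a shell on the undercloud (again assuming the node is named undercloud-0):

    ir ssh undercloud-0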

Here, ir is an alias for the infrared command. In both cases, it's pretty cool: InfraRed did all the voodoo for us.

Overcloud deployment (running.yml)

It’s time to run the final step of our deployment.
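
The overcloud call could look like the following sketch; the first options are assumptions based on a standard InfraRed virtual deployment, while the registry and vbmc options are explained in the list below:

    infrared tripleo-overcloud \
        --version 13 \
        --deployment-files virt \
        --introspect yes \
        --tagging yes \
        --deploy yes \
        --containers yes \
        --registry-mirror 192.168.1.40:5000 \
        --registry-namespace rhosp13 \
        --registry-prefix openstack- \
        --vbmc-host undercloud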

Here we pass some extra arguments to accommodate InfraRed:

  • --registry-mirror: we don’t want to use the images from Red Hat. Instead, we will pick the ones delivered by DCI. Here 192.168.1.40 is the first IP address of our jumpbox; it’s the one the agent uses when it deploys the image registry. Use the following command to validate that you use the correct address: cat /etc/docker-distribution/registry/config.yml | grep addr
  • --registry-namespace and --registry-prefix: our image names start with /rhosp13/openstack-.
  • --vbmc-host undercloud: during the overcloud installation, TripleO uses Ironic for the node provisioning. Ironic interacts with the nodes through a Virtual BMC server. By default, InfraRed installs it on the hypervisor; in our case, we prefer to keep the hypervisor clean. This is why we target the undercloud instead.

You can have a look at the Virtual BMC instances on the undercloud:
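
For instance with the virtualbmc CLI (a sketch; where vbmc is installed and the exact output depend on your topology and InfraRed version):

    # On the undercloud: one Virtual BMC instance per overcloud node
    vbmc list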

DCI lives

Let’s start the beast!

Ok, at this stage, we can start the agent. The standard way to trigger a DCI run is through systemd:
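
Assuming the same dci-ansible-agent unit name as before:

    systemctl start --no-block dci-ansible-agent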

A full run takes more than two hours; the --no-block argument above tells systemctl to give control back to the shell even if the unit’s start-up is not complete yet.

You can follow the progress of your deployment either on the web interface (https://www.distributed-ci.io/) or with journalctl:
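
For example (same unit name assumption as above):

    journalctl -f -u dci-ansible-agent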

The CLI

DCI also comes with a CLI that you can use directly on the hypervisor.
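
For instance, to list the jobs of your remoteci (the dcirc.sh path is an assumption, see the agent installation guide):

    # Load the DCI API credentials
    source /etc/dci-ansible-agent/dcirc.sh
    # List the jobs
    dcictl job-list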

This command can also give you output in JSON format, which is handy when you want to reuse the DCI results in a script:
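
For example (assuming your python-dciclient version supports the --format option):

    dcictl --format json job-list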

To conclude

I hope you enjoyed the article and that it will help you prepare your own configuration. Please don’t hesitate to contact me if you have any questions.

I would like to thank François Charlier and the InfraRed team. François started the DCI InfraRed integration several months ago and did a great job resolving all the issues one by one with the help of the InfraRed team.

Article written by

Gonéri is a Senior Software Engineer at Red Hat. He has been involved in various free software projects during the last 15 years. Today, his daily routine includes Python, Ansible, reviewing patches and a lot of interaction with other people. He is also an OpenStack contributor.