Using TripleO composable roles in OpenStack Newton to perform extra overcloud configuration

With the Newton release of TripleO and OpenStack, one of the major changes was the overhaul of the overcloud heat templates to a “composable roles” model. What this means is that the roles in TripleO (controller, compute, ceph, etc.) can now be freely defined, and the set of services deployed on each of those roles is fully composable. The primary use case for this is decomposing the current roles so that some services run on their own nodes (e.g. isolating all database services onto dedicated nodes, away from the rest of the controller services).
However, there is another use case for composable roles: a slightly more mature alternative to the pre and post deploy hooks that an operator might leverage during a TripleO deployment to apply their own node configuration. In the following example, we will create our own service that sets the /etc/motd file on overcloud nodes, add that service to one or more of our server roles, and then deploy our overcloud, noting how our new service is applied to the nodes in question.
So let's say that I, as an operator, wish to apply some extra configuration to all my overcloud nodes using puppet. In this case I wish to modify the /etc/motd file on all my machines, to display a custom message to anyone who logs into my overcloud systems via ssh. To do this we will leverage the puppetlabs-motd module. The first thing we need to do is create a new heat template to capture our new service, called motd-service.yaml, which will look like the following:

heat_template_version: 2016-04-08
description: >
  This installs the puppet class puppetlabs-motd from git, then
  sets the motd on all servers with this service
parameters:
  ServiceNetMap:
    default: {}
    description: Mapping of service_name -> network name. Typically set
                 via parameter_defaults in the resource registry.  This
                 mapping overrides those in ServiceNetMapDefaults.
    type: json
  DefaultPasswords:
    default: {}
    type: json
  EndpointMap:
    default: {}
    description: Mapping of service endpoint -> protocol. Typically set
                 via parameter_defaults in the resource registry.
    type: json
  MotdText:
    default: ''
    type: string
outputs:
  role_data:
    description: motd
    value:
      service_name: motd
      config_settings:
        motd::content: {get_param: MotdText}
      step_config: |
        if hiera('step') >= 1 {
          vcsrepo{'/etc/puppet/modules/motd':
            ensure   => present,
            provider => git,
            source   => "https://github.com/puppetlabs/puppetlabs-motd",
          }
        }
        if hiera('step') >= 2 {
          class{'motd':}
        }

You will notice a few things about this template file. It takes a specific set of inputs and provides a specific set of outputs; this is part of the standard interface required of all services that wish to be used as part of composable roles. The inputs include ServiceNetMap (mapping services to specific networks), DefaultPasswords (passwords used for all services), and EndpointMap (mapping service endpoints to protocols like http, https, etc.). We have also added our own parameter for this service called MotdText, which is the text we want placed in /etc/motd. The outputs section of the service defines one output called role_data, which is a map containing all the information related to this service: the service_name, the puppet hieradata to be populated (in config_settings), and the puppet code to actually apply as part of deploying the service (in step_config). More information about the anatomy of a service is available in the upstream TripleO composable services documentation.
Our particular service actually needs puppet to be run twice: the first time, puppet will use the vcsrepo module to install the puppetlabs-motd module into the puppet modules directory; the second time, it will actually apply the motd class to make the change. Fortunately TripleO already has a mechanism for running puppet over the same services multiple times, called “steps”. The value of the current step (puppet run) is stored in hiera as “step”, so we leverage this to install the motd module only on the first pass, and apply the class from the second step onwards.
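Before wiring the service in, it can be worth a quick sanity check of the template itself. Assuming the heat client plugin is available on the undercloud (this only validates the heat template structure, not the puppet code inside step_config), something like the following should do it:

# run from the undercloud, with your stackrc credentials sourced
source ~/stackrc
openstack orchestration template validate -t motd-service.yaml
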
Now that we have defined the inner workings of our service, we can create a new heat environment file, passed at deploy time, that maps this file to a TripleO service and also sets the value of MotdText. We do this by creating a file called motd-environment.yaml with the following contents:

resource_registry:
  OS::TripleO::Services::Motd: motd-service.yaml
parameter_defaults:
  MotdText: |
    ********************************************************************
    *                                                                  *
    * This system is for the use of authorized users only.  Usage of   *
    * this system may be monitored and recorded by system personnel.   *
    *                                                                  *
    * Anyone using this system expressly consents to such monitoring   *
    * and is advised that if such monitoring reveals possible          *
    * evidence of criminal activity, system personnel may provide the  *
    * evidence from such monitoring to law enforcement officials.      *
    *                                                                  *
    ********************************************************************

Now that we have mapped our service in the resource registry, we can apply it to whichever nodes we wish. To do this, we take a copy of the current TripleO roles file and add our new service where needed. Copy /usr/share/openstack-tripleo-heat-templates/roles_data.yaml to your local directory, then edit it to add our Motd service to the roles you would like it applied to. For example, if you wish to add it to the Compute role, you would modify it to look like the following:

- name: Compute
  CountDefault: 1
  HostnameFormatDefault: '%stackname%-novacompute-%index%'
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::ComputeNeutronCorePlugin
    - OS::TripleO::Services::ComputeNeutronOvsAgent
    - OS::TripleO::Services::ComputeCeilometerAgent
    - OS::TripleO::Services::ComputeNeutronL3Agent
    - OS::TripleO::Services::ComputeNeutronMetadataAgent
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::NeutronSriovAgent
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::VipHosts
    - OS::TripleO::Services::Motd # our new motd service
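
To recap that copy-and-edit step as commands, something like the following should do it (the paths assume a default undercloud installation, and the edit itself is done with whatever editor you prefer):

cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml ~/roles_data.yaml
# edit ~/roles_data.yaml to add the Motd line shown above, then confirm it is present
grep -n 'OS::TripleO::Services::Motd' ~/roles_data.yaml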

Then it's simply a matter of providing both the new environment file and our new roles_data file to our deploy command, and our new service will be applied:

openstack overcloud deploy -e motd-environment.yaml -r roles_data.yaml

Remember to also include all other environment files and overcloud deploy flags as usual, and that's it! It's worth noting that in our example with the motd service, there are no real requirements around which other services it depends on. However, the services defined for a role in roles_data.yaml are executed in order from top to bottom, so if you do end up writing a service that needs to run before or after other services, you can place it at the appropriate point in that file (as well as leveraging the multiple puppet runs with “steps”) to make sure it executes at the right time.
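For completeness, a fuller invocation might look something like the sketch below. The --templates path and the extra environment file are just placeholders for whatever your deployment already uses, the IP address in the ssh command stands in for the ctlplane address that nova list reports for one of your compute nodes, and heat-admin is the default login user on the overcloud images:

openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
  -e ~/my-existing-environments.yaml \
  -e motd-environment.yaml \
  -r roles_data.yaml

# once the stack completes, find a node address (from the undercloud,
# with stackrc sourced) and confirm the banner is in place
nova list
ssh heat-admin@192.0.2.10 cat /etc/motd
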
This pattern of using services and composable roles for arbitrary operator configuration has some upsides and some downsides compared to leveraging the existing pre and post configuration hooks present in TripleO. The biggest benefit is that by defining your operator-specific code as a service, you get greater control and flexibility over when it is executed (and re-applied) during the deployment process, and in the future you will be able to define your own workflow (steps to perform during upgrades, etc.) tailored to that service. One of the biggest downsides, however, is that services currently only support running puppet code, so if you have configuration you would like to apply using one of the other heat SoftwareDeployment types (shell scripts, ansible, chef, or salt), you are unable to use services in their current state.
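For that kind of non-puppet configuration, the existing hooks remain the way to go. As a rough illustration only (the file names here are made up, and the exact parameters the hook passes can differ between releases, so check the templates shipped with your version of tripleo-heat-templates), a post-deploy script hook might be wired up with an environment file like this:

resource_registry:
  OS::TripleO::NodeExtraConfigPost: post-script.yaml

and a post-script.yaml along these lines:

heat_template_version: 2016-04-08
description: Run an arbitrary script on all overcloud nodes after deployment
parameters:
  servers:
    type: json
  DeployIdentifier:
    # some releases pass this to force re-runs on stack updates; the default
    # keeps the template valid if it is not passed
    type: string
    default: ''
resources:
  ExtraScriptConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "post-deploy script ran on $(hostname)" >> /root/extra-config.log
  ExtraScriptDeployments:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraScriptConfig}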
