Having an industrialized way to deploy OpenStack for our customers, independent of the OpenStack/Linux distributions they may choose, is a key element to ensure maintainability and upgradability, and therefore customer satisfaction. As our experience grows, our set of tools gets better every day. We have often described some of these tools individually, but a complete overview of the tooling had been sitting on our todo list until now… Of course, some of this tooling will continue to evolve over time, and as the TripleO project matures we plan to replace some of the components we currently use, but here is a snapshot of what we use today.
At eNovance, we build OpenStack Clouds using Continuous Integration and optionally Continuous Delivery (CI/CD) for our customers. To make this easy, we rely on a set of automation tools to do the whole job:
- Puppet modules from Stackforge (for OpenStack services)
- Puppet modules from the Puppet forge
- Our own Puppet module, a flexible implementation capable of configuring a scalable OpenStack Cloud
- Jenkins, which is in charge of building eDeploy roles and validating installation & upgrade flows
In this post, I’m going to introduce the approach we take to deploy OpenStack in production.
Requirements
- At least 4 servers, physical or virtual. Pay attention to the disk layout for Ceph, and note that one of the OpenStack nodes will need a second NIC.
- An eDeploy server with the install-server and openstack-full roles already built. If you don’t know anything about eDeploy and roles, have a look at this previous blog post by Frederic Lepied.
Architecture
Our reference architecture is based on 6 components:
- installation: manages the lifecycle (installation + upgrades) of all machines
- load-balancers: load-balance OpenStack API and database access
- controllers: run the OpenStack management services (API, schedulers, messaging, caching, dashboard)
- databases: provide highly available MySQL and MongoDB databases
- compute: hosts virtual machines running the KVM hypervisor
- storage: stores virtual machines and block volumes using Ceph and the RBD driver
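Given the 4-server requirement above, these components end up spread across the install server and the 3 openstack-full nodes. A purely illustrative sketch of that mapping (a hypothetical layout, not the actual schema of the environment YAML file described below):

```yaml
# Illustrative mapping only -- not the real configuration schema.
install-server:
  node0: [installation]
openstack-full:
  node1: [load-balancer, controller, database, compute, storage]
  node2: [load-balancer, controller, database, compute, storage]
  node3: [load-balancer, controller, database, compute, storage]
```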
Prepare your nodes
You will have to build your roles and deploy them using eDeploy:
- one install-server role (Puppet, Ansible, Serverspec, eDeploy and PXEmngr).
- three openstack-full roles (OpenStack and Ceph).
Generate the configuration
To prepare the Cloud configuration, we use a few tools that generate all the configuration files we will need later.
First of all, you need to describe your configuration in a single YAML file; there is an example here. This file will be consumed by the configuration generation tool. We suggest moving this file into your own Git repository and keeping it secret, since it contains passwords and IP addresses.
In our short-term roadmap, we plan to secure this file by hiding confidential information.
Now, create a 3nodes-fullha.yml file that tells the configuration generator where to find everything about the setup you want to install:
```yaml
module: git@github.com:enovance/puppet-openstack-cloud.git
serverspec: git@github.com:enovance/openstack-serverspec.git
environment:
  repository: git@github.com:enovance/openstack-yaml-infra-3nodes-fullha.git
  name: example.yml
infrastructure: git@github.com:enovance/openstack-yaml-infra-3nodes-fullha.git
```
Please change the environment section to use your own YAML file.
You just completed the most difficult task. Now, we can provision the installation server:
```shell
git clone git@github.com:enovance/config-tools.git
cd config-tools
./provision.sh H.1.3.0 3nodes-fullha.yml
```
provision.sh fetches the components described in the 3nodes-fullha.yml file and places them at the H.1.3.0 Git tag, falling back to the master branch if the tag doesn’t exist. Then provision.sh transfers the files to the right locations on your install server.
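The tag-or-master fallback can be sketched like this (a simplified assumption, not provision.sh’s actual implementation):

```shell
#!/bin/sh
# Simplified sketch of the fallback behaviour (an assumption, not the
# actual provision.sh code): use the requested Git tag when the
# repository has it, otherwise fall back to the master branch.
pick_ref() {
    tag=$1
    if git rev-parse --verify "refs/tags/$tag" >/dev/null 2>&1; then
        echo "$tag"
    else
        echo "master"
    fi
}

# Demo against a throwaway repository:
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init
git tag H.1.3.0
pick_ref H.1.3.0   # prints: H.1.3.0
pick_ref H.9.9.9   # prints: master
```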
If you connect by SSH to the install server, you should see:
- Configuration files in /etc/config-tools
- Serverspec configuration in /etc/serverspec
- Puppet configuration in /etc/puppet
Configure OpenStack
Now, it’s time to configure our Cloud!
Just run this command and wait:
```shell
ssh root@install-server configure.sh
```
An installation is done in 5 steps:
- MySQL, MongoDB, RabbitMQ, Memcached, Ceph Key generation and Ceph Monitor
- Keepalived, HAproxy, Ceph OSD, Horizon
- Keystone, Ceph storage pools
- OpenStack controller + compute services
- OpenStack network services
Each step is validated by technical tests (using serverspec), and the Puppet agent runs as many times as needed (up to a maximum) for the step to pass on each node. This process is designed to ease debugging: each step must be validated before the next one can run, so stopping when a step fails makes full sense, and lets you see only the actual problems without being flooded by messages from services that depend on the broken element.
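The per-step loop can be sketched as follows (illustrative only; the real logic lives in config-tools, and `apply_step`/`test_step` are stand-ins for a Puppet agent run and the step’s serverspec tests):

```shell
#!/bin/sh
# Illustrative sketch of the per-step loop (an assumption, simplified).
# apply_step stands in for a "puppet agent" run; test_step stands in for
# the serverspec tests of the step, here passing on the second run.
MAX_RUNS=3

apply_step() { echo "applying step $1 (run $2)"; }   # stand-in: puppet agent
test_step()  { [ "$2" -ge 2 ]; }                     # stand-in: serverspec

run_step() {
    step=$1 run=1
    while [ "$run" -le "$MAX_RUNS" ]; do
        apply_step "$step" "$run"
        if test_step "$step" "$run"; then
            echo "step $step validated after $run run(s)"
            return 0
        fi
        run=$((run + 1))
    done
    echo "step $step failed after $MAX_RUNS runs" >&2
    return 1
}

run_step 4   # converges on the second run, then the step is validated
```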
For example, at step 4/5 we test whether the Nova API is running and returns the flavor list:
```ruby
describe port(8774) do
  it { should be_listening.with('tcp') }
end

describe command("nova --os-username #{property[:ks_user_name]} --os-password #{property[:ks_user_password]} --os-tenant-name #{property[:ks_tenant_name]} --os-auth-url http://#{property[:vip_public]}:5000/v2.0 flavor-list") do
  it { should return_exit_status 0 }
end
```
When a step fails, fix what is needed, then invoke configure.sh again: it resumes the process from where it was interrupted previously.
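The resume behaviour could look like the following sketch (an assumption, simplified; the step count and state file are illustrative, not configure.sh’s actual mechanism):

```shell
#!/bin/sh
# Illustrative sketch of resuming a 5-step run (an assumption, not the
# real configure.sh): remember the last validated step in a state file,
# and skip every step up to it on the next invocation.
STATE=./.last_step

resume_from() {
    last=0
    [ -f "$STATE" ] && last=$(cat "$STATE")
    for step in 1 2 3 4 5; do
        [ "$step" -le "$last" ] && continue   # already validated earlier
        echo "running step $step"
        echo "$step" > "$STATE"               # record progress
    done
}

resume_from           # first run: steps 1..5
echo 3 > "$STATE"     # simulate an interruption after step 3
resume_from           # second run: only steps 4 and 5 execute
```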
What happens after deployment?
After each deployment of OpenStack, we need to test the whole infrastructure using Tempest.
- Upstream, Tempest is run a first time when we get the packages from Debian / Ubuntu / Red Hat. We install an all-in-one server in a VM with all the OpenStack services inside, and the Tempest report tells us which features are not functional. If Tempest passes without new failures, the packages are copied to our repository. You could compare this process to installing devstack, but using packages instead of source code directly.
- Downstream, Tempest is run after each OpenStack deployment to ensure we get the same results as upstream. Differences may appear when you fail to configure some services, or when our architecture introduces bugs (of course, this never happens!).
Thanks to tempest-report, a tool that tests remote installations and summarizes the services and extensions it finds, we can generate useful reports and manage the level of testing we want to run against our Cloud infrastructure.
Feedback, please!
We are very interested in feedback from people using our tools. Have a look at the eDeploy roles, Puppet modules and serverspec test suite, and install your own production-ready OpenStack! We are always happy to hear your suggestions, issues and new ideas; report bugs or request features on GitHub:
- eDeploy: Linux system provisioning tool
- eDeploy-roles: script to build eDeploy roles
- puppet-openstack-cloud: flexible Puppet implementation capable of configuring a scalable OpenStack Cloud
- Serverspec: serverspec tests for the puppet-openstack-cloud modules
- config-tools: set of tools that use Puppet to configure a set of nodes with a complex configuration, using a step-by-step approach
- Tempest Report: tool for OpenStack Tempest to test remote installations and summarize found services and extensions.
Contributions are also very welcome…