Main points:
- An environment-variable driven process
- An 8-step process
- Deploys OpenStack using upstream Puppet modules
- Works for both bare metal and virtualized deployments
What is devtest and how does it work?
Devtest is the upstream way to deploy OpenStack with TripleO. In simple words, it takes you from a fresh bare metal server to an overcloud (understand: an OpenStack cloud) up and running with a single script.
All the devtest-related code and components are located in the tripleo-incubator project. The one we will take a closer look at is scripts/devtest.sh.
The main devtest.sh script is a wrapper around the following scripts. They can be run independently (we will see a use case later in this article) or in a row via devtest.sh:
- devtest_variables.sh
- devtest_setup.sh
- devtest_testenv.sh
- devtest_ramdisk.sh
- devtest_seed.sh
- devtest_undercloud.sh
- devtest_overcloud.sh
- devtest_end.sh
When working with devtest, one needs to understand that it is environment-variable driven: the behavior of each of the aforementioned scripts can be altered by environment variables. When going through the scripts one by one, the most important variables will be highlighted.
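The mechanism behind this is the standard shell default-value expansion: each script assigns its parameters so that anything exported beforehand wins over the built-in default. A minimal sketch (NODE_MEM is a real devtest variable, but the snippet itself is only illustrative):

```shell
# Each devtest script assigns its parameters with shell default expansion:
# the value already in the environment wins, otherwise the default is used.
NODE_MEM=${NODE_MEM:-3072}
echo "NODE_MEM=$NODE_MEM"    # 3072 when nothing was exported beforehand

# Exporting the variable first overrides the default on the next assignment.
export NODE_MEM=4096
NODE_MEM=${NODE_MEM:-3072}
echo "NODE_MEM=$NODE_MEM"    # now 4096
```

This is why sourcing ~/.devtestrc before running the scripts is enough to customize the whole deployment.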
The process in details
Environment
The deployment has been tested on a Fedora 21 bare metal server with 24 GB of RAM and 12 cores.
A tripleo user is created that can run root commands without being prompted for a password, and tripleo-incubator is cloned into ~/tripleo as the tripleo user.
Step 0: clone the tripleo-incubator project
Command: git clone https://review.openstack.org/openstack/tripleo-incubator $TRIPLEO_ROOT/tripleo-incubator
Step 1: devtest_variables.sh
Command: source $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh
Initially, in terms of devtest, one's computer is like a blank canvas: no devtest environment variable is set. Running any devtest-related script will result in errors being raised because those variables are missing.
In order to have most of the mandatory parameters set, one needs to source $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh. 'Most' is used here because one variable is left to the user to specify: $TRIPLEO_ROOT.
The PATH environment variable should also be updated so the environment can pick up the commands provided by TripleO.
When specifying devtest environment variables, by convention one writes them in ~/.devtestrc and then sources it before running any devtest scripts.
So let's assume one cloned tripleo-incubator into ~/tripleo. One then needs a ~/.devtestrc file that looks like:
# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH
After sourcing ~/.devtestrc, sourcing $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh should populate your environment with the devtest-related variables.
NOTE: If one is using a server specifically for devtest, it is recommended to source both ~/.devtestrc and $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh at login time via ~/.bashrc
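As a sketch, the ~/.bashrc additions could look like the following; the guards are defensive so that login shells keep working on machines where the files are not present yet (paths assume tripleo-incubator was cloned into ~/tripleo):

```shell
# Hypothetical ~/.bashrc additions for a dedicated devtest server.
# Source the personal settings first, then the devtest defaults.
if [ -f ~/.devtestrc ]; then
    . ~/.devtestrc
fi
if [ -n "${TRIPLEO_ROOT:-}" ] && \
   [ -f "$TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh" ]; then
    . "$TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_variables.sh"
fi
```

On a machine without these files the snippet is a harmless no-op, which keeps the same ~/.bashrc usable everywhere.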
Step 2: devtest_setup.sh
Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_setup.sh --trash-my-machine
As the name states, this script sets up the bare metal machine with the needed packages.
In detail, it runs 4 sub-scripts:
- install-dependencies: Installs all the packages needed to proceed. Note that if you are using CentOS on the bare metal you need to activate EPEL.
- pull-tools: Downloads the necessary TripleO components. The list is available here.
- setup_clienttools: Downloads all the Python packages necessary to interact with OpenStack (i.e. in order to interact with the seed, undercloud, and overcloud).
- set-usergroup-membership: Adds the current user (i.e. tripleo) to the libvirt group so it can spawn VMs without super user permissions.
Once this has run, one will be prompted to log in again so that the user's addition to the libvirt group is taken into account.
One's system is then ready to fire some devtest goodness.
The --trash-my-machine flag is necessary: since this script is destructive, it requires the user to acknowledge that s/he knows what s/he is doing.
One can add the -c option to use the cache (during a second run, for example).
Step 3: devtest_testenv.sh
Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_testenv.sh $TE_DATAFILE
This script is responsible for creating the proper environment within libvirt.
It will create the required domains and networks.
This script illustrates perfectly the notion of environment variable driven behavior.
To select the VMs' technical characteristics, the script uses the following parameters:
NODE_CNT=${NODE_CNT:-15}
NODE_CPU=${NODE_CPU:-1}
NODE_MEM=${NODE_MEM:-3072}
NODE_DISK=${NODE_DISK:-40}
NODE_ARCH=${NODE_ARCH:-i386}
SEED_CPU=${SEED_CPU:-${NODE_CPU}}
SEED_MEM=${SEED_MEM:-${NODE_MEM}}
The defaults are deliberately pasted here. For example, I doubt one will want to go with the NODE_ARCH specified here; the same goes for NODE_CPU: if the server one owns is powerful enough, let's take advantage of it.
NODE_CNT is the variable that tells libvirt how many domains it needs to create.
So, for example, if one wants 3 controllers + 3 computes + 1 storage node, 7 would be enough.
Unless you are trying to deploy a massive OpenStack cloud within your bare metal server, the default value is fine. Preallocating 15 domains does not mean that 15 VMs will be running, so do not worry if this number seems high.
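As a quick sanity check, one can derive a sufficient NODE_CNT from the intended layout; this little sketch just redoes the arithmetic from the example above:

```shell
# Example layout from the text: 3 controllers + 3 computes + 1 storage node.
CONTROLLERS=3
COMPUTES=3
STORAGE=1

# Libvirt only needs one preallocated domain per overcloud node.
export NODE_CNT=$((CONTROLLERS + COMPUTES + STORAGE))
echo "NODE_CNT=$NODE_CNT"    # NODE_CNT=7
```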
So, appending sane values to our previous ~/.devtestrc would give:
# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH
# Libvirt settings
export NODE_CPU=2
export NODE_MEM=4096
export NODE_ARCH=amd64
After running this script, virsh list --all should show a number of libvirt domains powered off.
Step 4 : devtest_ramdisk.sh
Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_ramdisk.sh
This command creates a special image for the seed and undercloud nodes. Not much to customize here; it works out of the box.
Step 5 : devtest_seed.sh
Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_seed.sh --all-nodes
This command deploys a minimal OpenStack cloud (the seed) with the few APIs (heat, nova, glance) needed to be able to deploy the overcloud.
The --all-nodes flag registers all the VMs created during step 3. This is normally done when building the undercloud, but since in our case the seed acts as the undercloud, it is taken care of here.
Not much to customize here; it works out of the box.
One can add the -c option to re-use existing sources/images if they exist.
NOTE: In order to be able to communicate with the seed 'cloud', one needs to source $TRIPLEO_ROOT/tripleo-incubator/seedrc. If one forgets to source that file, both devtest_undercloud.sh and devtest_overcloud.sh will fail to proceed, as they cannot reach the seed's heat API.
Step 6 : devtest_undercloud.sh
Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_undercloud.sh
The description is skipped here because, for a basic installation, an undercloud is not mandatory: the seed node can take the role of the undercloud.
Step 7 : devtest_overcloud.sh
Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_overcloud.sh
One can add the -c option to re-use existing source/images if they exist.
This is where all the heavy processing happens. This script is in charge of several things.
It:
- builds the various overcloud images (one per role: controller, compute, etc.)
- loads them into the undercloud (here the seed) OpenStack
- registers the overcloud nodes
- runs heat to provision the overcloud
- configures keystone on the overcloud and runs some acceptance tests (creating flavors, images, VMs, networks, etc.)
Here we will focus on two key parts: the overcloud image building and the heat provisioning of the overcloud.
Image building
The images that compose the overcloud are built with diskimage-builder. It takes various elements and builds a filesystem out of them.
Core elements (the ones needed for the filesystem to work) are part of the diskimage-builder project. The program and repository configurations are part of the tripleo-image-elements project.
Interesting variables here are :
- NODE_DIST: The distribution we want our overcloud to be. It can be any of those elements.
- DIB_RELEASE: The release of the distribution. It defaults to the most recent 'supported' one.
- RDO_RELEASE: If one is using RDO, this sets up the RDO repositories for the specified OpenStack release.
- DELOREAN_REPO_URL: The URL of the Delorean repository to use. Not mandatory for a regular deployment, but good practice when contributing to the upstream tripleo-heat-templates; a good value is the one the CI is currently running.
- DIB_DEFAULT_INSTALLTYPE: The way diskimage-builder installs the programs. By default it chooses source, but if you use a distribution that provides the packages and plan to deploy as such, the recommended value is package.
- DIB_INSTALLTYPE_puppet_modules: Upstream it is preferable to always deploy the Puppet modules from source rather than from packages, hence the preferred value here is source. It can be adapted to one's needs.
- DIB_COMMON_ELEMENTS: The list of elements that should be present on every image.
- ELEMENTS_PATH: The list of paths where diskimage-builder elements can be found.
- OVERCLOUD_DISK_IMAGES_CONFIG: The overcloud elements configuration, a file that describes which elements need to be on each overcloud image.
With that explained, amending our previous ~/.devtestrc with sane values would give:
# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH
# Libvirt settings
export NODE_CPU=2
export NODE_MEM=4096
export NODE_ARCH=amd64
# Diskimage-builder settings
export NODE_DIST='fedora selinux-permissive'
export DIB_RELEASE=21
export RDO_RELEASE=kilo
export DELOREAN_REPO_URL=http://trunk.rdoproject.org/f21/4d/35/4d35f1526504250cab5949414186947fadc2aade_d7937169 # TO UPDATE BASED ON TRIPLEO-CI
export DIB_DEFAULT_INSTALLTYPE=package
export DIB_INSTALLTYPE_puppet_modules=source
export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-puppet-elements/elements:$TRIPLEO_ROOT/heat-templates/hot/software-config/elements:$TRIPLEO_ROOT/tripleo-image-elements/elements
export DIB_COMMON_ELEMENTS='stackuser os-net-config delorean-repo rdo-release'
export OVERCLOUD_DISK_IMAGES_CONFIG=$TRIPLEO_ROOT/tripleo-incubator/scripts/overcloud_puppet_disk_images.yaml
Heat provisioning
Once the images are built, loaded via glance into the undercloud (here the seed), and the nodes registered, heat finally builds the new stack: the overcloud.
The interesting parameters here are the following:
- OVERCLOUD_COMPUTESCALE: Number of compute nodes (Default: 1)
- OVERCLOUD_CONTROLSCALE: Number of controller nodes (Default: 1)
- OVERCLOUD_BLOCKSTORAGESCALE: Number of block storage nodes (Default: 0)
- NeutronPublicInterface: The public interface on the deployed nodes (Default: nic1)
- RESOURCE_REGISTRY_PATH: The heat registry to use. By default it will use one that does not rely on puppet for provisioning but on os-apply-config from the elements.
So, easily enough, if one wants a cloud with 1 controller and 3 computes, one would export the following:
export OVERCLOUD_COMPUTESCALE=3
export RESOURCE_REGISTRY_PATH="$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml"
If one wants to try an HA setup with 3 controllers and 1 compute, one would export the following:
export OVERCLOUD_CONTROLSCALE=3
export OVERCLOUD_CUSTOM_HEAT_ENV="$TRIPLEO_ROOT/tripleo-heat-templates/environments/puppet-pacemaker.yaml"
export RESOURCE_REGISTRY_PATH="$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml"
NOTE: Heat has an interesting feature called environments that lets one override some aspects of the main stack by specifying an environment file. It won't be covered in this article; just note that if you want to deploy an HA setup, you also need to export the OVERCLOUD_CUSTOM_HEAT_ENV variable mentioned above.
So if one wants to deploy a basic 1 controller / 1 compute scenario, nothing needs to be changed. But if one wants to deploy an HA setup with 'eth0' as the NeutronPublicInterface, we would amend our ~/.devtestrc to obtain:
# TripleO settings
export TRIPLEO_ROOT=~/tripleo
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$TRIPLEO_ROOT/diskimage-builder/bin:$PATH
# Libvirt settings
export NODE_CPU=2
export NODE_MEM=4096
export NODE_ARCH=amd64
# Diskimage-builder settings
export NODE_DIST='fedora selinux-permissive'
export DIB_RELEASE=21
export RDO_RELEASE=kilo
export DELOREAN_REPO_URL=http://trunk.rdoproject.org/f21/4d/35/4d35f1526504250cab5949414186947fadc2aade_d7937169 # TO UPDATE BASED ON TRIPLEO-CI
export DIB_DEFAULT_INSTALLTYPE=package
export DIB_INSTALLTYPE_puppet_modules=source
export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-puppet-elements/elements:$TRIPLEO_ROOT/heat-templates/hot/software-config/elements:$TRIPLEO_ROOT/tripleo-image-elements/elements
export DIB_COMMON_ELEMENTS='stackuser os-net-config delorean-repo rdo-release'
export OVERCLOUD_DISK_IMAGES_CONFIG=$TRIPLEO_ROOT/tripleo-incubator/scripts/overcloud_puppet_disk_images.yaml
# Heat settings
export NeutronPublicInterface='eth0'
export OVERCLOUD_CONTROLSCALE=3
export OVERCLOUD_CUSTOM_HEAT_ENV="$TRIPLEO_ROOT/tripleo-heat-templates/environments/puppet-pacemaker.yaml"
export RESOURCE_REGISTRY_PATH="$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml"
During the process, the installer waits for a signal from heat that the deployment is over, else it times out after an hour (the default value). After the deployment is over, it runs acceptance tests on the overcloud. To interact with the overcloud, one needs to source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc.
What’s happening?
The heat provisioning can take a lot of time, and one is left in the dark about where the installation is at. One can use heat resource-show to find out what is happening at the heat level. To know what is currently happening at the system level, one should log on to the machines being provisioned.
On the host, after sourcing $TRIPLEO_ROOT/tripleo-incubator/seedrc, one can run nova list. This returns the list of machines currently being provisioned and their respective IP addresses. One can then ssh as the heat-admin user to those machines and review the logs (journalctl) as a superuser to know exactly what the system is doing.
Step 8 : devtest_end.sh (optional)
Command: $TRIPLEO_ROOT/tripleo-incubator/scripts/devtest_end.sh
All this script does is write a number of your environment variables into $TRIPLEO_ROOT/tripleorc so they can be reused for a subsequent deployment.
Conclusion
Running devtest.sh can look like magic at first glance, but once one takes the time to decompose it, one can see that it isn't that magical after all. As demonstrated, the overcloud can be highly customized; even more can be done, but that is out of the scope of this article. Now one is ready to come and contribute upstream. Welcome, and looking forward to your contributions!
Note: A useful debugging tutorial can be found at http://hardysteven.blogspot.cz/2015/04/debugging-tripleo-heat-templates.html
Note bis: During the Liberty cycle there will be an effort to use/converge toward instack as the way to deploy underclouds and overclouds.