Autoscaling with Heat, Ceilometer and Gnocchi

A while ago, I wrote a quick article/demo showing how to use Ceilometer instead of the built-in emulated Amazon CloudWatch resources of Heat.

To expand on the previous post: when you created a stack, the instances of the stack generated notifications that were received by Ceilometer and converted into samples to be written to a database, usually MongoDB. On the other end, Heat created some alarms through the Ceilometer API to trigger the Heat autoscaling actions. These alarms defined rules against statistics based on the previously recorded samples, and these statistics were computed on the fly each time an alarm was evaluated.

The main issue with this setup was that the time needed to evaluate all the defined alarms was directly tied to the number of alarms and to the complexity of computing the statistics. Computing a statistic resulted in a map-reduce in MongoDB, so adding ceilometer-alarm-evaluator workers and nodes simply meant more MongoDB map-reduce operations running in parallel.

In order to reduce the time between alarm evaluations, more workers and nodes were required, as well as a solid MongoDB configuration.

Starting with Kilo, Ceilometer has a new dispatcher driver: Gnocchi. Instead of writing samples directly into the database, Ceilometer converts them into Gnocchi elements (resource, metric and measurement) and posts them on the Gnocchi REST API.

Contrary to the current Ceilometer database dispatcher, Gnocchi aggregates what it receives as it arrives, and doesn’t compute anything when you want to retrieve statistics. There are no more on-the-fly computations! You can find more information about that in Julien Danjou’s articles “Ceilometer, the Gnocchi experiment” and “Gnocchi first release”.

On the Ceilometer alarm side, the system now has some new alarm rule types dedicated to Gnocchi. Instead of describing rules that trigger the computation of statistics, we define rules that fetch the result of pre-computed statistics.

That makes the Ceilometer alarm evaluators much faster. Evaluating an alarm results in one single HTTP call. On the Gnocchi side, when using the Swift backend, this breaks down into one SQL request to check RBAC and one HTTP call to Swift to retrieve the result. No more on-the-fly statistics computation of any kind.

The only side effect of this system is that you need to tell Gnocchi in advance how you want your data to be pre-computed/aggregated.

Playing with all of that with Heat

Devstack setup

Boot a VM, install devstack, configure your stack. Enable all Gnocchi/Heat/Ceilometer services in your localrc:
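A localrc sketch for this setup might look like the following. The exact service names vary between devstack versions, so treat these as assumptions to check against your tree:

```shell
# localrc fragment (service names are assumptions for a Kilo-era devstack)
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-anotification
ENABLED_SERVICES+=,ceilometer-collector,ceilometer-api
ENABLED_SERVICES+=,ceilometer-alarm-evaluator,ceilometer-alarm-notifier
ENABLED_SERVICES+=,gnocchi-api
```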

Enable an eager processing of the ceilometer pipeline (every 10sec):
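In Ceilometer this is controlled by the polling interval in the pipeline definition. A fragment of what `/etc/ceilometer/pipeline.yaml` could look like (the file layout follows the Kilo-era format; only the interval is the change):

```yaml
# /etc/ceilometer/pipeline.yaml (fragment) -- interval lowered to 10 seconds
sources:
    - name: meter_source
      interval: 10
      meters:
          - "*"
      sinks:
          - meter_sink
```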

Add the ‘last’ aggregation method to the default archive_policy of Gnocchi:
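One way to do this is through the Gnocchi REST API. This is only a sketch: the policy name, port, token variable, and definition below are assumptions, not the exact devstack defaults:

```shell
# Hypothetical: create an archive policy that includes the 'last'
# aggregation method (name, port, and definition are placeholders).
curl -X POST http://localhost:8041/v1/archive_policy \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"name": "low-with-last",
       "aggregation_methods": ["mean", "min", "max", "last"],
       "definition": [{"granularity": "10s", "points": 360}]}'
```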

And go!

Let’s look at some important configuration done by devstack to enable Gnocchi with MySQL and file as backend.

In Ceilometer, the database dispatcher is replaced by Gnocchi with the following configuration:
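The key setting is the dispatcher option in `ceilometer.conf` (this is the Kilo-era option name; verify against your version):

```ini
# /etc/ceilometer/ceilometer.conf (fragment)
[DEFAULT]
dispatcher = gnocchi
```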

Note that it configures a filter to exclude all samples generated by Gnocchi itself. Otherwise, each write to Swift would generate samples to be written again to Swift, creating a storm of samples that grows indefinitely. The filter breaks this infinite loop.
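The dispatcher options involved look something like this (option names taken from the Kilo-era Gnocchi dispatcher; the project name is an assumption about your devstack setup):

```ini
# /etc/ceilometer/ceilometer.conf (fragment)
[dispatcher_gnocchi]
# Drop samples that correspond to Gnocchi's own Swift activity
filter_service_activity = True
filter_project = gnocchi_swift
```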

Also for alarming, devstack sets the Gnocchi API endpoint:
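That is, the alarm evaluator is pointed at the Gnocchi API (the port is the devstack default for Gnocchi at the time; treat it as an assumption):

```ini
# /etc/ceilometer/ceilometer.conf (fragment)
[alarms]
gnocchi_url = http://localhost:8041
```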

On the Gnocchi side, the file driver has been configured for the storage and the SQL database for the indexer:
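A sketch of the resulting `gnocchi.conf` (the base path and database credentials are placeholders for whatever devstack generated on your machine):

```ini
# /etc/gnocchi/gnocchi.conf (fragment)
[storage]
driver = file
file_basepath = /opt/stack/data/gnocchi

[indexer]
url = mysql://root:password@127.0.0.1/gnocchi?charset=utf8
```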

If Swift has been chosen as the storage backend, you will get:
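Something along these lines; the auth URL, user, and key below are placeholders for your own Keystone/Swift setup:

```ini
# /etc/gnocchi/gnocchi.conf (fragment, Swift storage backend)
[storage]
driver = swift
swift_auth_version = 2
swift_authurl = http://localhost:5000/v2.0
swift_user = gnocchi_swift
swift_key = password
swift_tenant_name = service
```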

Note: the default devstack configuration of Swift can’t handle the load generated by Gnocchi and Ceilometer; the number of Swift workers needs to be increased.

Heat stack setup

Once everything is up, we can create our first stack with these templates:

Obviously you will need to change the network IDs to match your own environment.
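Stack creation itself is the usual Heat CLI call; the template file name and parameter name here are placeholders for whatever your templates define:

```shell
# Hypothetical: create the autoscaling stack, passing your own network UUID
heat stack-create autoscale-demo \
  -f autoscaling.yaml \
  -P "network_id=<your-network-uuid>"
```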

Taking a quick look at an alarm definition in the Heat templates:
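A sketch of what such an alarm can look like in a HOT template. This is modelled on the Kilo-era `OS::Ceilometer::GnocchiAggregationByResourcesAlarm` resource; the thresholds, names, and scaling-policy reference are assumptions for illustration:

```yaml
# HOT fragment (illustrative values)
cpu_alarm_high:
  type: OS::Ceilometer::GnocchiAggregationByResourcesAlarm
  properties:
    description: Scale up if the mean cpu_util is above 80% for one minute
    metric: cpu_util
    aggregation_method: mean
    granularity: 60
    evaluation_periods: 1
    threshold: 80
    comparison_operator: gt
    resource_type: instance
    # The query matches all instances belonging to this stack
    query:
      str_replace:
        template: '{"=": {"server_group": "stack_id"}}'
        params:
          stack_id: {get_param: "OS::stack_id"}
    alarm_actions:
      - {get_attr: [scaleup_policy, alarm_url]}
```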

The alarm definition looks almost like the legacy Ceilometer one. The query is identical to the POST data of a search API request in Gnocchi.

Also, the Gnocchi resource attributes are strictly defined; “server_group” is one of the extended attributes of an instance. And of course the ‘last CPU’ aggregation is just for the demo.

Now, take a look at the created Nova instances:

Then in the terminal of the first instance (gn-qxjx-h26oilfiz4mu-ao3cn5ctyin2-server-ze4ulgwkg77y), I generated some load:
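Any CPU-burning command will do; here is a minimal, bounded sketch (the duration parameter is an addition for illustration, the demo simply let it run for several minutes):

```shell
# Hypothetical CPU load generator: busy-loop for DURATION seconds
# (first argument, defaulting to 1 second for a quick check).
DURATION="${1:-1}"
timeout "${DURATION}" sh -c 'while :; do :; done' || true
echo "generated ${DURATION}s of CPU load"
```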

Some minutes later, in Nova, I can see the new instance booted by Heat:

The Ceilometer alarms have been created:

Gnocchi provides some basic graphing view of resources. For now this is mainly for development/debugging purpose. To access it when the keystone middleware is enabled, you can inject the token to all your requests using this:
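For example, with a sketch like the following; the client command, port, and path are assumptions about a Kilo-era devstack:

```shell
# Hypothetical: obtain a Keystone token and replay it on Gnocchi requests
TOKEN=$(openstack token issue -f value -c id)
curl -H "X-Auth-Token: ${TOKEN}" "http://localhost:8042/v1/resource/generic"
```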

And then point your browser to a resource URL on the port 8042 of your devstack:
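For example (a hypothetical URL; the exact path depends on your Gnocchi version, and the resource id comes from the Gnocchi resource listing):

```
http://<devstack-ip>:8042/v1/resource/instance/<instance-id>/metric/cpu_util/measures
```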

(Screenshot: cpu_util graph for an example resource.)

Article written by

Mehdi Abaakouk (sileht) is a senior Python developer who mainly works on Ceilometer as a core developer. When he isn’t working on OpenStack, he does DevOps work for the non-profit Tetaneutral.net ISP or dances to crazy swing rhythms.
