Introducing Networking-Ansible

During the OpenStack Rocky release cycle a new OpenStack ML2 driver project was established: networking-ansible. This project integrates OpenStack with the Ansible Networking project, the part of Ansible that gives network operators an Ansible interface for managing network switch configuration. By consuming Ansible Networking as the backend for switch configuration, the driver pushes all hardware-specific communication down into the Ansible layer. This makes it possible to support multiple switching platforms in a single ML2 driver, which reduces the maintenance overhead for OpenStack operators integrating a heterogeneous network environment with baremetal guest deployments: only a single ML2 driver needs to be configured.
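To make the "single driver, many platforms" idea concrete, a Neutron ML2 configuration along these lines could enable the driver alongside per-switch connection details. The switch names, addresses, and credentials below are placeholders, and the exact option names should be checked against the project's documentation:

```ini
# ml2_conf.ini (sketch) -- enable the ansible mechanism driver
[ml2]
mechanism_drivers = openvswitch,ansible

# One section per managed switch; Ansible Networking selects the
# platform-specific modules based on ansible_network_os.
[ansible:leaf-switch-1]
ansible_network_os = junos
ansible_host = 192.0.2.10
ansible_user = admin
ansible_ssh_pass = changeme
```

Adding a second switch of a different vendor would just mean another `[ansible:...]` section with a different `ansible_network_os`, rather than a second ML2 driver.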

The networking-ansible team had two general goals in the Rocky release cycle. First, to establish the project. A significant amount of work was completed in Rocky to set up OpenStack repositories and tracking tools, RDO packaging, upstream testing, and integration with Neutron, Ansible Networking, and TripleO. We completed, and in some ways exceeded, our goals here. A big thank you to the RDO and OpenStack community members who contributed to the project's successful establishment. Second, we intended to support a single initial use case, with a single basic feature, on a single switch platform. We accomplished and exceeded this goal as well. The Ironic project needs to be able to modify the switch port a baremetal guest is connected to, so that the node can be placed on the Ironic provisioning network for provisioning and then moved to the Neutron-assigned tenant network for guest tenant traffic. This use case assumes a single network interface on the guest, attached to a switch port in access mode. Using networking-ansible, Neutron can swap the access port's VLAN between the Ironic provisioning network and the Neutron-assigned tenant network VLAN, with Ansible Networking as its backend. We ended up testing on OVS and a Juniper QFX this cycle. Untested code exists for EOS and NXOS.
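The VLAN-swap workflow described above can be sketched in a few lines of Python. This is an illustrative model only: the class and function names are invented for this example, and where the sketch mutates local state, the real driver would run an Ansible Networking task against the switch (for instance a Juniper QFX).

```python
# Illustrative sketch of the access-port VLAN swap that the Ironic
# use case relies on. Names here are hypothetical, not the
# networking-ansible API.

PROVISIONING_VLAN = 100  # Ironic provisioning network (example value)


class SwitchPort:
    """Models a single switch port operating in access mode."""

    def __init__(self, name, vlan=None):
        self.name = name
        self.vlan = vlan

    def assign_access_vlan(self, vlan):
        # In the real driver this step would run an Ansible Networking
        # task against the switch instead of mutating local state.
        self.vlan = vlan


def provision_baremetal_node(port, tenant_vlan):
    """Walk a port through the deploy workflow sketched in the text."""
    # 1. Put the node on the provisioning network so Ironic can image it.
    port.assign_access_vlan(PROVISIONING_VLAN)
    # ... Ironic deploys the guest image here ...
    # 2. Move the port to the Neutron-assigned tenant network.
    port.assign_access_vlan(tenant_vlan)


port = SwitchPort("xe-0/0/1")
provision_baremetal_node(port, tenant_vlan=200)
print(port.vlan)  # → 200
```

The key point the sketch captures is that the guest's single access port only ever carries one VLAN at a time, and Neutron drives the transition between the provisioning and tenant networks.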

Looking toward the future, we have planned a set of goals for the OpenStack Stein release cycle. First, support for more platforms. We have gained access to a handful of additional switch platforms, and we plan to add support for them to the code base and work through as much testing as possible. Second, improved security and trunk port support. We are in the process of adopting Ansible Vault to store switch credentials, and we are working on the ability to configure a baremetal guest's port in trunk mode so the guest can connect to multiple networks. Finally, exposing a Python API. The underlying code that interfaces with Ansible Networking has no hard dependencies on OpenStack, so an API isolated from OpenStack dependencies will be exposed and documented. This API will be useful for use cases that want the abstracted interface to networking hardware via Ansible Networking but have management needs different from what OpenStack offers.
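To illustrate what an OpenStack-independent Python API might look like, here is a minimal sketch. The class and method names are hypothetical, chosen only to show the shape of such an interface; the real API is still being designed, and this stub records intended operations rather than executing Ansible tasks.

```python
# Hypothetical sketch of a Python API over Ansible Networking with no
# OpenStack dependencies. All names here are illustrative assumptions.

class NetworkingAnsible:
    """Facade that would translate calls into Ansible Networking tasks.

    This stub only records the operations it would perform, so the
    sketch stays self-contained and runnable.
    """

    def __init__(self, inventory):
        # Per-switch host vars, as an Ansible inventory would hold them.
        self.inventory = inventory
        self.operations = []  # stand-in for executed Ansible tasks

    def conf_access_port(self, switch, port, vlan):
        # Would select and run the platform-appropriate module
        # (junos, eos, nxos, ...) based on the switch's host vars.
        self.operations.append(("access", switch, port, vlan))

    def conf_trunk_port(self, switch, port, native_vlan, trunked_vlans):
        # Trunk mode lets a baremetal guest reach multiple networks.
        self.operations.append(
            ("trunk", switch, port, native_vlan, tuple(trunked_vlans)))


api = NetworkingAnsible({"qfx1": {"ansible_network_os": "junos"}})
api.conf_access_port("qfx1", "xe-0/0/1", vlan=200)
api.conf_trunk_port("qfx1", "xe-0/0/2", native_vlan=10, trunked_vlans=[20, 30])
print(len(api.operations))  # → 2
```

A caller outside OpenStack (a CI system, an inventory tool) could drive switch configuration through an interface like this without pulling in Neutron at all, which is the point of isolating the API.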

My congratulations go out to the team and supporting community members who worked on this project for a very successful release cycle. My thanks again to the OpenStack and RDO communities for the support offered as we established this project. I look forward to adding the new features being worked on, and I hope we'll be just as successful in completing our new goals six months from now.
