Oct 18, 2012 – San Diego – The eNovance team
Day 3 at the OpenStack Summit. Let's continue our series of posts about the Summit.
Multi-backend support for Cinder
http://wiki.openstack.org/Cinder/MultiVolumeBackend
During the session, they worked directly on the etherpad.
It was a very informal presentation: the audience debated heavily and the conclusions went straight onto the etherpad.
The speaker works for Rackspace and did all the work on the current code.
Short notes:
- It needs a scheduler to find a suitable backend
- How can the user use it? That is out of the scope of this talk: the volume_type could be used to designate a definition that the driver then uses to allocate resources.
- Each API operation should be submitted to the scheduler, even if it could be performed on a designated host (such as when LVM volumes are used on a specific host)
- From the configuration file, why not run multiple managers instead of just one, for HA purposes
- The scheduler has to know about the volume_types as well as the chosen backend (see the sketch below)
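To make the idea more concrete, here is a minimal Python sketch of how a scheduler could route a request to one of several backends based on its volume_type. It is only an illustration of the discussion, not the blueprint code; the backend names, volume types and function are all hypothetical.

```python
# Hypothetical sketch of volume_type -> backend routing; not the actual
# code discussed in the blueprint.

# Several backends declared from the configuration file, each one served
# by its own manager.
BACKENDS = {
    "lvm-host1": {"driver": "LVMISCSIDriver", "host": "host1"},
    "netapp-1": {"driver": "NetAppDriver", "host": "filer1"},
}

# A volume_type designates a definition that the driver then uses to
# allocate resources.
VOLUME_TYPES = {
    "fast": {"backend": "netapp-1"},
    "cheap": {"backend": "lvm-host1"},
}


def schedule_create_volume(volume_type, size_gb):
    """Every API operation goes through the scheduler, even when the
    target host is already implied (e.g. LVM volumes on a specific host)."""
    spec = VOLUME_TYPES.get(volume_type)
    if spec is None:
        raise ValueError("unknown volume_type: %s" % volume_type)
    backend = BACKENDS[spec["backend"]]
    # A real scheduler would cast the request to the manager running on
    # backend["host"]; here we only return the routing decision.
    return backend["host"], backend["driver"], size_gb


print(schedule_create_volume("fast", 10))  # ('filer1', 'NetAppDriver', 10)
```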
Question: how can we help?
Answer: we need to agree on the blueprint and then work on the code.
Volume types, extra specs, QoS
The session was mostly a debate based on the following etherpad, modified during the discussion:
http://etherpad.openstack.org/grizzly-cinder-volumetypes
- Each volume node has to report its own capabilities.
- The nova scheduler has to know about each backend
- A cost function and a weight figure out which backend gets picked once all the filters are satisfied (see the sketch after this list)
- There are two tables: volume_types and volume_types_extra_spec (the latter already exists)
- When relating volume types to QoS, you may or may not want to expose the actual information of the volume node to the end user
- We are still trying to define what a volume type should be: ask 5 people and you get 5 different answers
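As a rough illustration of how the reported capabilities, the filters and the cost function fit together, here is a small Python sketch. The backend names, capability fields and weighing rule are invented for the example; the real scheduler discussed in the session may work differently.

```python
# Illustrative sketch of filters plus a cost function; the backend names,
# capability fields and weighing rule are invented for the example.

# Each volume node reports its own capabilities.
CAPABILITIES = {
    "backend-a": {"free_gb": 500, "qos": "gold", "cost": 3.0},
    "backend-b": {"free_gb": 2000, "qos": "silver", "cost": 1.0},
}


def passes_filters(caps, extra_specs, size_gb):
    """Keep only the backends whose capabilities satisfy the request."""
    if caps["free_gb"] < size_gb:
        return False
    wanted_qos = extra_specs.get("qos")
    return wanted_qos is None or caps["qos"] == wanted_qos


def pick_backend(extra_specs, size_gb):
    """Apply the filters, then use the cost as a weight to choose."""
    candidates = [name for name, caps in CAPABILITIES.items()
                  if passes_filters(caps, extra_specs, size_gb)]
    if not candidates:
        raise RuntimeError("no backend satisfies the request")
    # The lowest cost wins; a real weigher could combine several metrics.
    return min(candidates, key=lambda name: CAPABILITIES[name]["cost"])


# The extra specs would normally come from the volume_types_extra_spec table.
print(pick_backend({"qos": "gold"}, 100))  # backend-a
print(pick_backend({}, 100))               # backend-b (cheapest)
```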
Question: what happens when the user changes the volume_type?
Answer: modifying volume_types is not on the roadmap.
Question: what about quotas?
Answer: this is going to be a problem.
General sessions notes
- HP is going to provide a full stack (IaaS, PaaS, SaaS), with OpenStack as the IaaS layer.
- Rackspace deploys OpenStack in production with Nova multi-cells.
- Rackspace runs Nova trunk no more than 7 days old (Grizzly) for their new generation of Cloud Servers. They also run Nicira NVP for network virtualization with Quantum. They don't use the Glance API right now, but are going to converge with the OpenStack projects.
- Cisco deploys OpenStack with Puppet (you can find the manifests here). I had a talk with a Cisco engineer, and we should converge our work to provide more powerful Puppet manifests with the community.
- Cisco uses distributed Nova Volume with affinity (each host runs nova-volume).
VPN support in Quantum
It will work per tenant and support many plugins. The main goal is to provide a common framework to manage virtual private networks, ready for both IPv4 and IPv6.
- Implement OpenVPN flavors that provide either L2 or L3 connectivity (a conceptual sketch follows this list).
- Implement a nova proxy, so that nova commands still work.
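To illustrate what a common, per-tenant framework with pluggable drivers could look like, here is a purely conceptual Python sketch. None of these classes or methods are the Quantum API; every name is invented for the illustration.

```python
# Purely conceptual sketch of a per-tenant VPN framework with pluggable
# drivers. This is NOT the Quantum API; all names are invented.
from abc import ABC, abstractmethod


class VPNDriver(ABC):
    """Common framework: every plugin implements the same interface."""

    @abstractmethod
    def create_vpn(self, tenant_id, subnet_cidr):
        """Create a VPN endpoint for one tenant."""


class OpenVPNL3Driver(VPNDriver):
    """An OpenVPN 'flavor' providing routed (L3) connectivity."""

    def create_vpn(self, tenant_id, subnet_cidr):
        return {"tenant": tenant_id, "mode": "l3", "subnet": subnet_cidr}


class OpenVPNL2Driver(VPNDriver):
    """An OpenVPN 'flavor' providing bridged (L2) connectivity."""

    def create_vpn(self, tenant_id, subnet_cidr):
        return {"tenant": tenant_id, "mode": "l2", "subnet": subnet_cidr}


# One VPN per tenant, served by whichever plugin the deployer enabled.
driver = OpenVPNL3Driver()
print(driver.create_vpn("tenant-42", "10.0.0.0/24"))
```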
eBay use case with Quantum & Nicira
This session was a really interesting one, since it helped us understand how important network virtualization is. eBay showed us how they've deployed a private cloud for developers and customers with the OpenStack Essex release. They used Quantum with the NVP plugin.
In this new era of networking, their feedback was very valuable for understanding how they solve the issues we run into with physical networks. A lot of work still has to be done, since they need to upgrade from Essex to Folsom without interrupting the service.
Question: is NVP scalable?
Answer: yes, it seems that NVP can really support a big infrastructure, since they have tested it in real situations.
If you want more information about the use case, you can download the slides here. We learnt a lot from them; thanks to the eBay guys for sharing their feedback.
That's it for today. Tomorrow is the last day, and we are going to follow some workshops. I'll give you feedback, for sure!
Follow us on Twitter: @enovance
And check the #OpenStack hashtag on Twitter.
