Introduction
eNovance's software engineering team is releasing the eDeploy project publicly today. A series of articles will describe the project.
eDeploy is a new-generation tool to manage bare-metal deployments and upgrades of Linux-based systems. Upgrades have been the main focus, with the ability to roll back an upgrade if needed.
For those in a hurry who want to see the source code, eDeploy is released under the Apache license at https://github.com/enovance/edeploy.
This article focuses on the installation feature of eDeploy.
To be able to cope with upgrades and rollbacks, eDeploy manipulates complete system trees prepared in advance, unlike traditional systems that use packages or full system images.
Components
To use eDeploy you must configure the following components:
- a PXE server
- an HTTP server with CGI support
- an rsync server
Installation
The installation is done in 3 steps:
- Hardware detection
- Hardware configuration
- Tree copy
Here is a simplified sequence of exchanges between the system to provision and the installation server:
Phase 1: Hardware detection
The system to provision boots a special kernel and initrd via your method of choice (usually PXE or iPXE). This special initrd launches the hardware detection, which focuses on the following characteristics of your hardware:
- network cards
- RAID and disk controllers
- IPMI controller
The result of this hardware detection is sent to the server, which returns a configuration script.
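For illustration, a minimal pxelinux entry booting such a kernel and initrd pair could look like the following. The file names vmlinuz and initrd.pxe are assumptions for this sketch, not eDeploy's actual artifact names:

```
default edeploy
prompt 0

label edeploy
    kernel vmlinuz
    append initrd=initrd.pxe
```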
Phase 2: Hardware configuration
The hardware configuration is done in 2 steps on the server:
- Hardware matching
- Configuration script generation
Hardware matching
The server has an ordered list of hardware profiles configured in the config/state file. The config/state file is a simple Python list of tuples like this:
[('hp1', 4), ('hp2', 4), ('vm', '*')]
which means the hp1 and hp2 profiles will each be installed 4 times, and the vm profile will be installed without limit.
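As an illustrative sketch (not eDeploy's actual code), the server-side bookkeeping for this list could look like the following, where '*' marks an unlimited profile. In the real system the choice is additionally constrained by the .specs hardware matching described below:

```python
# Hypothetical sketch of config/state bookkeeping; eDeploy's real
# implementation also checks the profile's .specs against the hardware.
state = [('hp1', 4), ('hp2', 4), ('vm', '*')]

def next_profile(state):
    """Return the first profile with installs remaining and decrement
    its counter; a count of '*' means unlimited installs."""
    for i, (name, count) in enumerate(state):
        if count == '*':
            return name
        if count > 0:
            state[i] = (name, count - 1)
            return name
    return None
```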
Each profile is described by a .specs file that lists the hardware properties to match for this profile. For example, simple virtual machine hardware can be specified like this:
[
 ('disk', '$disk', 'size', 'gt(4)'),
 ('network', '$eth', 'ipv4', 'network(192.168.122.0/24)'),
 ('network', '$eth', 'serial', '$mac'),
]
The first line matches a disk larger than 4 GB, the second line a network card with an IP address in the 192.168.122.0/24 network, and the third line the MAC address of the previously matched card.
The special values starting with a $ are variables that will be passed to the configuration script sent back to the system to provision.
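To make the matching semantics concrete, here is an illustrative matcher, not eDeploy's implementation, covering the three rule shapes used above: gt(N), network(CIDR) and $variable capture. The detected hardware is assumed to be a list of (category, item, attribute, value) tuples:

```python
import ipaddress

def match_spec(specs, hw):
    """Match every .specs rule against the detected hardware tuples.
    Return the dict of captured $variables, or None on failure.
    Sketch for illustration only, not eDeploy's actual matcher."""
    var = {}

    def match_value(pattern, value):
        if pattern.startswith('$'):            # capture or reuse a variable
            name = pattern[1:]
            if name in var:
                return var[name] == value
            var[name] = value
            return True
        if pattern.startswith('gt('):          # numeric greater-than
            return float(value) > float(pattern[3:-1])
        if pattern.startswith('network('):     # IPv4 address inside a CIDR
            net = ipaddress.ip_network(pattern[8:-1])
            return ipaddress.ip_address(value) in net
        return pattern == value                # literal comparison

    for category, item, attribute, pattern in specs:
        for c, i, a, v in hw:
            if c != category or a != attribute:
                continue
            saved = dict(var)                  # tentative captures
            if match_value(item, i) and match_value(pattern, v):
                break
            var.clear()                        # roll back failed captures
            var.update(saved)
        else:
            return None
    return var
```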
Configuration script generation
Each hardware profile must have an associated configuration script described by a .configure file. For example, a simple configuration script for our vm example could look like this:
disk1 = '/dev/' + var['disk']

for disk, path in ((disk1, '/chroot'), ):
    run('parted -s %s mklabel msdos' % disk)
    run('parted -s %s mkpart primary ext2 0%% 100%%' % disk)
    run('mkfs.ext4 %s1' % disk)
    run('mkdir -p %s; mount %s1 %s' % (path, disk, path))

var['netmask'] = '255.255.255.0'
var['gateway'] = '192.168.122.1'

open('/interfaces', 'w').write('''
auto lo
iface lo inet loopback

auto %(eth)s
allow-hotplug %(eth)s

iface %(eth)s inet static
    address %(ip)s
    netmask %(netmask)s
    gateway %(gateway)s
    hwaddress %(mac)s
''' % var)

set_role('mysql', 'D7-F.1.0.0', disk1)
The var dictionary contains the variables that were matched during the hardware matching phase. Our example configuration script creates a single partition spanning the whole disk, formats it as ext4, configures the network interface, and then sets the software role and version to install.
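The script relies on helpers such as run() and set_role() provided by the eDeploy install environment; their real implementations are not shown in this article. A minimal sketch of what a run() helper could look like:

```python
import subprocess
import sys

def run(cmd):
    """Execute a shell command, echoing it first, and abort the
    installation if it fails (sketch, not eDeploy's actual helper)."""
    print('+ ' + cmd)
    ret = subprocess.call(cmd, shell=True)
    if ret != 0:
        sys.exit('command failed with status %d: %s' % (ret, cmd))
```

set_role() would similarly record the role, version and boot disk so that phase 3 knows which tree to fetch.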
Phase 3: Tree copy
Once the hardware is configured using the configuration script received from the server, the installation process downloads the tree to put at the root of the filesystem, according to the defined version and role. In our example, it would download the tree of files for the mysql software role at version D7-F.1.0.0.
After the files are copied, a bootloader is configured and the system reboots into the newly installed system.
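In essence, the copy boils down to pulling the prepared tree from the rsync server onto the freshly mounted filesystem. The sketch below assumes an install/&lt;version&gt;/&lt;role&gt; export layout on the server, which is an illustration, not eDeploy's documented layout:

```python
import subprocess

def tree_url(server, role, version):
    """Build the rsync URL of a prepared tree; the
    'install/<version>/<role>' layout is an assumption for
    illustration, not eDeploy's documented one."""
    return 'rsync://%s/install/%s/%s/' % (server, version, role)

def copy_tree(server, role, version, dest='/chroot'):
    """Pull the pre-built tree for the chosen role/version onto the
    freshly mounted filesystem with rsync."""
    subprocess.check_call(['rsync', '-a',
                           tree_url(server, role, version), dest])
```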
Conclusion
eDeploy is a young project, so any feedback is welcome. Don't hesitate to fork it and to send pull requests or bug reports via GitHub.
Stay tuned for the next article about eDeploy!
Hi,
Thanks for this “bootstrap” tool.
Could you explain the difference with this project:
https://gforge.inria.fr/projects/kadeploy3/ ?
Best Regards
Bruno
Hi Bruno,
The main difference is on the upgrade management and the flexibility at the hardware configuration level.
Fred
The link to the image under “Installation” seems to be broken.