eNovance's software engineering team is releasing the eDeploy project publicly today. A series of articles will describe the project.
eDeploy is a new-generation tool to manage bare-metal deployments and upgrades of Linux based systems. Upgrades have been the main focus, with the possibility to roll back an upgrade if needed.
For those in a hurry wanting to see the source code, eDeploy is released under the Apache license at https://github.com/enovance/edeploy.
This article focuses on the installation feature of eDeploy.
To be able to cope with upgrades and rollbacks, unlike traditional systems using packages or full system images, eDeploy manipulates complete system trees prepared in advance.
To use eDeploy you must configure the following components:
- a PXE server
- an HTTP server with CGI support
- an rsync server
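For illustration, with a classic pxelinux setup the PXE server's only job is to hand the machine the eDeploy detection kernel and initrd. A minimal, hypothetical pxelinux.cfg/default entry could look like this (the file names and any extra kernel parameters, such as the address of the eDeploy server, are assumptions to adapt to your setup):

```
default edeploy
label edeploy
    kernel vmlinuz
    append initrd=initrd.pxe
```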
The installation is done in three phases:
- Hardware detection
- Hardware configuration
- Tree copy
Phase 1: Hardware detection
The system to provision boots a special kernel and initrd via your method of choice (usually PXE or iPXE). This special initrd launches the hardware detection, which focuses on the following characteristics of your hardware:
- network cards
- RAID and disk controllers
- IPMI controller
The hardware detection report is sent to the server, which returns a configuration script.
Phase 2: Hardware configuration
The hardware configuration is done in 2 steps on the server:
- Hardware matching
- Configuration script generation
The server has an ordered list of hardware profiles configured in the config/state file. The config/state file is a simple Python list of tuples like this:
[('hp1', 4), ('hp2', 4), ('vm', '*')]
which means the hp1 and hp2 profiles will each be installed 4 times and the vm profile will be installed without limit.
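To make the bookkeeping concrete, here is a minimal sketch (not eDeploy's actual code) of how such an ordered state list can be consumed: the first profile with installs remaining wins, its counter is decremented, and '*' means unlimited:

```python
def pick_profile(state):
    """Return the first available profile name from an ordered state list,
    decrementing its counter in place; '*' means unlimited installs."""
    for i, (name, count) in enumerate(state):
        if count == '*':              # unlimited: never decremented
            return name
        if count > 0:
            state[i] = (name, count - 1)
            return name
    return None                       # every profile is exhausted

state = [('hp1', 4), ('hp2', 4), ('vm', '*')]
print(pick_profile(state))  # -> hp1
```

Once hp1 and hp2 have each been picked 4 times, every remaining machine falls through to the vm profile.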
Each profile is described by a .specs file that lists the hardware properties to match for this profile. For example, a simple virtual machine can be specified like this:
('disk', '$disk', 'size', 'gt(4)'),
('network', '$eth', 'ipv4', 'network(192.168.122.0/24)'),
('network', '$eth', 'serial', '$mac'),
The first line matches a disk larger than 4 GB, the second line a network card with an IPv4 address configured in the 192.168.122.0/24 network, and the third line captures the MAC address of the previously matched card.
The special values starting with a $ are variables that will be passed to the configuration script sent back to the system to provision.
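The matching itself can be pictured as unification over tuples: literal fields must equal the detected value, and $name fields capture it. Here is a toy sketch of that idea (far simpler than eDeploy's real matcher, which also supports functions like gt() and network()), assuming the detection report uses the same 4-field tuples as the .specs file:

```python
def match_spec(spec, hardware, var):
    """Try one spec tuple against a list of detected hardware tuples.
    '$name' fields capture values into var; other fields must be equal."""
    for hw in hardware:
        bindings = {}
        ok = True
        for pattern, value in zip(spec, hw):
            if pattern.startswith('$'):
                name = pattern[1:]
                if name in var and var[name] != value:
                    ok = False           # conflicts with an earlier capture
                    break
                bindings[name] = value
            elif pattern != value:
                ok = False
                break
        if ok:
            var.update(bindings)
            return True
    return False

# hypothetical detection report for a small VM
hardware = [
    ('network', 'eth0', 'serial', '52:54:00:12:34:56'),
    ('disk', 'vda', 'size', '8'),
]
var = {}
match_spec(('network', '$eth', 'serial', '$mac'), hardware, var)
print(var)  # -> {'eth': 'eth0', 'mac': '52:54:00:12:34:56'}
```

Because captures accumulate in var, a later spec line such as ('network', '$eth', 'serial', '$mac') is constrained to the card already bound to $eth, which is exactly the behavior described above.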
Configuration script generation
Each hardware profile must have an associated configuration script stored in a .configure file. For example, a simple configuration script for our vm example could look like this:
disk1 = '/dev/' + var['disk']

for disk, path in ((disk1, '/chroot'), ):
    run('parted -s %s mklabel msdos' % disk)
    run('parted -s %s mkpart primary ext2 0%% 100%%' % disk)
    run('mkfs.ext4 %s1' % disk)
    run('mkdir -p %s; mount %s1 %s' % (path, disk, path))

var['netmask'] = '255.255.255.0'
var['gateway'] = '192.168.122.1'

open('/chroot/etc/network/interfaces', 'w').write('''
auto lo
iface lo inet loopback

auto %(eth)s
iface %(eth)s inet static
    netmask %(netmask)s
    gateway %(gateway)s
''' % var)

set_role('mysql', 'D7-F.1.0.0', disk1)
The var dictionary contains the variables that have been matched during the hardware matching phase. Our example configuration script creates a single partition taking the whole disk, formats this partition as ext4, configures the network interface, and then sets the software role and version to install.
Phase 3: Tree copy
Once the hardware is configured using the configuration script received from the server, the installation process downloads the tree of files to put at the root of the filesystem, according to the defined version and role. In our example, it would download the tree of files for the mysql software role at version D7-F.1.0.0.
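As a rough illustration of what phase 3 has to assemble (the rsync module name and path layout below are assumptions, not necessarily eDeploy's actual convention), the role and version chosen by set_role() boil down to an rsync source like this:

```python
def tree_source(server, role, version):
    """Build a hypothetical rsync source URL for a software role tree;
    the 'install' module name and version/role layout are assumed."""
    return 'rsync://%s/install/%s/%s/' % (server, version, role)

print(tree_source('edeploy-server', 'mysql', 'D7-F.1.0.0'))
# -> rsync://edeploy-server/install/D7-F.1.0.0/mysql/
```

The installer then copies that tree onto the freshly formatted filesystem mounted during phase 2.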
After the copy of files, a bootloader is configured and the system reboots into the newly installed system.
eDeploy is a young project, so any feedback is welcome. Don't hesitate to fork it and to send pull requests or bug reports on GitHub.
Stay tuned for the next article about eDeploy!