I’ve been using a VMware ESXi box as the foundation for a home lab for many years now, but without the benefit of the full (expensive!) vSphere/vCenter suite it’s a relatively tedious affair for managing anything beyond a tiny handful of virtual machines. I’ve recently begun using Vagrant for making bulk provisioning much faster, easier and more repeatable. It works well with VMware Fusion on the MacBook but I find the ESXi support a little lacking by comparison. It’s functional but has network configuration quirks and is very slow to provision VMs as it relies on copying the image to the ESXi host for every VM creation.
I recently had an attempt at interfacing Vagrant to libvirt/KVM/QEMU, the native virtualisation stack in Linux, and it’s a breath of fresh air! I’ve now replaced ESXi completely in favour of this setup.
Here’s a quick guide for anyone who is new to the process. While it would help to have basic familiarity with Vagrant, libvirt and Linux networking, I’ve tried to make this as easy as possible to get started.
I’m basing this example on Ubuntu 20.04 LTS. The same basic concepts apply to any Linux-based operating system, however you’ll need to adapt the instructions if you wish to use a different distro.
Networking Background
In the first instance I’ll attempt to explain some Linux networking basics to help provide some background for those relatively new to Linux virtualisation. VMware ESXi makes networking a fairly point-and-click affair; in Linux you’ll need to learn how to configure networking a bit more manually.
For the sake of keeping this article as simple as possible I won’t get into VLANs just yet; I’ll save that for a later post. In the example below I’ll be sharing the same network with the VMs and the host.
When using Linux virtualisation, the most common way to attach networks to virtual machines is to use bridges – these are essentially a software representation of a network switch (a switch is, in fact, a multi-port bridge!). This mirrors the approach most commonly used with VMware ESXi.
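If you'd like a feel for what a bridge actually is before touching any configuration, you can create a throwaway one by hand with iproute2. This is purely illustrative (nothing here persists across a reboot, and don't attach the NIC you're currently connected through, since you'd lose connectivity until the IP address is moved onto the bridge); the Netplan configuration later in this post does the job properly. The br-test name below is just an example:
# Create an empty bridge, inspect it, then remove it again
sudo ip link add name br-test type bridge
sudo ip link set dev br-test up
ip -d link show br-test
sudo ip link del dev br-test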
In the case of this post we'll create a new bridge interface called `br-mgmt`. You can call it anything you like (within reason) but maintaining a naming convention helps to make interfaces easier to identify when you are managing the system. I often use the `br-` prefix followed by a short label for the function of the bridge. In this case I've used `mgmt` as this will be the interface I use for accessing and managing the host.
Because this bridge will be on the same network that I use to access the host itself, we'll need to ensure that:
- The physical network interface of the host is attached to the bridge, and
- The IP address used for accessing the host is attached to the bridge interface, not the physical interface.
We’ll cover this in the next section.
Network Setup
As I’m using Ubuntu 20.04 we’ll configure networking using Netplan. Although Netplan lacks a significant amount of advanced functionality, it’s relatively simple to set up and comes supplied out of the box. I don’t recommend you disable Netplan unless you are experienced enough to deal with networking via another means.
If you are using a different distro you will need to consult the distro documentation for instructions instead.
First find the interface name for the NIC in your machine using the `ip address` command; the exact name will depend on the hardware configuration of your machine:
ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 54:52:00:d3:f5:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.10.7/24 brd 192.168.10.255 scope global dynamic noprefixroute eno1
valid_lft 1227sec preferred_lft 1227sec
inet6 fe80::1cf3:edff:fe8c:3ca3/64 scope link
valid_lft forever preferred_lft forever
In this case the physical interface name is `eno1` and it has the IP address 192.168.10.7. If the interface name on your own host is different then you will need to substitute it accordingly in the instructions below.
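If the output on your machine is longer and it's not obvious which NIC to use, the interface that currently carries the default route is usually the one you want:
# Shows the interface holding the default route (eno1 in this example)
ip route show default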
We’ll create a new Netplan configuration to:
- Remove the IP address from the physical interface.
- Disable DHCP on the physical interface so it doesn't try to get an IP address automatically.
- Create a new bridge called `br-mgmt`.
- Attach the physical interface as a member of the `br-mgmt` bridge.
- Assign an IP address via DHCP to the bridge.
Back up and remove any existing configuration from Netplan:
mkdir netplan_backup
sudo mv /etc/netplan/* netplan_backup
Now save the following contents into a file called `/etc/netplan/network.yaml`:
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      accept-ra: false
  bridges:
    br-mgmt:
      dhcp4: true
      interfaces:
        - eno1
Notes:
- If your physical interface is not `eno1` then you will need to change both entries in the above config example to suit your system.
- If you are using the desktop version of Ubuntu, ensure you set `renderer: NetworkManager` instead of `networkd`!
Once that’s done, apply the changes:
sudo netplan apply
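If you're configuring the host over SSH, `netplan try` is a safer alternative: it applies the new configuration but rolls it back automatically unless you confirm it within the timeout (120 seconds by default), which can save a trip to the console if something goes wrong:
sudo netplan try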
Now use the `ip address` and `bridge link` commands to verify that the new configuration has applied properly:
ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-mgmt state UP group default qlen 1000
link/ether 54:52:00:d3:f5:eb brd ff:ff:ff:ff:ff:ff
3: br-mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 54:52:00:d3:f5:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.10.7/24 brd 192.168.10.255 scope global dynamic noprefixroute br-mgmt
valid_lft 1227sec preferred_lft 1227sec
inet6 fe80::1cf3:edff:fe8c:3ca3/64 scope link
valid_lft forever preferred_lft forever
bridge link
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br-mgmt state forwarding priority 32 cost 100
From here we can see the `eno1` interface is a member of the `br-mgmt` bridge and that the bridge has obtained an IP address from DHCP.
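It's also worth confirming that the default route now goes via the bridge and that the host still has connectivity. The 192.168.10.1 gateway below is just an assumption based on the example network; substitute your own:
# The default route should now be via br-mgmt rather than eno1
ip route show default
# Quick reachability check against the gateway (assumed to be 192.168.10.1 here)
ping -c 3 192.168.10.1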
Let’s now install libvirt and Vagrant.
Packages
Install libvirt:
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients
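Before moving on, it's worth a quick sanity check that the CPU exposes hardware virtualisation extensions and that the libvirt daemon came up:
# A count greater than zero means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
grep -cE '(vmx|svm)' /proc/cpuinfo
# The libvirt daemon should be active (running) after installation
systemctl status libvirtd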
Install Vagrant from HashiCorp's official apt repository (based on their published installation instructions):
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update
sudo apt install vagrant
Install necessary components for Vagrant and libvirt to play nicely together:
sudo apt-get build-dep vagrant ruby-libvirt
sudo apt-get install ebtables dnsmasq-base
sudo apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
sudo apt-get install libguestfs-tools
Ensure you are a member of the `libvirt` group so you can manage VMs without sudo:
sudo usermod -a -G libvirt $USER
Log out and back in again to apply the group membership. At this point we should be all ready to go!
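A quick way to confirm the group membership has taken effect is to query the system libvirt daemon without sudo:
# Should print an (empty) table of VMs rather than a permission error
virsh -c qemu:///system list --all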
Vagrant Quick Start
Make a new directory for our project (e.g. `lab1`) and change into it:
mkdir lab1
cd lab1
We’ll also need the Vagrant libvirt plugin:
vagrant plugin install vagrant-libvirt
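You can confirm the plugin is installed (and see its version) with:
vagrant plugin list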
We need to create a Vagrantfile to define how our lab will look. In this case I'm going to create a lab with three VMs, all on the `br-mgmt` network with a fixed IP of our choosing. All will run Ubuntu 20.04 LTS.
| Hostname | CPU Count | RAM (GB) | IP Address |
| --- | --- | --- | --- |
| vm1 | 2 | 3 | 192.168.10.201 |
| vm2 | 1 | 1 | 192.168.10.202 |
| vm3 | 1 | 2 | 192.168.10.203 |
Create a `Vagrantfile` to define some VMs using the template below. Note the custom network bridge to suit the bridge name we used in Netplan! The contents below should be saved as a file called `Vagrantfile` in the current working directory.
# Use libvirt as the default provider
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'

# Name of the bridge interface created in the steps above
bridge_iface = "br-mgmt"

# Define CPU count, memory (MB) and public IP address for each VM
# You can add as few or as many entries as you like, assuming there is sufficient resource to run them!
nodes = {
  "vm1" => [2, 3072, '192.168.10.201'],
  "vm2" => [1, 1024, '192.168.10.202'],
  "vm3" => [1, 2048, '192.168.10.203'],
}

Vagrant.configure("2") do |config|
  # Use Ubuntu Focal image
  config.vm.box = "generic/ubuntu2004"

  # Apply config to each VM
  nodes.each do |(name, cfg)|
    numvcpus, memory, ipaddr = cfg

    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network :public_network,
        :dev => bridge_iface,
        :mode => "bridge",
        :type => "bridge",
        :ip => ipaddr
      node.vm.synced_folder('.', '/Vagrantfiles', type: 'rsync')
      node.vm.provider :libvirt do |v|
        v.memory = memory
        v.cpus = numvcpus
      end
    end
  end
end
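Before launching anything you can check that the Vagrantfile parses cleanly and see what Vagrant intends to manage:
# Validate the Vagrantfile syntax and configuration
vagrant validate
# List the defined VMs; all should show as 'not created' at this point
vagrant status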
Now to start the VMs run this command:
vagrant up
The first attempt might be a bit slow as the image will need to be downloaded and cached.
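If you like, you can download and cache the box ahead of time so the first vagrant up doesn't have to wait for it:
vagrant box add generic/ubuntu2004 --provider libvirt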
If you want to verify that the VMs are running you can use the `virsh list` command:
virsh list
Id Name State
----------------------------------
 1    lab1_vm1    running
 3    lab1_vm2    running
 4    lab1_vm3    running
To shut down VMs but retain their state:
vagrant halt
If you want to completely delete the VMs, run the following command:
vagrant destroy
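By default this will ask for confirmation for each VM; if you're sure, the -f flag skips the prompts:
vagrant destroy -f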
If you modify the configuration of any existing VMs in the `Vagrantfile` then you will need to destroy those VMs and rebuild them again using `vagrant up`.
To access a VM for administration, use `vagrant ssh <hostname>`, e.g.:
vagrant ssh vm1
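If you'd rather use plain ssh or scp (or point other tooling at a VM), Vagrant will print the connection details it uses:
vagrant ssh-config vm1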
If you install any applications on the VMs then they should be accessible on the network via the IP addresses we set above.
Desktop App
If you are running this on a desktop machine then you can use the Virt Manager app to view and interact with the VMs. First ensure that it is installed:
sudo apt install virt-manager
From here you should be able to run the Virtual Machine Manager application from your desktop:

Conclusion
There we have it, a quick crash course in using Vagrant to drive libvirt! In coming posts I’ll describe a bit more about common tasks in libvirt and also using Vagrant and Ansible together to automatically configure the VMs after deployment, which is where the real power comes into this toolset!