Using libvirt/KVM with Vagrant

netlab uses Vagrant with the vagrant-libvirt plugin to start virtual machines in the libvirt/KVM environment.

To use the libvirt/KVM environment on a Linux bare-metal server or in a Linux VM:


You MUST use netlab up to start the lab to ensure the virtual machines get correct management IP addresses – netlab up creates the vagrant-libvirt management network with a predefined IP address range and DHCP bindings.

Supported Versions

We tested netlab with Vagrant version 2.3.4 and vagrant-libvirt plugin version 0.11.2. These are also the versions installed by the netlab install libvirt command.

Vagrant Boxes

Vagrant starts virtual machines from prepackaged VM images called boxes. While it’s possible to download some network device images from Vagrant Cloud, you’ll have to build most of the boxes you’d want to use in your lab.

You have to use prescribed box names when installing or building the Vagrant boxes for the following devices; see supported platforms, or run netlab show images, to find the expected box name for each device:

  • Arista vEOS

  • Aruba CX

  • Cisco ASAv

  • Cisco Catalyst 8000v

  • Cisco CSR 1000v

  • Cisco IOSv

  • Cisco IOS XR

  • Cisco Nexus 9300v

  • Dell OS10

  • Fortinet FortiOS

  • Juniper vPTX (vJunos EVO)

  • Juniper vSRX 3.0

  • Mikrotik RouterOS 6

  • Mikrotik RouterOS 7

The Vagrant boxes for the following devices are automatically downloaded from Vagrant Cloud when you use them for the first time in your lab topology:

  • Cumulus VX

  • Cumulus VX 5.0 (NVUE)

  • Generic Linux

  • Even if a new box version is available from Vagrant Cloud, Vagrant will only output a warning to let the user know an update is available. You can ignore that warning or update the box with vagrant box update.

  • Vagrant does not automatically download the updated boxes because boxes can be relatively large (See Vagrant box versioning for details).

  • We recommend that you periodically download the updated vyos/current box.
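For example, to fetch the latest version of the vyos/current box:

```
$ vagrant box update --box vyos/current
```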

Building Vagrant Boxes

The device-specific documents contain box-building recipes using the netlab libvirt utility.


For more Vagrant details, watch the Network Simulation Tools part of Network Automation Tools webinar.

Modifying VM Settings

The following node parameters influence the VM configuration created by vagrant-libvirt:

  • cpu – number of virtual CPUs allocated to the VM

  • memory – VM memory (in MB)

  • libvirt.nic_model_type – VM NIC model (example: e1000). Default netlab settings usually work fine.

  • libvirt.nic_adapter_count – maximum number of VM NICs (default: 8)
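For example, the following node definition (the node name and values are illustrative) gives a VM two vCPUs, 4 GB of memory, and up to 12 NICs:

```yaml
nodes:
  r1:
    cpu: 2
    memory: 4096              # MB
    libvirt:
      nic_model_type: e1000   # override the default NIC model
      nic_adapter_count: 12
```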

Replacing Vagrant Boxes

If you want to rebuild and install a Vagrant box with the same version number, you must manually remove the old box. You must also delete the corresponding volume (disk image) from the libvirt storage pool (the vagrant-libvirt plugin installs new boxes but does not clean up the old ones).

To delete an old version of a Vagrant box, use a procedure similar to the one described below:

  • Use vagrant box list to list the installed boxes

  • Use vagrant box remove <box-name> --box-version=<box-version> to delete the Vagrant box[1]

  • Use virsh vol-list --pool default[2] to list the installed Vagrant boxes

  • Find the relevant volume name, for example, cisco-VAGRANTSLASH-iosxr_vagrant_box_image_7.4.2_box.img for an IOS XR 7.4.2 image

  • Delete the volume with virsh vol-delete --pool default <volume-name>

The new Vagrant box will be copied into the libvirt storage pool the next time you use the affected device in your lab.
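Using the IOS XR 7.4.2 example above, and assuming the box is named cisco/iosxr, the complete cleanup would look like this:

```
$ vagrant box list
$ vagrant box remove cisco/iosxr --box-version=7.4.2
$ virsh vol-list --pool default
$ virsh vol-delete --pool default cisco-VAGRANTSLASH-iosxr_vagrant_box_image_7.4.2_box.img
```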

Libvirt Networking

netlab uses libvirt networks and P2P UDP tunnels to implement topology links:

  • P2P UDP tunnels are used for links with two nodes, and link type is set to p2p (the default behavior for links with two nodes). P2P tunnels are transparent; you can run any layer-2 control-plane protocol (including LACP) over them.

  • libvirt networks are used for all other links. They are automatically created and deleted by vagrant up and vagrant down commands executed by netlab up and netlab down. netlab up sets the group_fwd_mask for all Vagrant-created Linux bridges to 0x4000 to enable LLDP passthrough.
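For example, in the following topology fragment (hypothetical node names), the first link becomes a P2P UDP tunnel and the second one a libvirt network:

```yaml
links:
- r1-r2       # two nodes: P2P UDP tunnel
- r1-r2-r3    # more than two nodes: libvirt network (Linux bridge)
```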

Connecting to the Outside World

Lab networks are created as private, very-isolated libvirt networks without a DHCP server. If you want a lab network connected to the outside world, set the libvirt.public and/or libvirt.uplink link attributes (setting one of them is enough):

  • Set libvirt.public link attribute to true, or to any value supported by libvirt[3].

  • Set libvirt.uplink link attribute to the name of one of the Ethernet interfaces on your server[4] if your Linux distribution does not use eth0 as the name of the Ethernet interface[5].

Example: use the following topology to connect your lab to the outside world through r1 on a Linux server that uses enp86s0 as the name of the Ethernet interface:

defaults.device: cumulus
nodes: [ r1, r2 ]
links:
- r1-r2
- r1:
  libvirt:
    uplink: enp86s0

Finally, if you want to connect the management network to the outside world, create the management network based on an existing Linux bridge that is already connected to the outside world or enable port forwarding.

Using Existing Libvirt Networks

To attach lab devices to existing libvirt virtual networks:

  • Set the link bridge attribute to the name of an existing network.

  • Set the link libvirt.permanent attribute to True to tell the vagrant-libvirt plugin not to destroy the network on shutdown.

You can use this functionality to attach lab devices to public networks or networks extended with VXLAN transport.
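For example, the following link (the network name is hypothetical) attaches r1 to an existing libvirt network that survives the lab shutdown:

```yaml
links:
- r1:
  bridge: underlay-net   # name of an existing libvirt network
  libvirt:
    permanent: true      # do not destroy the network on shutdown
```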


The vagrant destroy command will crash if it tries to destroy an existing non-persistent libvirt network, stopping the netlab down procedure. Rerun netlab down to complete the lab shutdown/cleanup process.

Libvirt Management Network

vagrant-libvirt plugin uses a dedicated libvirt network to connect the VM management interfaces to the host TCP/IP stack. netlab up command creates that network before executing vagrant up to ensure the network contains the expected DHCP mappings. The management network is automatically deleted when you execute netlab down (recommended) or vagrant destroy.

You can change the parameters of the management network in the addressing.mgmt pool:

  • ipv4: The IPv4 prefix used for the management network (default: 192.168.121.0/24)

  • ipv6: Optional IPv6 management network prefix. It’s not set by default.

  • start: The offset of the first VM management IP address in the management network (default: 100). For example, with start set to 50, the device with id set to 1 will get the 51st IP address in the management IP prefix.

  • _network: The libvirt network name (default: vagrant-libvirt)

  • _bridge: The name of the underlying Linux bridge (default: libvirt-mgmt)

  • _permanent: set to True to use an existing libvirt network as the management network. netlab up will create the network if it does not exist and tell Vagrant not to remove it when the lab is stopped.
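Putting those parameters together, a lab topology (or user defaults file) could change the management network like this (the values are illustrative):

```yaml
addressing:
  mgmt:
    ipv4: 192.168.200.0/24
    start: 50                  # first VM gets the 51st address in the prefix
    _network: vagrant-libvirt
    _bridge: libvirt-mgmt
```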

Important caveats:

  • netlab up uses XML definition in templates/provider/libvirt/vagrant-libvirt.xml within the Python package directory (source file) to create the management network. If you’d like to change the management network parameters, create a custom XML definition in the libvirt/vagrant-libvirt.xml file in the current directory, ~/.netlab directory, or /etc/netlab directory.

  • If you want to use an existing libvirt network as the management network, make sure it has the same static DHCP mappings as the management network created by the netlab up command.

VM Management IP Addresses

The only way to assign a management IP address to a network device started as a virtual machine is through DHCP, and vagrant, together with libvirt (and dnsmasq), provides a seamless implementation.

netlab creates static DHCP mappings in the management network (see above) and asks vagrant-libvirt to set the MAC address of the VM management interface to a well-known value, ensuring that each VM gets the expected management IP address assigned by netlab based on the device node ID and the start parameter of the mgmt address pool.

To have fixed management IP addresses for your virtual machines (for example, to be accessed from an external management tool):

  • Change the addressing.mgmt parameters

  • Set node id parameters to the desired values

  • Let netlab do the rest of the work.
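As an illustration, the address computation can be sketched in a few lines of Python (a simplification that ignores IPv6 and prefix-boundary checks, assuming the default 192.168.121.0/24 management prefix):

```python
import ipaddress

def mgmt_ip(prefix: str, start: int, node_id: int) -> str:
    """Management IP = management prefix base + start offset + node ID."""
    net = ipaddress.ip_network(prefix)
    return str(net.network_address + start + node_id)

# Default mgmt pool: start=100, so the node with ID 1 gets the .101 address
print(mgmt_ip("192.168.121.0/24", 100, 1))   # → 192.168.121.101
```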

Port Forwarding

netlab supports vagrant-libvirt port forwarding – mapping of TCP ports on VM management IP address to ports on the host. You can use port forwarding to access the lab devices via the host’s external IP address without exposing the management network to the outside world.

Port forwarding is turned off by default and can be enabled by configuring the defaults.providers.libvirt.forwarded dictionary. Dictionary keys are TCP port names (ssh, http, https, or netconf), and dictionary values are the start values of host ports. netlab assigns a unique host port to every VM forwarded port based on the start value and VM node ID.

For example, when given the following topology…

defaults.device: cumulus
defaults.providers.libvirt.forwarded:
  ssh: 2000

nodes:
  r1:
  r2:
    id: 42

netlab maps:

  • SSH port on management interface of R1 to host port 2001 (R1 gets default node ID 1)

  • SSH port on management interface of R2 to host port 2042 (R2 has static ID 42)
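The mapping rule is simply the configured start value plus the VM node ID; as a sketch:

```python
def forwarded_host_port(start: int, node_id: int) -> int:
    # Host port for a forwarded VM port = start value + VM node ID
    return start + node_id

print(forwarded_host_port(2000, 1))    # R1, default node ID 1 → 2001
print(forwarded_host_port(2000, 42))   # R2, static node ID 42 → 2042
```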

Starting Virtual Machines in Batches

The vagrant-libvirt plugin tries to start all the virtual machines specified in the Vagrantfile in parallel. The resulting strain on CPU resources might cause VM boot failures in large topologies. As a workaround, you can configure the libvirt virtualization provider to execute a series of vagrant up commands that start the virtual machines in smaller batches:

  • Configure the batch size with defaults.providers.libvirt.batch_size parameter (an integer between 1 and 50)

  • Configure idle interval between batches (if needed) with defaults.providers.libvirt.batch_interval parameter (between 1 and 1000 seconds).


provider: libvirt
defaults.device: cumulus
defaults.providers.libvirt.batch_size: 2
defaults.providers.libvirt.batch_interval: 10

nodes: [ a,b,c,x,z ]
module: [ ospf ]

links: [ a-x, a-z, b-x, b-z, c-x, c-z ]

Please note that the batch_size is set artificially low to ensure this small lab topology generates three batches. Realistic batch_size depends on your hardware resources (CPU, memory) and VM type.

You can use the netlab environment variables to set the batch sizes without changing the lab topology or netlab defaults.
The virtual machines are batched based on their order in the nodes list/dictionary. You might want to adjust the node order to group virtual machines with long start times (for example, Cisco Nexus OS or Juniper vSRX) into as few batches as possible.

Changing Installation Checks

netlab up executes a series of commands starting with kvm-ok to validate your KVM/libvirt/vagrant installation before trying to start the lab. These commands might fail on Linux distributions that do not include the kvm-ok command (more details).

To make the installation checks work, install the kvm-ok command into the /usr/sbin directory. You can also modify (or turn off) the installation checks. The libvirt installation checks are specified in the defaults.providers.libvirt.probe list:

  • To display them, use the netlab show defaults providers.libvirt.probe command.

  • To turn off the probes, set the defaults.providers.libvirt.probe lab topology parameter to []. Even better, change the user defaults file, adding:

providers.libvirt.probe: []