Using libvirt/KVM with Vagrant

netlab uses Vagrant with the vagrant-libvirt plugin to start virtual machines in the libvirt/KVM environment.

To use the libvirt/KVM environment on a Linux bare-metal server or a Linux VM, install libvirt, Vagrant, and the vagrant-libvirt plugin, for example with the netlab install libvirt command:
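
$ netlab install libvirt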

Warning

You MUST use netlab up to start the lab to ensure the virtual machines get correct management IP addresses: netlab up creates the vagrant-libvirt management network with a predefined IP address range and DHCP bindings.

Supported Versions

We tested netlab with Vagrant version 2.3.4 and vagrant-libvirt plugin version 0.11.2. These are also the versions installed by the netlab install libvirt command.
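
To check what is installed on your server, you can use standard Vagrant commands (sample output, assuming the versions mentioned above):

$ vagrant --version
Vagrant 2.3.4
$ vagrant plugin list
vagrant-libvirt (0.11.2, global)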

Vagrant Boxes

Vagrant starts virtual machines from prepackaged VM images called boxes. While it’s possible to download some network device images from Vagrant Cloud, you’ll have to build most of the boxes you’d want to use in your lab.

You have to use the following box names when installing or building the Vagrant boxes (see supported platforms and netlab show images for more details):

Virtual network device        Vagrant box name
Arista vEOS                   arista/veos
Aruba CX                      aruba/cx
Cisco ASAv                    cisco/asav
Cisco Catalyst 8000v          cisco/cat8000v
Cisco CSR 1000v               cisco/csr1000v
Cisco IOSv                    cisco/iosv
Cisco IOSvL2                  cisco/iosvl2
Cisco IOS XR                  cisco/iosxr
Cisco Nexus 9300v             cisco/nexus9300v
Dell OS10                     dell/os10
Fortinet FortiOS              fortinet/fortios
Juniper vPTX (vJunos EVO)     juniper/vptx
Juniper vSRX 3.0              juniper/vsrx3
Linux                         generic/ubuntu2004
Mikrotik RouterOS 6           mikrotik/chr
Mikrotik RouterOS 7           mikrotik/chr7
Sonic                         netlab/sonic

The following Vagrant boxes are automatically downloaded from Vagrant Cloud when you’re using them for the first time in your lab topology:

Virtual network device        Vagrant box name
Cumulus VX                    CumulusCommunity/cumulus-vx:4.4.0
Cumulus VX 5.0 (NVUE)         CumulusCommunity/cumulus-vx:5.0.1
Generic Linux                 generic/ubuntu2004
VyOS                          vyos/current

NOTES:

  • Even if a new box version is available from Vagrant Cloud, Vagrant will only output a warning to let the user know an update is available. You can ignore that warning or update the box with vagrant box update.

  • Vagrant does not automatically download the updated boxes because boxes can be relatively large (See Vagrant box versioning for details).

  • We recommend that you periodically download the updated box for vyos/current, as shown in the example below.
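
For example, to fetch the latest version of the vyos/current box:

$ vagrant box update --box vyos/current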

Building Vagrant Boxes

Box-building recipes using the netlab libvirt utility are included in the device-specific documentation.

Note

For more Vagrant details, watch the Network Simulation Tools part of Network Automation Tools webinar.

Modifying VM Settings

The following node parameters influence the VM configuration created by vagrant-libvirt (see the example after this list):

  • cpu – number of virtual CPUs allocated to the VM

  • memory – VM memory (in MB)

  • libvirt.nic_model_type – VM NIC model (example: e1000). Default netlab settings usually work fine.

  • libvirt.nic_adapter_count – maximum number of VM NICs (default: 8)

  • libvirt.uuid – sets the libvirt VM UUID (some devices use the UUID to create their serial numbers). The value of this parameter is not checked[1].
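
For example, the following topology snippet (a sketch; the node name and values are illustrative) gives a single node more CPU, memory, and NICs:

defaults.device: cumulus
nodes:
  r1:
    cpu: 2
    memory: 2048
    libvirt:
      nic_adapter_count: 12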

Replacing Vagrant Boxes

If you want to rebuild and install a Vagrant box with the same version number, you must manually remove the old box. You must also delete the corresponding volume (disk image) from the libvirt storage pool (the vagrant-libvirt plugin installs new boxes but does not clean up the old ones).

To delete an old version of a Vagrant box, use a procedure similar to the one described below:

  • Use vagrant box list to list the installed boxes

  • Use vagrant box remove <box-name> --box-version=<box-version> to delete the Vagrant box[2]

  • Use virsh vol-list --pool default[3] to list the installed Vagrant boxes

  • Find the relevant volume name, for example, cisco-VAGRANTSLASH-iosxr_vagrant_box_image_7.4.2_box.img for an IOS XR 7.4.2 image

  • Delete the volume with virsh vol-delete --pool default <volume-name>

The new Vagrant box will be copied into the libvirt storage pool the next time you use the affected device in your lab.
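
For example, replacing an IOS XR 7.4.2 box might look like this (the box and volume names are taken from the steps above and will differ for other devices):

$ vagrant box list
$ vagrant box remove cisco/iosxr --box-version=7.4.2
$ virsh vol-list --pool default
$ virsh vol-delete --pool default cisco-VAGRANTSLASH-iosxr_vagrant_box_image_7.4.2_box.img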

Libvirt Networking

netlab uses libvirt networks and P2P UDP tunnels to implement topology links:

  • P2P UDP tunnels are used for links with two nodes when the link type is set to p2p (the default for links with two nodes). P2P tunnels are transparent; you can run any layer-2 control-plane protocol (including LACP) over them.

  • libvirt networks are used for all other links. They are automatically created and deleted by the vagrant up and vagrant down commands executed by netlab up and netlab down. netlab up sets the group_fwd_mask of all Vagrant-created Linux bridges to 0x4000 to enable LLDP passthrough; you can verify that as shown below.
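
To check the LLDP passthrough setting on a running lab, inspect the bridge group_fwd_mask in sysfs. This is a sketch; virbr2 stands for one of the Vagrant-created bridges on your system (use virsh net-list and virsh net-info to find their names):

$ cat /sys/class/net/virbr2/bridge/group_fwd_mask
0x4000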

Packet Capture

The libvirt point-to-point UDP tunnels are not implemented as Linux interfaces, making it impossible to start packet capture on the VM interfaces attached to point-to-point tunnels. The VMs must be attached to Linux bridges for the netlab capture command to work.

Add type: lan to a point-to-point link between two virtual machines to change its implementation into a Linux bridge. You can also set the defaults.providers.libvirt.p2p_bridge parameter to True if you don’t want to use UDP tunnels for point-to-point links (see Changing Defaults in Lab Topology and Changing Defaults in User Defaults Files for more information on changing system defaults).

Finally, you could start the lab with the netlab up -p libvirt:p2p_bridge command to change the system default for a single lab instance.
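
For example, the following topology (a minimal sketch) implements the link between r1 and r2 as a Linux bridge, making it usable with netlab capture:

defaults.device: cumulus
nodes: [ r1, r2 ]
links:
- r1:
  r2:
  type: lan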

Connecting to the Outside World

Lab networks are created as private, very-isolated libvirt networks without a DHCP server. If you want a lab network connected to the outside world, set the libvirt.public and/or libvirt.uplink link attributes (setting one of them is enough):

  • Set libvirt.public link attribute to true, or to any value supported by libvirt[4].

  • Set libvirt.uplink link attribute to the name of one of the Ethernet interfaces on your server[5] if your Linux distribution does not use eth0 as the name of the Ethernet interface[6].

Example: use the following topology to connect your lab to the outside world through r1 on a Linux server that uses enp86s0 as the name of the Ethernet interface:

defaults.device: cumulus
nodes: [ r1,r2 ]
links:
- r1-r2
- r1:
  libvirt:
    uplink: enp86s0

Finally, if you want to connect the management network to the outside world, create the management network based on an existing Linux bridge that is already connected to the outside world or enable port forwarding.

Using Existing Libvirt Networks

To attach lab devices to existing libvirt virtual networks:

  • Set the link bridge attribute to the name of an existing network.

  • Set the link libvirt.permanent attribute to True to tell the vagrant-libvirt plugin not to destroy the network on shutdown.

You can use this functionality to attach lab devices to public networks or networks extended with VXLAN transport.
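
A minimal sketch, assuming an existing libvirt network named extnet:

links:
- r1:
  r2:
  bridge: extnet
  libvirt:
    permanent: True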

Warning

The vagrant destroy command will crash if it tries to destroy an existing non-persistent libvirt network, stopping the netlab down procedure. Rerun the netlab down command to complete the lab shutdown/cleanup process.

Libvirt Management Network

The vagrant-libvirt plugin uses a dedicated libvirt network to connect the VM management interfaces to the host TCP/IP stack. The netlab up command creates that network before executing vagrant up to ensure the network contains the expected DHCP mappings. The management network is automatically deleted when you execute netlab down (recommended) or vagrant destroy.

You can change the parameters of the management network in the addressing.mgmt pool (see the example after this list):

  • ipv4: The IPv4 prefix used for the management network (default: 192.168.121.0/24)

  • ipv6: Optional IPv6 management network prefix. It’s not set by default.

  • start: The offset of the first VM management IP address in the management network (default: 100). For example, with start set to 50, the device with node.id set to 1 will get the 51st IP address in the management IP prefix.

  • _network: The libvirt network name (default: vagrant-libvirt)

  • _bridge: The name of the underlying Linux bridge (default: libvirt-mgmt)

  • _permanent: set to True to use an existing libvirt network as the management network. netlab up will create the network if it does not exist and tell Vagrant not to remove it when the lab is stopped.
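
For example, the following snippet (the values are illustrative) moves the management network to a different prefix and makes it permanent:

addressing.mgmt:
  ipv4: 192.168.200.0/24
  start: 50
  _permanent: True

With these settings, the device with node.id set to 1 gets the management IP address 192.168.200.51.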

Important caveats:

  • netlab up uses the XML definition in templates/provider/libvirt/vagrant-libvirt.xml within the Python package directory (source file) to create the management network. If you’d like to change the management network parameters, create a custom XML definition in the libvirt/vagrant-libvirt.xml file in the current directory, the ~/.netlab directory, or the /etc/netlab directory.

  • If you want to use an existing libvirt network as the management network, make sure it has the same static DHCP mappings as the management network created by the netlab up command.

VM Management IP Addresses

The only way to assign a management IP address to a network device started as a virtual machine is through DHCP; Vagrant, together with libvirt (and dnsmasq), provides a seamless implementation.

netlab creates static DHCP mappings in the management network (see above) and asks vagrant-libvirt to set the MAC address of the VM management interface to a well-known value, ensuring that each VM gets the expected management IP address assigned by netlab based on the device node ID and the start parameter of the mgmt address pool.

To have fixed management IP addresses for your virtual machines (for example, to access them from an external management tool), take these steps (illustrated in the example after this list):

  • Change the addressing.mgmt parameters

  • Set node id parameters to the desired values

  • Let netlab do the rest of the work.
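
A minimal sketch: with the default addressing.mgmt parameters (192.168.121.0/24, start 100), the following topology pins the management IP address of r1 to 192.168.121.110:

defaults.device: cumulus
nodes:
  r1:
    id: 10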

Port Forwarding

netlab supports vagrant-libvirt port forwarding – mapping of TCP ports on the VM management IP address to ports on the host. You can use port forwarding to access the lab devices via the host’s external IP address without exposing the management network to the outside world.

Port forwarding is turned off by default and can be enabled by configuring the defaults.providers.libvirt.forwarded dictionary. Dictionary keys are TCP port names (ssh, http, https, or netconf), and dictionary values are the start values of host ports. netlab assigns a unique host port to every VM forwarded port based on the start value and VM node ID.

For example, when given the following topology…

defaults.providers.libvirt.forwarded:
  ssh: 2000

defaults.device: cumulus
nodes:
  r1:
  r2:
    id: 42

netlab maps:

  • SSH port on management interface of R1 to host port 2001 (R1 gets default node ID 1)

  • SSH port on management interface of R2 to host port 2042 (R2 has static ID 42)
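
Assuming the external IP address of your server is 192.0.2.1 (an illustrative value), you could then reach r2 from an external host with something like ssh -p 2042 user@192.0.2.1 (the username depends on the network device).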

Starting Virtual Machines in Batches

The vagrant-libvirt plugin tries to start all virtual machines specified in the Vagrantfile in parallel. The resulting strain on CPU resources might cause VM boot failures in large topologies. As a workaround, you can configure the libvirt virtualization provider to execute a series of vagrant up commands that start the virtual machines in smaller batches:

  • Configure the batch size with defaults.providers.libvirt.batch_size parameter (an integer between 1 and 50)

  • Configure idle interval between batches (if needed) with defaults.providers.libvirt.batch_interval parameter (between 1 and 1000 seconds).

Example:

provider: libvirt
defaults.device: cumulus
defaults.providers.libvirt.batch_size: 2
defaults.providers.libvirt.batch_interval: 10

nodes: [ a,b,c,x,z ]
module: [ ospf ]

links: [ a-x, a-z, b-x, b-z, c-x, c-z ]

Please note that the batch_size is set artificially low to ensure this small lab topology generates three batches. Realistic batch_size depends on your hardware resources (CPU, memory) and VM type.

You can use the netlab environment variables to set the batch sizes without changing lab topology or netlab defaults, for example:

$ export NETLAB_PROVIDERS_LIBVIRT_BATCH__SIZE=2
$ export NETLAB_PROVIDERS_LIBVIRT_BATCH__INTERVAL=10

Tip

The virtual machines are batched based on their order in the nodes list/dictionary. You might want to adjust the node order to group virtual machines with long start times (for example, Cisco Nexus OS or Juniper vSRX) into as few batches as possible.

Changing Installation Checks

netlab up executes a series of commands starting with kvm-ok to validate your KVM/libvirt/Vagrant installation before trying to start the lab. These commands might fail on Linux distributions that do not include the kvm-ok command.

To make the installation checks work, install the kvm-ok command into the /usr/sbin directory. You can also modify (or turn off) the installation checks. The libvirt installation checks are specified in the defaults.providers.libvirt.probe list:

  • To display them, use the netlab show defaults providers.libvirt.probe command.

  • To turn off the probes, set the defaults.providers.libvirt.probe lab topology parameter to []. Even better, change the user defaults file, adding:

providers.libvirt.probe: []