Using Containerlab with netlab
Containerlab is a Linux-based container orchestration system focused on creating virtual network topologies. To use it:
- Install network device container images
- Create a lab topology file and use provider: clab in the lab topology to select the containerlab virtualization provider
- Start the lab with netlab up
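For example, a minimal two-node lab topology could look like this (a sketch assuming the FRR container image, which is downloaded automatically):

```
# Minimal containerlab-based topology (illustrative)
provider: clab          # select the containerlab virtualization provider
defaults.device: frr    # FRR container image is pulled from Docker Hub
nodes: [ r1, r2 ]
links: [ r1-r2 ]
```

Running netlab up in the directory containing this file generates the clab.yml containerlab configuration file and starts the lab.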
We tested netlab with containerlab version 0.41.2. That’s also the version installed by the netlab install containerlab command.
The minimum supported containerlab version is 0.37.1 (released 2023-02-27); that version introduced some changes to the location of generated certificate files.
If needed, use sudo containerlab version upgrade to upgrade to the latest containerlab version.
Virtual network devices you can use with containerlab include:

- Cumulus VX with NVUE
- Mikrotik RouterOS 7
- Nokia SR Linux
- Nokia SR OS
- Cumulus VX, FRR, Linux, and Nokia SR Linux images are automatically downloaded from Docker Hub.
- The Arista cEOS image has to be downloaded and installed manually.
- The Nokia SR OS container image requires a license; see also the vrnetlab instructions.
You can also use vrnetlab to build VM-in-container images for Cisco CSR 1000v, Nexus 9300v and IOS XR, OpenWRT, Mikrotik RouterOS, Arista vEOS, Juniper vMX and vQFX, and a few other devices.
You might have to change the default loopback address pool when using vrnetlab images. See Using vrnetlab Containers for details.
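Once you have installed a container image manually, you can point a node at it with the image node attribute. A hypothetical sketch; the image tag is whatever name you gave the image when importing it:

```
# Using a manually installed container image (hypothetical tag)
provider: clab
nodes:
  r1:
    device: eos
    image: ceos:4.31.2F    # hypothetical local image name and tag
```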
For multi-access network topologies, the netlab up command automatically creates additional standard Linux bridges.
You might want to use Open vSwitch bridges instead of standard Linux bridges (OVS interferes less with layer-2 protocols). After installing OVS, set defaults.providers.clab.bridge_type to ovs-bridge, for example:
```
defaults.device: cumulus
provider: clab
defaults.providers.clab.bridge_type: ovs-bridge
module: [ ospf ]
nodes: [ s1, s2, s3 ]
links: [ s1-s2, s2-s3 ]
```
Connecting to the Outside World
Lab links are modeled as point-to-point veth links or as links to internal Linux bridges. If you want to have a lab link connected to the outside world, set clab.uplink to the name of the Ethernet interface on your server. This feature requires containerlab release 0.43.0 or later.
Example: use the following topology to connect your lab to the outside world through r1 on a Linux server that uses enp86s0 as the name of the Ethernet interface:

```
defaults.device: cumulus
provider: clab
nodes: [ r1, r2 ]
links:
- r1-r2
- r1:
  clab:
    uplink: enp86s0
```
In multi-provider topologies, set the uplink parameter only on the primary provider (the one specified in the topology-level provider attribute); netlab copies the uplink parameter to all secondary providers during the lab topology transformation process.
Containerlab Management Network
containerlab creates a dedicated Docker network to connect the container management interfaces to the host TCP/IP stack. You can change the parameters of the management network in the addressing.mgmt pool:
- ipv4: The IPv4 prefix of the management network
- ipv6: Optional IPv6 management network prefix. Not set by default.
- start: The offset of the first management IP address in the management network (default: 100). For example, with start set to 50, the device with node.id set to 1 gets the 51st IP address in the management IP prefix.
- _network: The Docker network name
- _bridge: The name of the underlying Linux bridge (default: unspecified, created by Docker)
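For example, you could adjust the management network like this (a sketch with illustrative values; the network and bridge names are arbitrary):

```
# Customizing the containerlab management network (illustrative values)
addressing:
  mgmt:
    ipv4: 192.168.200.0/24    # hypothetical management prefix
    start: 50                 # node with ID 1 gets the 51st address
    _network: netlab_mgmt     # hypothetical Docker network name
    _bridge: mgmt-bridge      # hypothetical Linux bridge name
```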
Container Management IP Addresses
netlab assigns an IPv4 (and optionally IPv6) address to the management interface of each container regardless of whether the container supports SSH access or not. That IPv4/IPv6 address is used by containerlab to configure the first container interface.
You can change the IPv4/IPv6 address of a device management interface with the mgmt.ipv4/mgmt.ipv6 node parameter, but be aware that nobody checks whether your change will result in overlapping IP addresses.
It’s much better to use the addressing.mgmt pool ipv4/ipv6/start parameters to adjust the address range used for management IP addresses, and rely on netlab to assign management IP addresses to containers based on device node ID.
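If you do need a static management address on a single device, the node parameter looks like this (a sketch; the address is illustrative, and netlab does not check it for overlaps):

```
# Static management IP address on one node (illustrative)
nodes:
  r1:
    mgmt.ipv4: 192.168.121.201    # hypothetical address; overlaps are not detected
```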
netlab supports container port forwarding – mapping of TCP ports on the container management IP address to ports on the host. You can use port forwarding to access the lab devices via the host external IP address without exposing the management network to the outside world.
Some containers do not run an SSH server and cannot be accessed via SSH even if you set up port forwarding for the SSH port.
Port forwarding is disabled by default and can be enabled by configuring the defaults.providers.clab.forwarded dictionary. Dictionary keys are TCP port names (ssh, http, https, netconf), dictionary values are start values of host ports. netlab assigns a unique host port to every forwarded container port based on the start value and container node ID.
For example, when given the following topology…
```
defaults.providers.clab.forwarded:
  ssh: 2000

defaults.device: eos
nodes:
  r1:
  r2:
    id: 42
```
… netlab maps:
- SSH port on the management interface of R1 to host port 2001 (R1 gets the default node ID 1)
- SSH port on the management interface of R2 to host port 2042 (R2 has static ID 42)
Using vrnetlab Containers
vrnetlab is an open-source project that packages network device virtual machines into containers. The architecture of the packaged container requires an internal network, and it seems that vrnetlab (or the fork used by containerlab) uses the IPv4 prefix 10.0.0.0/24 on that network, which clashes with the netlab loopback address pool.
If you’re experiencing connectivity problems or initial configuration failures with vrnetlab-based containers, add the following parameters to the lab configuration file to change the netlab loopback addressing pool:
```
addressing:
  loopback:
    ipv4: 10.255.0.0/24
  router_id:
    ipv4: 10.255.0.0/24
```
Container Runtime Support
Containerlab supports multiple container runtimes besides the default docker. The runtime to use can be configured globally or per node, for example:
```
provider: clab
defaults.providers.clab.runtime: podman
nodes:
  s1:
    clab.runtime: ignite
```
Using File Binds
You can use clab.binds to map container paths to host file system paths, for example:
```
nodes:
- name: gnmic
  device: linux
  image: ghcr.io/openconfig/gnmic:latest
  clab:
    binds:
      gnmic.yaml: '/app/gnmic.yaml:ro'
      '/var/run/docker.sock': '/var/run/docker.sock'
```
You don’t have to worry about dots in filenames: netlab knows that the keys of the clab.binds and clab.config_templates dictionaries are filenames and does not expand them into hierarchical dictionaries.
Generating and Binding Custom Configuration Files
In addition to binding pre-existing files, netlab can also generate custom config files on the fly based on Jinja2 templates. For example, this is used internally to create the list of daemons for the frr container image:
```
frr:
  clab:
    image: frrouting/frr:v8.3.1
    mtu: 1500
    node:
      kind: linux
    config_templates:
      daemons: /etc/frr/daemons
```
netlab tries to locate the templates in the current directory, in a subdirectory with the name of the device, and within the system directory; the .j2 suffix is always appended to the template name.

For example, the daemons template used in the above example could be <netsim_moddir>/templates/provider/clab/frr/daemons.j2; the result gets mapped to /etc/frr/daemons within the container file system.
You can use the clab.config_templates node attribute to add your own container configuration files, for example:

```
provider: clab
nodes:
  t1:
    device: linux
    clab:
      config_templates:
        some_daemon: /etc/some_daemon.cf
```
Given the above lab topology, netlab renders some_daemon.j2 (the template could be either in the current directory or in the linux subdirectory) and maps the result to /etc/some_daemon.cf within the container file system.
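The template itself is ordinary Jinja2. For example, a hypothetical some_daemon.j2 could look like this (the daemon configuration syntax and the loopback variable are illustrative assumptions about the data available to the template):

```
{# some_daemon.j2 -- hypothetical daemon configuration template #}
log file /var/log/some_daemon.log
{% if loopback is defined %}
router-id {{ loopback.ipv4 }}
{% endif %}
```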
Jinja2 Filters Available in Custom Configuration Files
The custom configuration files are generated within netlab and can therefore use standard Jinja2 filters. If you have Ansible installed as a Python package, netlab tries to import the ipaddr family of filters, making filters like ipv4, ipv6, or ipaddr available in custom configuration file templates.
Ansible developers love to restructure stuff and move it into different directories. This functionality works with two implementations of ipaddr filters (tested on Ansible 2.10 and Ansible 7.4/ Ansible Core 2.14) but might break in the future – we’re effectively playing whack-a-mole with Ansible developers.
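As an illustration, a custom template could use the ipaddr filter like this (a hypothetical snippet assuming the template can access an interfaces list with ipv4 prefixes):

```
{# hypothetical use of the ipaddr filter in a custom template #}
{% for intf in interfaces if intf.ipv4 is defined %}
interface {{ intf.ifname }} address {{ intf.ipv4|ipaddr('address') }}
{% endfor %}
```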
Using Other Containerlab Node Parameters
You can also change these containerlab parameters:

- clab.type to set the node type (used by Nokia SR OS and Nokia SR Linux)
- clab.env to set container environment variables (used, for example, to set interface names for Arista cEOS)
- clab.ports to map container ports to host ports
- clab.cmd to execute a command in a container
String values (for example, the command to execute specified in clab.cmd) are put into single quotes when written into the clab.yml containerlab configuration file; make sure you're not using single quotes in your command line.
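For example, a topology could combine several of these parameters (a sketch; the node type, port mapping, and command are illustrative values):

```
# Illustrative use of additional containerlab node parameters
provider: clab
nodes:
  s1:
    device: srlinux
    clab.type: ixrd2             # hypothetical Nokia SR Linux chassis type
  probe:
    device: linux
    clab.ports: [ '8080:80' ]    # map host port 8080 to container port 80
    clab.cmd: sleep 3600         # command executed in the container; no single quotes
```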
You can find the full list of supported containerlab attributes in the system defaults, or print it with the netlab inspect defaults.providers.clab.attributes command.
To add other containerlab attributes to the clab.yml configuration file, modify the defaults.providers.clab.node_config_attributes settings, for example:

```
provider: clab
defaults.providers.clab.node_config_attributes: [ ports, env, user ]
```