Arista Layer-3 Leaf-Spine Fabric with VXLAN HER: Lab Part 2

July 19, 2017
Virtual Environment Setup
by Pablo Narváez

Welcome to the second blog post in my multi-part series describing in detail how I will deploy a L3SL-V Ethernet fabric with Arista vEOS and Ubuntu/KVM. In this post, I’m going to dive into the first component of the deployment: the virtual environment and the VMs.

Virtual Machine Spreadsheet – Inventory List

As shown in the inventory list above (spreadsheet), I’m going to use a single server with KVM to create multiple VMs. I’ll go with Ubuntu Desktop for the server OS, but you can choose the Server version (since Ubuntu 12.04, there has been no kernel difference between Ubuntu Desktop and Ubuntu Server).

This guide assumes you have a graphical user interface on Linux, so the Desktop version is preferred.

To download and install Ubuntu, please follow these links:

NOTE: I chose a Type 2 hypervisor (running on a host OS) over bare metal to keep the environment flexible: for this kind of setup I prefer to have a base OS so I can run traffic-monitoring tools (like Wireshark) and keep a centralized repository for software images.

In addition, I chose KVM over VirtualBox because of the number of network adapters (vNICs) supported: VirtualBox only supports 8 network adapters per VM. Since I will be using this lab to deploy other, as-yet-unplanned functionality, I didn’t want to hit that limit if I need additional vNICs.

If you want to give VirtualBox a try, you can follow these links:


The procedure described below is a summary of the official guide posted here.

Pre-Installation Checklist

To run KVM, you need a processor that supports hardware virtualization. To see if your processor supports it, you need to install cpu-checker:

$ sudo apt-get install cpu-checker

Now, you can review the output from this command:

$ kvm-ok

which may provide an output like this:

INFO: /dev/kvm exists
KVM acceleration can be used

If this is your case, you are good to go.

If you see:

INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used

You can still run virtual machines, but it’ll be much slower without the KVM extensions.

NOTE: Running a 64 bit kernel on the host operating system is recommended but not required. On a 32-bit kernel install, you’ll be limited to 2GB RAM at maximum for a given VM.
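You can also inspect the CPU capabilities directly without installing cpu-checker. The `vmx` flag (Intel) or `svm` flag (AMD) indicates hardware virtualization support, and the `lm` flag ("long mode") indicates a 64-bit capable CPU:

```shell
# Count hardware-virtualization flags; 0 means no support (vmx = Intel, svm = AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Check for a 64-bit capable CPU ("lm" = long mode); 0 means 32-bit only
egrep -c ' lm ' /proc/cpuinfo
```
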

Installation of KVM

You need to install a few packages first:

$ sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-viewer virt-manager
  • qemu-kvm (kvm in Karmic and earlier) is the backend
  • libvirt-bin provides libvirtd, which you need to administer qemu and kvm instances using libvirt
  • ubuntu-vm-builder is a powerful command-line tool for building virtual machines
  • bridge-utils provides a bridge from your network to the virtual machines. This package is optional, but highly recommended if you have multiple network adapters on the host and want to map some VMs to an external network. Open vSwitch is another option, as a replacement for the Linux bridge.
  • virt-viewer is a tool for viewing instances. This package is optional, but strongly recommended to display a graphical console for VMs
  • virt-manager is a GUI tool to manage virtual machines. This module is optional, but strongly recommended to simplify VM life-cycle management. If it is not installed, you will have to manage VMs with the virsh command line

After the installation, you need to log out and back in so that your user becomes an effective member of the kvm and libvirtd user groups. Only members of these groups can run virtual machines.
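If the installer did not add your user automatically, you can do it by hand (a sketch; note the group is named `libvirtd` on older Ubuntu releases and `libvirt` on newer ones, so adjust accordingly):

```shell
# Add the current user to the virtualization groups
sudo adduser "$USER" kvm
sudo adduser "$USER" libvirtd

# Confirm membership -- it only takes effect after you log out and back in
groups "$USER"
```
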

Verify the Installation

After you log back in, verify that the installation completed successfully with the following command:

$ virsh list --all
 Id Name                 State

If you get something like this:

$ virsh list --all
libvir: Remote error : Permission denied
error: failed to connect to the hypervisor

Something is wrong (e.g. you did not log back in) and you probably want to fix this before moving on.

To troubleshoot any issues during or after the installation, please check the official KVM installation guide posted here.


This is what we are going to build.

Virtual Machine Network Diagram

The drawing shows where each network adapter (vNIC) is, what network it’s configured for, and how the VMs are interconnected. Every connection between two adapters represents an isolated segment which must be configured as a virtual network in KVM.

To ensure that every link is isolated, we need to give each virtual network a unique name (I named them “net-x”), disable IP routing, and use each virtual network for exactly one link.

The links between the VMs will act like physical cables, but the management interfaces of the Ubuntu Linux servers and the Arista switches sit on a common shared network (“net-oob”). The host will also have an adapter connected to this network so we can SSH into each device through its out-of-band (OOB) management interface.

Virtual Machine Out-of-Band Management Network Diagram

The first network adapter will always end up as the Management1 interface on each switch. To keep things simple, I dedicated the first network adapter (vNIC1) to management on every VM.


We will have two different types of VMs: Ubuntu servers and Arista switches. For Linux, we are going to install the same software image that we used for the host OS. For the Arista switches, two files are needed: the virtual hard drive (vmdk) and the Aboot ISO file.

You need to register on the Arista website to download the software. Once you log in, go to Support > Software Download to retrieve the following files:


NOTE: There are several folders and more than one image format; make sure to download the correct files from the vEOS-lab folder.

To build the VMs faster, we are going to create two base VMs (golden images, one for the servers and one for the switches), then clone them multiple times.

Creating the Base VM for Servers

The easiest way to create a virtual machine in KVM is to use the Virtual Machine Manager application. You can find it in your applications dashboard.


Or you can use the command-line:

$ virt-manager


The first thing to do is create the virtual networks for the network adapters. Look at the network drawing and the spreadsheet at the beginning of this section.

In the virt-manager main window, click the edit button on the toolbar, then click on Connection Details.


Go to the Virtual Networks tab and click the add button (“+” icon, lower left corner).

Give the network a name. This is going to be our first virtual network, so we will start with “net-1”.


We will simulate physical network connections so we don’t need to assign IP addresses for now.

NOTE: When creating the out-of-band management network (“net-oob“) you might want to enable the IP address definition so KVM adds a virtual network adapter on the host to communicate directly with the VMs (for ssh/admin purposes).

Uncheck the Enable IPv4 network address space definition option, do the same for IPv6 in the next step.


We need isolated segments to interconnect the VMs, so choose the isolated virtual network option and then click the finish button to continue.


Repeat the same steps to create the rest of the networks. Don’t forget to add the management network (“net-oob”).

NOTE: Enable the IP address space definition when creating “net-oob” for management. In this case, I will assign a subnet to this network, and the host will receive an IP address on it.

When this is done, there should be a total of 18 networks (net-1 through net-17, plus net-oob).
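If you prefer the command line to clicking through the wizard 17 times, the same isolated networks can be created with virsh. This is a sketch assuming the default `qemu:///system` connection; in libvirt's network XML, omitting both the `<forward>` and `<ip>` elements yields exactly the isolated, unaddressed network that virt-manager creates:

```shell
# Create net-1 through net-17 as isolated virtual networks (no IP, no forwarding)
for i in $(seq 1 17); do
  cat > "/tmp/net-$i.xml" <<EOF
<network>
  <name>net-$i</name>
</network>
EOF
  virsh net-define "/tmp/net-$i.xml"
  virsh net-start "net-$i"
  virsh net-autostart "net-$i"
done
```

Create “net-oob” separately (here or in the GUI), since it needs an `<ip>` element so the host gets an address on it.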


Now we need to create the actual VMs. Go back into the virt-manager main screen and click the Create New Virtual Machine icon on the toolbar to start the installation.

First, set the virtual machine’s name (“server01”) and choose the installation method: select Local install media (ISO image or CDROM).

Next, we need to find and select the Linux image (Ubuntu 16.04.2 ISO file). Make sure to check the Automatically detect the operating system option.


You should now choose how much memory to allocate for the VM. Allocate 2048MB of memory and 1 CPU.

Remember: To allocate more than 2GB of memory to a virtual machine, you need to have a 64-bit CPU and a 64-bit Linux kernel.

Check the Enable storage for this virtual machine option and allocate disk space for the VM. In my case, I will leave the default (20 GB).

By default, KVM configures NAT for the network adapters. We need to configure the network adapters on each VM as shown in the spreadsheet.

To do so, before clicking on the Finish button, make sure to check the Customize configuration before install option to edit the VM settings.

NOTE: You can always customize the VM configuration after the installation.


Now we need to configure all of the internal networks within the VMs; I’ll show some examples.

From the left-hand side menu, click on the only NIC adapter available and open the Network source drop-down menu. You will see all the virtual networks we created in the previous steps.


From the drop-down menu, choose Virtual Network “net-oob” to assign the management network to the adapter, and choose “e1000” for Device model.

Remember: The first NIC on all VMs will always be the management interface.


Then, click on the Add Hardware button (lower left corner) and add two NIC adapters for “net-1” and “net-2” respectively. Don’t forget to choose “e1000” for the Device model option.


You can click on the Begin Installation button on the upper left corner to start the OS installation.


Virt-manager will boot the guest operating system; you may now proceed to install the OS.
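The whole wizard sequence above can also be done non-interactively with virt-install, which ships with virt-manager. A minimal sketch; the ISO path is an example, so point it at wherever you saved the Ubuntu image:

```shell
# Create server01 with 2048MB RAM, 1 vCPU, a 20GB disk, and three e1000 NICs
virt-install \
  --name server01 \
  --memory 2048 \
  --vcpus 1 \
  --cdrom /var/lib/libvirt/images/ubuntu-16.04.2-desktop-amd64.iso \
  --disk size=20 \
  --os-variant ubuntu16.04 \
  --network network=net-oob,model=e1000 \
  --network network=net-1,model=e1000 \
  --network network=net-2,model=e1000
```
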

Cloning the Base VM (Ubuntu Servers)  

Now we need to clone the base VM to build the rest of the servers. The main virt-manager window will show server01; right-click on it and click Clone.

NOTE: You need to power off the VM to clone it.


In the clone window, change the server name (“server02”, in this case), leave the default settings for Networking and make sure to choose the Clone this disk option for the disk storage.


Click on the Clone button to finish. Repeat the same steps to create the rest of the servers.
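Cloning can also be scripted with virt-clone, which copies the disk and generates fresh MAC addresses for the new VM. The name list below is an example; adjust it to however many servers your spreadsheet calls for:

```shell
# Clone the powered-off base VM; --auto-clone generates new storage paths
for n in 02 03 04; do
  virt-clone --original server01 --name "server$n" --auto-clone
done
```
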

Finally, we need to configure the network settings for each adapter on every server (remember, every cloned VM will have server01 settings so we need to change that).

I will show you one example: In the virt-manager main window, right click on server02 and click Open.


Within the configuration menu, click on the second NIC adapter and choose “net-3” from the Network source drop-down menu; then assign “net-4” to the third NIC.


Configure the remaining network adapters on each VM as shown in the spreadsheet.
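The per-clone NIC reassignment can also be done from the shell with virt-xml (part of the virt-install package), where `--edit N` selects the Nth network device. The interfaces and networks below match the server02 example above:

```shell
# Re-point server02's second and third NICs at net-3 and net-4
virt-xml server02 --edit 2 --network network=net-3
virt-xml server02 --edit 3 --network network=net-4
```
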

Creating the Fabric Switches

Not quite done yet! We need to build the VMs to run Arista vEOS. The process is quite different, and we will have to tweak some settings to make vEOS boot, so stick with me.

In the virt-manager main window, click on the Create a new virtual machine and select the last option: Import existing disk image.

Browse to and select the vEOS-lab-4.17.5M.vmdk file on your server, leaving the default settings for OS Type and Version.


Next, allocate 2048MB of memory and 1 CPU.

NOTE: With the latest vEOS release it is now required to allocate at least 2GB of memory.

Before clicking on the Finish button, name the VM (“spine01”) and make sure to check the Customize configuration before install option.

It’s time to tweak some settings before installing the OS. First, take a look at the screen below, this is what you should have by now.


While in this window, modify the following:

  1. Remove the IDE Disk 1 – I know, I know, it’s the disk we just created a few steps back with the vmdk file, but it’s critical to build the disks from scratch
  2. Remove the sound controller (Sound: ich 6 in my case)
  3. Change the video settings from QXL to VGA
  4. Change the NIC configuration – Choose the “net-oob” virtual network for management and configure “e1000” for the Device model option.
  5. Add three additional NICs for “net-11”, “net-13” and “net-15” respectively; choose “e1000” for Device model.
  6. Add two disk storage devices, one IDE disk with the vmdk file and one IDE CD device with the Aboot.iso file (see below).


Arista vEOS is very particular about how its storage is configured. Both drives need to be IDE and the Aboot.iso (boot-loader) needs to be installed as a CD. If a SCSI controller gets created, it must be deleted or vEOS will not load.


Next, we need to make the VM boot from the CD to load the Aboot.iso file. Change the Boot Options to boot from the IDE CDROM 1.


Click Apply to close the window. Go ahead and click on the Begin Installation button, you will see the boot-loader run.
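The whole sequence above (IDE disks, CD boot, e1000 NICs) can alternatively be scripted with qemu-img and virt-install. This is a sketch under a few assumptions: the filenames are examples from the release I downloaded, and `--os-variant generic` is used since there is no vEOS entry in the OS database:

```shell
# Optional: convert the vmdk to qcow2, KVM's native image format
qemu-img convert -f vmdk -O qcow2 vEOS-lab-4.17.5M.vmdk vEOS-lab-4.17.5M.qcow2

# Import spine01 with both storage devices on the IDE bus,
# booting from the Aboot CD first
virt-install \
  --name spine01 \
  --memory 2048 \
  --vcpus 1 \
  --import \
  --os-variant generic \
  --disk path=vEOS-lab-4.17.5M.qcow2,bus=ide \
  --disk path=Aboot.iso,device=cdrom,bus=ide \
  --boot cdrom,hd \
  --network network=net-oob,model=e1000 \
  --network network=net-11,model=e1000 \
  --network network=net-13,model=e1000 \
  --network network=net-15,model=e1000
```
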


If you have ever installed vEOS on another hypervisor, you will notice that it takes much longer to boot in KVM with the same resources allocated. Also be aware that you will not see the boot sequence; be patient and wait for the command line to appear!


¡Listo! (Done!) The base VM is ready. Clone it to create spine02, leaf01, leaf02 and leaf03.

Don’t forget to customize the network configuration for each VM; configure the network adapters as shown in the spreadsheet.


You should see all the VMs on the virt-manager main window. Run all the VMs and wait until they are operational.
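Rather than starting each VM from the GUI, you can launch the whole lab from the shell (the name list matches this series; adjust it to your inventory):

```shell
# Start every VM in the lab
for vm in spine01 spine02 leaf01 leaf02 leaf03 \
          server01 server02 server03 server04; do
  virsh start "$vm"
done
```
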


You can also check that all the VMs are up with the following command:

$ virsh list --all

All VMs must be in the running state and you should be able to access the user interface on each VM.


When this is done, the lab should be bootable and every device should be connected to every other device according to the original network diagram.

In the next post, we will configure the L2/L3 protocols for the fabric, stay tuned!

You can always check my github repository to download the configuration files.

Articles in the series:


One thought on “Arista Layer-3 Leaf-Spine Fabric with VXLAN HER: Lab Part 2”

  1. Burt

    Posted this on the wrong page…

    If anyone is doing this on ESXi, it seems you have to use a separate vSwitch per net-x, set promiscuous mode, forged transmits and MAC address changes to Allow, set the MTU on the vSwitches to 9000, and set the VLAN ID on the ESXi port groups (networks) to All (4095).

    Took me a while to find this out as it’s hidden away!

