Cumulus Networks Layer-3 Leaf-Spine Fabric with EVPN as a Control Plane for VXLAN

September 26, 2017
Testing Cumulus Networks Linux
by Pablo Narváez

Welcome back! In this article I will be testing VXLAN again, but this time on Cumulus Linux. I will replace the manual VTEP flood-lists/mappings (Head End Replication) with EVPN to provide a different control plane for VXLAN.

For lab purposes, I will use a self-contained virtual environment to run Cumulus VX as a VM on top of Ubuntu 16.04 LTS and KVM. This is the same environment I’ve used in the past to run other network operating systems like Arista vEOS and Free Range Routing (FRR).

The Cumulus-VX Fabric will be an independent POD for now, but in a following post I will connect this POD to the FRR block so we can have a single routed virtual environment.

NETWORK DISAGGREGATION

Network disaggregation means separating the network into its component parts; in practice, it implies the ability to source switching hardware and the network operating system separately. We have been doing this for years in the server space: you buy a server from any manufacturer and then load an OS of your choice.

In this context, Cumulus Linux is an open network operating system that allows you to automate, customize and scale the network. From the networking perspective, Cumulus Linux is just the NOS, so you still need an OCP-certified white-box switch to run it. Vendors such as HPE, Dell and Edgecore Networks offer white-box/brite-box switches; they all use merchant-silicon ASICs, so the real differentiator when selecting a hardware vendor is the software ecosystem.

CUMULUS VX OVERVIEW

Cumulus VX is not a production-ready network operating system. It has the same foundation as Cumulus Linux, including all the control plane elements, but without an actual ASIC for line rate performance or hardware acceleration.

NOTE: You can see the similarities and differences between Cumulus VX and Cumulus Linux in the product comparison table posted here.

Cumulus VX is a free virtual environment to test Cumulus Networks within your own environment. It runs in a virtual machine (VM) on a standard x86 server. The VM is a 64-bit operating system running Debian Linux Jessie 4.1, using virtio drivers for network and HDD interfaces as well as the logical volume manager (LVM).

Cumulus VX integrates with the following hypervisors:

  • VMware
    • vSphere – ESXi 5.5
    • VMware Fusion
    • VMware Workstation
  • VirtualBox
  • KVM

INSTALLING CUMULUS VX

Installing Cumulus VX involves downloading and installing the preferred hypervisor platform/development environment and downloading the relevant Cumulus VX image from the Cumulus website. Once these are downloaded, the VX image can be imported to create the necessary VMs.

Cumulus VX images for all supported platforms are available from the Cumulus Networks website.

Each disk image contains a single VM for a standalone switch. The image can be cloned to build the test network.

To provision the VMs, you can use the hypervisor wizard/deployment tools. For KVM/QEMU, I’m going to use virt-manager, but you can also use virsh.

If you need help with the VM deployment on KVM, please check my previous posts.

This section assumes that a two-leaf / two-spine network topology is being configured. Once the base VMs are ready, you need to configure the network interfaces and routing.

The drawing below shows where each network adapter (vNIC) is, what network it’s configured for, and how the VMs are interconnected. Every connection between two adapters represents an isolated segment which must be configured as a virtual network in KVM.

To ensure that every link is isolated, we need to configure each virtual network with an exclusive name (I configured them as “net-x”) and use every virtual network only once for a unique link. The links between each VM will act like physical cables.

cumulus-vx connectivity diagram
Cumulus-VX Connectivity Diagram

NOTE: For KVM, Cumulus suggests using the virtio driver for the network adapters.

Accessing Cumulus VX

It’s time to log into the Cumulus VMs. Use the following credentials to access the CLI (NCLU):

User name: cumulus
Password: CumulusLinux!

You can change the default password with the following command:

cumulus@switch$ passwd cumulus

Also, we need to name our switch. To change the hostname, run net add hostname, which modifies both the /etc/hostname and /etc/hosts files with the desired hostname.

cumulus@switch:~$ net add hostname spine01
cumulus@switch:~$ net pending
cumulus@switch:~$ net commit

NOTE: The command prompt in the terminal doesn’t reflect the new hostname until you either log out of the switch or start a new shell.

From the commands above, notice that you have to commit any changes made to the configuration:

cumulus@switch:~$ net pending
cumulus@switch:~$ net commit

The net pending command helps you verify the changes before applying them.

Once the changes are saved, the current (active) configuration is displayed with the  following command:

cumulus@spine01$ net show configuration

Out-of-Band Management (OOB)

Switches supported in Cumulus Linux always contain at least one dedicated Ethernet management port, which is named eth0. This interface is geared specifically for out-of-band management use. The management interface uses DHCPv4 for addressing by default. You can set a static IP address with the Network Command Line Utility (NCLU).

cumulus@spine01$ net add interface eth0 ip address 10.0.0.33/24
cumulus@spine01$ net add interface eth0 alias oob-mgmt
cumulus@spine01:~$ net pending
cumulus@spine01:~$ net commit

You can use a management VRF (Virtual Routing and Forwarding) to isolate the management network and make it inaccessible outside its subnet (unless you explicitly allow it).

cumulus@spine01$ net add vrf mgmt

Once you commit the command, your current session closes; when you log back in, the management VRF will be active and shown as part of the command prompt:

cumulus@spine01:mgmt-vrf:~$ 
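To confirm that eth0 is now part of the management VRF, you can inspect the interface again; net show interface is a safe check (the exact output columns vary by release):

cumulus@spine01:mgmt-vrf:~$ net show interface eth0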

LAYER-2 CONFIGURATION

The Cumulus Linux bridge driver supports two configuration modes, one that is VLAN-aware, and one that follows a more traditional Linux bridge model.

The VLAN-aware mode in Cumulus Linux implements a configuration model with one single instance of Spanning Tree. Each physical bridge member port is configured with the list of allowed VLANs as well as its port VLAN ID. This significantly reduces the configuration size, and eliminates the large overhead of managing the port/VLAN instances as subinterfaces.

As shown in the diagram posted above, server07 and server08 will belong to the same VLAN/network subnet. Notice that you do not need to enable the switch ports explicitly; just configure the parameters and the ports will be brought up.

#Leaf01
cumulus@leaf01:mgmt-vrf:~$ net add bridge bridge ports swp1
cumulus@leaf01:mgmt-vrf:~$ net add bridge alias layer2_bridge
cumulus@leaf01:mgmt-vrf:~$ net add interface swp1 bridge access 14
cumulus@leaf01:mgmt-vrf:~$ net add bridge bridge-vlan-aware yes
cumulus@leaf01:mgmt-vrf:~$ net pending
cumulus@leaf01:mgmt-vrf:~$ net commit
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ net add bridge bridge ports swp1
cumulus@leaf02:mgmt-vrf:~$ net add bridge alias layer2_bridge
cumulus@leaf02:mgmt-vrf:~$ net add interface swp1 bridge access 14
cumulus@leaf02:mgmt-vrf:~$ net add bridge bridge-vlan-aware yes
cumulus@leaf02:mgmt-vrf:~$ net pending
cumulus@leaf02:mgmt-vrf:~$ net commit

NOTE: To identify interface names and select switch ports, you can press the Tab key to display the available options.

As in any Linux distro, you can verify the VLAN configuration with the “bridge” command; the equivalent NCLU command is net show bridge vlan.

cumulus@leaf01:mgmt-vrf:~$ net show bridge vlan

Interface      VLAN  Flags
-----------  ------  ---------------------
swp1             14  PVID, Egress Untagged

cumulus@leaf01:mgmt-vrf:~$

Link Layer Discovery Protocol – LLDP

Before configuring Layer-2 services and the Layer-3 routed interfaces between VMs, you can use LLDP (Link Layer Discovery Protocol) to verify the network assignments for each VM.

LLDP allows you to know which ports are neighbors of a given port. By default, lldpd runs as a daemon and is started at system boot.

To see all neighbors on all ports/interfaces use the following command:

cumulus@leaf01:mgmt-vrf:~$ sudo lldpcli show neighbors
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface: eth0, via: LLDP, RID: 1, Time: 5 days, 00:36:44
 Chassis: 
 ChassisID: mac 52:54:00:2a:f8:29
 SysName: leaf02
 SysDescr: Cumulus Linux version 3.4.0 running on QEMU Standard PC (i440FX + PIIX, 1996)
 TTL: 120
 MgmtIP: 10.0.0.25
 MgmtIP: fe80::5054:ff:fe2a:f829
 Capability: Bridge, on
 Capability: Router, on
 Port: 
 PortID: ifname eth0
 PortDescr: oob-mgmt
-------------------------------------------------------------------------------
Interface: eth0, via: LLDP, RID: 2, Time: 5 days, 00:36:36
 Chassis: 
 ChassisID: mac 52:54:00:20:8e:8a
 SysName: leaf03
 SysDescr: Arista Networks EOS version 4.17.5M running on an Arista Networks vEOS
 TTL: 120
 MgmtIP: 10.0.1.23
 Capability: Bridge, on
 Capability: Router, on
 Port: 
 PortID: ifname Management1
 PortDescr: oob-mgmt
-------------------------------------------------------------------------------
Interface: swp2, via: LLDP, RID: 5, Time: 5 days, 00:36:30
 Chassis: 
 ChassisID: mac 52:54:00:ed:c2:a7
 SysName: spine01
 SysDescr: Cumulus Linux version 3.4.0 running on QEMU Standard PC (i440FX + PIIX, 1996)
 TTL: 120
 MgmtIP: 10.0.0.33
 MgmtIP: fe80::5054:ff:feed:c2a7
 Capability: Bridge, off
 Capability: Router, on
 Port: 
 PortID: ifname swp1
 PortDescr: link_to_leaf01-swp2
-------------------------------------------------------------------------------
Interface: swp3, via: LLDP, RID: 6, Time: 5 days, 00:36:30
 Chassis: 
 ChassisID: mac 52:54:00:40:5f:b1
 SysName: spine02
 SysDescr: Cumulus Linux version 3.4.0 running on QEMU Standard PC (i440FX + PIIX, 1996)
 TTL: 120
 MgmtIP: 10.0.0.34
 MgmtIP: fe80::5054:ff:fe40:5fb1
 Capability: Bridge, off
 Capability: Router, on
 Port: 
 PortID: ifname swp1
 PortDescr: link_to_leaf01-swp3
-------------------------------------------------------------------------------
cumulus@leaf01:mgmt-vrf:~$

LAYER-3 CONFIGURATION

Configuring the SVIs (Switch VLAN Interfaces)

Bridges can be included as part of a routing topology after being assigned an IP address. This enables hosts within the bridge to communicate with other hosts outside of the bridge, via a switch VLAN interface (SVI), which provides layer 3 routing.

#Leaf01
cumulus@leaf01:mgmt-vrf:~$ net add vlan 14 ip address 192.168.14.1/24
cumulus@leaf01:mgmt-vrf:~$ net pending
cumulus@leaf01:mgmt-vrf:~$ net commit
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ net add vlan 14 ip address 192.168.14.1/24
cumulus@leaf02:mgmt-vrf:~$ net pending
cumulus@leaf02:mgmt-vrf:~$ net commit

The VLAN SVIs will work as the server gateways on the Leaf switches. To validate the SVI functionality, try pinging from each server to its local gateway.

Also, verify that the server MAC addresses show up in the ARP table on each switch:

#Leaf01
cumulus@leaf01:mgmt-vrf:~$ sudo arp -a
? (172.16.1.1) at 52:54:00:af:ae:18 [ether] on swp2
? (192.168.14.11) at 52:54:00:e8:a7:59 [ether] on vlan14
? (172.16.1.5) at 52:54:00:55:09:19 [ether] on swp3
? (10.0.0.1) at fe:54:00:00:d4:96 [ether] on eth0
cumulus@leaf01:mgmt-vrf:~$
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ sudo arp -a
? (172.16.1.9) at 52:54:00:cb:11:f9 [ether] on swp2
? (10.0.0.1) at fe:54:00:00:d4:96 [ether] on eth0
? (172.16.1.13) at 52:54:00:33:db:3c [ether] on swp3
? (192.168.14.12) at 52:54:00:60:ad:92 [ether] on vlan14

Leaf-Spine Interconnects

All Leaf switches are directly connected to all Spine switches. In an L3LS topology, all of these interconnections are routed links, which can be designed as point-to-point links or as port channels. For production environments there are pros and cons to each design, and Leaf-Spine interconnects require careful consideration to ensure uplinks are not over-subscribed. Point-to-point routed links will be the focus of this guide.

Layer-3 Network Diagram
Layer-3 Network Diagram

As you can see, each Leaf has a point-to-point network between itself and each Spine. In real-life environments, you need to strike the right balance between address conservation and leaving room for the unknown. A /31 mask will work, as will a /30; the decision will depend on your addressing plan and circumstances.
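For illustration only, the spine01-to-leaf01 link renumbered with a /31 (hypothetical addressing, since this lab keeps the /30 scheme configured below) would look like this:

cumulus@spine01:mgmt-vrf:~$ net add int swp1 ip address 172.16.1.0/31
cumulus@leaf01:mgmt-vrf:~$ net add int swp2 ip address 172.16.1.1/31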

Check the configuration for Spine01, then you can configure the remaining switches as described in the diagram above.

#Spine01
cumulus@spine01:mgmt-vrf:~$ net add int swp1 ip address 172.16.1.1/30
cumulus@spine01:mgmt-vrf:~$ net add int swp1 alias link_to_leaf01-swp2
cumulus@spine01:mgmt-vrf:~$ net add int swp2 ip address 172.16.1.9/30
cumulus@spine01:mgmt-vrf:~$ net add int swp2 alias link_to_leaf02-swp2
cumulus@spine01:mgmt-vrf:~$ net commit

BORDER GATEWAY PROTOCOL (BGP)

BGP is the routing protocol that runs the Internet. It is an increasingly popular protocol for use in the data center as it lends itself well to the rich interconnections in a CLOS topology.

ECMP with BGP

If a BGP node hears a prefix p from multiple peers, it has all the information necessary to program the routing table to forward traffic for that prefix p through all of these peers. Thus, BGP supports equal-cost multipathing (ECMP).

In order to perform ECMP in BGP, you may need to configure net add bgp bestpath as-path multipath-relax (if you’re using eBGP).

In Cumulus Linux, the BGP maximum-paths setting is enabled by default, so multiple routes are already installed. The default setting is 64 paths.
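For reference, this is how the multipath-relax knob mentioned above is applied with NCLU (shown on leaf01; it only matters when the candidate paths carry different AS paths):

cumulus@leaf01:mgmt-vrf:~$ net add bgp bestpath as-path multipath-relax
cumulus@leaf01:mgmt-vrf:~$ net commit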

eBGP vs. iBGP

There are a number of reasons to choose eBGP, but one of the more compelling is simplicity, particularly when configuring load sharing (via ECMP), which is one of the main design goals of the L3LS. Using eBGP ensures all routes/paths are utilized with the least amount of complexity and the fewest configuration steps.

I’ve tested both options and my personal choice is eBGP, even in production environments. Although an iBGP implementation is technically feasible, eBGP allows for a simpler design that is easier to troubleshoot.

BGP Autonomous System Number (ASN)

BGP supports several designs when assigning Autonomous System Numbers (ASN) in a L3LS topology. For this lab, the Common Spine ASN – Discrete Leaf ASN design will be used.

This design uses a single ASN for all spine nodes and a discrete ASN for each leaf node. Some benefits of this design are:

  • Each rack can now be identified by its ASN
  • Traceroute and BGP commands will show discrete ASNs, making troubleshooting easier
  • Uses inherent BGP loop prevention
  • Unique AS numbers aid troubleshooting and don’t require flexing the eBGP path-selection algorithm

As an alternative, you can use the Common Spine ASN – Common Leaf ASN design where a common (shared) ASN will be assigned to the Spine nodes and another ASN to the Leaf nodes.

BGP Configuration

Leaf and Spine switches are interconnected with Layer-3 point-to-point links, and every Leaf is connected to all Spines with at least one interface. Also, there’s no direct dependency or interconnection between Spine switches. All the Leaf nodes can send traffic evenly towards the Spine through the use of Equal Cost Multi Path (ECMP) which is inherent to the use of routing technologies in the design.

NOTE: We have just two Spine switches in our lab, but you can add additional nodes on demand. It’s not required to have an even number of Spine switches, just make sure to have at least one link from each Leaf to every Spine.

Note that all spine switches share a common ASN while each Leaf has a different ASN, see the BGP diagram below for details.

BGP ASN Scheme
BGP ASN Scheme

For Cumulus Linux 3.4 and later releases, the routing control plane (including EVPN) is installed as part of the Free Range Routing (FRR) package rather than the Quagga package. For more information about FRR, refer to the FRR project documentation.

FRRouting does not start by default in Cumulus Linux. Before you run FRRouting, make sure you have enabled the relevant daemons that you intend to use.

Edit the /etc/frr/daemons file and enable the daemons.

cumulus@spine01:mgmt-vrf:~$ sudo cat /etc/frr/daemons
# This file tells the frr package which daemons to start.
#
.....
#
zebra=yes
bgpd=yes
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
.....
#
cumulus@spine01:mgmt-vrf:~$

You will also need to enable the FRR service. Please check the procedure in the configuration guide posted here.
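On Cumulus Linux 3.4 the service is managed with systemd, so enabling and starting it should look roughly like this (check the linked guide for the authoritative procedure):

cumulus@spine01:mgmt-vrf:~$ sudo systemctl enable frr.service
cumulus@spine01:mgmt-vrf:~$ sudo systemctl start frr.service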

Once FRR is active, you can proceed with the BGP configuration. First, identify each BGP node (switch) by assigning an ASN and router-id according to the following list:

leaf01: ASN 65031, router-id 10.0.1.24
leaf02: ASN 65032, router-id 10.0.1.25
spine01: ASN 65030, router-id 10.0.1.13
spine02: ASN 65030, router-id 10.0.1.14

#spine01
cumulus@spine01:mgmt-vrf:~$ net add bgp autonomous-system 65030
cumulus@spine01:mgmt-vrf:~$ net add bgp router-id 10.0.1.13
#spine02
cumulus@spine02:mgmt-vrf:~$ net add bgp autonomous-system 65030
cumulus@spine02:mgmt-vrf:~$ net add bgp router-id 10.0.1.14
#leaf01
cumulus@leaf01:mgmt-vrf:~$ net add bgp autonomous-system 65031
cumulus@leaf01:mgmt-vrf:~$ net add bgp router-id 10.0.1.24
#leaf02
cumulus@leaf02:mgmt-vrf:~$ net add bgp autonomous-system 65032
cumulus@leaf02:mgmt-vrf:~$ net add bgp router-id 10.0.1.25

NOTE: You don’t have to configure a loopback interface to assign the router-id. In fact, Cumulus Linux has a default loopback interface (lo), which exists for system use (127.0.0.1).

Next, specify the BGP neighbors each switch will exchange routing information with, and the prefixes each switch originates.

#spine01
cumulus@spine01:mgmt-vrf:~$ net add bgp neighbor 172.16.1.2 remote-as 65031
cumulus@spine01:mgmt-vrf:~$ net add bgp neighbor 172.16.1.10 remote-as 65032
cumulus@spine01:mgmt-vrf:~$ net add bgp ipv4 unicast network 10.0.1.13/32
cumulus@spine01:mgmt-vrf:~$ net commit
#spine02
cumulus@spine02:mgmt-vrf:~$ net add bgp neighbor 172.16.1.6 remote-as 65031
cumulus@spine02:mgmt-vrf:~$ net add bgp neighbor 172.16.1.14 remote-as 65032
cumulus@spine02:mgmt-vrf:~$ net add bgp ipv4 unicast network 10.0.1.14/32
cumulus@spine02:mgmt-vrf:~$ net commit
#leaf01
cumulus@leaf01:mgmt-vrf:~$ net add bgp neighbor 172.16.1.1 remote-as 65030
cumulus@leaf01:mgmt-vrf:~$ net add bgp neighbor 172.16.1.5 remote-as 65030
cumulus@leaf01:mgmt-vrf:~$ net add bgp ipv4 unicast network 10.0.1.24/32
cumulus@leaf01:mgmt-vrf:~$ net add bgp redistribute connected
cumulus@leaf01:mgmt-vrf:~$ net commit
#leaf02
cumulus@leaf02:mgmt-vrf:~$ net add bgp neighbor 172.16.1.9 remote-as 65030
cumulus@leaf02:mgmt-vrf:~$ net add bgp neighbor 172.16.1.13 remote-as 65030
cumulus@leaf02:mgmt-vrf:~$ net add bgp ipv4 unicast network 10.0.1.25/32
cumulus@leaf02:mgmt-vrf:~$ net add bgp redistribute connected
cumulus@leaf02:mgmt-vrf:~$ net commit

BGP Verification

You can verify the BGP operation by running the following commands:

  • net show bgp summary
  • net show bgp neighbors

The state for all neighbors should be ESTABLISHED.

cumulus@spine01:mgmt-vrf:~$ net show bgp summary

show bgp ipv4 unicast summary
=============================
BGP router identifier 10.0.1.13, local AS number 65030 vrf-id 0
BGP table version 27
RIB entries 15, using 2160 bytes of memory
Peers 2, using 41 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
leaf01(172.16.1.2) 4 65031 21014 21012 0 0 0 17:27:05 4
leaf02(172.16.1.10) 4 65032 21013 21011 0 0 0 17:27:05 4

Total number of neighbors 2

show bgp ipv6 unicast summary
=============================

show bgp l2vpn evpn summary
===========================
BGP router identifier 10.0.1.13, local AS number 65030 vrf-id 0
BGP table version 0
RIB entries 3, using 432 bytes of memory
Peers 2, using 41 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
leaf01(172.16.1.2) 4 65031 21014 21012 0 0 0 17:27:05 3
leaf02(172.16.1.10) 4 65032 21013 21011 0 0 0 17:27:05 3

Total number of neighbors 2
cumulus@spine01:mgmt-vrf:~$

VIRTUAL EXTENSIBLE LAN – VXLAN ROUTING

VXLAN routing, sometimes referred to as inter-VXLAN routing, provides IP routing between VXLAN VNIs in overlay networks. The routing of traffic is based on the inner header or the overlay tenant IP address.

NOTE: Cumulus Linux includes native Linux VXLAN kernel support.

When configuring VXLAN in Cumulus Linux, we need a source IP address for each VTEP, which is usually associated with a loopback interface.

For this lab, I will add a new switch port (swp4) and configure it as a loopback interface.

To configure one or more switch ports for loopback mode, edit the /etc/cumulus/ports.conf file, changing the port speed to loopback. In the example below, swp4 is configured for loopback mode:

cumulus@leaf01:mgmt-vrf:~$ sudo nano /etc/cumulus/ports.conf 
 ... 
 4=loopback 
 ...

After saving the ports.conf file, you must restart switchd for the changes to take effect.

cumulus@leaf01:mgmt-vrf:~$ sudo systemctl restart switchd

Configure the IP address for swp4, which will be used as the VXLAN source IP for the VTEP on each Leaf switch.

# leaf01
cumulus@leaf01:mgmt-vrf:~$ net add interface swp4 ip address 10.0.3.1/32
cumulus@leaf01:mgmt-vrf:~$ net commit
# leaf02
cumulus@leaf02:mgmt-vrf:~$ net add interface swp4 ip address 10.0.3.2/32
cumulus@leaf02:mgmt-vrf:~$ net commit

Now we have to advertise the swp4 IP addresses into BGP so the VTEPs are reachable across the fabric. If you already have the “redistribute connected” command on the Leaf switches, you don’t need to add the network statement in the BGP configuration.
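If you preferred not to use “redistribute connected”, you could originate the VTEP address explicitly instead, mirroring the loopback network statements configured earlier (shown for leaf01; leaf02 would advertise 10.0.3.2/32):

cumulus@leaf01:mgmt-vrf:~$ net add bgp ipv4 unicast network 10.0.3.1/32
cumulus@leaf01:mgmt-vrf:~$ net commit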

Verify that the VTEPs are reachable across the Datacenter fabric.

cumulus@spine01:mgmt-vrf:~$ net show route

show ip route
=============
Codes: K - kernel route, C - connected, S - static, R - RIP,
 O - OSPF, I - IS-IS, B - BGP, P - PIM, E - EIGRP, N - NHRP,
 T - Table, v - VNC, V - VNC-Direct, A - Babel,
 > - selected route, * - FIB route

B>* 10.0.3.1/32 [20/0] via 172.16.1.2, swp1, 17:28:01
B>* 10.0.3.2/32 [20/0] via 172.16.1.10, swp2, 17:28:01
C>* 172.16.1.0/30 is directly connected, swp1
B>* 172.16.1.4/30 [20/0] via 172.16.1.2, swp1, 17:28:01
C>* 172.16.1.8/30 is directly connected, swp2
B>* 172.16.1.12/30 [20/0] via 172.16.1.10, swp2, 17:28:01
B>* 192.168.14.0/24 [20/0] via 172.16.1.2, swp1, 17:28:01

show ipv6 route
===============
Codes: K - kernel route, C - connected, S - static, R - RIPng,
 O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
 v - VNC, V - VNC-Direct, A - Babel,
 > - selected route, * - FIB route

C * fe80::/64 is directly connected, swp1
C>* fe80::/64 is directly connected, swp2
cumulus@spine01:mgmt-vrf:~$

When the BGP configuration is ready and the VTEPs can ping each other, it’s time to configure VXLAN.

We need to associate and map the VXLAN VNIs to the server (access) VLANs. There’s a single VLAN, so we just need a single VLAN-to-VNI mapping.

#Leaf01
cumulus@leaf01:mgmt-vrf:~$ net add vxlan vni-1014 vxlan id 1014
cumulus@leaf01:mgmt-vrf:~$ net add vxlan vni-1014 bridge access 14
cumulus@leaf01:mgmt-vrf:~$ net add vxlan vni-1014 vxlan local-tunnelip 10.0.3.1
cumulus@leaf01:mgmt-vrf:~$ net add vxlan vni-1014 mtu 9152
cumulus@leaf01:mgmt-vrf:~$ net commit
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ net add vxlan vni-1014 vxlan id 1014
cumulus@leaf02:mgmt-vrf:~$ net add vxlan vni-1014 bridge access 14
cumulus@leaf02:mgmt-vrf:~$ net add vxlan vni-1014 vxlan local-tunnelip 10.0.3.2
cumulus@leaf02:mgmt-vrf:~$ net add vxlan vni-1014 mtu 9152
cumulus@leaf02:mgmt-vrf:~$ net commit

ETHERNET VIRTUAL PRIVATE NETWORK – EVPN

Ethernet Virtual Private Network (EVPN) provides a control plane for VXLANs in Cumulus Linux, with the following functionality:

  • VNI membership exchange between VTEPs using EVPN type-3 routes
  • Exchange of host MAC and IP addresses using EVPN type-2 (MAC/IP advertisement) routes
  • Support for host/VM mobility (MAC and IP moves) through exchange of the MAC Mobility Extended community
  • Support for ARP/ND suppression, which provides VTEPs with the ability to suppress ARP flooding over VXLAN tunnels
  • Support for distributed asymmetric routing between different subnets

EVPN is not the only way to provide a control plane for VXLAN. You can configure manual VTEP flood lists to replicate BUM traffic between VTEPs (known as Head End Replication, or HER), integrate an SDN controller to synchronize and coordinate the VTEPs, or use a multicast-based approach (not really recommended, given the implications of enabling multicast on data center devices and the scalability limitations).

 Enabling EVPN between BGP Neighbors

You enable EVPN between BGP neighbors by activating each neighbor under the EVPN address family, just as you would activate any other address family.

For a non-VTEP device, such as a Spine switch, that is merely participating in EVPN route exchange, activating the neighbors for the EVPN address family is the only configuration needed.

#Spine01
cumulus@spine01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.2 activate
cumulus@spine01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.10 activate
cumulus@spine01:mgmt-vrf:~$ net commit
#Spine02
cumulus@spine02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.6 activate
cumulus@spine02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.14 activate
cumulus@spine02:mgmt-vrf:~$ net commit
#Leaf01
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.1 activate
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.5 activate
cumulus@leaf01:mgmt-vrf:~$ net commit
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.9 activate
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.13 activate
cumulus@leaf02:mgmt-vrf:~$ net commit

The above configuration does not result in BGP knowing about the local VNIs defined on the system and advertising them to peers.

Advertising All VNIs

A single configuration variable enables the BGP control plane for all VNIs configured on the switch. Set the variable “advertise-all-vni” to provision all locally configured VNIs to be advertised by the BGP control plane. FRR is not aware of any local VNIs and MACs associated with that VNI until advertise-all-vni is configured.

When a local VNI is learned by FRR and there is no explicit configuration for that VNI in FRR, the route distinguisher (RD) and import and export route targets (RTs) for this VNI are automatically derived — the RD uses “RouterId:VNI-Index” and both RTs use “AS:VNI”. The RD and RTs are used in the EVPN route exchange, with the former to disambiguate EVPN routes in different VNIs (as they may have the same MAC and/or IP address) while the latter describes the VPN membership for the route.
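As a concrete example, on leaf01 (router-id 10.0.1.24, AS 65031) with VNI 1014 as its first locally known VNI, the derived RD is 10.0.1.24:1 and the derived import/export RT is 65031:1014; these are exactly the values that appear in the net show bgp evpn vni output later in this post.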

To build upon the previous example, run the following commands to advertise all VNIs.

#Leaf01
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.1 activate
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.5 activate
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn advertise-all-vni
cumulus@leaf01:mgmt-vrf:~$ net commit
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.9 activate
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.13 activate
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn advertise-all-vni
cumulus@leaf02:mgmt-vrf:~$ net commit

NOTE: This configuration is only needed on Leaf switches that are VTEPs.

EVPN also supports manual configuration of RDs and RTs, if you don’t want them derived automatically.

To manually define RDs and RTs, use the vni option within NCLU to configure the switch. To build upon the previous example:

#Leaf01
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.1 activate
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.5 activate
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn advertise-all-vni
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn vni 1014 rd 10.0.1.24:1014
cumulus@leaf01:mgmt-vrf:~$ net add bgp evpn vni 1014 route-target import 65031:1014
cumulus@leaf01:mgmt-vrf:~$ net commit
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.9 activate
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn neighbor 172.16.1.13 activate
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn advertise-all-vni
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn vni 1014 rd 10.0.1.25:1014
cumulus@leaf02:mgmt-vrf:~$ net add bgp evpn vni 1014 route-target import 65032:1014
cumulus@leaf02:mgmt-vrf:~$ net commit

NOTE: You need to configure “advertise-all-vni” whether you use automatic or user-defined route distinguishers and route targets.

When EVPN is provisioned, data plane MAC learning should be disabled on the VXLAN interfaces to avoid race conditions between control plane learning and data plane learning. Set bridge learning to off:

#Leaf01
cumulus@leaf01:mgmt-vrf:~$ net add vxlan vni-1014 bridge learning off
cumulus@leaf01:mgmt-vrf:~$ net commit
#Leaf02
cumulus@leaf02:mgmt-vrf:~$ net add vxlan vni-1014 bridge learning off
cumulus@leaf02:mgmt-vrf:~$ net commit

To verify the EVPN configuration, check the network devices participating in BGP/EVPN by using the following command:

cumulus@leaf01:mgmt-vrf:~$ net show bgp evpn summary 
BGP router identifier 10.0.1.24, local AS number 65031 vrf-id 0
BGP table version 0
RIB entries 3, using 432 bytes of memory
Peers 2, using 41 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
spine01(172.16.1.1) 4 65030 148951 148977 0 0 0 20:21:25 3
spine02(172.16.1.5) 4 65030 148950 148981 0 0 0 20:20:36 3

Total number of neighbors 2
cumulus@leaf01:mgmt-vrf:~$

You can display the configured VNIs on a network device participating in BGP EVPN by running the “show bgp evpn vni” command. This command works only when run on a VTEP (Leaf switch).

cumulus@leaf01:mgmt-vrf:~$ net show bgp evpn vni 
Advertise Gateway Macip: Disabled
Advertise All VNI flag: Enabled
Number of VNIs: 1
Flags: * - Kernel
 VNI Orig IP RD Import RT Export RT 
* 1014 10.0.3.1 10.0.1.24:1 65031:1014 65031:1014 
cumulus@leaf01:mgmt-vrf:~$

TESTING CONNECTIVITY BETWEEN SERVERS

With the Leaf switches announcing their respective VTEP into the underlay BGP routing topology, each Leaf switch learns two equal cost paths (via the Spine switches) to the remote VTEP.

Run the “net show bgp evpn route” command to display all EVPN routes at the same time.

cumulus@leaf01:mgmt-vrf:~$ net show bgp evpn route 
BGP table version is 0, local router ID is 10.0.1.24
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal
Origin codes: i - IGP, e - EGP, ? - incomplete
EVPN type-2 prefix: [2]:[ESI]:[EthTag]:[MAClen]:[MAC]:[IPlen]:[IP]
EVPN type-3 prefix: [3]:[EthTag]:[IPlen]:[OrigIP]

Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 10.0.1.24:1
*> [2]:[0]:[0]:[48]:[52:54:00:e8:a7:59]
 10.0.3.1 32768 i
*> [2]:[0]:[0]:[48]:[52:54:00:e8:a7:59]:[32]:[192.168.14.11]
 10.0.3.1 32768 i
*> [3]:[0]:[32]:[10.0.3.1]
 10.0.3.1 32768 i
Route Distinguisher: 10.0.1.25:1
*> [2]:[0]:[0]:[48]:[52:54:00:60:ad:92]
 10.0.3.2 0 65030 65032 i
* [2]:[0]:[0]:[48]:[52:54:00:60:ad:92]
 10.0.3.2 0 65030 65032 i
* [2]:[0]:[0]:[48]:[52:54:00:60:ad:92]:[32]:[192.168.14.12]
 10.0.3.2 0 65030 65032 i
*> [2]:[0]:[0]:[48]:[52:54:00:60:ad:92]:[32]:[192.168.14.12]
 10.0.3.2 0 65030 65032 i
*> [3]:[0]:[32]:[10.0.3.2]
 10.0.3.2 0 65030 65032 i
* [3]:[0]:[32]:[10.0.3.2]
 10.0.3.2 0 65030 65032 i

Displayed 6 prefixes (9 paths)
cumulus@leaf01:mgmt-vrf:~$

Layer-2 and Layer-3 connectivity between the servers is now possible. Below is the resultant MAC and VXLAN address table for the Leaf switches and the ping results between servers.

server07

server08
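The screenshots above show the ping tests from each server; if you want to reproduce them from a shell, a plain ping across the VNI is enough (prompts are illustrative, addressing as configured earlier):

server07:~$ ping -c 3 192.168.14.12
server08:~$ ping -c 3 192.168.14.11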

Finally, you can examine all local and remote MAC addresses for a VNI by running the “net show evpn mac vni <vni>” or “net show evpn mac vni all” commands. The server MAC addresses should appear as either local or remote.

cumulus@leaf01:mgmt-vrf:~$ net show evpn mac vni 1014
Number of MACs (local and remote) known for this VNI: 2
MAC Type Intf/Remote VTEP VLAN 
52:54:00:e8:a7:59 local swp1 14 
52:54:00:60:ad:92 remote 10.0.3.2 
cumulus@leaf01:mgmt-vrf:~$
cumulus@leaf02:mgmt-vrf:~$ net show evpn mac vni 1014
Number of MACs (local and remote) known for this VNI: 2
MAC Type Intf/Remote VTEP VLAN 
52:54:00:e8:a7:59 remote 10.0.3.1 
52:54:00:60:ad:92 local swp1 14 
cumulus@leaf02:mgmt-vrf:~$

You can always check my github repository to download the configuration files.

 


BGP Interoperability between Free Range Routing (FRR) and Arista EOS

August 17, 2017
Free Range Routing
by Pablo Narváez

Today I will test BGP between the FRR routing stack and Arista EOS. The sample configuration that I will show later in this post is just a basic integration between the two devices, nothing at all complex. Basically, I just wanted to expand my virtual environment by adding a DC/routing perimeter while testing FRR.

For this lab, I will be using the same environment I already built in my previous post so I can easily integrate FRR into the existing topology.

FREE RANGE ROUTING OVERVIEW

FRR is a routing software package that provides TCP/IP-based routing services, with support for routing protocols such as RIPv1, RIPv2, RIPng, OSPFv2, OSPFv3, IS-IS, BGP-4, and BGP-4+.

In addition to traditional IPv4 routing protocols, FRR also supports IPv6. Since its beginning, the project has been supported by Cumulus Networks, Big Switch Networks, 6WIND, Volta Networks and LinkedIn, among others.

FRR has been forked from the Quagga open-source project. For those who are not familiar with Quagga, it’s an open-source implementation of a full routing stack for Linux; it’s mostly used for WRT custom firmware, some cloud implementations, and even for control plane functionality on some network operating systems (NOS) like Cumulus Linux.

NOTE: FRR replaces Quagga as the routing suite in Cumulus Linux 3.4.0.

Quagga still exists but has a completely different development process than FRR. You can learn more about Quagga here.

ROUTING STACK VS NETWORK OPERATING SYSTEM

Just to be clear about what FRR is and what it’s not: a network operating system (NOS) spans everything from the Layer-1 hardware all the way up to the control plane. FRR is a full implementation of the routing control plane, so it needs a base operating system to run on top of.

In this regard, FRR is not a NOS that can run directly on bare metal. Instead, it’s a modern implementation of the IPv4/IPv6 routing stack that provides control plane functionality as Linux daemons.

FRR SYSTEM ARCHITECTURE

FRR is made from a collection of several daemons that work together to build the routing table.

Zebra is responsible for changing the kernel routing table and for redistribution of routes between different routing protocols. In this model, it’s easy to add a new routing protocol daemon to the entire routing system without affecting any other software.

There is no need for the FRR daemons to run on the same machine. You can actually run several instances of the same protocol daemon on the same machine and keep them apart from the rest of the daemons.

frr-architecture
FRR Architecture

Currently FRR supports GNU/Linux and BSD. The officially supported platforms are listed below. Note that FRR may run correctly on other platforms, and may run with partial functionality on still others.

  • GNU/Linux
  • FreeBSD
  • NetBSD
  • OpenBSD

FRR DOWNLOAD

FRR is distributed under the GNU General Public License and is available for download from the official FRR website.

FRR INSTALLATION

There are three steps for installing the software: configuration, compilation, and installation.

I chose Ubuntu to deploy FRR, but several Linux distros are supported. If you want to install it on Ubuntu, follow these instructions. You can check the FRR webpage for any other Linux/BSD distro; it’s all pretty well documented.

When configuring FRR, there are several options to customize the build to include or exclude specific features and dependencies. You can check all the options here.

Once installed, check the FRR daemons to make sure the service is running:

ps-ef-frr

If you installed FRR from source (link above), the FRR daemon (and all the routing daemons you specify during the installation) will run as a system service after the Linux kernel is booted. As you can see in the screen capture above, the routing processes (bgpd, ospfd, ldpd, etc.) are running as part of the main FRR service.

As with any other Linux system service, you can manage the frr service with systemctl.

$ systemctl start|stop|restart frr

Each FRR daemon has its own configuration file and terminal interface, which can be annoying to work with. To solve this, FRR provides an integrated user interface shell called vtysh.

vtysh connects to each daemon through a UNIX domain socket and then works as a proxy for user input, so there’s no need to connect to each daemon separately.

To access vtysh from the host OS, just type in the following command:

superadmin@frr01:~$ vtysh

Hello, this is FRRouting (version 3.1-dev-MyOwnFRRVersion-g7e4f56d).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

This is a git build of frr-3.1-dev-320-g7e4f56d
Associated branch(es):
 local:master
 github/frrouting/frr.git/master

frr01#

INTERESTING FACTS ABOUT FRR

  • If you install FRR from source and follow the instructions provided above, there’s no need to modify or even touch any of the daemon configuration files (.conf) located in /etc/frr. When you log into FRR with vtysh, a single configuration file is created for all the daemons and stored on the host as frr.conf.
  • Don’t expect to see Ethernet/WAN interfaces; FRR will show you the actual host network adapters: ens3, ens4, ens5, ens10 (adapter names may vary depending on the Linux distro and your setup).
frr01#
frr01# conf ter
frr01(config)# interface ?
 IFNAME Interface's name
 ens3 ens4 ens5 ens10 lo
frr01(config)# interface ens3
frr01(config-if)#
  • As you may have noticed by now, if you know Cisco IOS or Arista EOS you are good to go! The FRR CLI is basically the same. You can check the list of CLI commands here.

BGP INTEROPERABILITY TESTING

As shown in the diagram below, I will use my existing network setup to connect the FRR routers to the Arista Spine switches.

frr-networking
Free Range Routing Network Setup

Each network device has a loopback interface which is being announced into BGP (10.0.1.21-23, 10.0.1.31-32). When we finish the configuration, we should be able to ping all these interfaces from the FRR routers.

The interfaces between the FRR routers and the Arista switches are configured as point-to-point Layer-3 links.

frr01# show running-config
Building configuration...

Current configuration:
!
frr version 3.1-dev
frr defaults traditional
hostname frr01
username root nopassword
!
service integrated-vtysh-config
!
log syslog informational
!
interface ens4
 description link_to_spine01-eth4
 ip address 172.16.0.25/30
!
interface ens5
 description link_to_spine02-eth4
 ip address 172.16.0.29/30
!
interface ens10
 description link_to_frr02-ens10
 ip address 172.16.254.5/30
!
interface lo
 description router-id
 ip address 10.0.1.1/32
!
frr02# show running-config
Building configuration...

Current configuration:
!
frr version 3.1-dev
frr defaults traditional
hostname frr02
username root nopassword
!
service integrated-vtysh-config
!
log syslog informational
!
interface ens4
 description link_to_spine01-eth5
 ip address 172.16.0.33/30
!
interface ens5
 description link_to_spine02-eth5
 ip address 172.16.0.37/30
!
interface ens10
 description link_to_frr02-ens10
 ip address 172.16.254.6/30
!
interface lo
 description router-id
 ip address 10.0.1.2/32
!

I will configure ASN 65000 for the FRR routers; frr01 and frr02 will have iBGP peer sessions with each other and eBGP peer sessions with the Arista switches.

frr01#
router bgp 65000
 bgp router-id 10.0.1.1
 distance bgp 20 200 200 
 neighbor ebgp-to-spine-peers peer-group
 neighbor ebgp-to-spine-peers remote-as 65020
 neighbor 172.16.0.26 peer-group ebgp-to-spine-peers
 neighbor 172.16.0.30 peer-group ebgp-to-spine-peers
 neighbor 172.16.254.6 remote-as 65000
 !
 address-family ipv4 unicast
 network 10.0.1.1/32
 exit-address-family
 vnc defaults
 response-lifetime 3600
 exit-vnc
frr02#
router bgp 65000
 bgp router-id 10.0.1.2
 distance bgp 20 200 200
 neighbor ebgp-to-spine-peers peer-group
 neighbor ebgp-to-spine-peers remote-as 65020
 neighbor 172.16.0.34 peer-group ebgp-to-spine-peers
 neighbor 172.16.0.38 peer-group ebgp-to-spine-peers
 neighbor 172.16.254.5 remote-as 65000
 !
 address-family ipv4 unicast
 network 10.0.1.2/32
 exit-address-family
 vnc defaults
 response-lifetime 3600
 exit-vnc

Since BGP was already configured in the Arista switches as part of my previous labs, I just added the eBGP sessions towards FRR.

spine01#
router bgp 65020
 router-id 10.0.1.11
 distance bgp 20 200 200
 maximum-paths 2 ecmp 64
 neighbor ebgp-to-frr-peers peer-group
 neighbor ebgp-to-frr-peers remote-as 65000
 neighbor ebgp-to-frr-peers maximum-routes 12000
 neighbor 172.16.0.2 remote-as 65021
 neighbor 172.16.0.2 maximum-routes 12000
 neighbor 172.16.0.6 remote-as 65021
 neighbor 172.16.0.6 maximum-routes 12000
 neighbor 172.16.0.10 remote-as 65022
 neighbor 172.16.0.10 maximum-routes 12000
 neighbor 172.16.0.25 peer-group ebgp-to-frr-peers
 neighbor 172.16.0.33 peer-group ebgp-to-frr-peers
 network 10.0.1.11/32
 redistribute connected
spine02#
router bgp 65020
 router-id 10.0.1.12
 distance bgp 20 200 200
 maximum-paths 2 ecmp 64
 neighbor ebgp-to-frr-peers peer-group
 neighbor ebgp-to-frr-peers remote-as 65000
 neighbor ebgp-to-frr-peers maximum-routes 12000
 neighbor 172.16.0.14 remote-as 65021
 neighbor 172.16.0.14 maximum-routes 12000
 neighbor 172.16.0.18 remote-as 65021
 neighbor 172.16.0.18 maximum-routes 12000
 neighbor 172.16.0.22 remote-as 65022
 neighbor 172.16.0.22 maximum-routes 12000
 neighbor 172.16.0.29 peer-group ebgp-to-frr-peers
 neighbor 172.16.0.37 peer-group ebgp-to-frr-peers
 network 10.0.1.12/32
 redistribute connected

NOTE: The “redistribute connected” command will redistribute all the directly connected interfaces into BGP for connectivity-testing purposes. In production, link addresses are not typically advertised (a filtering sketch follows the list below). This is because:

  • Link addresses take up valuable FIB resources. In a large CLOS (Leaf-Spine) environment, the number of such addresses can be quite large
  • Link addresses expose an additional attack vector for intruders to use to either break in or engage in DDOS attacks
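If you want a flavor of what that restriction could look like on the FRR routers, here is a minimal sketch that only redistributes the loopback range used in this lab (10.0.1.0/24); the prefix-list and route-map names are illustrative:

ip prefix-list LOOPBACKS seq 10 permit 10.0.1.0/24 le 32
!
route-map CONNECTED-TO-BGP permit 10
 match ip address prefix-list LOOPBACKS
!
router bgp 65000
 address-family ipv4 unicast
  redistribute connected route-map CONNECTED-TO-BGP
 exit-address-family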

We can verify the interoperability between FRR and Arista by checking the BGP neighbor adjacencies. The output of the “show ip bgp summary” command shows the BGP state as established, which indicates that the BGP peer relationship has been established successfully.

frr-bgp-summary

Finally, we check the routing table to make sure we can reach all the loopback interfaces from the FRR routers.

frr-ip-route
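If you want to run the same checks from vtysh rather than reading the screenshots, the standard FRR show commands (plus a quick ping to one of the advertised loopbacks, 10.0.1.21 in this lab) will do:

frr01# show ip bgp summary
frr01# show ip route bgp
superadmin@frr01:~$ ping -c 3 10.0.1.21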

You can always check my github repository to download the configuration files.

 

How to Transparently Forward LLDP Frames through Linux Bridge

August 4, 2017
by Pablo Narváez

Last night, I was working on my VXLAN lab and wanted to make sure the VM connections were set up correctly. I opted to use the Link Layer Discovery Protocol (LLDP) on the switches to validate the network assignment (net-x) for each VM.

To my surprise,  the “show lldp neighbors” command displayed no information about the neighboring devices.

spine01# show lldp neighbors

LLDP neighbors:
-------------------------------------

To make sure VMs were exchanging LLDP frames, I verified LLDP was enabled globally. However, when looking at the output of “show lldp traffic” I noticed that LLDP frames were being sent out of the bridge interfaces but not received.

spine01#show lldp
LLDP transmit interval : 30 seconds
LLDP transmit holdtime : 120 seconds
LLDP reinitialization delay : 2 seconds
LLDP Management Address VRF : default

Enabled optional TLVs:
 Port Description
 System Name
 System Description
 System Capabilities
 Management Address (best)
 IEEE802.1 Port VLAN ID
 IEEE802.3 Link Aggregation
 IEEE802.3 Maximum Frame Size

Port Tx Enabled Rx Enabled
Et1 Yes Yes 
Et2 Yes Yes 
Et3 Yes Yes 
Ma1 Yes Yes 
spine01#
!

When searching for a reasonable explanation on this behavior, I found a really interesting post here:

“LLDP frames have reserved destination MAC address 01-80-C2-00-00-0E, which by default are not forwarded by 802.1d-compliant bridges.”

It turns out that Linux Bridge does not forward certain Layer-2 traffic and, by default, filters some reserved multicast addresses so that it complies with the 802.1AB/802.1D standards, which makes perfect sense for an open, standards-based bridge.

Take a look at the diagram below.

Linux Bridge on KVM
Linux Bridge on KVM

As you can see, the links between two VMs are, in fact, interfaces connected to Linux Bridge ports. What we need to do is modify the Linux Bridge behavior so it doesn’t filter the LLDP frames that it intercepts from the VMs.

NOTE: For the sake of testing, we are going to break the standards by overriding the default behavior of Linux Bridge. Hope you can live with that!

It’s also important to understand why we have to deal with Linux Bridge: every time a virtual network is added in virt-manager (please see my previous posts for details), a Linux Bridge is created automatically.

In my previous posts, when I used virt-manager to assign a virtual network (net-x) to some vNIC, we were actually attaching the vNIC to a Linux Bridge (virbr) port.

Let’s see the Linux Bridge instances on the host that were created by virt-manager.

Linux Bridge Names on KVM Host

As with any regular 802.1D Ethernet bridge, Linux Bridge uses Spanning-Tree (STP) as a loop prevention mechanism. By default, STP is enabled on Linux Bridge but I turned it off on each device.

NOTE: In production, you should NEVER disable STP. On routed interfaces STP will not run but that does not mean that you can remove it from the box.

How do we actually allow the transparent forwarding of LLDP frames through Linux Bridge?

First, check the content of the group fwd_mask:

$ cat /sys/class/net/brX/bridge/group_fwd_mask
0x0

The Linux group_fwd_mask was introduced to make the bridge forward link-local group addresses; its purpose is to let users opt into non-standard bridging behavior.

Notice the “0x0” value. In order to pass LLDP frames through Linux Bridge, we need to change this attribute value. To do so, type the following command:

$ echo 16384 > /sys/class/net/brX/bridge/group_fwd_mask

Where “brX” is the name of the bridge you want to modify.

That’s it: once you change the default sysfs attribute value from “0x0” to 16384 (0x4000 in hex), the bridge will transparently forward LLDP frames. The value works because bit 14 of the mask corresponds to the last byte (0x0E, decimal 14) of the reserved LLDP destination address 01-80-C2-00-00-0E. Just remember this has to be done on a per-bridge basis, so you have to apply the command to every bridge that you want to modify.
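If all the bridges you need to change were created by virt-manager (they show up on the host as virbrX), a small shell loop applies the change to every one of them; this is a sketch, so adjust the glob to match your bridge names:

$ for mask in /sys/class/net/virbr*/bridge/group_fwd_mask; do echo 16384 | sudo tee "$mask"; done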

As an alternative method to set the attribute value, you can edit the group_fwd_mask file directly:

$ sudo nano /sys/class/net/brX/bridge/group_fwd_mask

Just replace the “0x0” with either “16384” or its hex equivalent “0x4000” (you only need to type one of them).

Finally, you can verify the operation of LLDP with “show lldp”, “show lldp neighbors” and “show lldp traffic”.

show lldp commands

The method described above applies to Linux Bridge. If you are using Open vSwitch (OVS) the procedure is quite different and will depend on the OVS flavor that you have. Here is an example for OVS on Pica8 PicOS.

 

Arista Layer-3 Leaf-Spine Fabric with VXLAN HER: Lab Part 2

July 19, 2017
Virtual Environment Setup
by Pablo Narváez

Welcome to the second blog post in my multi-part series describing in detail how I will deploy a L3SL-V Ethernet fabric with Arista vEOS and Ubuntu/KVM. In this post, I’m going to dive into the first component of the deployment: the virtual environment and the VMs.

vm_spreadsheet3
Virtual Machine Spreadsheet – Inventory List

As shown in the inventory list above (spreadsheet), I’m going to use a single server with KVM to create multiple VMs. I’ll go with Ubuntu Desktop for the server OS, but you can choose the Server version (since Ubuntu 12.04, there is no difference in kernel between Ubuntu Desktop and Ubuntu Server).

This guide assumes you have a Linux user graphical interface, so the Desktop version is desired.

To download and install Ubuntu, please follow these links:

NOTE: I chose a Type 2 hypervisor (running on a host OS) over bare metal to have a flexible environment: for this kind of setup I prefer to have a base OS so I can use traffic monitoring tools (like Wireshark) and keep a centralized repository for software images.

In addition to that, I chose KVM over VirtualBox because of the number of NIC cards (vNICs) supported: VirtualBox only supports 8 network adapters per VM. Since I will be using this lab to deploy some other, as-yet-unplanned functionality, I didn’t want to end up hitting that limitation in case I need to add additional vNICs.

If you want to give VirtualBox a try, you can follow these links:

KVM INSTALLATION

The procedure described below is a summary of the official guide posted here.

Pre-Installation Checklist

To run KVM, you need a processor that supports hardware virtualization. To see if your processor supports it, you need to install cpu-checker:

$ sudo apt-get install cpu-checker

Now, you can review the output from this command:

$ kvm-ok

which may provide an output like this:

INFO: /dev/kvm exists
KVM acceleration can be used

If this is your case, you are good to go.

If you see :

INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used

You can still run virtual machines, but it’ll be much slower without the KVM extensions.

NOTE: Running a 64 bit kernel on the host operating system is recommended but not required. On a 32-bit kernel install, you’ll be limited to 2GB RAM at maximum for a given VM.

Installation of KVM

You need to install a few packages first:

$ sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-viewer virt-manager
  • qemu-kvm (kvm in Karmic and earlier) is the backend
  • libvirt-bin provides libvirtd, which you need to administer qemu and kvm instances using libvirt
  • ubuntu-vm-builder is a powerful command-line tool for building virtual machines
  • bridge-utils provides a bridge from your network to the virtual machines. This package is optional, but highly recommended in case you have multiple network adapters on the host and want to map some VMs to an external network. Another option is Open vSwitch as a replacement for the Linux Bridge.
  • virt-viewer is a tool for viewing instances. This package is optional, but strongly recommended to display a graphical console for VMs
  • virt-manager is a GUI tool to manage virtual machines. This module is optional, but strongly recommended to simplify VM life-cycle management. If not installed, you will have to manage VMs with the virsh command line

After the installation, you need to log in again so that your user becomes an effective member of the kvm and libvirtd user groups. Only members of these groups can run virtual machines.

Verify the Installation

After you relogin, test if your installation has been completed successfully with the following command:

$ virsh list --all
 Id Name                 State
----------------------------------
$

If you get something like this:

$ virsh list --all
libvir: Remote error : Permission denied
error: failed to connect to the hypervisor
$

Something is wrong (e.g. you did not relogin) and you probably want to fix this before you move on.

To troubleshoot any issues during or after the installation, please check the official KVM installation guide posted here.

VM VIRTUAL NETWORKING

This is what we are going to build.

vm_network_diagram2
Virtual Machine Network Diagram

The drawing shows where each network adapter (vNIC) is, what network it’s configured for, and how the VMs are interconnected. Every connection between two adapters represents an isolated segment which must be configured as a virtual network in KVM.

To ensure that every link is isolated, we need to configure each virtual network with an exclusive name (I configured them as “net-x”), disable IP routing, and use every virtual network only once for a unique link.

The links between each VM will act like physical cables, but the virtual network connecting the management interfaces of the Ubuntu Linux servers and the Arista switches are on a common shared network (“net-oob”). The host will also have an adapter connected to this network so we can ssh into each device through the Out-of-Band management interface (OOB).

vm_oob-management_diagram
Virtual Machine Out-of-Band Management Network Diagram

The first network adapter will always end up being the Management1 interface in each switch. To simplify things, I dedicated the first network adapter (vNIC1) for management on each VM.

CREATING VMs

We will have two different types of VMs: Ubuntu servers and Arista switches. For Linux, we are going to install the same software image that we used for the host OS. For the Arista switches, there are two files that are needed: the virtual hard drive (vmdk) and the Aboot ISO file.

You need to register at arista.com to download the software. Once you login, go to Support > Software Download to retrieve the following files:

vEOS-lab-4.17.5M.vmdk
Aboot-veos-8.0.0.iso

NOTE: There are several folders and more than one image format, make sure to download the correct files from the vEOS-lab folder.

To build the VMs faster, we are going to create two base VMs (golden images, one for the servers and one for the switches), then clone them multiple times.

Creating the Base VM for Servers

The easiest way to create a virtual machine in KVM is to use the Virtual Machine Manager application. You can find it in your applications dashboard.

virt-manager_dashboard

Or you can use the command-line:

$ virt-manager

virt-manager_cli

The first thing to do is create the virtual networks for the network adapters. Look at the network drawing and the spreadsheet at the beginning of this section.

In the virt-manager main window, click the edit button on the toolbar, then click on Connection Details.

virt-manager_menu

Go into the Virtual Networks tab, click the add button (“+” icon, lower left corner).

Give the network a name. This is going to be our first virtual network, so we will start with “net-1”.

virt-manager_network

We will simulate physical network connections so we don’t need to assign IP addresses for now.

NOTE: When creating the out-of-band management network (“net-oob“) you might want to enable the IP address definition so KVM adds a virtual network adapter on the host to communicate directly with the VMs (for ssh/admin purposes).

Uncheck the Enable IPv4 network address space definition option, do the same for IPv6 in the next step.

virt-manager_ipv4

We need isolated segments to interconnect the VMs, so choose the isolated virtual network option and then click the finish button to continue.

virt-manager_isolated-net

Repeat the same steps to create the rest of the networks. Don’t forget to add the management network (“net-oob”).

NOTE: Enable the IP address space when creating “net-oob” for management. In this case, I will be using 10.0.0.0/24 for the network and the host will receive the IP address 10.0.0.1/24.

When this is done, there should be a total of 18 networks (net-1 through net-17, plus net-oob).

virt-manager_virt-nets
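If you prefer the command line over the GUI, an isolated virtual network can also be created with virsh. This is a minimal sketch; the file and network names are illustrative, and leaving out the <forward> and <ip> elements is what makes the network isolated (libvirt will pick a bridge name automatically):

$ cat > net-1.xml <<'EOF'
<network>
  <name>net-1</name>
</network>
EOF
$ virsh net-define net-1.xml
$ virsh net-start net-1
$ virsh net-autostart net-1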

Now we need to create the actual VMs. Go back into the virt-manager main screen and click the Create New Virtual Machine icon on the toolbar to start the installation.

First, set the virtual machine’s name (“server01”) and select the installation method, select Local install media (ISO image or CDROM).

Next, we need to find and select the Linux image (Ubuntu 16.04.2 ISO file). Make sure to check the Automatically detect the operating system option.

virt-manager_media

You should now choose how much memory to allocate for the VM. Allocate 2040MB of memory and 1 CPU.

Remember: To allocate more than 2GB of memory to a virtual machine, you need to have a 64-bit CPU and a 64-bit Linux kernel.

Check the Enable storage for this virtual machine option and allocate the disk space for the vm. In my case, I will leave the default space (20 GB).

By default, KVM configures NAT for the network adapters. We need to configure the network adapters on each VM as shown in the spreadsheet.

To do so, before clicking on the Finish button, make sure to check the Customize configuration before install option to edit the VM settings.

NOTE: You can always customize the VM configuration after the installation.

virt-manager_customize

Now we need to configure all of the internal networks within the VMs; I’ll show some examples.

From the left-hand side menu, click on the only NIC adapter available and open the Network source drop-down menu. You will see all the virtual networks we created in the previous steps.

virt-manager_net-menu

From the drop-down menu, choose Virtual Network “net-oob” to assign the management network to the adapter, and configure “e1000” for Device model.

Remember: The first NIC on all VMs will always be the management interface.

virt-manager_nic1

Then, click on the Add Hardware button (lower left corner) and add two NIC adapters for “net-1” and “net-2” respectively. Don’t forget to choose “e1000” for the Device model option.

virt-manager_nic2

You can now click on the Begin Installation button in the upper-left corner to start the OS installation.

virt-manager_os-inst

Virt-manager will boot the guest operating system; you may now proceed to install the OS.
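For reference, the same base VM can be created non-interactively with virt-install. Treat this as a sketch only; the ISO path is an assumption, and the disk lands in the default libvirt storage pool:

$ virt-install --name server01 \
    --memory 2040 --vcpus 1 \
    --cdrom /var/lib/libvirt/images/ubuntu-16.04.2-server-amd64.iso \
    --disk size=20 \
    --network network=net-oob,model=e1000 \
    --network network=net-1,model=e1000 \
    --network network=net-2,model=e1000 \
    --os-variant ubuntu16.04 \
    --graphics vnc

The three --network options attach the NICs to net-oob, net-1 and net-2 with the e1000 device model, matching the GUI steps above.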

Cloning the Base VM (Ubuntu Servers)  

Now we need to clone the base VM to build the rest of the servers. The main virt-manager window will show server01; right-click on it and click Clone.

NOTE: You need to power off the VM to clone it.

virt-manager_clone

In the clone window, change the server name (“server02”, in this case), leave the default settings for Networking and make sure to choose the Clone this disk option for the disk storage.

virt-manager_clone-conf

Click on the Clone button to finish. Repeat the same steps to create the rest of the servers.
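The clone can also be created from the CLI with virt-clone; again, the source VM must be powered off. A minimal sketch:

$ virt-clone --original server01 --name server02 --auto-clone

The --auto-clone option generates a new disk image for the clone, so the original server01 disk is left untouched.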

Finally, we need to configure the network settings for each adapter on every server (remember, every cloned VM will have server01’s settings, so we need to change that).

I will show you one example: In the virt-manager main window, right click on server02 and click Open.

virt-manager_clone-menu

Within the configuration menu, click on the second NIC adapter and choose “net-3” from the Network source drop-down menu; then, assign “net-4” to the third NIC.

virt-manager_clone-nics

Configure the remaining network adapters on each VM as shown in the spreadsheet.
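If you have a lot of adapters to reassign, virt-xml (installed alongside virt-manager) can do it without opening each VM. A sketch, assuming the VM is powered off and that the second and third interfaces are the ones to move:

$ virt-xml server02 --edit 2 --network network=net-3
$ virt-xml server02 --edit 3 --network network=net-4

Here --edit 2 selects the VM’s second network interface and changes its source network.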

Creating the Fabric Switches

Not quite done yet! We need to build the VMs to run Arista vEOS. The process is quite different and we will have to tweak some settings to make vEOS boot, so stick with me.

In the virt-manager main window, click on the Create a new virtual machine and select the last option: Import existing disk image.

Browse and locate the vEOS-lab-4.17.5M.vmdk file on your server, and leave the default settings for OS Type and Version.

virt-manager_veos-os

Next, allocate 2048MB of memory and 1 CPU.

NOTE: With the latest vEOS release it is now required to allocate at least 2GB of memory.

Before clicking on the Finish button, name the VM (“spine01”) and make sure to check the Customize configuration before install option.

It’s time to tweak some settings before installing the OS. First, take a look at the screen below; this is what you should have by now.

virt-manager_veos-conf

While in this window, modify the following:

  1. Remove the IDE Disk 1 – I know, I know, it’s the disk we just created a few steps back with the vmdk file, but it’s critical to rebuild the disks from scratch
  2. Remove the sound controller (Sound: ich6 in my case)
  3. Change the video settings from QXL to VGA
  4. Change the NIC configuration – choose the “net-oob” virtual network for management and configure “e1000” for the Device model option.
  5. Add three additional NICs for “net-11”, “net-13” and “net-15” respectively, and choose “e1000” for Device model.
  6. Add two disk storage devices: one IDE disk with the vmdk file and one IDE CD device with the Aboot.iso file (see below).

virt-manager_veos-disks

Arista vEOS is very particular about how its storage is configured. Both drives need to be IDE, and the Aboot.iso (boot loader) needs to be attached as a CD. If a SCSI controller gets created, it must be deleted or vEOS will not load.

virt-manager_veos-cdrom
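If you want to double-check the result, the two storage devices should end up looking roughly like this in the VM definition (virsh dumpxml spine01). The /var/lib/libvirt/images paths are only an assumption; use whatever location you copied the files to:

<disk type='file' device='disk'>
  <driver name='qemu' type='vmdk'/>
  <source file='/var/lib/libvirt/images/vEOS-lab-4.17.5M.vmdk'/>
  <target dev='hda' bus='ide'/>
</disk>
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/Aboot-veos-8.0.0.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>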

Next, we need to make the VM boot from the CD to load the Aboot.iso file. Change the Boot Options to boot from the IDE CDROM 1.

virt-manager_veos-bootseq

Click Apply to close the window. Go ahead and click on the Begin Installation button; you will see the boot loader run.

virt-manager_veos-bootload
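As with the servers, the whole vEOS VM can alternatively be defined in a single virt-install command. Again, this is a sketch under the assumption that the image files live in /var/lib/libvirt/images and that the network names match the ones created earlier:

$ virt-install --name spine01 \
    --memory 2048 --vcpus 1 \
    --import \
    --disk path=/var/lib/libvirt/images/vEOS-lab-4.17.5M.vmdk,format=vmdk,bus=ide \
    --disk path=/var/lib/libvirt/images/Aboot-veos-8.0.0.iso,device=cdrom,bus=ide \
    --boot cdrom,hd \
    --network network=net-oob,model=e1000 \
    --network network=net-11,model=e1000 \
    --network network=net-13,model=e1000 \
    --network network=net-15,model=e1000 \
    --video vga \
    --graphics vnc \
    --noautoconsole

The --boot cdrom,hd option covers the boot-order change described above, and bus=ide on both --disk options keeps vEOS happy about its storage.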

If you have ever installed vEOS on other hypervisors, you will notice that it takes much longer to boot in KVM with the same resources allocated. In addition, be aware that you will not see the boot sequence; be patient and wait for the command line to appear!

virt-manager_veos-cli

Done! The base VM is ready. Clone the VM so you can create spine02, leaf01, leaf02 and leaf03.

Don’t forget to customize the network configuration for each VM; configure the network adapters as shown in the spreadsheet.

VERIFY THE INSTALLATION

You should see all the VMs in the virt-manager main window. Start all of them and wait until they are operational.

virt-manager_all-vms
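Starting every VM by hand gets tedious; a small shell loop does the same from the CLI (the names below are just the VMs used in this lab, adjust the list to match yours):

$ for vm in spine01 spine02 leaf01 leaf02 leaf03 server01 server02; do virsh start "$vm"; done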

You can also verify that the installation completed successfully with the following command:

$ virsh list --all

All VMs must be in the running state and you should be able to access the user interface on each VM.

virsh_all-vms

When this is done, the lab should be bootable and all the devices should be interconnected according to the original network diagram.

In the next post, we will configure the L2/L3 protocols for the fabric, stay tuned!

You can always check my github repository to download the configuration files.

Articles in the series:

 

Arista Layer-3 Leaf-Spine Fabric with VXLAN HER: Lab Part 1

July 11, 2017
Introduction
by Pablo Narváez

This is the first post in a series where I’ll go deep on how VXLAN is deployed on Arista switches and how it operates in a Layer-3 Ethernet fabric.

vxlan-fabric-netwokdiagram
Two-tier Layer-3 VXLAN Ethernet Fabric

For this lab, I will create a self-contained virtual environment with Ubuntu Linux/KVM and Arista virtual EOS (vEOS). Please note, the IP Storage, Services and Border Leafs will not be deployed yet; once we are done with VXLAN, I will add new features and functionalities including the extra Leafs.

Hardware and versions to be used in my lab:

• 2x HPE DL-360 ProLiant Gen8 servers
– 2x 64-bit 8-core Intel Xeon processors (E5-2650)
– 128GB RAM
– 4x 300GB SAS drives (sda, RAID 0)
– 2x 1TB SAS drives (sdb, RAID 0)
– 1x iLO dedicated GE network port
– 4x embedded GE network ports
– 1x 10GbE dual-port network module
• Ubuntu 16.04.2 LTS with KVM
• VMware ESXi 6.5
• Arista vEOS 4.17.5M

I will provide detailed instructions for what to do (and configure) on every device. Equipment and operating system versions may change along the way, so I will make appropriate notes wherever needed.

This was just an overview of the virtual lab I will be building. As stated before, I am going to follow this up with a series of articles focusing on the different infrastructure layers; as those articles are released, the links will be updated here:

You can always check my github repository to download the configuration files.