August 17, 2017
Free Range Routing
by Pablo Narváez
Today I will test BGP between the FRR routing stack and Arista EOS. The sample configuration I will show later in this post is a basic integration between the two devices, nothing complex. Basically, I just wanted to expand my virtual environment by adding a DC/routing perimeter while testing FRR.
For this lab, I will be using the same environment I already built in my previous post so I can easily integrate FRR into the existing topology.
FREE RANGE ROUTING OVERVIEW
FRR is a routing software package that provides TCP/IP-based routing services, with support for routing protocols such as RIPv1, RIPv2, RIPng, OSPFv2, OSPFv3, IS-IS, BGP-4, and BGP-4+.
In addition to traditional IPv4 routing protocols, FRR supports IPv6. Since the beginning, the project has been backed by Cumulus Networks, Big Switch Networks, 6WIND, Volta Networks, and LinkedIn, among others.
FRR was forked from the Quagga open-source project. For those who are not familiar with Quagga, it’s an open-source implementation of a full routing stack for Linux; it’s mostly used for WRT custom firmware, some cloud implementations, and even for control-plane functionality on some network operating systems (NOS) like Cumulus Linux.
NOTE: FRR replaces Quagga as the routing suite in Cumulus Linux 3.4.0.
Quagga still exists but has a completely different development process than FRR. You can learn more about Quagga here.
ROUTING STACK VS NETWORK OPERATING SYSTEM
Just to be clear about what FRR is and what it’s not: a network operating system (NOS) spans everything from the Layer-1 hardware all the way up to the control plane. FRR is a full implementation of the routing control plane, so it needs a base operating system to run on top of.
In this regard, FRR is not a NOS that can run directly on bare metal. Instead, it’s a modern implementation of the IPv4/IPv6 routing stack that provides control-plane functionality as Linux daemons.
FRR SYSTEM ARCHITECTURE
FRR is made from a collection of several daemons that work together to build the routing table.
Zebra is responsible for changing the kernel routing table and for redistribution of routes between different routing protocols. In this model, it’s easy to add a new routing protocol daemon to the entire routing system without affecting any other software.
There is no need for the FRR daemons to run on the same machine. You can actually run several instances of the same protocol daemon on the same machine and keep them apart from the rest of the daemons.
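Which protocol daemons actually start alongside zebra is controlled by the packaging scripts; in source and package installs this is typically the /etc/frr/daemons file. The sketch below is illustrative only (entries and defaults vary by FRR version and distro packaging), enabling just zebra and bgpd:

```
# /etc/frr/daemons (sketch - names and defaults vary by version)
zebra=yes
bgpd=yes
ospfd=no
ospf6d=no
isisd=no
ldpd=no
```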
Currently FRR supports GNU/Linux and BSD. Note that FRR may run correctly on other platforms as well, possibly with only partial functionality.
FRR is distributed under the GNU General Public License and is available for download from the official FRR website.
There are three steps for installing the software: configuration, compilation, and installation.
I chose Ubuntu to deploy FRR, but several Linux distros are supported. If you want to install it on Ubuntu, follow these instructions. You can check the FRR webpage for any other Linux/BSD distro; it’s all pretty well documented.
When configuring FRR, there are several options to customize the build to include or exclude specific features and dependencies. You can check all the options here.
Once installed, check the FRR daemons to make sure they are running:
If you installed FRR from source (link above), the FRR daemon (and all the routing daemons you specified during installation) will run as a system service after the Linux kernel boots. As you can see in the screen capture above, the routing processes (bgpd, ospfd, ldpd, etc.) run as part of the main FRR service.
As with any other Linux system service, you can manage the frr service with systemctl.
$ systemctl start|stop|restart frr
Each daemon has its own configuration file and terminal interface, which can be cumbersome. To solve this, FRR provides an integrated user interface shell called vtysh.
vtysh connects to each daemon via a UNIX domain socket and works as a proxy for user input, so there’s no need to connect to each daemon separately.
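Besides the interactive shell, vtysh can also run a single command non-interactively with its -c flag, which is handy for scripting and quick checks (the prompt and hostname below are from this lab):

```
superadmin@frr01:~$ vtysh -c "show ip bgp summary"
```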
To access vtysh from the host OS, just type in the following command:
superadmin@frr01:~$ vtysh

Hello, this is FRRouting (version 3.1-dev-MyOwnFRRVersion-g7e4f56d).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
This is a git build of frr-3.1-dev-320-g7e4f56d
Associated branch(es):
        local:master
        github/frrouting/frr.git/master

frr01#
INTERESTING FACTS ABOUT FRR
- If you install FRR from source and follow the instructions provided above, there’s no need to modify or even touch any of the daemon configuration files (.conf) located in /etc/frr. When you log into FRR with vtysh, a single configuration file is created for all the daemons and stored on the host as frr.conf.
- Don’t expect to see Ethernet/WAN interfaces; FRR will show you the actual host network adapters: ens3, ens4, ens5, ens10 (adapter names may differ depending on the Linux distro and your setup).
frr01# conf ter
frr01(config)# interface ?
  IFNAME  Interface's name
  ens3
  ens4
  ens5
  ens10
  lo
frr01(config)# interface ens3
frr01(config-if)#
- As you may have noticed by now, if you know Cisco IOS or Arista EOS you are good to go! The FRR CLI is basically the same. You can check the list of CLI commands here.
BGP INTEROPERABILITY TESTING
As shown in the diagram below, I will use my existing network setup to connect the FRR routers to the Arista Spine switches.
Each network device has a loopback interface that is announced into BGP (10.0.1.21-23, 10.0.1.31-32). Once the configuration is complete, we should be able to ping all these interfaces from the FRR routers.
The interfaces between the FRR routers and the Arista switches are configured as point-to-point Layer-3 links.
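As a quick sanity check of the point-to-point addressing plan, Python’s standard ipaddress module can confirm that each /30 leaves exactly two usable host addresses and that both ends of a link fall inside the same subnet. The addresses below are taken from the lab configs in this post:

```python
import ipaddress

# frr01 <-> spine01 link: a /30 has exactly two usable host addresses
link = ipaddress.ip_network("172.16.0.24/30")
hosts = list(link.hosts())
print([str(h) for h in hosts])  # ['172.16.0.25', '172.16.0.26']

# Both ends of the point-to-point link must share the same /30
frr01_end = ipaddress.ip_interface("172.16.0.25/30")
spine01_end = ipaddress.ip_interface("172.16.0.26/30")
assert frr01_end.network == spine01_end.network
```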
frr01# show running-config
Building configuration...

Current configuration:
!
frr version 3.1-dev
frr defaults traditional
hostname frr01
username root nopassword
!
service integrated-vtysh-config
!
log syslog informational
!
interface ens4
 description link_to_spine01-eth4
 ip address 172.16.0.25/30
!
interface ens5
 description link_to_spine02-eth4
 ip address 172.16.0.29/30
!
interface ens10
 description link_to_frr02-ens10
 ip address 172.16.254.5/30
!
interface lo
 description router-id
 ip address 10.0.1.1/32
!
frr02# show running-config
Building configuration...

Current configuration:
!
frr version 3.1-dev
frr defaults traditional
hostname frr02
username root nopassword
!
service integrated-vtysh-config
!
log syslog informational
!
interface ens4
 description link_to_spine01-eth5
 ip address 172.16.0.33/30
!
interface ens5
 description link_to_spine02-eth5
 ip address 172.16.0.37/30
!
interface ens10
 description link_to_frr02-ens10
 ip address 172.16.254.6/30
!
interface lo
 description router-id
 ip address 10.0.1.2/32
!
I will configure ASN 65000 for the FRR routers; frr01 and frr02 will have iBGP peer sessions with each other and eBGP peer sessions with the Arista switches.
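The session types follow directly from the ASN plan: a session is iBGP when both ends share an ASN and eBGP otherwise. A minimal sketch, using frr01’s peers and the ASNs from this lab:

```python
def session_type(local_asn: int, remote_asn: int) -> str:
    """Classify a BGP session by comparing the two ASNs."""
    return "iBGP" if local_asn == remote_asn else "eBGP"

# frr01 sits in AS 65000; its peers and their ASNs from this lab
peers = {
    "172.16.254.6": 65000,  # frr02 - same ASN, so iBGP
    "172.16.0.26": 65020,   # spine01 - different ASN, so eBGP
    "172.16.0.30": 65020,   # spine02 - different ASN, so eBGP
}
for addr, asn in peers.items():
    print(addr, session_type(65000, asn))
```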
frr01#
router bgp 65000
 bgp router-id 10.0.1.1
 distance bgp 20 200 200
 neighbor ebgp-to-spine-peers peer-group
 neighbor ebgp-to-spine-peers remote-as 65020
 neighbor 172.16.0.26 peer-group ebgp-to-spine-peers
 neighbor 172.16.0.30 peer-group ebgp-to-spine-peers
 neighbor 172.16.254.6 remote-as 65000
 !
 address-family ipv4 unicast
  network 10.0.1.1/32
 exit-address-family
 vnc defaults
  response-lifetime 3600
 exit-vnc
frr02#
router bgp 65000
 bgp router-id 10.0.1.2
 distance bgp 20 200 200
 neighbor ebgp-to-spine-peers peer-group
 neighbor ebgp-to-spine-peers remote-as 65020
 neighbor 172.16.0.34 peer-group ebgp-to-spine-peers
 neighbor 172.16.0.38 peer-group ebgp-to-spine-peers
 neighbor 172.16.254.5 remote-as 65000
 !
 address-family ipv4 unicast
  network 10.0.1.2/32
 exit-address-family
 vnc defaults
  response-lifetime 3600
 exit-vnc
Since BGP was already configured on the Arista switches as part of my previous labs, I just added the eBGP sessions towards FRR.
spine01#
router bgp 65020
   router-id 10.0.1.11
   distance bgp 20 200 200
   maximum-paths 2 ecmp 64
   neighbor ebgp-to-frr-peers peer-group
   neighbor ebgp-to-frr-peers remote-as 65000
   neighbor ebgp-to-frr-peers maximum-routes 12000
   neighbor 172.16.0.2 remote-as 65021
   neighbor 172.16.0.2 maximum-routes 12000
   neighbor 172.16.0.6 remote-as 65021
   neighbor 172.16.0.6 maximum-routes 12000
   neighbor 172.16.0.10 remote-as 65022
   neighbor 172.16.0.10 maximum-routes 12000
   neighbor 172.16.0.25 peer-group ebgp-to-frr-peers
   neighbor 172.16.0.33 peer-group ebgp-to-frr-peers
   network 10.0.1.11/32
   redistribute connected
spine02#
router bgp 65020
   router-id 10.0.1.12
   distance bgp 20 200 200
   maximum-paths 2 ecmp 64
   neighbor ebgp-to-frr-peers peer-group
   neighbor ebgp-to-frr-peers remote-as 65000
   neighbor ebgp-to-frr-peers maximum-routes 12000
   neighbor 172.16.0.14 remote-as 65021
   neighbor 172.16.0.14 maximum-routes 12000
   neighbor 172.16.0.18 remote-as 65021
   neighbor 172.16.0.18 maximum-routes 12000
   neighbor 172.16.0.22 remote-as 65022
   neighbor 172.16.0.22 maximum-routes 12000
   neighbor 172.16.0.29 peer-group ebgp-to-frr-peers
   neighbor 172.16.0.37 peer-group ebgp-to-frr-peers
   network 10.0.1.12/32
   redistribute connected
NOTE: The “redistribute connected” command redistributes all directly connected interfaces into BGP for connectivity-testing purposes. In production, link addresses are typically not advertised, because:
- Link addresses take up valuable FIB resources. In a large Clos (leaf-spine) environment, the number of such addresses can be quite large.
- Link addresses expose an additional attack vector that intruders can use to break in or to mount DDoS attacks.
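Some back-of-the-envelope arithmetic for the first point above: in a leaf-spine fabric where every leaf connects to every spine, each point-to-point link consumes one prefix (a /30 here). The fabric sizes below are hypothetical, just to show how quickly this grows:

```python
def p2p_link_prefixes(leaves: int, spines: int) -> int:
    """One link prefix per leaf-to-spine connection in a full mesh."""
    return leaves * spines

# A small lab-sized fabric vs. a modest production pod (made-up sizes)
print(p2p_link_prefixes(4, 2))   # 8 link prefixes
print(p2p_link_prefixes(64, 8))  # 512 link prefixes in the FIB
```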
We can verify interoperability between FRR and Arista by checking the BGP neighbor adjacencies. The output of “show ip bgp summary” shows the BGP state as Established, which indicates that the peer relationships have been set up successfully.
Finally, we check the routing table to make sure we can reach all the loopback interfaces from the FRR routers.
You can always check my github repository to download the configuration files.