
Arista Extensible API (EAPI) – Network Automation with JSON-RPC and Python Scripting

October 16, 2017
Arista EOS EAPI – Application Programmable Interface
by Pablo Narváez

NETWORK AUTOMATION

Network automation is usually associated with doing things more quickly, which is true, but speed is not the only reason to adopt it.

Network administrators usually touch the CLI to make changes on the network. Things get messy when there is more than one administrator in a multi-vendor environment: the chances of human error increase when different admins make changes on the network using different CLIs and tools at the same time.

Replacing manual changes with standardized configuration management tools for network automation helps achieve more predictable behavior and minimizes “the human error factor”.

Network automation is the use of IT controls to supervise and carry out everyday network management functions. These functions can range from basic network mapping and device discovery to network configuration management and the provisioning of virtual network resources.

Network automation is a powerful and flexible enabler to:

●   Efficiently automate repetitive manual operational tasks
●   Answer open questions and tackle tasks that are not feasible to perform manually
●   Enable tailored solutions and architectures beyond standard features

Through a step-by-step approach, and thanks to the many open source examples available, network automation is easy to adopt in your network today.

Just keep in mind:

“With network automation, the point is to start small, but think through what else you may need in the future.” – Network Programmability and Automation by Jason Edelman, Scott S. Lowe, and Matt Oswalt

ARISTA EAPI

Introduction

Arista EOS offers multiple programmable interfaces for applications. These interfaces can be leveraged by applications running on the switch, or external to EOS.

The Extensible API (eAPI) allows applications and scripts to have complete programmatic control over EOS, with a stable and easy to use syntax. It also provides access to all switch state.

Once the API is enabled, the switch accepts commands using Arista’s CLI syntax, and responds with machine-readable output and errors serialized in JSON, served over HTTP.

Configuring the Extensible API Interface

One of the benefits of working with Arista EOS eAPI is the ability to script with JSON-RPC: a network administrator can get machine-friendly data from the switch using familiar CLI syntax.

In this post, I will show you the use of eAPI with a simple example using Python.

First, we need to activate the eAPI in each switch. To enable it, we need to bring up the API virtual interface.

leaf01#conf ter
leaf01(config)#management api http-commands 
leaf01(config-mgmt-api-http-cmds)#no shutdown 
leaf01(config-mgmt-api-http-cmds)#

eAPI requires a username and password to be configured. This is a regular username set up in global configuration:

leaf01#conf ter
leaf01(config)#username admineapi secret arista

Default configuration for eAPI uses HTTPS on port 443. Both the port and the protocol can be changed.

leaf01#conf ter
leaf01(config)#management api http-commands
leaf01(config-mgmt-api-http-cmds)#protocol ?
   http         Configure HTTP server options
   https        Configure HTTPS server options
   unix-socket  Configure Unix Domain Socket

leaf01(config-mgmt-api-http-cmds)#protocol http ?
   localhost  Server bound on localhost
   port       Specify the TCP port to serve on
   <cr>      

leaf01(config-mgmt-api-http-cmds)#protocol http port ?

  <1-65535>  TCP port

leaf01(config-mgmt-api-http-cmds)#protocol http port 8080
leaf01(config-mgmt-api-http-cmds)#

NOTE: When configuring a non-default HTTP/HTTPS port under “protocol”, that port needs to be manually added to an updated version of the switch’s control-plane access list to permit remote access.

To verify that the eAPI is running use the following command:

leaf01#show management api http-commands
 Enabled:            Yes
 HTTPS server:       running, set to use port 443
 HTTP server:        shutdown, set to use port 80
 Local HTTP server:  shutdown, no authentication, set to use port 8080
 Unix Socket server: shutdown, no authentication
 VRF:                default
 Hits:               0
 Last hit:           never
 Bytes in:           0
 Bytes out:          0
 Requests:           0
 Commands:           0
 Duration:           0.000 seconds
 SSL Profile:        none
 QoS DSCP:           0

 URLs        
------------------------------------- 
Ethernet4   : https://172.16.0.2:443 
Ethernet5   : https://172.16.0.14:443  
Loopback0   : https://10.0.1.21:443    
Loopback1   : https://10.0.2.1:443     
Vlan11      : https://192.168.11.2:443 
Vlan4094    : https://172.16.254.1:443

In the output shown above, notice the URLs; we are going to need them to access the switch eAPI through HTTP/HTTPS.

USING ARISTA EAPI

There are two methods of using the eAPI:

  • Web access
  • Programming

eAPI Web Access

The eAPI uses the lightweight, standardized protocol JSON-RPC 2.0 to communicate between your program (the client) and the switch (the server).

To explore the API, point your web browser to https://myswitch after enabling the API interface on the switch.

NOTE: “myswitch” refers to the IP address of the switch you want to configure. To select the appropriate IP address, choose one of the URLs displayed in the command output shown above.

This web-app lets you interactively explore the protocol, return values and model documentation.

[Figure: eAPI web interface]

It works like this: the client sends a JSON-RPC request via an HTTP POST to https://myswitch/command-api. The request encapsulates a list of CLI commands to run, and the switch replies with a JSON-RPC response containing the result of each CLI command that was executed. The commands in the request are run in order on the switch; after the switch has executed all of them, it exits back to unprivileged mode. If any command emits an error, no further commands from that request are executed, and the response from the switch will contain an error object with the details of the error that occurred.
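For illustration, here is a minimal sketch of that exchange using only the Python standard library (the address and credentials match the lab scripts later in this post; the jsonrpclib library used below hides all of this plumbing):

#!/usr/bin/python
# A sketch of a raw eAPI call: POST a JSON-RPC 2.0 request
# to /command-api and parse the JSON-RPC response.

import base64
import json
import urllib2

payload = {
    "jsonrpc": "2.0",
    "method": "runCmds",
    "params": {"version": 1, "cmds": ["show version"], "format": "json"},
    "id": "1",
}

request = urllib2.Request("http://192.168.11.2/command-api",
                          json.dumps(payload))
request.add_header("Content-Type", "application/json")
request.add_header("Authorization",
                   "Basic " + base64.b64encode("admineapi:arista"))

reply = json.loads(urllib2.urlopen(request).read())
print reply["result"]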

To test the eAPI via web browser, let’s try a common command like “show version”:

[Figure: "show version" response in the eAPI web interface]

See the command response in the Response Viewer window.

You can try any other command available in the CLI. Check the full list of supported CLI commands and the corresponding output data definitions in the “Command Documentation” tab in the top right corner.

Easy to use, right? While the web interface is useful for testing eAPI, it is not really designed for day-to-day use. For a more robust, scalable and complete eAPI experience, the programming interface is recommended.

eAPI Programming Interface

When using the programming interface to communicate with the switches, we need to read the JSON-formatted output. To do so, we are going to add JSON-RPC libraries to our environment. For this lab, we have a dedicated Ubuntu Linux server (the client) on which to install the JSON/Python libraries.

NOTE: You don’t need to have an external PC to run the JSON/Python libraries, you can run scripts on the Arista switch itself since all the required JSON libraries are part of the base EOS build.

To enable JSON-RPC for use in Python, we need to download the libraries to the Linux server.

superadmin@server00-eapi:~$ sudo apt-get install python-pip
superadmin@server00-eapi:~$ sudo pip install jsonrpclib

This is all we need to communicate with the eAPI.

Now we need to create and run a Python script to request some information from the switch. To do so, I will use a really simple example that retrieves the output of “show version”.

#!/usr/bin/python

from jsonrpclib import Server

switch = Server("http://admineapi:arista@192.168.11.2/command-api")
response = switch.runCmds(1, ["show version"])

print response

In order to create and run your own Python scripts, the use of an IDE (Integrated Development Environment) is strongly recommended. An IDE is a software suite that consolidates the basic tools developers need to write and test software. Typically, an IDE contains a code editor, a compiler or interpreter (Python uses an interpreter) and a debugger, all accessed through a single graphical user interface (GUI). There are several IDEs available; the following link reviews the most popular ones:

Python Integrated Development Environments

Let’s take a closer look at the script.

The Server() line defines the target switch. Its argument is a URL with the following format:

<protocol>://<username>:<password>@<hostname or ip-address>/command-api

The “/command-api” must always be present when using eAPI.

Note that you cannot abbreviate CLI commands in eAPI calls, and the number 1 in runCmds() is the eAPI version, which must always be 1.

Now let’s run the script.

superadmin@server00-eapi:~/scripting$ python hello.py 
[{u'memTotal': 1893352, u'internalVersion': u'4.17.5M-4414219.4175M', u'serialNumber': u'', u'systemMacAddress': u'52:54:00:97:ea:40', u'bootupTimestamp': 1505842331.32, u'memFree': 583364, u'version': u'4.17.5M', u'modelName': u'vEOS', u'isIntlVersion': False, u'internalBuildId': u'd02143c6-e42b-4fc3-99b6-97063bddb6b8', u'hardwareRevision': u'', u'architecture': u'i386'}]

That may seem like gibberish at first glance, but it’s actually a JSON-formatted set of key-value pairs.

This is the same output, spaced out into a more human-readable format:

[{
 u'memTotal': 1893352, 
 u'internalVersion': u'4.17.5M-4414219.4175M', 
 u'serialNumber': u'', 
 u'systemMacAddress': u'52:54:00:97:ea:40', 
 u'bootupTimestamp': 1505842331.32, 
 u'memFree': 583364, 
 u'version': u'4.17.5M', 
 u'modelName': u'vEOS',
 u'isIntlVersion': False, 
 u'internalBuildId': u'd02143c6-e42b-4fc3-99b6-97063bddb6b8', 
 u'hardwareRevision': u'', 
 u'architecture': u'i386'
 }]

Now that we have the key-value pairs, we can reference them to pull out the desired information… this is where the magic happens.

Basically, we have bulk data, so we need an automated way to retrieve the information.

To do so, change the script to extract just the key-value pair that you need. The format is:

response[0]["key-name"]
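If you are not sure which keys are available, a quick way to explore them is to iterate over the returned dictionary (a short sketch, continuing from the script above):

# Print every key-value pair returned by "show version"
for key, value in response[0].items():
    print key, "=", value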

In the next example, I will request the system MAC Address, the EOS version and the total physical memory; all other information will not be displayed.

superadmin@server00-eapi:~/scripting$ cat hello.py
#!/usr/bin/python

from jsonrpclib import Server

switch = Server("http://admineapi:arista@192.168.11.2/command-api")
response = switch.runCmds(1, ["show version"])

print "The system MAC address is:", response[0]["systemMacAddress"]
print "The system version is:", response[0]["version"]
print "The total physical memory is:", response[0]["memTotal"]

This is the result of running the script:

superadmin@server00-eapi:~/scripting$ python hello.py 
The system MAC address is: 52:54:00:97:ea:40
The system version is: 4.17.5M
The total physical memory is: 1893352

Just imagine how you could use this tool compared to closed, vendor-specific monitoring apps: the eAPI provides the information you want, the way you want it, when you want it. You can even create reports and verify compliance with some more advanced scripting. This is the flexibility that a programmable operating system provides.
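As a small taste of that, here is a sketch of a version-compliance check across several switches (the second IP address is hypothetical, added only to illustrate the loop):

#!/usr/bin/python
# Sketch: flag any switch not running the expected EOS release.

from jsonrpclib import Server

EXPECTED_VERSION = "4.17.5M"
SWITCHES = ["192.168.11.2", "192.168.11.3"]  # second address is hypothetical

for ip in SWITCHES:
    switch = Server("http://admineapi:arista@%s/command-api" % ip)
    version = switch.runCmds(1, ["show version"])[0]["version"]
    if version == EXPECTED_VERSION:
        print ip, "is compliant (%s)" % version
    else:
        print ip, "is NON-COMPLIANT (%s)" % version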

Complex functions require a more sophisticated script. One such example is device provisioning. For deployment automation, you can send multiple commands at once to configure the switch; please see the example below.

#!/usr/bin/python

from jsonrpclib import Server

switch = Server("http://admineapi:arista@192.168.11.2/command-api")

for x in range(10, 19):
    response = switch.runCmds(1, [
        "enable",
        "configure",
        "interface ethernet" + str(x),
        "description [GAD Eth-" + str(x) + "]"],
        "json")
print "Done."

Some commands may require additional input (for example, the enable password). This can be accomplished by replacing the command string with a JSON object in curly braces containing the “cmd” and “input” keywords, using the following format:

#!/usr/bin/python

from jsonrpclib import Server

switch = Server("http://admineapi:arista@192.168.11.2/command-api")

response = switch.runCmds(1, [
    {"cmd": "enable", "input": "arista"},
    "configure",
    "interface ethernet2",
    "description I can code!"],
    "json")
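As noted earlier, if any command in the list fails, no further commands are executed and the switch returns an error object. With jsonrpclib, that error surfaces as a ProtocolError exception, so scripts should be prepared to catch it. A minimal sketch:

#!/usr/bin/python
# Sketch: catching a failed command. "show bogus" is an intentionally
# invalid command used here to trigger the error path.

import jsonrpclib
from jsonrpclib import Server

switch = Server("http://admineapi:arista@192.168.11.2/command-api")

try:
    response = switch.runCmds(1, ["show bogus"])
except jsonrpclib.ProtocolError as e:
    print "eAPI command failed:", e
else:
    print response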

The Arista eAPI (and the API of any other programmable NOS for that matter) is a tremendously powerful tool that puts the very concept of Software Defined Networking within easy reach. The ability to issue CLI commands remotely through scripts is one of the major benefits of network automation and programmable infrastructure.

You can always check my github repository to download the configuration files.

 


Arista Layer-3 Leaf-Spine Fabric with VXLAN HER: Lab Part 4

August 2, 2017
Configuring and Testing VXLAN
by Pablo Narváez

This is the last article in the series; we will finish this lab with the VXLAN configuration and by testing connectivity between servers.

VIRTUAL EXTENSIBLE LAN (VXLAN)

The VXLAN protocol is defined in RFC 7348. The standard defines a MAC-in-IP encapsulation protocol that allows the construction of Layer-2 domains across a Layer-3 IP infrastructure. The protocol is typically deployed as a data center technology to create overlay networks across a transparent Layer-3 infrastructure:

  • Providing Layer-2 connectivity between racks or PODs without requiring an underlying Layer-2 infrastructure
  • Logically connecting geographically dispersed data centers at Layer-2 as a Data Center Interconnect (DCI) technology
  • Supporting up to 16 million virtual overlay tunnels over a physical Layer-2/Layer-3 underlay network for Layer-2 network connectivity and multi-tenancy
[Figure: Two-tier Layer-3 VXLAN Ethernet Fabric]

VXLAN encapsulation/decapsulation is performed by a VXLAN Tunnel End Point (VTEP), which can be either:

  • A VXLAN-enabled hypervisor such as ESXi, KVM, or XEN (software VTEP)
  • A network switch (hardware VTEP)
[Figure: Software & Hardware VTEPs]

To create VXLAN overlay virtual networks, IP connectivity is required between VTEPs.

VXLAN CONTROL PLANE

When VXLAN was released, the IETF defined the VXLAN standard (RFC 7348) with a multicast-based flood-and-learn mechanism acting as a rudimentary, yet operationally complex, control plane. It was evident the RFC was incomplete (to say the least), as multicast-based VXLAN flooding in the underlay presented several challenges in the data center, including scalability and complexity.

To overcome these limitations, networking vendors started to introduce control plane technologies to replace the multicast-based flooding.

Depending on the vendor, you can have more than one option to deploy a VXLAN control plane solution. To simplify things, it’s a good idea to categorize these technologies:

  1. Network-centric vs Hypervisor-based
  2. Head End Replication vs Dynamic tunnel

Network-centric vs Hypervisor-based solutions

The VXLAN control plane process implies the creation of VXLAN tables that contain the VNI/VLAN mapping information, the remote MAC addresses available per VTEP, and the VTEP/hypervisor IP addresses needed to establish the VXLAN tunnels.

Some networking vendors have SDN-based control plane solutions that leave the control plane process to an external software layer called a Controller. The SDN Controller is responsible for replicating, synchronizing and maintaining the VXLAN tables on the hypervisors, among other tasks. In order for the hypervisors to speak with the Controller, VXLAN agents are installed either as part of the host kernel or as a VM inside the hypervisor on each compute node; the agents (called VTEPs) receive the VXLAN information from the Controller so they can encapsulate/decapsulate traffic based on the instructions contained in the tables.

The use of an SDN Controller as a VXLAN control plane solution is just one option. An alternative is to deploy the VXLAN control plane directly on the Ethernet fabric. This network-centric solution requires the Ethernet fabric to be VXLAN-capable, meaning the data center switches have to support VXLAN. In the hypervisor-based solution, the underlay is not aware of the overlay network, so the switches do not need to support VXLAN.

NOTE: Since the VXLAN data/control planes are not standardized among vendors, you should expect to find some incompatibility in a multi-vendor network.

Head End Replication vs Dynamic Tunnels Setup

If you want to deploy the VXLAN control plane on the underlay, you need to decide how to set up the VXLAN tunnels.

VXLAN tunnels can be set up manually (Head End Replication) or dynamically (MP-BGP EVPN). Head End Replication (HER) is the static mapping of VTEPs for the handling of broadcast, unknown-unicast, and multicast (BUM) packets. It requires configuring each switch with the VNI/VLAN mappings and the list of VTEPs with which to share MAC addresses and forward BUM traffic. This option works well for small and medium-sized networks; however, scalability and human error are the primary concerns for large networks.

To automate and simplify VXLAN tunnel setup, Multiprotocol Border Gateway Protocol Ethernet VPN (MP-BGP EVPN) is used as a routing protocol to coordinate the creation of dynamic tunnels. EVPN is an extension to the MP-BGP address families that allows VXLAN/MAC information to be carried in BGP routing updates.

VXLAN ROUTING AND BRIDGING

The deployment of VXLAN bridging provides hosts with Layer-2 connectivity across the Layer-3 Leaf-Spine underlay. To provide Layer-3 connectivity between the hosts, VXLAN routing is required.

VXLAN routing, sometimes referred to as inter-VXLAN routing, provides IP routing between VXLAN VNIs in the overlay network. VXLAN routing involves routing traffic based not on the destination IP address of the outer VXLAN header, but on the inner header, i.e., the overlay tenant IP address.

VXLAN Routing Topologies

The introduction of VXLAN routing into the overlay network can be achieved by a direct or indirect routing model:

  • Direct Routing: The direct routing model provides routing at the first-hop Leaf node for all subnets within the overlay network. This ensures optimal routing of the overlay traffic at the first-hop Leaf switch
  • Indirect Routing: To reduce the amount of state (ARP/MAC entries and routes) each Leaf node holds, the Leaf nodes only route for a subset of the subnets

The Direct Routing model works by creating anycast IP addresses for the host subnets across each of the Leaf nodes, providing a logical distributed router. Each Leaf node acts as the default gateway for all the overlay subnets, allowing the VXLAN routing to always occur at the first-hop.

CONFIGURING VXLAN

For this lab, I’m going to use Direct Routing and Head End Replication (HER) to set up the VXLAN tunnels. In later posts, I will add a couple of SDN Controllers to demonstrate the centralized VXLAN control plane option with VXLAN agents on the compute nodes.

NOTE: As of the writing of this article, EVPN is not supported on vEOS. In fact, Arista just announced EVPN support on the latest EOS release, so it’s still a work in progress.

To provide direct routing, the Leaf nodes of the MLAG domain were configured with an IP interface for every subnet. I already covered this part in my previous post: I configured VARP, with the “ip virtual-router” address representing the default gateway for each subnet.
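For reference, the anycast gateway configuration on the MLAG peers looks roughly like this (a sketch based on the lab’s Vlan11 subnet; the virtual-router MAC address and the .1 gateway address are illustrative assumptions):

hostname leaf01
 !
 ip virtual-router mac-address 00:1c:73:00:00:99
 !
 interface vlan 11
 ip address 192.168.11.2/24
 ip virtual-router address 192.168.11.1
 !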

[Figure: VXLAN Routing with MLAG]

On the other hand, Layer-2 connectivity between racks will be achieved by configuring a VXLAN VTEP on the Leaf switches. For the dual-homed Compute Leaf, a single logical VTEP is required for the MLAG domain. We need to configure the VTEP on both MLAG peers with the same Virtual Tunnel Interface (VTI) IP address; this ensures both peers decapsulate traffic destined to the same IP address.

The logical VTEP in combination with MLAG provides an active-active VXLAN topology.

[Figure: VXLAN Overlay Networks]

The logical VTEP address is configured as a new loopback interface. This IP address will be used as the VXLAN tunnel source interface.

Let’s configure the Loopback1 interface; we need to configure the same IP address on both MLAG peers (leaf01 and leaf02).

hostname leaf01
 !
 interface loopback1
 ip address 10.0.2.1/32
 !
hostname leaf02
 !
 interface loopback1
 ip address 10.0.2.1/32
 !
hostname leaf03
 !
 interface loopback1
 ip address 10.0.2.2/32
 !

Next, we need to assign Loopback1 to the VXLAN tunnel interface (VTI).

hostname leaf01
 !
 interface vxlan1
 vxlan source-interface loopback1
 !
hostname leaf02
 !
 interface vxlan1
 vxlan source-interface loopback1
 !
hostname leaf03
 !
 interface vxlan1
 vxlan source-interface loopback1
 !

To map the hosts’ VLANs to the VNIs, I will use the following mapping:

vlan 11 –> vni 1011
vlan 12 –> vni 1012
vlan 13 –> vni 1013

hostname leaf01
 !
 interface vxlan1
 vxlan source-interface loopback1
 vxlan vlan 11 vni 1011
 vxlan vlan 12 vni 1012
 vxlan vlan 13 vni 1013
 !
hostname leaf02
 !
 interface vxlan1
 vxlan source-interface loopback1
 vxlan vlan 11 vni 1011
 vxlan vlan 12 vni 1012
 vxlan vlan 13 vni 1013
 !
hostname leaf03
 !
 interface vxlan1
 vxlan source-interface loopback1
 vxlan vlan 11 vni 1011
 vxlan vlan 12 vni 1012
 vxlan vlan 13 vni 1013
 !

Now we have to configure the flood list for the VNIs so the VTEPs can send BUM traffic and learn MAC addresses from each other.

hostname leaf01
 !
 interface vxlan1
 vxlan source-interface loopback1
 vxlan vlan 11 vni 1011
 vxlan vlan 12 vni 1012
 vxlan vlan 13 vni 1013
 vxlan vlan 11 flood vtep 10.0.2.2
 vxlan vlan 12 flood vtep 10.0.2.2
 vxlan vlan 13 flood vtep 10.0.2.2
 !
hostname leaf02
 !
 interface vxlan1
 vxlan source-interface loopback1
 vxlan vlan 11 vni 1011
 vxlan vlan 12 vni 1012
 vxlan vlan 13 vni 1013
 vxlan vlan 11 flood vtep 10.0.2.2
 vxlan vlan 12 flood vtep 10.0.2.2
 vxlan vlan 13 flood vtep 10.0.2.2
 !
hostname leaf03
 !
 interface vxlan1
 vxlan source-interface loopback1
 vxlan vlan 11 vni 1011
 vxlan vlan 12 vni 1012
 vxlan vlan 13 vni 1013
 vxlan vlan 11 flood vtep 10.0.2.1
 vxlan vlan 12 flood vtep 10.0.2.1
 vxlan vlan 13 flood vtep 10.0.2.1
 !

Finally, to provide IP connectivity between the VTEPs, the loopback IP address of each VTI needs to be advertised into BGP. Whenever a new VTEP is added to the topology, we just need to announce its logical VTEP IP address into BGP.

hostname leaf01
!
router bgp 65021
network 10.0.2.1/32
!
hostname leaf02
!
router bgp 65021
network 10.0.2.1/32
!
hostname leaf03
!
router bgp 65022
network 10.0.2.2/32
!

With the Leaf switches announcing their respective VTEP into the underlay BGP routing topology, each Leaf switch learns two equal cost paths (via the Spine switches) to the remote VTEP.

[Figure: Leaf01 show ip route]

[Figure: Leaf02 show ip route]

With the direct routing model, the host subnets exist only on the Leaf switches, so there is no need to announce them into BGP; the Spine switches are transparent to the overlay subnets and only learn the VTEP addresses.

Layer-2 and Layer-3 connectivity between the servers is now possible. Below are the resulting MAC and VXLAN address tables for the Leaf switches and the ping results between servers.

[Figure: Leaf MAC address table]

[Figure: Leaf VXLAN address table]

[Figure: ping results from server01]
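Tying this back to the eAPI post above, the same verification can also be pulled remotely with a short script. Here is a sketch using the eAPI “text” format, which wraps the raw CLI output of each command in an “output” key:

#!/usr/bin/python
# Sketch: pull VXLAN verification output from leaf01 over eAPI.

from jsonrpclib import Server

switch = Server("http://admineapi:arista@192.168.11.2/command-api")

commands = ["show vxlan flood vtep", "show vxlan address-table"]
response = switch.runCmds(1, commands, "text")

for cmd, result in zip(commands, response):
    print "### " + cmd
    print result["output"]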

You can always check my github repository to download the configuration files.

Articles in the series: