EVPN/VXLAN LAB
Introduction
This long-winded blog post presents our technical assessment of an IP CLOS solution that does not depend on vendor-proprietary technologies. Our goal is to understand the maturity of the equipment and how this technology may interoperate with our carrier network.
The bleeding-edge topic of today is configuring EVPN with VXLAN encapsulation on Juniper QFX5100 switches.
The topology used in the lab is based on a spine-and-leaf IP CLOS design: two spine switches (lab1 and lab2) and four leaf switches (lab3 through lab6).
One important detail that is specific to our lab is that, since all devices are QFX5100, L3 (inter-VXLAN) routing cannot be performed on the VTEPs themselves: the QFX5100 cannot route in and out of VXLAN tunnels in hardware.
Also, there are two links between qlab3 and qlab4 and between qlab5 and qlab6.
VXLAN is used as an overlay technology providing a Layer-2 network over a routed Layer-3 underlay.
Inter VXLAN routing
Since the leaves cannot act as L3 gateways, inter-VXLAN routing has to be handled outside the fabric.
Overhead
The overhead in this solution sums up to a total of 496 bits (62 bytes):

VXLAN Header: 64 bits
Outer UDP Datagram: 64 bits
Outer IP Header: 160 bits
Outer Ethernet Header: 208 bits
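The arithmetic above can be sanity-checked with a short script, using the per-header sizes from the table (a sketch; the exact payload budget depends on whether the MTU accounting includes the outer Ethernet header):

```python
# VXLAN encapsulation overhead, using the per-header sizes from the table above (in bits).
HEADERS = {
    "VXLAN header": 64,
    "Outer UDP datagram": 64,
    "Outer IP header": 160,
    "Outer Ethernet header": 208,
}

total_bits = sum(HEADERS.values())
total_bytes = total_bits // 8

# With 9216-byte jumbo frames on the fabric links, the inner (payload)
# frame may be at most the fabric MTU minus the encapsulation overhead.
# This is approximate: exact numbers depend on how the platform counts
# the outer Ethernet header against the MTU.
fabric_mtu = 9216
max_inner_frame = fabric_mtu - total_bytes

print(f"Total overhead: {total_bits} bits ({total_bytes} bytes)")
print(f"Max inner frame with MTU {fabric_mtu}: {max_inner_frame} bytes")
```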
Inter Fabric links
All links have maximum MTU: 9216 and are configured as follows:
et-0/0/48 { ... }
et-0/0/49 { ... }
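The stanza bodies were elided in our capture. Under standard Junos practice they would look roughly like the following sketch; the description and the /31 point-to-point addressing are our assumptions, not taken from the lab:

```
et-0/0/48 {
    description "to spine lab1";        /* assumed description */
    mtu 9216;
    unit 0 {
        family inet {
            address 192.168.0.1/31;     /* assumed /31 link addressing */
        }
    }
}
```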
Underlay Network Configuration
We will first build the routed Layer-3 underlay.
user@lab-3.grnet.gr# show routing-options
forwarding-table { ... }
In typical Juniper fashion, per-packet here actually means per-flow.
user@lab-3.grnet.gr# show policy-options policy-statement load-balance-per_flow
then { ... }
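Collapsed above, the per-flow load-balancing configuration typically amounts to the standard Junos idiom sketched below (the stanza bodies are our reconstruction based on the policy name shown, not the lab capture):

```
policy-options {
    policy-statement load-balance-per_flow {
        then {
            load-balance per-packet;    /* per-packet = per-flow on this platform */
        }
    }
}
routing-options {
    forwarding-table {
        export load-balance-per_flow;   /* install all ECMP next hops */
    }
}
```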
There are many alternatives for implementing the underlay network with BGP.
protocols { ... }
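One common option is eBGP per leaf with a distinct AS per device; a minimal sketch is shown below. The AS numbers and group name are assumptions; the neighbor addresses are the underlay peers seen later in the BGP summary:

```
protocols {
    bgp {
        group underlay {                    /* assumed group name */
            type external;
            family inet {
                unicast;
            }
            multipath multiple-as;          /* ECMP across both spines */
            neighbor 192.168.0.0 peer-as 65001;   /* link to spine lab1, assumed AS */
            neighbor 192.168.0.6 peer-as 65002;   /* link to spine lab2, assumed AS */
        }
    }
}
```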
Overlay Network Configuration
The overlay network uses the loopback addresses learned through the underlay to establish the BGP sessions that carry the EVPN NLRI.
EVPN is configured with VXLAN for data-plane encapsulation. In the extended-vni-list we have included all VNIs for lab purposes, so we don't have to explicitly allow each new VNI. Under vni-options the route targets for each VNI are assigned independently. BUM traffic is also handled by EVPN, by configuring ingress replication.
{master:0}[edit]
user@lab-3.grnet.gr# show protocols evpn
vni-options { ... }
encapsulation vxlan;
extended-vni-list all;
multicast-mode ingress-replication;
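The vni-options body was elided above. For the two VNIs used in this lab it would look roughly as follows; the route-target AS number (65000) is our assumption:

```
protocols {
    evpn {
        encapsulation vxlan;
        extended-vni-list all;
        multicast-mode ingress-replication;
        vni-options {
            vni 481 {
                vrf-target export target:65000:481;   /* assumed AS in target */
            }
            vni 600 {
                vrf-target export target:65000:600;   /* assumed AS in target */
            }
        }
    }
}
```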
Next, we configure the switch options: the VTEP source interface, the route distinguisher and the vrf-import/vrf-target policies.
{master:0}[edit]
user@lab-3.grnet.gr# show switch-options
vtep-source-interface lo0.0;
route-distinguisher 10.0.0.3:1;
vrf-import underlay-import;
vrf-target { ... }
{master:0}[edit]
user@lab-3.grnet.gr# show policy-options policy-statement underlay-import
term VXLAN600 { ... }
term VXLAN481 { ... }
term reject { ... }
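The term bodies were collapsed in the capture. A typical import policy of this shape matches one community per VNI and rejects everything else; the community definitions below (and the 65000 AS) are our assumptions:

```
policy-options {
    policy-statement underlay-import {
        term VXLAN600 {
            from community VXLAN600;
            then accept;
        }
        term VXLAN481 {
            from community VXLAN481;
            then accept;
        }
        term reject {
            then reject;            /* drop routes for unknown VNIs */
        }
    }
    community VXLAN481 members target:65000:481;   /* assumed AS */
    community VXLAN600 members target:65000:600;   /* assumed AS */
}
```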
VXLAN configuration
Things are a bit simpler here: each VLAN is simply mapped to its VNI.
{master:0}[edit]
user@lab-3.grnet.gr# show vlans
PROVISIONING { ... }
SERVER_PRIVATE_LAN1 { ... }
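The VLAN bodies were elided above. A sketch of the usual VLAN-to-VNI mapping follows; the vlan-ids are our assumption (we assume they track the VNI numbers seen elsewhere in this post):

```
vlans {
    PROVISIONING {
        vlan-id 600;        /* assumed vlan-id */
        vxlan {
            vni 600;
        }
    }
    SERVER_PRIVATE_LAN1 {
        vlan-id 481;        /* assumed vlan-id */
        vxlan {
            vni 481;
        }
    }
}
```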
After configuring a new VXLAN we will have to assign an extended community to be included in the EVPN advertisements.
{master:0}[edit]
user@lab-3.grnet.gr# show policy-options policy-statement underlay-import
term VXLAN600 { ... }
term VXLAN481 { ... }
term reject { ... }
Interface Configuration
Access Port
This is a typical Juniper configuration:
{master:0}[edit]
user@lab-3.grnet.gr# show interfaces xe-0/0/1
description VMC-04-01-RED;
ether-options { ... }
unit 0 { ... }
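The unit body was collapsed above. A standard ELS access-port configuration would look like this sketch; the VLAN membership is our assumption:

```
interfaces {
    xe-0/0/1 {
        description VMC-04-01-RED;
        unit 0 {
            family ethernet-switching {
                interface-mode access;
                vlan {
                    members SERVER_PRIVATE_LAN1;   /* assumed membership */
                }
            }
        }
    }
}
```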
Trunk port
Trunk port configuration works as expected. We configured a number of VLANs and issued some ICMP traffic from server 05-01 to server 07-01.
In all cases, MAC address learning in the Ethernet switching table was working, and EVPN NLRIs were exchanged between the leaves.
{master:0}[edit]
user@lab-4.grnet.gr# show groups VIMA-LAB
interfaces { ... }
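The group body was elided above. A sketch of such a trunk-port group follows; the interface wildcard and VLAN list are our assumptions:

```
groups {
    VIMA-LAB {
        interfaces {
            <xe-*> {                    /* assumed wildcard match */
                unit 0 {
                    family ethernet-switching {
                        interface-mode trunk;
                        vlan {
                            members [ PROVISIONING SERVER_PRIVATE_LAN1 ];   /* assumed list */
                        }
                    }
                }
            }
        }
    }
}
```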
{master:0}[edit]
user@lab-4.grnet.gr# show interfaces xe-0/0/1
apply-groups VIMA-LAB;
description VMC-05-01-RED;

{master:0}[edit]
user@lab-4.grnet.gr# run show ethernet-switching interface xe-0/0/1
Routing Instance Name : default-switch
Logical Interface flags (DL - disable learning, AD - packet action drop, ...)
Logical interface xe-0/0/1.0
...
Trunk port with Native VLAN
Native VLAN configuration follows the Juniper ELS style: the native-vlan-id is added on the physical interface, and the VLAN name is also added under the ethernet-switching family on the logical interface.
{master:0}[edit]
user@lab-3.grnet.gr# show groups VIMA-LAB
interfaces { ... }
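The elided interface body, following the ELS convention just described, would look roughly like this sketch; the native vlan-id and VLAN list are our assumptions:

```
interfaces {
    xe-0/0/1 {
        native-vlan-id 600;             /* assumed native vlan-id */
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    /* the native VLAN must also be listed here */
                    members [ PROVISIONING SERVER_PRIVATE_LAN1 ];   /* assumed list */
                }
            }
        }
    }
}
```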
Unfortunately
The same does not apply when
Aggregate Interface
In this scenario we have a BME that is multihomed to two QFX5100 devices. Both QFX5100s must be able to send and receive traffic from the server, so both server links must be used. On the server side LACP is set up, so nothing is different so far. On the network side, the active/active topology is handled by EVPN: both QFX5100s (the PE routers in our case) are configured with the same Ethernet Segment Identifier, as well as the same LACP system-id.
Leaf-1 configuration with the ESI and LACP settings:
user@lab-3.grnet.gr# show interfaces ae501
description VMC-05-01-AGR;
esi { ... }
aggregated-ether-options { ... }
unit 0 { ... }
Leaf-2 configuration:
user@lab-4.grnet.gr# show interfaces ae501
description VMC-05-01-AGR;
esi { ... }
aggregated-ether-options { ... }
unit 0 { ... }
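The elided bodies on both leaves are identical by design, since that is what makes the all-active ESI-LAG work. A sketch follows; the ESI value is the one shown in the DF/BDF verification later in this post, while the LACP system-id and VLAN membership are our assumptions:

```
interfaces {
    ae501 {
        description VMC-05-01-AGR;
        esi {
            00:01:01:01:01:01:01:01:05:01;   /* same ESI on both leaves */
            all-active;
        }
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:00:00:00:05:01;  /* assumed; must match on both leaves */
            }
        }
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    members [ PROVISIONING SERVER_PRIVATE_LAN1 ];   /* assumed list */
                }
            }
        }
    }
}
```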
Verify Aggregate Interfaces
Each Leaf device has exchanged LACP packets with the server and setup its aggregate interface:
{master:0}[edit]
user@lab-3.grnet.gr# run show lacp interfaces
Aggregated interface: ae501
...

{master:0}[edit]
user@lab-3.grnet.gr# run show ethernet-switching table
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static)
Ethernet switching table : 8 entries, 8 learned
Routing instance : default-switch
...
{master:0}[edit]
user@lab-4.grnet.gr# run show lacp interfaces
Aggregated interface: ae501
...

{master:0}[edit]
user@lab-4.grnet.gr# run show ethernet-switching table
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static)
Ethernet switching table : 8 entries, 8 learned
Routing instance : default-switch
...
Verify Overlay Network
The EVPN Type 1 (Ethernet auto-discovery) route is derived from the switch-options configuration:
{master:0}[edit]
user@lab-3.grnet.gr# show switch-options
vtep-source-interface lo0.0;
route-distinguisher 10.0.0.3:1;
vrf-import underlay-import;
vrf-target { ... }
Therefore, via the overlay EVPN signaling, the address is learned by the spine switches:
{master:0}[edit] user@lab-1.grnet.gr# run show route receive-protocol bgp
inet.0: 24 destinations, 31 routes (24 active, 0 holddown, 0 hidden)
inet.2: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
:vxlan.inet.0: 21 destinations, 21 routes (21 active, 0 holddown, 0 hidden)
inet6.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
bgp.evpn.0: 43 destinations, 195 routes (43 active, 0 holddown, 0 hidden)
...

default-switch.evpn.0: 41 destinations, 185 routes (41 active, 0 holddown, 0 hidden)
...
Verify Ethernet Switching Table
These then add the new entry to their Ethernet switching table:
{master:0}[edit]
user@lab-1.grnet.gr# run show ethernet-switching table
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static)
Ethernet switching table : 9 entries, 9 learned
Routing instance : default-switch
...
Verify DF/BDF status
Leaf-2
{master:0}[edit]
user@lab-4.grnet.gr# show protocols evpn | match forward
designated-forwarder-election-hold-time 2;
{master:0}[edit]
user@lab-4.grnet.gr# run show evpn instance esi 00:01:01:01:01:01:01:01:05:01 backup-forwarder
Instance: default-switch
...

{master:0}[edit]
user@lab-4.grnet.gr# run show evpn instance esi 00:01:01:01:01:01:01:01:05:01 designated-forwarder
Instance: default-switch
...
Verification and Tests
Underlay
We first confirm that all BGP sessions are established and that all loopback addresses are present in inet.0:
{master:0}[edit]
user@lab-3.grnet.gr# run op bgp-show-summary
Peer-address
10.0.0.1
10.0.0.2
10.0.0.4
10.0.0.5
10.0.0.6
192.168.0.0
192.168.0.6
...
{master:0}[edit]
user@lab-3.grnet.gr# run show route table inet.0 | match 10.0.0
10.0.0.1/32
10.0.0.2/32
10.0.0.3/32
10.0.0.4/32
10.0.0.5/32
10.0.0.6/32
And the route towards each remote loopback is resolved and installed in the forwarding table:
{master:0}[edit]
user@lab-3.grnet.gr# run show route 10.0.0.5/32

inet.0: 24 destinations, 36 routes (24 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.0.0.5/32 ...

{master:0}[edit]
user@lab-3.grnet.gr# run show route forwarding-table destination 10.0.0.5
Routing table: default.inet
Internet:
Destination: 10.0.0.5/32 ...
Overlay
EVPN type 2 and type 3 routes are advertised over BGP carrying the MAC addresses each device has in its Ethernet switching table:
{master:0}[edit]
user@lab-3.grnet.gr# run show route advertising-protocol bgp 10.0.0.1

bgp.evpn.0: 14 destinations, 54 routes (14 active, 0 holddown, 0 hidden)
...

default-switch.evpn.0: 14 destinations, 54 routes (14 active, 0 holddown, 0 hidden)
...
{master:0}[edit] user@lab-3.grnet.gr# run show route receive-protocol bgp 10.0.0.1
inet.0: 24 destinations, 36 routes (24 active, 0 holddown, 0 hidden)
inet.1: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
:vxlan.inet.0: 12 destinations, 12 routes (12 active, 0 holddown, 0 hidden)
inet6.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
inet6.1: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
bgp.evpn.0: 14 destinations, 54 routes (14 active, 0 holddown, 0 hidden)
...

default-switch.evpn.0: 14 destinations, 54 routes (14 active, 0 holddown, 0 hidden)
...
Switching Table
{master:0}[edit]
user@lab-3.grnet.gr# run show ethernet-switching table
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static)
Ethernet switching table : 6 entries, 6 learned
Routing instance : default-switch
...
This table actually mirrors the EVPN database:
{master:0}[edit]
user@lab-3.grnet.gr# run show evpn database
Instance: default-switch
VLAN ...
ICMP testing
The first test is a simple ICMP ping between two servers on the same VXLAN:
# ping ...
PING ...
64 bytes from ...
64 bytes from ...
64 bytes from ...
64 bytes from ...
64 bytes from ...
--- ... ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 8.815/16.498/41.379 ms
Server bond interface with Active/Backup
This test was conducted on a server with a bond interface in active/backup mode. Initially the server MAC address is learned on lab4:
{master:0}[edit]
user@lab-4.grnet.gr# run show ethernet-switching table interface xe-0/0/25
MAC database for interface xe-0/0/25
MAC database for interface xe-0/0/25.0
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static)
Ethernet switching table : 9 entries, 9 learned
Routing instance : default-switch
...
{master:0}[edit]
user@lab-3.grnet.gr# run show ethernet-switching table interface xe-0/0/1
MAC database for interface xe-0/0/1
MAC database for interface xe-0/0/1.0

{master:0}[edit]
user@lab-3.grnet.gr# run show interfaces xe-0/0/1
Physical interface: xe-0/0/1, Enabled, Physical link is Up
...
The control plane agrees: the spines point towards lab4 (RD 10.0.0.4:1) for the server MAC address:
{master:0} user@lab-1.grnet.gr> show route table bgp.evpn.0 evpn-mac-address ec:f4:bb:ed:9b:18
bgp.evpn.0: 44 destinations, 200 routes (44 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2:10.0.0.4:1::481::ec:f4:bb:ed:9b:18/304
...
Now let us switch the active port from server side to RED:
{master:0}[edit]
user@lab-3.grnet.gr# run show ethernet-switching table interface xe-0/0/1.0
MAC database for interface xe-0/0/1.0
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static)
Ethernet switching table : 9 entries, 9 learned
Routing instance : default-switch
...

{master:0}[edit]
user@lab-4.grnet.gr# run show ethernet-switching table interface xe-0/0/25
MAC database for interface xe-0/0/25
MAC database for interface xe-0/0/25.0
The next step is to see if the control plane has also converged towards the new active port:
{master:0} user@lab-1.grnet.gr> show route table bgp.evpn.0 evpn-mac-address ec:f4:bb:ed:9b:18
bgp.evpn.0: 44 destinations, 200 routes (44 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2:10.0.0.3:1::481::ec:f4:bb:ed:9b:18/304
...
The same test is now conducted from the switch side, meaning that the switch interface is administratively shut.
The control plane:
{master:0} user@lab-1.grnet.gr> show route table bgp.evpn.0 evpn-mac-address ec:f4:bb:ed:9b:18
bgp.evpn.0: 44 destinations, 200 routes (44 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2:10.0.0.4:1::481::ec:f4:bb:ed:9b:18/304
...
We observed minimal packet loss in all cases (1-3 packets), which is to be expected and more or less the same as with traditional production Ethernet architectures.
IP migration
In this test we will be configuring an IP address on one server and then move this address to another server.
So again we configure the IP address on the second server and remove it from the first.
Now all we need is a GARP (gratuitous ARP) from the new server, so that the other hosts and the fabric refresh their ARP and MAC tables.
The above completes the IP migration test.
Traffic Load Balancing in the Backbone Links
The purpose of this test is to determine if what we are seeing in the control plane, with multipath routes towards the remote VTEPs, actually translates to traffic being balanced across the backbone links.
For this test we will be using iperf with UDP traffic.
{master:0}[edit]
user@lab-3.grnet.gr# run show interfaces descriptions | match 04-01
xe-0/0/1 ...

{master:0}[edit]
user@lab-3.grnet.gr# run show ethernet-switching table interface xe-0/0/1
MAC database for interface xe-0/0/1
MAC database for interface xe-0/0/1.0
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static)
Ethernet switching table : 8 entries, 8 learned
Routing instance : default-switch
...
{master:0}[edit]
user@lab-5.grnet.gr# run show ethernet-switching vxlan-tunnel-end-point remote mac-table | match ec:f4:bb:ed:9b:18
...
lab5 already knows how to reach the loopback of lab3 (the remote VTEP) via the underlay network:
{master:0}[edit]
user@lab-5.grnet.gr# run show route 10.0.0.3/32

inet.0: 19 destinations, 28 routes (19 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.0.0.3/32 ...
So all that is left is to run a couple of iperf clients from ycr0701 that will be sending traffic to ycr0401, and then monitor the backbone interfaces of lab3.
Servers’ side:
root@ycr0702:~# iperf -c ... -u -b ...
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to ..., UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: ...
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
...

root@ycr0401:~# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: ...
------------------------------------------------------------
Network side:
{master:0}[edit]
user@lab-3.grnet.gr# run show interfaces descriptions | match "et-0/0/48|et-0/0/49"
et-0/0/48 ...
et-0/0/49 ...
lab3 on interface connecting to spine switch1 (lab1):
lab-3.grnet.gr
Interface: et-0/0/48, Enabled, Link is Up
Encapsulation: Ethernet, Speed: 40000mbps
Traffic statistics: ...
Error statistics: ...
Active alarms : None
Active defects: None
Input MAC/Filter statistics: ...