Link Aggregation – LACP Protocol

EtherChannel enables bundling multiple physical links that connect the same two devices into a single logical link. In this article I will show you how it is configured and how it works.

The issue with one uplink

I made an example with 8 clients connected to two Cisco 3850 switches. To start with, the two switches are connected together with 1G copper on interface Gi1/23. The clients are also connected to 1G ports. In this case, when the four clients on the left side start simultaneously sending traffic at full speed to different computers on the right side, they will congest the uplink between the switches and some traffic will be dropped.

With newer switches like the 3850s, and with some money, we can add a C3850-NM-2-10G network service module to both of them, which can take up to 4 x 1G optical SFPs or 2 x 1G SFPs + 2 x 10G SFPs.
By adding those modules and one 10G SFP on each side, we can increase the interconnection to 10G and solve the congestion issue.

There is another solution, and it is the main reason for this article: adding more interconnections between the two switches.

With multiple interconnections added and no additional configuration, we have created L2 loops between the switches, and the Spanning Tree Protocol will block all but one of those links to prevent the loops. So no luck with this one. The thing to do is…
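You can confirm this on the switch itself; the exact output depends on your topology, but with plain spanning tree running, the redundant interconnections should show up as blocked ports:

SW1#show spanning-tree blockedports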

Configuring LACP

We need to go a step further and simply take those interfaces, up to 8 x 1G, and configure link aggregation – an EtherChannel. The EtherChannel is negotiated by the standardised, vendor-independent LACP protocol, which creates one virtual link out of multiple physical links.

How many links you can bundle depends on the platform; for an LACP EtherChannel on the 3850s, you can bundle up to 16 interfaces of the same type, but only up to eight ports can be active at one time (the rest stay in hot-standby).
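If you do bundle more than eight ports, you can influence which of them become active and which stay in hot-standby with the LACP port priority (a lower value is preferred). This is only a sketch with an example value, not part of the configuration used later in the article:

SW1(config)#interface Gi 1/17
SW1(config-if)#lacp port-priority 100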

Here’s how to do it with the example from above:

I took the four ports that I connected to the other side (Gi 1/17, Gi 1/19, Gi 1/21, Gi 1/23), entered interface range configuration mode and configured "channel-group":

SW1(config)#interface range Gi 1/17, Gi 1/19, Gi 1/21, Gi 1/23
SW1(config-if-range)#channel-group 1 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected

As you can see, several mode options are offered when you configure channel-group 1.

Different modes select different ways of negotiating and bundling. There is also Cisco's proprietary PAgP protocol, which can be used to build an EtherChannel, but it is rarely used nowadays given that we have the standard LACP, which is widely supported.
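Just for comparison, a PAgP bundle would be configured in the same way, only with a PAgP mode keyword. This is only a sketch; the rest of the article sticks with LACP:

SW1(config)#interface range Gi 1/17, Gi 1/19, Gi 1/21, Gi 1/23
SW1(config-if-range)#channel-group 1 mode desirable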

I will configure LACP using the command below:

SW1(config)#interface range Gi 1/17, Gi 1/19, Gi 1/21, Gi 1/23
SW1(config-if-range)#channel-group 1 mode active
Creating a port-channel interface Port-channel 1

Remember, you need to configure the same thing on the other switch too if you want the EtherChannel to come up:

SW2(config)#interface range Gi 1/17, Gi 1/19, Gi 1/21, Gi 1/23
SW2(config-if-range)#channel-group 1 mode active
Creating a port-channel interface Port-channel 1
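Once the Port-channel interface exists, any Layer 2 settings (trunking, allowed VLANs and so on) are configured on the logical interface and inherited by the member ports. A minimal sketch, assuming the link between the switches should be a trunk; the same would be done on SW2:

SW1(config)#interface Port-channel 1
SW1(config-if)#switchport mode trunk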

For the EtherChannel to work, all interfaces in the bundle need to have identical configuration. If you want to be sure that the interfaces do not carry some leftover configuration before bundling them, you can easily restore the factory-default configuration of each interface by issuing the default command on that interface range:

SW1(config)#default interface range Gi 1/17, Gi 1/19, Gi 1/21, Gi 1/23

In my example I configured both sides in active mode. This is usually the best way to do it: you can be sure everything will work, and it is easier to create configuration templates.

You can configure one side in passive mode if you want; the EtherChannel will then always be initiated by the active side, and it will work fine too. Only the side configured with active mode will initiate LACP PDUs, while the passive side will not try to bundle or send LACP PDUs until it receives some from an active LACP peer.
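For example, the same bundle would come up if SW2 were configured like this, with SW1 left in active mode:

SW2(config)#interface range Gi 1/17, Gi 1/19, Gi 1/21, Gi 1/23
SW2(config-if-range)#channel-group 1 mode passive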

How to check LACP status?

There are several commands that can be used, but the most important one is show etherchannel summary, which gives you an overview of all the EtherChannels configured on that switch and their status:

SW1#show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SD)         LACP      Gi1/17(D)   Gi1/19(D)   Gi1/21(D)   Gi1/23(D)

If you want to see the protocol used and check which interfaces are bundled into the port-channel, the next command is the one:

SW1#show etherchannel port-channel
                Channel-group listing:
                ----------------------

Group: 1
----------
                Port-channels in the group:
                ---------------------------

Port-channel: Po1    (Primary Aggregator)
------------

Age of the Port-channel   = 0d:03h:56m:04s
Logical slot/port         = 9/2           Number of ports = 4
HotStandBy port           = null
Port state                = Port-channel Ag-Inuse
Protocol                  = LACP
Port security             = Disabled

Ports in the Port-channel:

Index   Load   Port     EC state        No of bits
------+------+------+------------------+-----------
  0     00     Gi1/17   Active             0
  0     00     Gi1/19   Active             0
  0     00     Gi1/21   Active             0
  0     00     Gi1/23   Active             0

Time since last port bundled:    78d:01h:24m:06s    Gi1/23
Time since last port Un-bundled: 78d:01h:28m:02s    Gi1/21
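If you want to look at the LACP negotiation itself rather than at the EtherChannel as a whole, you can also check the partner information with show lacp neighbor, which lists the neighbour's system ID, port priorities and LACP flags for each bundled port:

SW1#show lacp neighbor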

Why would you use passive mode?

You might be curious about the use case for a passive LACP configuration. It depends on the platform, but in most cases active mode will keep the interfaces suspended, so they will not forward traffic until LACP is established through an exchange of PDUs. A passive side will, on the interfaces where it is configured, fall back to standard switchport traffic forwarding until LACP is initiated from the other side.

This is helpful for bundled ports connecting servers with LACP support. A system administrator can still reach the server even when the server breaks and needs some configuration to get LACP on the server side working again.

I have heard of some server farms requesting passive LACP on the switch ports towards the servers so that they have a failure mode which does not break connectivity. Passive LACP should keep network connectivity up during an install or restore of the server and avoids the server being dependent on LACP being configured just to get the system online.
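A server-facing configuration of that kind could look like the sketch below; the interface range, channel-group number and VLAN are hypothetical and only illustrate the idea:

SW1(config)#interface range Gi 1/1 - 2
SW1(config-if-range)#switchport mode access
SW1(config-if-range)#switchport access vlan 10
SW1(config-if-range)#channel-group 2 mode passive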

About Switch Independent Server Side Config

If you have server-side NIC teaming (link aggregation) configured in switch independent mode, it enables server-side load balancing across multiple server NICs without using the LACP protocol. This is often used with servers which have poor or no LACP support in their NIC drivers. Some time ago, switch independent mode was also recommended on Hyper-V hosts because of LACP bugs on that platform. VMware gives you LACP support only with the vDS, so only if you pay more.

Example of Switch Independent NIC teaming configuration on Microsoft Hyper-V

In the case of a switch independent configuration on the server, the switch does not have any link aggregation configuration; loop prevention and load balancing are done solely by the server on its side.

This means the network administrator has less configuration to do, but the outcome will not always be as expected and it will underperform in some cases.

Without LACP on both sides, the server will load balance the traffic it sends out to the network, and the switch will happily receive it on both ports and forward it onwards. In the opposite direction, the switch learns only one MAC address (the NIC teaming address) from the server, so it is able to send traffic towards the server over only one of those two or more links.

So, by using a switch independent configuration for NIC teaming, we have sped up the server's upload towards the network but not its download from the network. From the point of view of an Internet user, who is probably just downloading something from the server, this is fine: the server can serve the data through both interfaces.

If a server in some other environment has a backup role and needs to receive a lot of data from the network, it will do so through one interface only.

LACP Load Balance Algorithm

SW1#show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
src-mac

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source MAC address
IPv4: Source MAC address
IPv6: Source MAC address

The output above shows that we are currently using the source MAC address of packets to create the hash that decides which flow will go through which interface.

The standard mode of load balancing packets across an LACP EtherChannel is per flow, so each flow will always use one of the bundled links to reach the other side. This is the thing with EtherChannels: they make sure no congestion happens, but they will not speed up any single transfer. Each IP communication (each flow) gets its own link from the bundle and still gets 1G of speed at most in our case, where 1G interfaces are bundled.

On some switches, you can even change the load balancing so that it works per packet and not per flow, but that is not best practice.

Changing the load balancing algorithm is simple; it is done in global configuration like this:

SW1(config)#port-channel load-balance ?
  dst-ip       Dst IP Addr
  dst-mac      Dst Mac Addr
  src-dst-ip   Src XOR Dst IP Addr
  src-dst-mac  Src XOR Dst Mac Addr
  src-ip       Src IP Addr
  src-mac      Src Mac Addr
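
For example, to switch to the src-dst-ip algorithm and verify the change (the setting is global and applies to all EtherChannels on the switch):

SW1(config)#port-channel load-balance src-dst-ip
SW1(config)#end
SW1#show etherchannel load-balance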

When sending a flow of packets to its neighbour over the EtherChannel, the switch uses the source MAC address, source IP, destination MAC, destination IP, or a combination of those packet headers to decide through which bundled interface the packets will be forwarded.

Different algorithms give better performance in specific cases. In a case like the one below, where four PCs are sending traffic to four other PCs, the hash algorithm should be src-dst-ip or src-dst-mac, so that there is a better chance that each PC will get a different interface for its flows when sending to different neighbours.

In the situation below, where one server is connected and sends traffic to multiple PCs on the other side, we should use an algorithm which takes the destination hosts into account (we have many destination hosts and only one source), so that the hash algorithm has a better chance of giving separate interfaces to different flows and speeding things up.

If we configured src-ip or src-mac as the load balancing algorithm, every hash would come out the same, as there is only one source MAC or one source IP to calculate with. All traffic from the server towards anything behind the other switch would always cross the same interconnection, and the maximum total sending speed would always be 1G. So any option other than src-ip and src-mac will do here.

On some switches you can even check the output of the LACP hash algorithm like this:

SW1#test etherchannel load-balance interface port-channel 1 mac AAAA.AAAA.4444 BBBB.BBBB.3333
Would select Gi1/23 of Po1

I wanted to check through which interface a packet from the server with MAC address AAAA.AAAA.4444 towards the PC with MAC address BBBB.BBBB.3333 would be sent. It seems it would go through Gi1/23 in my case. That is, of course, if that interface is successfully bundled in the EtherChannel; if not, the flow will end up taking some other interconnection.
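If the configured algorithm hashes on IP addresses rather than MAC addresses, the same kind of test can be done with a source and destination IP; the addresses below are just placeholders, and the output is the same "Would select ..." line as above:

SW1#test etherchannel load-balance interface port-channel 1 ip 10.0.0.10 10.0.1.20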

What Load Balance Algorithm is better when it doesn’t matter?

If your situation is not very specific and every algorithm would do the job, there is still a good argument for using src-dst-ip as the load balancing algorithm.

In this case, each pair of source and destination IP addresses will get its own interconnection link. By using IP addresses in the hash algorithm, load balancing will work similarly to ECMP (equal-cost multipath routing).

You will have the same kind of packet flow management at Layer 2 as you already have at Layer 3.

 
