Category: Networking

How to Enable Dot1x authentication for wired clients

If your LAN extends to places where unauthorized people can simply plug in and gain access to your protected network, it's time to add some security on your access switches. The best option is IEEE 802.1X port-based authentication, which enables user/machine authentication and prevents unauthorized devices from bringing the switch port up when they connect. IEEE 802.1X port-based authentication is usually just called dot1x.

In this article I will show you how to configure basic dot1x on the switch side. I will also cover the Windows machine side of the configuration, as most people presume it works out-of-the-box, which is of course not the case. The RADIUS server policy is fairly simple, so a screenshot of the policy will get you going. So, as you can see, to get dot1x running you need to configure three things: the switch (authenticator), the Windows client (supplicant) and the RADIUS server policy.

What is a non-blocking switch?

It is fairly common to hear that a switch is non-blocking, because almost all switches today are non-blocking. But what does that mean? When I asked people around me what exactly a non-blocking switch is, they were unable to come to the same conclusion.

I went through a lot of different internet sources and vendor documents before writing this, but do not hesitate to add something in the comments if you have a different view on the subject.

A line-rate switch is the same thing as a wire-speed switch: its forwarding capacity supports all ports concurrently at full port capacity, and that should hold even for minimum packet sizes. A non-blocking switch means the same thing: its internal bandwidth can handle all port bandwidths at the same time, at full capacity. For high-end switches, non-blocking sometimes also refers to the switch architecture's ability to significantly reduce head-of-line blocking (HOL blocking).
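
To make this a bit more concrete, here is a back-of-the-envelope calculation in Python. The 48-port gigabit access switch is just a hypothetical example; the sketch only shows what "all ports at full capacity, even at minimum packet size" translates to in numbers:

ports = 48
port_speed_bps = 1_000_000_000            # 1 Gbps per port

# Non-blocking / line-rate: the fabric has to carry all ports at full
# capacity at the same time, in both directions (full duplex).
required_capacity_bps = ports * port_speed_bps * 2
print(f"Switching capacity needed: {required_capacity_bps / 1e9:.0f} Gbps")

# Line rate must hold even for minimum-size frames: a 64-byte Ethernet
# frame occupies 84 bytes on the wire (preamble 8 + frame 64 + IFG 12).
bits_per_min_frame = 84 * 8
pps_per_port = port_speed_bps / bits_per_min_frame
print(f"Forwarding rate needed: {ports * pps_per_port / 1e6:.1f} Mpps")

Those are the two numbers you typically find on a datasheet: switching capacity in Gbps and forwarding rate in Mpps. If both cover all ports at once, the vendor can call the switch non-blocking.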

HOL – Head-of-line blocking

Head-of-line blocking (HOL blocking) in networking is a performance issue that occurs when a queue of packets is held up by the first packet in line. It happens especially in input-buffered network switches, where out-of-order delivery of packets can also occur. A switch can be composed of input-buffered ports, output-buffered ports and a switch fabric.

When first-in first-out (FIFO) input buffers are used, only the packet at the head of the queue is considered for forwarding. If that packet cannot be forwarded, all packets received after it are stuck behind it, even if their output ports are free. That is what HOL blocking really is.
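
A minimal simulation can make this clearer. The Python sketch below is a toy model, with made-up packet and port names: a single FIFO input buffer whose head packet is destined to a congested output, so nothing behind it moves either, even though the other outputs are idle:

from collections import deque

# One input port with a FIFO buffer; packet and port names are invented.
input_fifo = deque([
    ("pkt1", "out1"),   # head of line, destined to a congested output
    ("pkt2", "out2"),   # could be forwarded right now...
    ("pkt3", "out3"),   # ...and so could this one
])
busy_outputs = {"out1"}

forwarded = []
while input_fifo:
    packet, out_port = input_fifo[0]
    if out_port in busy_outputs:
        break               # head cannot go, so nothing behind it goes either
    forwarded.append(input_fifo.popleft())

print("forwarded:", forwarded)   # forwarded: [] - all three packets are stuck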

TCAM and CAM memory usage inside networking devices

As this is a networking blog, I will focus mostly on the usage of CAM and TCAM memory in routers and switches. I will explain the role of TCAM in the router prefix lookup process and of CAM in the switch MAC address table lookup.

However, when we talk about this specific topic, most of you will ask: how is this memory built from an architectural point of view?

How is it built so that it can perform lookups faster than any other hardware or software solution? That is the topic of the second part of the article, where I will briefly explain how typical TCAM memory is built to get the capabilities it has.

CAM and TCAM memory

In routers, TCAM – Ternary Content Addressable Memory – is used for fast address lookup, which enables fast routing.

In switches, CAM – Content Addressable Memory – is used for building and looking up the MAC address table that enables L2 forwarding decisions. By implementing router prefix lookup in TCAM, we move the Forwarding Information Base (FIB) lookup from software to hardware.

With TCAM, the address search time does not depend on the number of prefix entries, because TCAM's main characteristic is that it searches all of its entries in parallel. No matter how many address prefixes are stored in TCAM, the router finds the longest prefix match in a single lookup. It's magic, right?
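
Here is a small Python sketch of the result such a lookup produces; the prefixes and next hops are invented for illustration. In software we have to test the entries and pick the longest match ourselves, whereas TCAM compares the destination against every stored prefix in parallel and returns the longest match in a single lookup:

import ipaddress

# Toy FIB: the prefixes and next hops below are made up for illustration.
fib = {
    ipaddress.ip_network("10.0.0.0/8"):  "next-hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop B",
    ipaddress.ip_network("10.1.1.0/24"): "next-hop C",
}

def longest_prefix_match(destination):
    # Software has to test the entries one by one; TCAM compares the
    # destination against every stored prefix in parallel, so the result
    # comes back in one lookup regardless of table size.
    address = ipaddress.ip_address(destination)
    matches = [net for net in fib if address in net]
    return fib[max(matches, key=lambda net: net.prefixlen)] if matches else None

print(longest_prefix_match("10.1.1.42"))   # next-hop C (the /24 wins)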

CEF Lookup

Image 1 shows how the FIB lookup works and how it points to an entry in the adjacency table. The search process goes through all entries in the TCAM table in one iteration.


Router

In routers, like high-end Cisco ones, TCAM is used to enable CEF – Cisco Express Forwarding – in hardware. CEF builds the FIB table from the RIB (routing table) and the adjacency table from the ARP table, keeping pre-prepared L2 headers for every next-hop neighbour.
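
To illustrate the relationship, here is a tiny Python sketch (all addresses and rewrite strings are made up) of how a FIB entry points at an adjacency entry that already holds the rewritten L2 header for the next hop, so the forwarding path does not have to consult the ARP table per packet:

# Hypothetical adjacency table: one pre-built L2 rewrite per next hop.
adjacency_table = [
    {"next_hop": "10.1.1.1", "l2_rewrite": "dst-mac aa:bb:cc:dd:ee:01"},
    {"next_hop": "10.2.2.1", "l2_rewrite": "dst-mac aa:bb:cc:dd:ee:02"},
]

# Hypothetical FIB: each prefix points at an index in the adjacency table.
fib = {
    "10.1.1.0/24": 0,
    "10.2.2.0/24": 1,
}

entry = adjacency_table[fib["10.1.1.0/24"]]
print("rewrite the frame with:", entry["l2_rewrite"])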

Solicited-node multicast address

Some time ago I was working on an IPv6 implementation and in that period I wrote an article about NDP (you can read it here). After a while, I received some comments that it was not written very well, so I revised a large part of it. It looks like my English was far worse two years ago than I was aware of 🙂

In the review process, I realised that NDP's usage of solicited-node multicast addresses was not clearly explained. This is the follow-up article, which should explain how and why solicited-node multicast addresses are used in NDP. After all, this kind of multicast address is there to enable the IPv6 neighbour discovery function of NDP to work properly.

Let’s go!

A solicited-node multicast address is an IPv6 multicast address used on the local L2 subnet by NDP, the Neighbor Discovery Protocol. NDP uses that multicast address to find out the link-layer (MAC) addresses of other nodes present on that subnet.
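
The derivation itself is simple: take the last 24 bits of the unicast IPv6 address and append them to the well-known prefix ff02::1:ff00:0/104. A short Python sketch shows it (the example addresses are made up):

import ipaddress

def solicited_node(address):
    # Take the low 24 bits of the unicast address and OR them into the
    # well-known prefix ff02::1:ff00:0/104.
    low24 = int(ipaddress.IPv6Address(address)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::a1b2:c3d4"))   # ff02::1:ffb2:c3d4
print(solicited_node("fe80::1"))               # ff02::1:ff00:1

On the wire the group maps to the Ethernet multicast MAC 33:33 plus the low 32 bits of the group address (33:33:ff:b2:c3:d4 for the first example above), so in practice only the node, or the very few nodes, whose addresses share those low 24 bits have to process the Neighbour Solicitation.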

NDP replaces ARP

As we know, NDP in IPv6 networks replaced the ARP function from IPv4 networks. In the IPv4 world, ARP used broadcast to send this kind of discovery message and learn the MAC addresses behind neighbours' IPv4 addresses on the subnet. With IPv6 and NDP, using broadcast is not really a good solution, so instead each node joins a special type of multicast group, the solicited-node group derived from its own addresses, which is enough to enable NDP communication.