It is fairly common to hear that a switch is non-blocking, because almost all switches today are non-blocking. But what does that actually mean? When I asked people around me what exactly a non-blocking switch is, they could not agree on a single answer.
I went through many different internet sources and vendor documents before writing this, but do not hesitate to add a comment if you have a different view on the subject.
A line-rate switch is the same thing as a wire-speed switch. It means the switch has enough forwarding capacity to support all ports concurrently at full port capacity, and this should hold even for minimum packet sizes. Non-blocking switch means the same thing: a non-blocking switch's internal bandwidth can handle all port bandwidths at the same time, at full capacity. For high-end switches, non-blocking sometimes also refers to the switch architecture's ability to significantly reduce head-of-line blocking (HOL blocking).
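The non-blocking check above comes down to simple arithmetic: the switching capacity must cover every port, in both directions, at once. A minimal sketch of that calculation (the 48-port 10 Gbps figures are hypothetical examples, not any specific model):

```python
def required_capacity_gbps(ports, port_speed_gbps):
    # Ports are full duplex: each can send and receive at line rate
    # simultaneously, so vendors usually count both directions.
    return ports * port_speed_gbps * 2

def is_non_blocking(ports, port_speed_gbps, switching_capacity_gbps):
    # Non-blocking: internal capacity covers all ports at full rate.
    return switching_capacity_gbps >= required_capacity_gbps(ports, port_speed_gbps)

# A hypothetical 48-port 10 Gbps switch needs 960 Gbps internally:
print(required_capacity_gbps(48, 10))    # 960
print(is_non_blocking(48, 10, 960))      # True
print(is_non_blocking(48, 10, 480))      # False (oversubscribed 2:1)
```

A switch whose capacity falls short of this number is called oversubscribed; the ratio of required to actual capacity (2:1 in the last line) is the oversubscription ratio.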
A little about speed names
Wire speed or wire rate simply means that you can take two switch ports of the same “speed” and send data between them with no packet loss at the maximum rate the ports support. Backplane bandwidth is a measure of the internal architecture bandwidth of the switch; it is most often the measure of the total internal switching capacity of the system. Forwarding rate is usually the measure of how many 64-byte packets the forwarding engine can process, measured in packets per second (pps).
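Forwarding rate is quoted for 64-byte packets because the smaller the packet, the more packets per second the engine must handle to keep a port at line rate. A quick sketch of that calculation, assuming standard Ethernet framing where every frame also occupies an 8-byte preamble and a 12-byte inter-frame gap on the wire:

```python
PREAMBLE_BYTES = 8   # Ethernet preamble + start frame delimiter
IFG_BYTES = 12       # minimum inter-frame gap

def max_pps(line_rate_bps, frame_bytes):
    # Each frame occupies frame + preamble + gap worth of wire time.
    wire_bytes = frame_bytes + PREAMBLE_BYTES + IFG_BYTES
    return line_rate_bps / (wire_bytes * 8)

# A 1 Gbps port flooded with minimum-size 64-byte frames:
print(round(max_pps(1_000_000_000, 64)))   # 1488095, i.e. ~1.488 Mpps
```

This is where the well-known 1.488 Mpps per gigabit figure comes from; a switch whose forwarding rate meets this number for every port at once is line-rate in the sense described above.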
A little about diversity in naming the quantities
Speed is used all the time but does not mean anything precise in the networking world. It is mistakenly used to represent the bandwidth capacity of a link or of an application data flow. I think speed is best understood as a combination of bandwidth and latency.

Bandwidth is a measure of how many data bits can pass between two network nodes in a given interval, measured in bps (bits per second). For downloading a file with FTP, bandwidth is the concern: you want your data to download fast. Latency in networking is a measure of how long it takes a unit of data put into the network on one end to come out on the other end, and it is usually measured in milliseconds (ms). Applications that strive for low latency are time-sensitive apps like VoIP. For talking over a VoIP network, latency is the concern: VoIP packets are small, but you need them to arrive fast. High latency creates a delay between the speaker talking and the receiver hearing.
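The FTP-versus-VoIP contrast can be made concrete with a small calculation. The numbers below are illustrative assumptions, not measurements from any real link:

```python
def transfer_time_s(size_bytes, bandwidth_bps):
    # Time just to push the bits onto the link (serialization delay);
    # propagation and queuing latency come on top of this.
    return size_bytes * 8 / bandwidth_bps

# 1 GB FTP download on a 100 Mbps link: bandwidth dominates.
print(transfer_time_s(1_000_000_000, 100_000_000))  # 80.0 seconds

# 200-byte VoIP packet on the same link: serialization is negligible
# (0.016 ms), so end-to-end delay is dominated by latency instead.
print(transfer_time_s(200, 100_000_000))            # 1.6e-05 seconds
```

Doubling the bandwidth halves the download time but does almost nothing for the VoIP call, which is why the two applications care about different quantities.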