Tag: cisco

Software-Defined Data Center and How to Do It

SDDC – Software-Defined Data Centers

The time of Software-Defined everything has long since arrived, and the need to deploy many appliances, two or more for each network function, is not so popular anymore. The ability to manage packet forwarding, load balancing, and security of network traffic inside the datacenter from one simple web console is finally showing that things can be managed in a simpler way after all. Every vendor in the networking world tried to come up with its own way of centralizing data center management and, as it turns out, all of them did it, some better than others. As always, it’s no surprise that some vendors are better at creating hardware-based forwarding solutions and others at software solutions (in this case, software for packet forwarding).

Requirements

It seems that we have only a few good options when selecting a complete SDDC solution. The data center needs to provide a large number of server access ports on networking devices that are configured and managed as simply and quickly as possible. The datacenter network needs to be configured to provide robust, stable packet forwarding at close to line rate, and all that at 10-100, even 400 Gbps speeds.

Cisco Champion for 2020

 

I made it to the list of Cisco Champions for 2020, which is now the third year in a row!

Cisco Champion 2020

The primary reason I was again selected among the first 100 Cisco Champions for 2020 in the early acceptance process is the material I shared through this blog and the contact with people who reached me directly via my blog comments or e-mail.

Again, 2019 was another year full of great projects and big challenges with new technologies. We finally broke the barrier of NFV and automation and got some great stuff done using automation within software-defined data center solutions, both with Cisco and with other vendors.

Having this badge is cool, but the connections and sharing with the networking community are even better, and they push me to create more material and share it here soon.

This year will be a blast, again!

Cisco ACI – API Calls vs JSON POST

API Calls method

The fancy way of configuring Cisco ACI Fabric is to use a Python script to generate API calls. Those API calls are then pushed to the APIC controller using Postman (or a similar tool). This method suits configuration changes that you make often, and it leaves little room for mistakes.

You write a Python script, and that script takes your configuration variables and generates an API call that configures the system quickly and correctly every time.

The catch is that you need to take an example API call and write a Python script that recreates that call with your configuration variables, and does it correctly. You need to know how to code in Python, and you will need a certain amount of time to write that script.
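The sketch below shows one way such a script can look. It is a minimal sketch, not the exact script from the post: the APIC address and credentials are placeholders, and the tenant object (fvTenant) is used only as a simple example of a payload generated from a variable.

```python
# Minimal sketch: generate an ACI API call from configuration variables and
# push it to the APIC. APIC address and credentials are placeholders; fvTenant
# is only an illustration, any ACI object can be pushed the same way.
import json
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
USERNAME = "admin"                  # placeholder credentials
PASSWORD = "password"
TENANT_NAME = "Demo-Tenant"         # configuration variable

session = requests.Session()
session.verify = False              # lab APICs often use self-signed certificates

# 1. Authenticate; the APIC returns a session cookie that requests keeps for us
login = {"aaaUser": {"attributes": {"name": USERNAME, "pwd": PASSWORD}}}
session.post(f"{APIC}/api/aaaLogin.json", data=json.dumps(login))

# 2. Build the API call from the variable and POST it to the policy universe (uni)
tenant = {"fvTenant": {"attributes": {"name": TENANT_NAME}}}
response = session.post(f"{APIC}/api/mo/uni.json", data=json.dumps(tenant))
print(response.status_code, response.text)
```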

POST JSON file method
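Judging by the heading, the alternative is to keep the configuration as a prepared JSON file and push it to the APIC as-is, for example from Postman or with a few lines of Python. The sketch below is a rough, hedged illustration of that idea; the file name, APIC address, and credentials are all placeholders.

```python
# Rough sketch: push a prepared JSON file to the APIC instead of generating the
# payload in code. tenant-config.json, the APIC address, and the credentials
# are placeholders.
import requests

APIC = "https://apic.example.com"

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# The file holds the same JSON body Postman would send (e.g. an fvTenant object)
with open("tenant-config.json") as f:
    payload = f.read()

response = session.post(f"{APIC}/api/mo/uni.json", data=payload)
print(response.status_code, response.text)
```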

Setting up Cisco ACI From Scratch

This Cisco ACI article describes the first few things you will do when you get the ACI Fabric components into your datacenter.

Cisco ACI version 3.2 was used to test the steps described below.

So let’s see what we have here:

Get Your Gear

In this one, we will get three APIC controllers, four Leafs, and two Spines to build a simple ACI Fabric, plus a few Catalyst 2960 switches for OOB management:

  • 3x APICs APIC-CLUSTER-M2 – APIC Controller Medium Configuration (Up to 1000 Edge Ports)
  • 2x Spines N9K-C9364C – Nexus 9K ACI & NX-OS Spine, 64p 40/100G QSFP28
  • 2x SFP Leafs N9K-C93180YC-EX – Nexus 9300 with 48p 10/25G SFP+ and 6p 100G QSFP28
  • 2x Copper Leafs N9K-C9348GC-FXP – Nexus 9300 with 48p 100M/1GT, 4p 10/25G & 2p 40/100G QSFP28
  • 2x Catalyst 2960 OOB management switches

You need to cable the Leafs and Spines to each other properly, using 40G or 100G optics, to form the CLOS topology from the image below. Each Spine, Leaf, and APIC controller needs to be connected to a non-ACI OOB management network. You then connect the APIC controllers redundantly to two Leafs with 10G optics and start the APIC initialization and fabric discovery.

Cable The Thing

All Spine ports are 40G/100G, so choose your ports as you like. On the Leafs, the last six ports are 40G/100G, so use one of those to connect to each Spine and you have your Leaf’n’Spine.

ACI Fabric with APIC

CLOS Topology

Edson Erwin invented this highly scalable and optimized way of interconnecting network nodes in the 1930s, and Charles Clos later used it to design the interconnection of telephone nodes. This was even before we had IP networks; it was invented to optimize the architecture of the telephony network systems of the day.

It was not used in IP-based networks for decades, but it has made a big comeback in datacenter design in the last few years. It was originally invented purely for scalability, which it solved beautifully. In modern datacenter design, scalability is still the first requirement a CLOS topology solves, but it also greatly improves resiliency and performance.

In today’s datacenters, CLOS topology is used to create a Leaf’n’Spine system that interconnects Leaf switches (datacenter access or ToR switches) through Spine switches. It is built so that each Leaf switch is directly and redundantly connected to every Spine switch.

As shown in the picture below, a CLOS topology interconnects Leaf switches so that there are always exactly two hops between any two Leafs, and it does so redundantly, with one two-hop path through each Spine switch. Spines are not connected to each other, and neither are Leafs; a small sketch of this link pattern follows after the picture.

CLOS
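As an illustration of that link pattern (not from the original article), the snippet below builds the full Leaf’n’Spine link list for the gear from this post and shows that any two Leafs have one two-hop path per Spine; the switch names are made up.

```python
# Illustrative only: build the CLOS/leaf-and-spine link list and show the
# redundant two-hop leaf-to-leaf paths. Switch names are made up.
leafs = ["leaf101", "leaf102", "leaf103", "leaf104"]   # 2x SFP + 2x copper Leafs
spines = ["spine201", "spine202"]

# Every Leaf connects to every Spine; there are no leaf-leaf or spine-spine links
links = [(leaf, spine) for leaf in leafs for spine in spines]
for leaf, spine in links:
    print(f"{leaf} <-> {spine}")

# Any two Leafs reach each other in exactly two hops, once through each Spine
src, dst = "leaf101", "leaf104"
paths = [(src, spine, dst) for spine in spines]
print(f"{len(paths)} equal-cost two-hop paths from {src} to {dst}: {paths}")
```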