This is an overview of what I think Cisco ACI actually is. It uses some examples from the lab environment to show you what things look like when you start to work with ACI. Other articles are in the works and will be online soon; they will go into detail about the real configuration of ACI and best practices while doing it.
What is this Cisco ACI Fabric?
Cisco ACI is a datacenter network Fabric. In practice, it is a networking system of multiple L3 switches running a modified, next-generation OS which enables them to be centrally provisioned and configured through the APIC controller and to work as one device from the access-port perspective.
The APIC controller is the centralized point for provisioning and configuration, where we manage the complete Fabric configuration. In the picture above we are connected to the Web GUI of the APIC controller cluster and we see a lab environment built with two Leafs and two Spines (the minimal configuration) and two APIC controllers in a cluster (a production environment would have 3 or 5 controllers in the cluster).
The way it works is this: You connect your Nexus 9K switches in a Leaf-Spine CLOS topology and connect the APIC controllers to two of the Leafs (for redundancy, since APICs attach to Leaf ports, not to the Spines). You power on the switches and the controller and go through the first-time wizard on the APIC controller, which will ask you for a /16 infrastructure IP address pool (the TEP pool) and some other details like the number of APIC controllers in the controller cluster and the Fabric and APIC names.
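To give you a feeling for the wizard, here is a sketch of the kind of answers it collects on the console of the first controller. The values are made-up lab examples (hostnames, addresses and the fabric name will differ in your environment); the TEP pool default is 10.0.0.0/16 and the infra VLAN must not clash with anything already in your network:

```text
Fabric name:                        ACI-LAB
Number of controllers in fabric:    2
Controller ID:                      1
Controller name:                    apic1
TEP address pool:                   10.0.0.0/16
Infra VLAN ID:                      3967
Out-of-band management IP:          192.168.10.11/24   (example value)
Out-of-band default gateway:        192.168.10.1       (example value)
```

Whatever you choose here is hard to change later (the TEP pool especially), so it is worth deciding on these values before powering anything on.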
Building the ACI Fabric using LLDP discovery from the APIC
You get through the wizard and the APIC is ready for you to connect to over HTTPS. When connected, you go to the Fabric tab and under Fabric Membership you will see that the APIC (using LLDP) has discovered the first Leaf to which it is directly connected. You register that Leaf to the Fabric and give it a node ID (Leafs are usually numbered from 101-… and Spines from 201-…).
When you register your first Leaf, the APIC will automatically configure it as the first member of the Fabric. It will give it a hostname, configure the in-band management loopback, and assign another loopback address which IS-IS will advertise; IS-IS provides the underlay routing between those loopbacks, over which the overlays carry your data plane traffic. IS-IS is also configured automatically in this first-time Leaf configuration, so you never need to deal with any of those things yourself.
After the Leaf is configured, it will in turn use LLDP to discover both Spine switches to which it is directly connected (each Leaf is connected to both Spines in a CLOS topology). You continue to register the Spines as you did with the Leaf, taking care that the numbering matches your rack switch positioning (you should take note of all the serial numbers when putting the switches into the racks). Once the Spines are provisioned by the APIC, they will discover all the other Leafs to which they are connected (all the Leafs). When you register all the Leafs on the APIC, you have actually built your ACI Fabric.
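The same registration you click through in the GUI can be done against the APIC REST API by posting a fabricNodeIdentP object (the node identity policy) with the switch serial, the node ID and a name. Here is a minimal sketch of building that payload in Python; the APIC URL and serial numbers are made-up placeholders, and the actual POST (shown only in comments) would need a login session against /api/aaaLogin.json first:

```python
import json

APIC_URL = "https://apic.example.com"  # hypothetical lab address, not a real APIC


def node_registration_payload(serial, node_id, name):
    """Build the JSON body that registers a discovered switch by creating
    a fabricNodeIdentP object under the fabric node identity policy."""
    return {
        "fabricNodeIdentP": {
            "attributes": {
                "serial": serial,        # serial number shown under Fabric Membership
                "nodeId": str(node_id),  # 101+ for Leafs, 201+ for Spines
                "name": name,            # becomes the switch hostname
            }
        }
    }


# Example: register the first Leaf and the first Spine of the lab fabric.
# The serial numbers below are placeholders, not real devices.
leaf = node_registration_payload("SAL1817XXXX", 101, "Leaf-101")
spine = node_registration_payload("SAL1819YYYY", 201, "Spine-201")

# The real call would be an authenticated POST such as:
#   POST {APIC_URL}/api/node/mo/uni/controller/nodeidentpol.json
# with json.dumps(leaf) as the request body.
print(json.dumps(leaf, indent=2))
```

Scripting this is how people typically bring up larger fabrics: a loop over a spreadsheet of serial numbers registers all nodes in one go, which also enforces the rack-position numbering mentioned above.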
Now you have a big switch
At this point, the Fabric is up and running. The whole Fabric works as one big switch, with every Leaf port being an access port on that huge switch system. It is like a chassis switch with multiple line cards, but here each line card is represented by one Leaf switch. This has several advantages: there are no chassis backplane issues, and each Leaf runs its own OS and with that a separate control plane (which is good for robustness, resilience, and stability when something goes to hell on one of the devices).
Now you are able to use the APIC GUI or an SSH connection to the APIC to configure ACI through the centralized CLI. It is time to create VxLANs (bridge domains that represent L2 domains), aka overlay VLANs, set up communication (routing) between them, and apply the other security policies that ACI offers.
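As a taste of what that configuration looks like under the hood, a bridge domain is just another object posted to the APIC: a tenant (fvTenant) containing a VRF (fvCtx) and a bridge domain (fvBD) bound to that VRF, with a subnet (fvSubnet) acting as the anycast gateway. The sketch below only builds the JSON body; the tenant, VRF and BD names and the gateway address are invented lab values, and the POST target is shown in a comment:

```python
import json


def bridge_domain_payload(tenant, vrf, bd_name, gateway_cidr):
    """Build the JSON body that creates a tenant with one VRF and one
    bridge domain (an L2 domain) whose default gateway lives on the fabric."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                # The VRF (called a "context" in the object model).
                {"fvCtx": {"attributes": {"name": vrf}}},
                {
                    "fvBD": {
                        "attributes": {"name": bd_name},
                        "children": [
                            # Bind the BD to the VRF so inter-BD routing works.
                            {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
                            # Anycast gateway for the BD subnet, on every Leaf.
                            {"fvSubnet": {"attributes": {"ip": gateway_cidr}}},
                        ],
                    }
                },
            ],
        }
    }


# Example lab values: tenant "LAB", VRF "LAB-VRF", BD gatewayed at 10.1.1.1/24.
body = bridge_domain_payload("LAB", "LAB-VRF", "BD-WEB", "10.1.1.1/24")
# The real call would be an authenticated POST to {APIC_URL}/api/mo/uni.json
# with json.dumps(body) as the request body.
print(json.dumps(body, indent=2))
```

Endpoints then attach to this bridge domain through EPGs and contracts, which is exactly the security and policy side of ACI that the follow-up articles will cover.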
And it is time to make this big L3 switch work and do some bridging and routing.