Setting up Cisco ACI From Scratch

This Cisco ACI article describes the first few things you will do when ACI Fabric components arrive in your datacenter.

Cisco ACI version 3.2 was used for everything described below.

So let’s see what we have here:

Get Your Gear

In this one, we will get three APIC controllers, four Leafs and two Spines to build a simple ACI fabric, plus a few Catalyst 2960 switches for OOB management:

  • 3x APICs APIC-CLUSTER-M2 – APIC Controller Medium Configuration (Up to 1000 Edge Ports)
  • 2x Spines N9K-C9364C – Nexus 9K ACI & NX-OS Spine, 64p 40/100G QSFP28
  • 2x SFP Leafs N9K-C93180YC-EX – Nexus 9300 with 48p 10/25G SFP+ and 6p 100G QSFP28
  • 2x Copper Leafs N9K-C9348GC-FXP – Nexus 9300 with 48p 100M/1GT, 4p 10/25G & 2p 40/100G QSFP28
  • 2x Catalyst 2960 OOB management switches

You need to cable the Leafs and Spines together properly to form the CLOS topology from the image below, using 40G or 100G optics. Each Spine, Leaf and APIC controller needs to be connected to a non-ACI OOB management network. You then connect the APIC controllers redundantly to two Leafs with 10G optics and start the APIC initialization and fabric discovery.

Cable The Thing

All Spine ports are 40G/100G, so choose whichever ports you like. On the Leafs, the last six ports are 40G/100G, so use one of those towards each Spine and you have your Leaf’n’Spine topology.

ACI Fabric with APIC

Each APIC needs to be connected redundantly with two 10G optics to two Leafs, and with three 1G copper links towards the OOB switches (the left copper port with the M symbol is the CIMC connection, while Eth1-1 and Eth1-2 are the real OOB mgmt ports). Eth1-1 and Eth1-2 should be connected to two separate OOB switches in the same OOB VLAN so OOB access keeps working even if one OOB switch fails:

Connecting the APIC controller

Initializing the APIC Controllers

CIMC Config

To initialize the APIC controllers, connect an external monitor and a USB keyboard to each one. Start the box up and wait for it to boot. First, hit F8 to enter and configure the CIMC (Cisco's ILO equivalent) mgmt interface. CIMC can later be used to reboot, manage and access the console of that UCS-based APIC controller server:

This is where you hit F8 to enter CIMC config:

Enter CIMC config

And here you disable DHCP and enter a static CIMC mgmt IP and gateway (DHCP is enabled by default):

CIMC config

APIC Config

After this, you hit save, wait about 45 seconds for the config to be applied, then exit and reboot the box. After the reboot, the APIC will boot into the APIC Initial Setup Wizard:

For this wizard you need to prepare:

  • One subnet of at least /22 for VTEP addresses
  • One multicast address range (GIPO); it is best to leave the default 225.0.0.0/15
  • OOB mgmt IP addresses; for a small ACI deployment a /24 subnet will be fine
  • An infrastructure VLAN ID; it is best to use 3967 as suggested by Cisco (as long as it is not used anywhere else in your network)

Here is how you fill it in:

Cluster configuration ...
  Enter the fabric name [ACI Fabric1]: MyACI
  Enter the fabric ID (1-128) [1]: 1
  Enter the number of active controllers in the fabric (1-9) [3]: 3
  Enter the POD ID (1-9) [1]: 1
  Is this a standby controller? [NO]: NO
  Enter the controller ID (1-3) [1]: 1
  Enter the controller name [apic1]: MyAPIC1
  Enter address pool for TEP addresses [10.0.0.0/16]: 10.0.0.0/22
  Note: The infra VLAN ID should not be used elsewhere in your environment
        and should not overlap with any other reserved VLANs on other platforms.
  Enter the VLAN ID for infra network (2-4094): 3967
  Enter address pool for BD multicast addresses (GIPO) [225.0.0.0/15]: 225.0.0.0/15

Out-of-band management configuration ...
  Enable IPv6 for Out of Band Mgmt Interface? [N]:
  Enter the IPv4 address [192.168.10.1/24]: 10.10.10.101/24
  Enter the IPv4 address of the default gateway [None]: 10.10.10.1
  Enter the interface speed/duplex mode [auto]:

admin user configuration ...
  Enable strong passwords? [Y]:
  Enter the password for admin:

  Reenter the password for admin:

admin user configuration ...
  Strong Passwords: Y
  User name: admin
  Password: ********

The above configuration will be applied ...

Warning: TEP address pool, Infra VLAN ID and Multicast address pool
         cannot be changed later, these are permanent until the
         fabric is wiped.

Would you like to edit the configuration? (y/n) [n]: n

And that’s it. You need to make the same config on all three controllers, changing only the OOB mgmt IP, the controller ID and the APIC name; everything else needs to be entered identically in order to successfully form a working APIC cluster.

 

Initial ACI Fabric Configuration

Fabric Discovery

Once all the steps above are done on all three APICs, you can open a browser, access the first APIC's OOB mgmt IP over HTTPS and start the ACI Fabric configuration.

Access ACI Config

First things first, we need to initiate Fabric discovery and Leaf and Spine switch registration. After each Leaf and Spine is registered, the APIC cluster pushes the underlay routing configuration to it, effectively creating a working ACI fabric able to route the VxLAN overlay network across it.

When you open it for the first time, the APIC will have discovered (using LLDP) the first Leaf to which it is directly connected and will show only that Leaf in the Fabric Membership tab:

Right-click and select Register to give the Leaf its ID and name, with Leaf IDs starting from 101:

ACI Fabric Discovery Start

After that, it will take a minute for the APIC to generate and push the config to that Leaf. The Leaf will then be able to find both Spine switches because it is directly connected to them. We need to register them as well, giving them IDs starting from 201.

After the Spines are configured, they will discover all the remaining Leafs, so we can register those too and finish the Fabric Discovery.

ACI Fabric Discovery Finished

During this registration process, take care that you know which device is placed where. Collect the serial numbers beforehand so you can register each switch with the proper name and ID. The serials here are hidden for obvious reasons 😉
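If you prefer scripting over right-clicking, switch registration can also be done with a REST POST against the APIC. Below is a minimal Python sketch; the APIC address, admin password, switch serial number, node ID and name are placeholder values you would replace with your own:

import requests

APIC = "https://10.10.10.101"   # first APIC's OOB mgmt IP from the wizard above
apic = requests.Session()
apic.verify = False             # lab only: the APIC ships with a self-signed certificate

# Log in; the APIC-cookie is kept in the session for the following requests
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Register a discovered switch by serial number, assigning its node ID and name
payload = {
    "fabricNodeIdentP": {
        "attributes": {
            "serial": "FDO12345678",   # serial shown in the Fabric Membership tab
            "nodeId": "101",           # Leafs start at 101, Spines at 201
            "name": "Leaf101",
        }
    }
}
r = apic.post(f"{APIC}/api/mo/uni/controller/nodeidentpol.json", json=payload)
r.raise_for_status()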

 

Basic Fabric Configuration To Start Bridging and Routing

Cisco ACI configuration uses a policy model; all the objects below need to be preconfigured before you can start configuring interfaces into TRUNK/ACCESS VLANs:

ACI Policy Config Model

Here we go, the real config

VLAN POOL

I will create a pool of VLANs which will be used as encap VLANs on the Leaf access ports. For simplicity, I'm just picking all the VLANs there are:

Fabric -> Access Policies -> Pools -> right-click on “VLAN” -> Create VLAN Pool

Select static allocation and enter a range, or add one VLAN at a time…

ACI VLAN POOL CONFIG
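The same static VLAN pool can also be pushed through the APIC REST API. Here is a rough Python sketch; the pool name StaticVLANs and the 1–4094 range are just example values:

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Static VLAN pool with a single encap block covering all VLANs
payload = {
    "fvnsVlanInstP": {
        "attributes": {"name": "StaticVLANs", "allocMode": "static"},
        "children": [
            {"fvnsEncapBlk": {"attributes": {
                "from": "vlan-1", "to": "vlan-4094", "allocMode": "static"}}}
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/infra.json", json=payload)
r.raise_for_status()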

DOMAIN

After the VLAN pool is created, we configure the physical domain which will use that VLAN pool. There is already a “phys” domain, so we just configure it to use the new pool:

ACI DOMAIN CONFIG
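A REST equivalent of pointing the existing phys domain at that pool might look roughly like this (reusing the assumed StaticVLANs pool name from the previous sketch):

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Attach the static VLAN pool to the existing "phys" physical domain
payload = {
    "physDomP": {
        "attributes": {"name": "phys"},
        "children": [
            {"infraRsVlanNs": {"attributes": {
                "tDn": "uni/infra/vlanns-[StaticVLANs]-static"}}}
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni.json", json=payload)
r.raise_for_status()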

AAEP

The domain is now ready to be used inside an AAEP (Attachable Access Entity Profile). Don't ask; this AAEP, together with the domain related to it, is referenced by every EPG later on. That is how the interface configuration gets built and pushed to the switch. I see the AAEP as a pivot, a policy-connecting object, or maybe as a primary key in a relational database, since the ACI config seems to be structured just like one.

Create an AAEP named AAEP and add the phys domain into it:

ACI AAEP CONFIG

ACI AAEP POOL SELECT
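For reference, a hedged REST sketch of the same AAEP creation, with the phys domain attached, could look like this:

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# AAEP named "AAEP" referencing the phys domain
payload = {
    "infraAttEntityP": {
        "attributes": {"name": "AAEP"},
        "children": [
            {"infraRsDomP": {"attributes": {"tDn": "uni/phys-phys"}}}
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/infra.json", json=payload)
r.raise_for_status()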

Leaf Interface Profiles

Now we create Leaf Interface Profiles, objects that are created only once and represent the interfaces of each Leaf. Later, when you need a new port configured on ACI, you will just add an Interface Selector inside one of those Leaf Interface Profiles.

It looks like this, configured for the first two optical Leafs, plus one more for those same Leafs to be used when a vPC pair of the two is needed:

Leaf Interface Profile
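Scripted, an empty Leaf Interface Profile is just one small object; the profile name Leaf101_IntProf below is an assumed example (interface selectors get added into it later):

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Interface profile for Leaf101; interface selectors will be added into it later
payload = {"infraAccPortP": {"attributes": {"name": "Leaf101_IntProf"}}}
r = apic.post(f"{APIC}/api/mo/uni/infra.json", json=payload)
r.raise_for_status()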

Interface Policy Group

After the Interface Profiles we need to configure Interface Policy Groups. These are configured once for each type of single-port configuration and once for each vPC configuration (each vPC config needs its own ID, so it cannot be reused). Note that the most important thing to configure here is the AAEP at the end, because without it none of the other config done here will be pushed to the Leaf:

Every other policy, like the Link Level Policy (static interface speed), CDP ON or CDP OFF, LLDP ON or LLDP OFF, is also created once and then reused in other Leaf Port Policy Groups. Here I created an Access Port Policy Group for 10G interfaces and another one for 1G interfaces:

Leaf Access Port Policy Group
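Here is a rough REST sketch of such an access port Policy Group. It reuses the 10G-CDPON-LLDPON name that shows up in the CLI output at the end of this article and assumes CDP, LLDP and link-level policies named CDP_ON, LLDP_ON and 10G were already created:

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Access port policy group tying the interface policies and the AAEP together
payload = {
    "infraAccPortGrp": {
        "attributes": {"name": "10G-CDPON-LLDPON"},
        "children": [
            {"infraRsAttEntP": {"attributes": {"tDn": "uni/infra/attentp-AAEP"}}},
            {"infraRsCdpIfPol": {"attributes": {"tnCdpIfPolName": "CDP_ON"}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "LLDP_ON"}}},
            {"infraRsHIfPol": {"attributes": {"tnFabricHIfPolName": "10G"}}},
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/infra/funcprof.json", json=payload)
r.raise_for_status()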

vPC Domain

When configuring vPC interface teaming, you first need a vPC domain configured, which is done here for each pair of vPC Leafs:

vPC domain config
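Under the hood this vPC domain is an explicit vPC protection group. A hedged REST sketch for Leafs 101 and 102 (the group name and ID are example values) might look like this:

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Explicit vPC protection group ("vPC domain") pairing Leafs 101 and 102
payload = {
    "fabricExplicitGEp": {
        "attributes": {"name": "vPC_101_102", "id": "101"},
        "children": [
            {"fabricNodePEp": {"attributes": {"id": "101"}}},
            {"fabricNodePEp": {"attributes": {"id": "102"}}},
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/fabric/protpol.json", json=payload)
r.raise_for_status()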

vPC Interface with LACP

After you define the vPC domain, you can go back and configure the vPC Interface Policy Group. Remember, this one is created separately for each vPC port pair and cannot be reused later for other ports in another vPC config.

The thing to note here is that you need a Port Channel Policy inside this one; everything else is the same as for a normal access port Policy Group:

vPC config
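A REST sketch of a vPC Interface Policy Group follows; the name vPC_Server1 and the LACP policy name LACP_Active are assumptions (the LACP policy would have to exist already), and lagT set to node is what makes it a vPC rather than a plain port-channel:

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# vPC interface policy group: same idea as the access one, plus an LACP policy
payload = {
    "infraAccBndlGrp": {
        "attributes": {"name": "vPC_Server1", "lagT": "node"},
        "children": [
            {"infraRsAttEntP": {"attributes": {"tDn": "uni/infra/attentp-AAEP"}}},
            {"infraRsLacpPol": {"attributes": {"tnLacpLagPolName": "LACP_Active"}}},
            {"infraRsCdpIfPol": {"attributes": {"tnCdpIfPolName": "CDP_ON"}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "LLDP_ON"}}},
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/infra/funcprof.json", json=payload)
r.raise_for_status()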

Leaf Switch Profiles

Now we are ready to create Leaf Switch Profiles (switch selectors), objects that are created only once, represent the Leafs, and act as placeholders for the Leaf interface configuration.

I created one for each of the first Leafs and one for the first vPC Leaf pair, adding to each of them the Leaf ID and the Interface Profile created above.

It looks like this:

Leaf Switch Profile
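The same Switch Profile for Leaf 101, tied to its Interface Profile, could be sketched over the REST API like this (the profile and selector names are example values, and the interface profile name matches the earlier sketch):

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Switch profile selecting node 101 and referencing its interface profile
payload = {
    "infraNodeP": {
        "attributes": {"name": "Leaf101_SwProf"},
        "children": [
            {"infraLeafS": {
                "attributes": {"name": "Leaf101", "type": "range"},
                "children": [
                    {"infraNodeBlk": {"attributes": {
                        "name": "L101", "from_": "101", "to_": "101"}}}
                ],
            }},
            {"infraRsAccPortP": {"attributes": {
                "tDn": "uni/infra/accportprof-Leaf101_IntProf"}}},
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/infra.json", json=payload)
r.raise_for_status()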

Configuring Our First Leaf Trunk Interface

Now we are all set to configure our first Leaf port as a 10G optical port with CDP on, LLDP on and the speed set to 10G. We just open the Leaf Interface Profile of Leaf101 and add a Port1 interface selector for port 1/1 with the 10G access port Interface Policy Group. The port will become active as soon as we map the first EPG to it:

ACI Port Config
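The same interface selector, added into the Leaf101 Interface Profile and bound to the 10G access Policy Group, could be sketched via REST like this (the profile and selector names reuse the assumed examples from above):

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Interface selector Port1 covering eth1/1, bound to the 10G access policy group
payload = {
    "infraHPortS": {
        "attributes": {"name": "Port1", "type": "range"},
        "children": [
            {"infraPortBlk": {"attributes": {
                "name": "block1", "fromCard": "1", "toCard": "1",
                "fromPort": "1", "toPort": "1"}}},
            {"infraRsAccBaseGrp": {"attributes": {
                "tDn": "uni/infra/funcprof/accportgrp-10G-CDPON-LLDPON"}}},
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/infra/accportprof-Leaf101_IntProf.json",
              json=payload)
r.raise_for_status()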

To get the configuration pushed from the APIC to that port, we still have a lot to do. We need to create the ACI Application Policy, which defines the port-to-EPG membership and the VLANs that are allowed across that trunk port:

ACI Application Policy – aka – switchport mode trunk, switchport trunk allowed VLAN 10

An EPG is an Endpoint Group which represents a group of endpoints (a VM on a hypervisor connected to an ACI Leaf, or a bare-metal server connected the same way). Endpoints placed in the same EPG are allowed to communicate between themselves. For two endpoints from two different EPGs to communicate, those two EPGs need to be connected with a contract which allows some IP/TCP/UDP/ICMP or other communication between them.

To get some endpoints mapped into EPGs, we need the port configuration (all the config above) plus the Application Policy config below.

Let's look at the Policy Model again:

ACI Policy Config Model

In this model, with the configuration described above in the article, we configured everything except the EPG on the bottom left.

For that EPG box to be configured we need a few more objects that act as containers for the EPG; here is what you need:

  • We need a Tenant configured
  • We need a VRF configured in that Tenant
  • We need a BD configured in that VRF
  • We need at least one EPG configured for each BD
  • We need to add the phys domain to the EPG
  • We need to statically map each EPG to the ports where its VLAN encap should be allowed – the ports where the endpoints we want to map into that EPG are connected

Okay:

Creating a Tenant

Creating the tenant:

ACI Add Tenant

Creating a VRF

Creating the VRF in that tenant:

ACI Creating GRT VRF

Creating a Bridge Domain (BD)

Create the first Bridge Domain, representing a VLAN-like L2 broadcast domain. Take care to give it a name and select the proper VRF in which it will reside, in our case the only VRF, GRT (global routing table):

ACI create BD
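If you prefer to script these last three objects, the Tenant, VRF and BD can be created in one nested REST POST. The tenant name HP matches the CLI output later in the article; the BD name BD_VLAN10 is just an example:

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Tenant "HP" with VRF "GRT" and Bridge Domain "BD_VLAN10" bound to that VRF
payload = {
    "fvTenant": {
        "attributes": {"name": "HP"},
        "children": [
            {"fvCtx": {"attributes": {"name": "GRT"}}},
            {"fvBD": {
                "attributes": {"name": "BD_VLAN10"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "GRT"}}}
                ],
            }},
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni.json", json=payload)
r.raise_for_status()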

More About Bridge Domain and EPG

About that BD (Bridge Domain), there are a few things to note before we continue with our App Profile creation…

You can look at it as a VLAN from our legacy networks. Some concepts change: in more complicated configurations you can have several EPGs configured inside one BD, so the BD, while still being a single L2 domain, provides the means to limit communication between members of that same BD that are placed in different EPGs. It is something like private VLANs, but with the option to define a kind of access list and let some traffic flow between them and some not (all inside the same L2 domain). Strange, but it's a way to create a microsegmented configuration later on.

A Bridge Domain created as described above is a pure L2 bridge domain and will work like a normal L2 VLAN. If you want some traffic to be routed from one BD to another, you configure an IP address (subnet and gateway) for that BD in the same place, effectively creating a VLAN interface. With at least one EPG in each BD and a Contract between them that allows some IP traffic, you have used ACI to build an L3 switch with ACLs on the VLAN interfaces.
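As a rough illustration, adding such a gateway subnet to an existing BD over the REST API could look like the sketch below (the gateway address and the BD/tenant names are example values; the scope attribute controls whether the subnet is advertised outside the VRF):

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# Gateway subnet on the BD, effectively turning it into a routed "VLAN interface"
payload = {"fvSubnet": {"attributes": {"ip": "10.10.20.1/24", "scope": "private"}}}
r = apic.post(f"{APIC}/api/mo/uni/tn-HP/BD-BD_VLAN10.json", json=payload)
r.raise_for_status()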

Creating App Profile

Create an App Profile, our first container for the ACI security policy and the port-to-EPG mapping configuration:

ACI APP Profile

Creating EPG

In that App Policy, we add our first Endpoint Group (EPG), in our case representing all endpoints in VLAN10.

Take care to give it a name and select the correct, above created, bridge domain:

ACI Creating EPG

Add Domain To EPG

The EPG needs a domain association in order to pull in all the interface and Leaf configuration done above in the Leaf and interface selector profiles, so it knows how to physically configure the interfaces that will be mapped to that EPG later on:

Add Domain Association

Static Port Map To EPG

After the domain association, we can continue and create our first interface-to-EPG mapping (Static Ports), which will effectively take the whole interface configuration from the ACI policy model, plus the Application Policy and the encap VLAN ID, and push it to the Leaf interface:

Port Static Map to EPG VLAN
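The whole Application Policy part (App Profile, EPG bound to the BD, the phys domain association and the static port mapping with encap VLAN 10) can also be pushed as one nested REST POST. A sketch follows, using the tenant HP, application App and EPG VLAN10 names that appear in the CLI output below; the BD name and the pod/node/port path are example values:

import requests

APIC = "https://10.10.10.101"
apic = requests.Session()
apic.verify = False   # lab only
apic.post(f"{APIC}/api/aaaLogin.json",
          json={"aaaUser": {"attributes": {"name": "admin", "pwd": "MySecretPass"}}})

# App Profile "App" with EPG "VLAN10": BD binding, phys domain association and a
# static path mapping VLAN 10 as a tagged ("regular") encap on Leaf 101 eth1/1
payload = {
    "fvAp": {
        "attributes": {"name": "App"},
        "children": [
            {"fvAEPg": {
                "attributes": {"name": "VLAN10"},
                "children": [
                    {"fvRsBd": {"attributes": {"tnFvBDName": "BD_VLAN10"}}},
                    {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-phys"}}},
                    {"fvRsPathAtt": {"attributes": {
                        "tDn": "topology/pod-1/paths-101/pathep-[eth1/1]",
                        "encap": "vlan-10",
                        "mode": "regular"}}},
                ],
            }}
        ],
    }
}
r = apic.post(f"{APIC}/api/mo/uni/tn-HP.json", json=payload)
r.raise_for_status()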

If you then go to the APIC CLI, you can check what the configuration pushed to Leaf 101 port 1/1 looks like:

APICVG1# conf t
APICVG1(config)# leaf 101
APICVG1(config-leaf)# interface eth 1/1
APICVG1(config-leaf-if)# show runn
# Command: show running-config leaf 101 interface ethernet 1 / 1
# Time: Mon Mar 11 07:32:29 2019
  leaf 101 
    interface ethernet 1/1
      # policy-group 10G-CDPON-LLDPON
      switchport trunk allowed vlan 10 tenant HP application App epg VLAN10 
      exit
    exit
APICVG1(config-leaf-if)# 

 

Summary

By creating more BDs and their EPGs and mapping them to Leaf interfaces through the Static Ports configuration on each EPG, you are effectively configuring more switchport trunk allowed vlan X statements on those interfaces.

You now have your ACI fabric configured as an L2 switch. By selecting more interfaces in the Leaf interface selectors and mapping them to EPGs, you can create a working L2 and L3 configuration of the ACI Fabric. Just remember:

  • Traffic inside the same EPG is allowed by default and is bridged across the fabric overlay
  • Traffic to be routed between two EPGs that belong to different BDs needs contracts applied to those EPGs in the Application Policy (that's for another article)
  • Traffic to be bridged between two EPGs that belong to the same BD also needs contracts applied to those EPGs in the Application Policy

You can create a lot of those mappings and port selectors much faster using the APIC CLI, API calls or JSON POSTs. But that is also a topic for another article at some point soon enough.

This was the basic one, just to get things up and running. Many of the steps in this article need to be done only once at the beginning, so don't worry too much about it looking overcomplicated. The really complicated stuff in ACI lives at Layers 3, 4 and above.

Stay tuned!

 
