Tag: configuration

Juniper SRX Cluster Failover Tuning

If you check the Juniper configuration guide for SRX firewall clustering, you will find a default example of redundancy-group weight values, which is fine if you have one uplink towards the outside and multiple inside interfaces on that firewall.

set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-5/0/5 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-5/0/4 weight 255

This is the one: https://www.juniper.net/documentation/en_US/junos/topics/topic-map/security-chassis-cluster-verification.html

But if!

If you get into a situation where you have multiple outside interfaces giving you Internet or WAN access redundancy, then you probably don't want a failover to the secondary SRX box to occur when you lose just one of those uplinks. If that's the case, follow this article to get your SRX cluster to behave as it should.
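A minimal sketch of the idea, assuming the two uplinks are ge-0/0/4 and ge-0/0/5 on node 0 (with ge-5/0/4 and ge-5/0/5 as their node 1 counterparts): every redundancy group starts with a failover threshold of 255, each monitored interface that goes down subtracts its weight from that threshold, and failover is triggered only when the threshold reaches 0. Giving each uplink a weight of 128 means a single uplink failure (255 - 128 = 127) does not cause a failover, while losing both uplinks on the active node does.

set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 128
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 128
set chassis cluster redundancy-group 1 interface-monitor ge-5/0/4 weight 128
set chassis cluster redundancy-group 1 interface-monitor ge-5/0/5 weight 128

The interface-monitor weights, not the node priorities, decide when a data-plane failover happens, so any inside interfaces you monitor can keep weight 255 if you still want an immediate failover when one of them fails.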

Juniper SRX cluster failover

ACI MultiPod – Enable Standby APIC

APIC Controller Cluster

You actually need three APIC controller servers to get the cluster up and running in a complete and redundant ACI system. You can, however, work with only two APICs: you will still have cluster quorum and will be able to change the ACI Fabric configuration.

Losing One Site

In MultiPod, those three controllers need to be distributed so that one of them is placed in the secondary site. The idea is that you still keep your configuration on the one remaining APIC even if you completely lose the primary site with its two APICs. On the other hand, if you lose the secondary site, the two APICs in the first site will still let you configure and manage the ACI Fabric as if nothing happened.
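To keep an eye on the cluster state in scenarios like these, you can check it in the APIC GUI under System > Controllers or from an APIC shell with the acidiag utility. A short sketch, with the output left out:

# List the APIC appliance vector; every controller in the cluster should be present
# and healthy (the GUI shows a healthy cluster state as Fully Fit)
acidiag avread

# List the switch nodes registered in the fabric and their current state
acidiag fnvread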

Losing DCI Interconnect

The second type of MultiPod failure is when you lose the DCI (datacenter interconnect). In that case, both sites will keep working but will alert that the other side is down. The secondary site, with one APIC, will be in read-only mode, while the primary site will be fully functional with the two remaining APICs. If changes are made on the primary site, they will be replicated to the third controller on the secondary site once the DCI recovers, and the configuration relating to the site B Pod will then be pushed to the Pod 2 fabric.

Please note that temporary DCI issues are not a good time to replace an APIC; just wait for the DCI to start working normally and keep using the APIC controllers as before the issues. You will still be able to manage the primary site while the DCI is down, and once it comes back up the changes will be replicated to the secondary site APIC and its fabric. The secondary site keeps forwarding during the outage but cannot be configured, so the safest option is to avoid changing the fabric configuration on either side until the DCI is back up. That way you can be sure your changes will not affect MultiPod stability once the sites start communicating again.

Setting up Cisco ACI From Scratch

This Cisco ACI article describes the first few things you will do when you get the ACI Fabric components into your datacenter.

Cisco ACI version 3.2 was used to try out the steps described below.

So let’s see what we have here:

Get Your Gear

In this one, we will get three APIC controllers, four Leafs and two Spines to build a simple ACI fabric, plus a few Catalyst 2960 switches for OOB management:

  • 3x APICs APIC-CLUSTER-M2 – APIC Controller Medium Configuration (Up to 1000 Edge Ports)
  • 2x Spines N9K-C9364C – Nexus 9K ACI & NX-OS Spine, 64p 40/100G QSFP28
  • 2x SFP Leafs N9K-C93180YC-EX – Nexus 9300 with 48p 10/25G SFP+ and 6p 100G QSFP28
  • 2x Copper Leafs N9K-C9348GC-FXP – Nexus 9300 with 48p 100M/1GT, 4p 10/25G & 2p 40/100G QSFP28
  • 2x Catalyst 2960 OOB management switches

You need to cable the Leafs and Spines together properly with 40G or 100G optics to form the CLOS topology from the image below. Each Spine, Leaf and APIC controller needs to be connected to the non-ACI OOB management network. You then need to connect each APIC controller redundantly to two Leafs with 10G optics and start the APIC initialization and fabric discovery.

Cable The Thing

All Spine ports are 40G/100G, so choose your ports as you like. On the Leafs, the last six ports are 40G/100G, so use those to connect each Leaf to each Spine and you have your Leaf'n'Spine.

ACI Fabric with APIC
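Once everything is cabled and the first APIC boots, its console setup dialog asks for the basic fabric parameters before fabric discovery can start. A rough sketch of what you will be asked, with purely illustrative values (exact prompts and defaults differ between APIC versions):

Fabric name                      : ACI-FABRIC1
Fabric ID                        : 1
Number of active controllers     : 3
POD ID                           : 1
Controller ID                    : 1
Controller name                  : apic1
TEP address pool                 : 10.0.0.0/16
Infra VLAN ID                    : 3967
BD multicast address pool (GIPO) : 225.0.0.0/15
OOB management IPv4 address      : 10.10.10.11/24
OOB default gateway              : 10.10.10.1
Admin password                   : ********

The same dialog is repeated on the other two APICs with their own controller IDs, names and OOB addresses, while the fabric name, TEP pool and infra VLAN must match on all three.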

Check Point Firewall VM Disk Resize

In my case it is related to a Check Point management VM with R80.10, but you might just as well want to resize a Check Point gateway firewall hardware box or VM.

I was searching for a simple solution and found different ones that didn't work for me, so here are the steps you need to go through when you resize your Check Point VM disk in vCenter and then need to expand the partition inside the Check Point VM in order to use the additional space.

Of course, you chose too small an HDD for your VM when you created it, and now you cannot upload some hotfixes or vSEC gateway files to it because you don't have enough space.

Get to vCenter and shut down the VM.

vCenter VM Shutdown

Add more GB to the VM's disk and power it back up.

vCenter VM HDD Space Increase
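Before touching the partitions, it is worth confirming from the Gaia expert shell that the extra space is actually visible to the OS. A rough sketch, assuming the disk shows up as /dev/sda (device name and sizes are illustrative):

# Expert mode on the Check Point VM
fdisk -l /dev/sda   # the reported disk size should now include the extra GB added in vCenter
pvs; vgs; lvs       # current LVM physical volumes, volume groups and logical volumes
df -h               # filesystem usage before any expansion

From there, Gaia's menu-driven lvm_manager utility is the usual way to grow the logical volumes into the newly added space; treat that as an assumption and check Check Point's documentation for the exact procedure on your version.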

Source-Specific Multicast Configuration

In SSM (Source-Specific Multicast), things are done differently from standard multicast forwarding. SSM identifies the group of hosts receiving the same multicast stream by the group IP address and, additionally, by the unicast source IP of the stream.

This article shows how to configure Source-Specific Multicast on Cisco and Juniper equipment.

In standard multicast, forwarding is done using the group IP address, which is an address from the dedicated multicast range 224.0.0.0/4 (224.0.0.0 – 239.255.255.255), or ff00::/8 in IPv6. Each multicast group address is a single address that identifies all hosts receiving a specific stream sent towards that group address from the multicast source. In standard multicast, anybody can start streaming to some multicast group address and in that way become the multicast source.
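As a preview of the configuration part, here is a minimal Cisco IOS sketch (the interface name is an assumption, and the default SSM range 232.0.0.0/8 is used); the receiver-facing interface needs IGMPv3 so hosts can join a specific (S,G) pair instead of just a group:

ip multicast-routing
ip pim ssm default
!
interface GigabitEthernet0/1
 description receiver-facing interface, illustrative
 ip pim sparse-mode
 ip igmp version 3

On the Juniper side, the equivalent is roughly enabling PIM sparse mode and IGMP version 3 on the receiver-facing interface, since Junos also reserves 232.0.0.0/8 as the default SSM range.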