Juniper vMX Multicast Configuration

I’m fairly new to Juniper CLI. For one of my first tries, I decided to make my life difficult by starting with multicast configuration on virtual vMX routers running as VMs on VMware ESXi.

It took a lot of investigation to figure out some parts of this configuration, especially the tunnel interface part which you will see below. I decided to put it all in one place, with an explanation of every step, because Juniper documentation tends to assume that you know more than I did. If that is your case too, this short write-up is for you.

Here’s how the topology looks. I have 8 routers in this topology, with the plan to source multicast streams from right to left, from PC 10.10.99.11 towards PC 10.10.98.11.

Juniper vMX topology

Configuration

The example shows the configuration of Juniper routers R4 and R5, R4 being the RP for the multicast PIM configuration. All other routers are configured in exactly the same way as R5.

Juniper vMX part of topology

R4 – R5 Interconnection

Initial Juniper Configuration

Before we start with IP addressing and routing, a few things need to be configured on every device as part of the initial configuration. It is simple stuff like the router hostname, root password, the ability for root to log in over SSH (fine in a lab, not in real life), etc.

 

R4:

configure
set system host-name R4
set system root-authentication plain-text-password
(you will be prompted to type and confirm the root password, e.g. mypass123)
set system services ssh root-login allow
set system login user myusername class super-user
set system login user myusername authentication plain-text-password
(you will be prompted to type and confirm the user password)

commit

R5:

configure
set system host-name R5
set system root-authentication plain-text-password
set system services ssh root-login allow
set system login user myusername class super-user
set system login user myusername authentication plain-text-password

commit
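A quick sanity check that the commit went through, here shown on R4, is to read back the relevant parts of the configuration:

root@R4# run show configuration system host-name 
host-name R4;

[edit]
root@R4# run show configuration system services ssh 
root-login allow;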

Interface Configuration – IPv4 addressing

The subnet between R4 and R5 is 10.10.4.0/24; R4 takes the first address and R5 the second. In this topology, interfaces are configured so that the router on the left always uses the first address and the one on the right the second.

R4:

[edit] 
root@R4#
set interfaces ge-0/0/0 unit 0 family inet address 10.10.3.2/24
set interfaces ge-0/0/1 unit 0 family inet address 10.10.4.1/24
set interfaces fxp0 unit 0 family inet address 192.168.11.4/24
commit

R5:

[edit] 
root@R5#
set interfaces ge-0/0/0 unit 0 family inet address 10.10.4.2/24
set interfaces ge-0/0/1 unit 0 family inet address 10.10.7.2/24
set interfaces ge-0/0/2 unit 0 family inet address 10.10.9.1/24
set interfaces fxp0 unit 0 family inet address 192.168.11.5/24
commit

Verification:

We are listing only the Gigabit Ethernet interfaces because those are the ones we are configuring for interconnection with the other routers in the topology.

You can see that I am prepending “run” to the show commands. Show commands are normally used from operational mode (prompt: root@R4>). If you want to use the same commands from configuration mode (prompt: root@R4#), you need to prefix them with “run”.
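For example, these two are the same command, just issued from different modes:

root@R4> show interfaces terse ge-0/0/1

[edit]
root@R4# run show interfaces terse ge-0/0/1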

root@R4# run show interfaces terse ge-0/0/*      
Interface               Admin Link Proto    Local         Remote
ge-0/0/0                up    up
ge-0/0/0.0              up    up   inet     10.10.3.2/24    
                                   multiservice
ge-0/0/1                up    up
ge-0/0/1.0              up    up   inet     10.10.4.1/24    
                                   multiservice
ge-0/0/2                up    down
ge-0/0/3                up    down
ge-0/0/4                up    down
ge-0/0/5                up    down
ge-0/0/6                up    down
ge-0/0/7                up    down
ge-0/0/8                up    down
ge-0/0/9                up    down


Interface fxp0 is something like out-of-band management. In our VMware setup it is the interface connected to a separate vSwitch which has connectivity to outside world. In this way we can use it to access our routers with ssh directly from local LAN network.

[edit]
root@R4# run show interfaces terse fxp0 
Interface               Admin Link Proto    Local           Remote
fxp0                    up    up
fxp0.0                  up    up   inet     192.168.11.4/24 
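So, assuming your machine can reach the 192.168.11.0/24 management network, getting into R4 is just an SSH to its fxp0 address with the user created earlier:

ssh myusername@192.168.11.4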

A static route is also needed to get MGMT access to the routers in the topology. It is needed because my PC is not directly on the 192.168.11.0/24 network, which in my case is the VMware VM management network.

set routing-options static route 192.168.121.0/24 next-hop 192.168.11.1
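After committing, the route should appear as a static route pointing towards 192.168.11.1 out of fxp0 (the 192.168.121.0/24 prefix here is just my PC-side network, yours will differ):

root@R4# run show route 192.168.121.0/24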

IGP Routing Protocol – OSPF Configuration

OSPF was selected as the IGP in this case, so it is configured on all 8 routers with a configuration like the one below.

R4:

set routing-options router-id 4.0.0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
commit

R5:

set routing-options router-id 5.0.0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
set protocols ospf area 0.0.0.0 interface ge-0/0/2.0
commit

Verification:

Check your OSPF neighbors:

root@R4# run show ospf neighbor    
Address          Interface              State     ID               Pri  Dead
10.10.3.1        ge-0/0/0.0             Full      3.0.0.0          128    36
10.10.4.2        ge-0/0/1.0             Full      5.0.0.0          128    38

If the neighbors are there, check whether they exchanged the routes they know about:

[edit]
root@R4# run show ospf route       
Topology default Route Table:

Prefix             Path  Route      NH       Metric NextHop       Nexthop      
                   Type  Type       Type            Interface     Address/LSP
3.0.0.0            Intra Router     IP            1 ge-0/0/0.0    10.10.3.1
5.0.0.0            Intra Router     IP            1 ge-0/0/1.0    10.10.4.2
6.0.0.0            Intra Router     IP            2 ge-0/0/0.0    10.10.3.1
7.0.0.0            Intra Router     IP            2 ge-0/0/1.0    10.10.4.2
8.0.0.0            Intra Router     IP            2 ge-0/0/1.0    10.10.4.2
10.10.2.0/24       Intra Network    IP            2 ge-0/0/0.0    10.10.3.1
10.10.3.0/24       Intra Network    IP            1 ge-0/0/0.0
10.10.4.0/24       Intra Network    IP            1 ge-0/0/1.0
10.10.5.0/24       Intra Network    IP            3 ge-0/0/0.0    10.10.3.1
10.10.6.0/24       Intra Network    IP            3 ge-0/0/0.0    10.10.3.1
                                                    ge-0/0/1.0    10.10.4.2
10.10.7.0/24       Intra Network    IP            2 ge-0/0/1.0    10.10.4.2
10.10.8.0/24       Intra Network    IP            2 ge-0/0/0.0    10.10.3.1
10.10.9.0/24       Intra Network    IP            2 ge-0/0/1.0    10.10.4.2
10.10.44.44/32     Intra Network    IP            0 lo0.0
10.10.99.0/24      Intra Network    IP            3 ge-0/0/1.0    10.10.4.2

[edit]
root@R4#
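One note on the output above: the 10.10.44.44/32 entry via lo0.0 is R4’s loopback, which I did not include in the interface snippets earlier. If you want the same entry to show up, the loopback needs an address and has to be part of OSPF, something along these lines (passive is my choice here, not a requirement):

set interfaces lo0 unit 0 family inet address 10.10.44.44/32
set protocols ospf area 0.0.0.0 interface lo0.0 passive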

Be sure that you do not enable the routing protocol on the MGMT interface (fxp0) of your virtual routers. You don’t want to start sending OSPF (or any other) control traffic from your lab out of VMware into your real network.
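A quick way to double-check is to list the interfaces OSPF is actually running on; fxp0.0 must not appear in the output:

root@R4# run show ospf interface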

Multicast Routing – PIM Configuration & Static RP

PIM – Protocol Independent Multicast – enables multicast traffic routing. We are enabling PIM sparse mode with a static rendezvous point (RP) configuration. R4 will be our RP in this topology.

The router configured to be the root of the shared tree is called the RP – rendezvous point. Multicast packets from the stream source and join messages from devices wanting to receive that stream “rendezvous” at this router, the RP.

R4:

set protocols pim rp local family inet address 10.10.3.2
set protocols pim interface ge-0/0/0.0 mode sparse
set protocols pim interface ge-0/0/1.0 mode sparse

set chassis fpc 0 pic 0 tunnel-services bandwidth 1g

R5:

set protocols pim rp static address 10.10.3.2
set protocols pim interface ge-0/0/0.0 mode sparse
set protocols pim interface ge-0/0/1.0 mode sparse
set protocols pim interface ge-0/0/2.0 mode sparse

set chassis fpc 0 pic 0 tunnel-services bandwidth 1g

Please note that we configured tunnel-services on each router because the Juniper MX series needs tunnel services to run PIM sparse mode (register encapsulation and de-encapsulation happen on tunnel interfaces). There are other tunnel configuration requirements if you have a hardware MX Series router or some other Juniper series, but here this is enough.
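You can check that the tunnel interfaces actually came up after the commit; with tunnel-services enabled on fpc 0 pic 0 you should see interfaces such as pe-0/0/0, pd-0/0/0 and vt-0/0/0 appear, for example:

root@R4# run show interfaces terse | match "pe-|pd-|vt-"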

Verification:

show pim join
show pim join extensive
show multicast route
show multicast route extensive
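
A few more standard PIM commands that I found handy for checking that the neighbors are up and the RP is known:

show pim neighbors
show pim interfaces
show pim rps extensive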

Testing Multicast

The best way to test multicast traffic is to send a multicast stream from PC 10.10.99.11 towards the PC on the left, 10.10.98.11, using VLC media player to source the multicast on one side and receive it on the other. I wanted to be sure the whole thing really works with multicast, so I additionally used the Singlewire multicast tester app, which sends and receives a multicast stream across the network.
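For reference, the VLC part can be done from the command line as well, roughly like the lines below (the 239.255.0.1 group, port 5004 and the file name are just the values I picked, use your own; the TTL has to be high enough to cross all the hops in the topology):

Sender (PC 10.10.99.11):

vlc test.mp4 --sout "#rtp{mux=ts,dst=239.255.0.1,port=5004}" --ttl 10

Receiver (PC 10.10.98.11):

vlc rtp://@239.255.0.1:5004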

All the steps done in testing are listed in the follow-up ->
