I’m fairly new to the Juniper CLI. For one of my first tries, I decided to make my life difficult by starting with multicast configuration on Juniper vMX virtual routers running as VMs on VMware ESXi.
Some parts of this configuration took a lot of investigation, especially the tunnel interfaces you will see below. I decided to put it all in one place, with an explanation of every step, because Juniper documentation tends to assume you already know more than I did. If that sounds familiar, this short walkthrough is for you.
Here’s what the topology looks like: eight routers, with the plan to source multicast streams from right to left, from PC 10.10.99.11 towards PC 10.10.98.11.
The example shows the configuration on routers R4 and R5, since R4 is the RP in the PIM configuration. All other routers are configured exactly like R5.
Initial Juniper Configuration
Before we start with IP addressing and routing, some things need to be configured on every device as part of the initial configuration. It is simple stuff like the router hostname, root password, the ability for root to use ssh (fine in a lab, not in real life), etc. Note that the two plain-text-password commands prompt you to enter the password interactively.

R4:

configure
set system host-name R4
set system root-authentication plain-text-password
set system services ssh root-login allow
set system login user myusername class super-user
set system login user myusername authentication plain-text-password
commit

R5:

configure
set system host-name R5
set system root-authentication plain-text-password
set system services ssh root-login allow
set system login user myusername class super-user
set system login user myusername authentication plain-text-password
commit
Interface Configuration – IPv4 addressing
The subnet between R4 and R5 is 10.10.4.0/24, with R4 taking the first address and R5 the second. Throughout this topology, interfaces are configured so that the router on the left always uses the first address and the router on the right the second.
R4:

[edit]
root@R4# set interfaces ge-0/0/0 unit 0 family inet address 10.10.3.2/24
root@R4# set interfaces ge-0/0/1 unit 0 family inet address 10.10.4.1/24
root@R4# set interfaces fxp0 unit 0 family inet address 192.168.11.4/24
root@R4# commit

R5:

[edit]
root@R5# set interfaces ge-0/0/0 unit 0 family inet address 10.10.4.2/24
root@R5# set interfaces ge-0/0/1 unit 0 family inet address 10.10.7.2/24
root@R5# set interfaces ge-0/0/2 unit 0 family inet address 10.10.9.1/24
root@R5# set interfaces fxp0 unit 0 family inet address 192.168.11.5/24
root@R5# commit
We list only the Gigabit Ethernet NICs because those are the ones used to interconnect the routers in the topology.
You will see that I use “run” before show commands. Show commands are normally issued from operational mode (prompt: root@R4>). If you want to use the same commands from configuration mode (prompt: root@R4#), you need to prepend “run”.
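A quick illustration of the two modes (the username in the prompt is just an example; yours will differ):

```
root@R4> show interfaces terse fxp0       <- operational mode
root@R4> configure
Entering configuration mode

[edit]
root@R4# run show interfaces terse fxp0   <- same command, configuration mode
```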
root@R4# run show interfaces terse ge-0/0/*
Interface               Admin Link Proto    Local                 Remote
ge-0/0/0                up    up
ge-0/0/0.0              up    up   inet     10.10.3.2/24
                                   multiservice
ge-0/0/1                up    up
ge-0/0/1.0              up    up   inet     10.10.4.1/24
                                   multiservice
ge-0/0/2                up    down
ge-0/0/3                up    down
ge-0/0/4                up    down
ge-0/0/5                up    down
ge-0/0/6                up    down
ge-0/0/7                up    down
ge-0/0/8                up    down
ge-0/0/9                up    down
Interface fxp0 serves as out-of-band management. In our VMware setup it is the interface connected to a separate vSwitch which has connectivity to the outside world. That way we can use it to ssh to our routers directly from the local LAN.

root@R4# run show interfaces terse fxp0
Interface               Admin Link Proto    Local                 Remote
fxp0                    up    up
fxp0.0                  up    up   inet     192.168.11.4/24
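With fxp0 configured, reaching R4 from the LAN is a one-liner, using the account created in the initial configuration:

```
ssh myusername@192.168.11.4
```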
A static route is also needed for management access to the routers in the topology, because my PC is not directly on the 192.168.11.0/24 network, which in my case is the VMware VM management network.

set routing-options static route 192.168.121.0/24 next-hop 192.168.11.1
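To confirm the static route is installed (the 192.168.121.0/24 destination is the PC-side network from the route above):

```
root@R4# run show route 192.168.121.0/24
```

The route should show up as *[Static/5] with next hop 192.168.11.1 reachable via fxp0.0.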
IGP Routing Protocol – OSPF Configuration
OSPF is the IGP in this lab, so we configure it on all 8 routers with a configuration like the one below. The router-id values follow a simple convention: 10.10.44.44 for R4, 10.10.55.55 for R5, and so on.

R4:

set routing-options router-id 10.10.44.44
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
commit

R5:

set routing-options router-id 10.10.55.55
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
set protocols ospf area 0.0.0.0 interface ge-0/0/2.0
commit
Check your OSPF neighbors:

root@R4# run show ospf neighbor
Address          Interface              State     ID               Pri  Dead
10.10.3.1        ge-0/0/0.0             Full      10.10.33.33      128    36
10.10.4.2        ge-0/0/1.0             Full      10.10.55.55      128    38
If the neighbors are there, check whether they exchanged the routes they know about:

root@R4# run show ospf route
Topology default Route Table:

Prefix             Path  Route      NH       Metric NextHop       Nexthop
                   Type  Type       Type            Interface     Address/LSP
10.10.33.33        Intra Router     IP            1 ge-0/0/0.0    10.10.3.1
10.10.55.55        Intra Router     IP            1 ge-0/0/1.0    10.10.4.2
10.10.22.22        Intra Router     IP            2 ge-0/0/0.0    10.10.3.1
10.10.66.66        Intra Router     IP            2 ge-0/0/1.0    10.10.4.2
10.10.77.77        Intra Router     IP            2 ge-0/0/1.0    10.10.4.2
10.10.2.0/24       Intra Network    IP            2 ge-0/0/0.0    10.10.3.1
10.10.3.0/24       Intra Network    IP            1 ge-0/0/0.0
10.10.4.0/24       Intra Network    IP            1 ge-0/0/1.0
10.10.5.0/24       Intra Network    IP            3 ge-0/0/0.0    10.10.3.1
10.10.6.0/24       Intra Network    IP            3 ge-0/0/0.0    10.10.3.1
                                                    ge-0/0/1.0    10.10.4.2
10.10.7.0/24       Intra Network    IP            2 ge-0/0/1.0    10.10.4.2
10.10.8.0/24       Intra Network    IP            2 ge-0/0/0.0    10.10.3.1
10.10.9.0/24       Intra Network    IP            2 ge-0/0/1.0    10.10.4.2
10.10.44.44/32     Intra Network    IP            0 lo0.0
10.10.99.0/24      Intra Network    IP            3 ge-0/0/1.0    10.10.4.2
Be sure that you do not enable the routing protocol on the MGMT interface of your virtual routers. You don’t want your lab to start sending OSPF (or any other) control traffic out of VMware into your real network.
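Because we enabled OSPF per interface above, fxp0 is already excluded. If you ever use a catch-all statement like interface all, explicitly disable the management interface as a safety net (a defensive sketch, not part of this lab’s config):

```
set protocols ospf area 0.0.0.0 interface all
set protocols ospf area 0.0.0.0 interface fxp0.0 disable
```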
Multicast Routing – PIM Configuration & Static RP
PIM (Protocol Independent Multicast) enables multicast traffic routing. We are enabling PIM sparse mode with a static rendezvous point (RP) configuration. R4 will be our RP in this topology.
The router configured as the root of the shared tree is called the RP – rendezvous point. Multicast packets from the stream source and join messages from devices wanting to receive that stream “rendezvous” at this router – the RP.
R4:

set protocols pim rp local family inet address 10.10.3.2
set protocols pim interface ge-0/0/0.0 mode sparse
set protocols pim interface ge-0/0/1.0 mode sparse
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
commit

R5:

set protocols pim rp static address 10.10.3.2
set protocols pim interface ge-0/0/0.0 mode sparse
set protocols pim interface ge-0/0/1.0 mode sparse
set protocols pim interface ge-0/0/2.0 mode sparse
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
commit
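Once this is committed everywhere, verify that every router knows the RP (by default the static RP covers the whole 224.0.0.0/4 group range):

```
root@R5# run show pim rps
```

R4’s address 10.10.3.2 should be listed with type static on every non-RP router, and type local on R4 itself.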
Please note that we configured tunnel-services on each router because the Juniper MX Series needs tunnel interfaces to run multicast (PIM register message encapsulation and de-encapsulation). There are other tunnel configuration options on hardware MX Series routers or other Juniper platforms, but for this lab this is enough.
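To confirm the tunnel PIC actually created the interfaces PIM needs, list them – pd- interfaces de-encapsulate PIM register messages (used at the RP), pe- interfaces encapsulate them (used at the designated router):

```
root@R4# run show interfaces terse | match "pe-|pd-"
```

If nothing shows up, the tunnel-services line above did not take effect and PIM registration will fail.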
Useful verification commands once a stream is flowing:

show pim join
show pim join extensive
show multicast route
show multicast route extensive
The best way to test multicast traffic is to send a multicast stream from PC 10.10.99.11 towards the PC on the left, 10.10.98.11, using VLC media player to source the multicast on one side and receive it on the other. To be sure the whole thing really works with multicast, I additionally used the Singlewire multicast tester app, which sends and receives a multicast stream across the network.
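A minimal VLC sketch of that test (the group 239.255.1.1 and port 5004 are my arbitrary picks, not values from the original test):

```
# on the source PC (10.10.99.11): stream a file to the multicast group, looping
cvlc video.mp4 --sout '#standard{access=udp,mux=ts,dst=239.255.1.1:5004}' --loop

# on the receiving PC (10.10.98.11): join the group and play the stream
vlc udp://@239.255.1.1:5004
```

If the receiver sees nothing, check the sender’s multicast TTL first – a TTL of 1 dies at the first router hop, which is easy to mistake for a PIM problem.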
All the steps done during testing are described in the follow-up.