This article describes the workaround needed to switch a VMware NSX-T enabled cluster from a vSphere Enterprise Plus license to a vSphere Standard license, with the vDS licensed through NSX-T. I really hope you will not need to go through this, as it is much like bringing the whole environment up from scratch. But if you have two clusters with enough resources, it will enable you to do it without downtime.
The environment on which this was tested is vSphere 7.0.2 with NSX-T 3.1.2.
NSX-T, as a network and security platform, enables network functions to be virtualised on your vSphere cluster. It does this by implementing additional traffic steering and encapsulation features inside the vSphere Distributed Switch (vDS).
Before NSX-T 3.1.1, the only way to get your cluster equipped with a vDS was to have a vSphere Enterprise Plus license. From NSX-T 3.1.1 onwards, VMware gives you the option to use a vDS without a vSphere Enterprise Plus license and license it through NSX-T instead. This enables users with a vSphere Standard license to deploy NSX-T on all editions of vCenter Server and vSphere.
After this became possible, some customers realised that in some cases the only reason they had a vSphere Enterprise Plus license on a specific cluster was to be able to use NSX-T, since that was required in the past. So they decided to transfer those Enterprise Plus licenses to another (new) cluster that needs them for other features.
As it turned out, this transfer was not so easy to make. A vSphere Enterprise Plus license cannot simply be removed, and a vSphere Standard license cannot be applied to a cluster that has a vDS deployed on it, making it impossible to move seamlessly from licensing the vDS through vSphere Enterprise Plus to licensing it through NSX-T. The process required us to get rid of the vDS in the environment in order to switch to vSphere Standard, and then recreate the vDS, which is then allowed to exist under the NSX-T license.
So here is an example of the steps needed to do this on current versions of the VMware environment:
Prepare for NSX-T Manager and EDGE VMs eviction
To begin with, you need to know that you cannot remove NSX-T from the cluster while any VMs are connected to its logical segments on that cluster.
You will need at least one more VMware cluster to which you can move VMs during the process, especially if you want to make this change without shutting down the whole environment and keep it online throughout. Preferably it would be another NSX-T enabled cluster within the same transport zone, so VMs can be moved there and keep functioning unchanged.
License refresh from NSX-T manager
To be sure your vCenter gets the license from NSX-T, go to NSX-T Manager and re-enter the Compute Manager credentials, so that the NSX-T license appears under Administration > Licenses > Assets > Solutions on the vCenter side.
In our case, this didn't actually work as expected, so we needed to enter the NSX-T Advanced license manually by adding it as a key on the vCenter Licenses tab.
Remove one of the two vDS uplinks
Our environment had servers with two physical NICs, both connected to our only vDS. We needed to disconnect one of them so we could add it to a new standard switch, which we need in place so the ESXi hosts stay connected when we delete the vDS.
Create a standard switch
Connect that one uplink to it
Standard stuff: Manage physical adapters, and so on.
Create a port group on standard SW for MGMT VLAN
You need this because you will have to migrate the MGMT vmk adapter to it, so you don't lose the ESXi management connectivity after you delete the vDS.
Of course, it needs to be tagged with the proper MGMT VLAN, just as it was on the vDS.
Create a port group on standard SW for vMotion VLAN
You need this because you will have to migrate the vMotion vmk adapter to it, so you don't lose vMotion capability.
Create a dummy port group on standard SW for all VMs
You could use the MGMT port group for that, but it is better to use a dummy one so you don't get all that VM traffic and wrong IPs inside the MGMT VLAN.
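The three port groups above can also be created with PowerCLI. A sketch only, assuming placeholder names (host esx01.lab.local, switch vSwitchTemp, port groups PG-MGMT, PG-vMotion, PG-Dummy) and placeholder VLAN IDs; adjust everything to your environment:

```powershell
# Placeholders throughout -- host, switch, port-group names and VLAN IDs are assumptions
$vmhost = Get-VMHost -Name "esx01.lab.local"
$vss    = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitchTemp" -Standard

# MGMT and vMotion port groups, tagged with the same VLANs they had on the vDS
New-VirtualPortGroup -VirtualSwitch $vss -Name "PG-MGMT"    -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vss -Name "PG-vMotion" -VLanId 20

# Dummy port group to park VM NICs on while NSX-T is removed
New-VirtualPortGroup -VirtualSwitch $vss -Name "PG-Dummy"
```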
Migrate vmk adapters (except the NSX ones, e.g. vmk10, vmk11, vmk50) to the standard switch
This is so you don't lose ESXi connectivity to vCenter (vmk0 goes to the MGMT port group, the vMotion vmk to the vMotion port group).
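The uplink move and the vmk migration above can be combined into a single PowerCLI call, which helps avoid losing connectivity mid-change. A sketch, assuming vmk0 is management, vmk1 is vMotion, vmnic1 is the freed uplink, and the port-group names match the earlier steps (all placeholders):

```powershell
# Placeholders -- adjust host, NIC, vmk and port-group names to your environment
$vmhost = Get-VMHost -Name "esx01.lab.local"
$vss    = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitchTemp" -Standard
$pnic   = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
$vmk0   = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmk1   = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"

# Migrate the freed uplink and both vmk adapters to the standard switch in one operation
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss `
    -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk0, $vmk1 `
    -VirtualNicPortgroup "PG-MGMT", "PG-vMotion"
```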
Note which port groups all VMs are connected to (only the VMs connected to NSX-T created port groups matter here).
You will need to disconnect VMs from their NSX-T backed port groups and bring them back later, so it is good to know where they were connected.
Change the network settings of all VMs (the VMs from above) to the dummy port group on the standard switch.
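Recording the mapping and parking the NICs can be scripted. A hedged sketch, with cluster, file and port-group names as placeholders:

```powershell
# Placeholders -- cluster, CSV path and dummy port-group name are assumptions
$vms = Get-Cluster -Name "Cluster-A" | Get-VM

# Record which adapter of which VM sits on which (NSX-T backed) port group
Get-NetworkAdapter -VM $vms |
    Select-Object @{N='VM';E={$_.Parent.Name}}, Name, NetworkName |
    Export-Csv -Path "vm-portgroups.csv" -NoTypeInformation

# Park every VM NIC on the dummy port group
Get-NetworkAdapter -VM $vms | Set-NetworkAdapter -NetworkName "PG-Dummy" -Confirm:$false
```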
You will need to disconnect VMs from their NSX-T backed port groups so you can then temporarily remove the NSX deployment from that vSphere cluster. The NSX-T Managers themselves need to be moved outside the cluster and kept available in their management VLAN, as they must stay accessible. The best way of all, though, is the easier alternative described next.
An easier way
.. is to have two NSX-T enabled clusters within the same environment and the same transport zones, and simply vMotion all VMs to the other cluster.
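If the second cluster shares the transport zones, the eviction reduces to a batch of vMotions. A sketch with placeholder cluster names:

```powershell
# Placeholders -- Cluster-A is being evacuated, Cluster-B is the other NSX-T enabled cluster
$destHost = Get-Cluster -Name "Cluster-B" | Get-VMHost | Select-Object -First 1

# vMotion every VM off the cluster; segments in the shared transport zone keep working
Get-Cluster -Name "Cluster-A" | Get-VM | Move-VM -Destination $destHost
```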
Uninstall NSX-T from the cluster
Backup vDS Configuration
vDS > Settings > Export Configuration
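The same export is available in PowerCLI, if you prefer to script it (switch name and path are placeholders):

```powershell
# Export the full vDS configuration, including port groups, to a zip file
Get-VDSwitch -Name "vds01" | Export-VDSwitch -Destination "C:\backup\vds01.zip"
```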
Remove the Enterprise Plus license from the cluster
..by applying vSphere Standard license
Create vDS from backup zip
Create the vDS by restoring it from the backup. Without Enterprise Plus licensed ESXi hosts, it will use the NSX-T license.
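The restore can also be done from PowerCLI. A sketch, assuming the export from the earlier step and placeholder datacenter and path names:

```powershell
# Recreate the vDS from the exported zip; -KeepIdentifiers preserves the original switch
# and port-group identifiers from the backup
New-VDSwitch -BackupPath "C:\backup\vds01.zip" `
    -Location (Get-Datacenter -Name "DC01") -KeepIdentifiers
```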
Migrate vmk adapters back to vDS
Disconnect uplink from the standard switch
Connect that uplink back to vDS
.. to bring back redundancy.
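As with the move to the standard switch, the vmk adapters and the uplink can travel back to the vDS in a single call. A sketch with placeholder names (vds01, vmnic1, and the distributed port groups for management and vMotion):

```powershell
# Placeholders -- adjust host, switch, NIC, vmk and port-group names to your environment
$vmhost = Get-VMHost -Name "esx01.lab.local"
$vds    = Get-VDSwitch -Name "vds01"
$pnic   = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
$vmk0   = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmk1   = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"

# Migrate the uplink and both vmk adapters back to the restored vDS in one operation
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds `
    -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk0, $vmk1 `
    -VirtualNicPortgroup "DPG-MGMT", "DPG-vMotion"
```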
Install NSX-T back to that cluster
Bring back VMs to the cluster
Do this by vMotioning them back to the cluster, or, if you kept them on this cluster and simply disconnected them from the vDS port groups, take the note of where the VMs were connected to logical segments and reconfigure them back where they belong.
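If you saved the mapping to a file earlier, the reconnect can be scripted too. A sketch, assuming a CSV with columns VM, Name and NetworkName (the file name and columns are assumptions from the export step):

```powershell
# Reconnect each VM NIC to the port group recorded before the migration
Import-Csv -Path "vm-portgroups.csv" | ForEach-Object {
    Get-VM -Name $_.VM | Get-NetworkAdapter -Name $_.Name |
        Set-NetworkAdapter -NetworkName $_.NetworkName -Confirm:$false
}
```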
Delete standard switch