In this blog, I’m going to cover the main configuration required to set up NSX-T with vCenter 7.x and above, including the steps needed to ensure vSAN connectivity as well.
Choosing the Node Switch Type – Converged VDS vs N-VDS
The choice of node switch type depends on the installation you are attempting. If your ESXi host is on version 6.7 and the distributed switch is on version 6.x, the route you will be taking is N-VDS; if the ESXi host is on version 7.0 with vDS 7.0, the preferred route is the converged VDS.
The converged option is only applicable to vCenter 7.0 with hosts running ESXi 7.0. With a converged VDS, NSX-T networks are deployed on top of the existing uplinks, so installing NSX-T requires no dedicated pNIC: it can share the uplinks already attached to your distributed switch, and we no longer need a spare NIC as we did previously.
We have four active 10 GbE adapters on this host, so we could simply create the converged VDS without setting aside an uplink as we used to, but we will stick to the practice commonly used in production and reserve two NICs for NSX.
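For reference, the physical adapters, their link state and speed can be listed straight from the ESXi shell (a quick check on my lab host):

```
# list the physical NICs with driver, link status, speed and MTU
[root@esx02:~] esxcli network nic list
```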
Prerequisites
- Hosts running ESXi 7.x and vSphere 7.x
- MTU of at least 1600 on the vDS (we will raise it to 9000 later)
- All physical NICs (vmnics) attached to the vDS
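These prerequisites can be verified quickly from the ESXi shell before starting (a sketch; the switch returned by the second command will be your own vDS):

```
# confirm the ESXi version and build, which should report 7.x
[root@esx02:~] vmware -vl

# list the distributed switch the host participates in, its MTU and its uplink vmnics
[root@esx02:~] esxcli network vswitch dvs vmware list
```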
The current configuration is shown below; we will make a few changes to it to fit our use case.
All my uplinks are in use.
All my vmk adapters are here
We will now mark vmnic2 and vmnic3 (Uplink 3 and Uplink 4) as unused on our existing VMkernel port groups (Mgmt, vSAN, vMotion) so that we can hand them to NSX later during the configuration, as summarised below.
| Adapter | Use | vDS Uplink |
|---------|-----|------------|
| vmnic0 | Active – Mgmt + vSAN + vMotion | Uplink 1 |
| vmnic1 | Standby – Mgmt + vSAN + vMotion | Uplink 2 |
| vmnic2 | Left for NSX | Uplink 3 |
| vmnic3 | Left for NSX | Uplink 4 |
Our configuration for the Mgmt vmk will be:
Our configuration for the vMotion vmk will be:
Our configuration for the vSAN vmk will have Uplink 2 active and Uplink 1 on standby, because we want dedicated bandwidth for vSAN; this is just the reverse of the teaming on the Mgmt and vMotion VMkernels.
Mgmt
vMotion
vSAN
Change the vDS MTU to 9000 as well
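Since vSAN benefits from jumbo frames, it is worth confirming at this point that a 9000-byte MTU actually passes between the vSAN VMkernel interfaces of two hosts. A minimal check from the ESXi shell (the vmk number and peer address are examples; substitute your own vSAN vmk and the peer host's vSAN IP):

```
# don't-fragment ping with an 8972-byte payload (9000 MTU minus 28 bytes of IP/ICMP headers)
# -I selects the local vSAN vmkernel interface, the target is the peer host's vSAN vmk address
[root@esx02:~] vmkping -d -s 8972 -I vmk2 <peer-vsan-vmk-ip>
```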
Configuring Transport Node Profile
Go to Fabric > Profiles > Transport Node Profiles > Add Profile
- **Give the transport node profile a name**: prod-esx-host-transport-node-overlay
- **VDS Name**: select the distributed switch
- **Transport Zone**: prod-overlay-tz01
- **Uplink Profile**: esx-host-uplink-1634 (this was created previously)
- **IP Assignment**: use the IP pool host-tep-ippool
- **Physical NICs**: finally, associate uplink3 and uplink4, which we set aside earlier for NSX
Installing NSX-T on the vSphere cluster
To apply the transport node profile to our vSphere cluster, go to Fabric > Nodes > Configure NSX.
Choose the transport node profile we created earlier and click Apply
The NSX VIBs are now being pushed to the ESXi hosts.
After a few moments, our NSX installation shows a state of Success.
Back on the ESXi host, we can now see three more vmk adapters that have been created:
vmk10 and vmk11, which are the tunnel endpoint (TEP) interfaces for our overlay traffic, and vmk50, which is used internally by NSX-T for container (Docker) communication.
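The new interfaces are created on dedicated TCP/IP stacks rather than the default one. You can list the stacks and the vmks attached to them from the ESXi shell; note that the TEP stack is still named "vxlan" on the host even though NSX-T actually encapsulates with Geneve:

```
# list the TCP/IP netstacks on the host; the new TEP vmks live on the "vxlan" stack
[root@esx02:~] esxcli network ip netstack list

# show every vmkernel interface together with the netstack it is attached to
[root@esx02:~] esxcli network ip interface list
```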
Verify that the NSX VIBs are installed correctly on the host.
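One quick way to check is to list the installed VIBs and filter for the NSX components (a sketch):

```
# list installed VIBs and keep only the NSX-related ones
[root@esx02:~] esxcli software vib list | grep -i nsx
```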
We can also confirm that the new VMkernel interfaces picked up addresses from our TEP IP pool:

```
[root@esx02:~] esxcli network ip interface ipv4 get
```
Next, verify that vmk10 on our peer host is reachable with a ping.
```
# ping the peer host's vmk10 TEP address through the TEP netstack
# (replace <peer-vmk10-ip> with the address shown by the previous command on the peer host)
[root@esx02:~] vmkping ++netstack=vxlan -I vmk10 <peer-vmk10-ip>
```
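To confirm that the underlay also carries the larger overlay MTU end to end, the same vmkping can be run with the don't-fragment flag and a bigger payload. The 8972-byte size below assumes the 9000 MTU we configured on the vDS earlier (9000 minus 28 bytes of IP and ICMP headers):

```
# don't-fragment ping sized for a 9000-byte MTU path
[root@esx02:~] vmkping ++netstack=vxlan -I vmk10 -d -s 8972 <peer-vmk10-ip>
```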