Part 7 – Prepare the ESXi Hosts as transport nodes for Converged vDS on vSphere 7.X


In this blog, I’m going to cover the main configuration required to prepare ESXi hosts as NSX-T transport nodes on vCenter 7.x and above, adding the steps needed to keep vSAN connectivity intact as well.

Choosing the Node Switch Type – Converged vDS vs N-VDS

The choice of node switch type depends on the installation you are attempting. If your ESXi host is on version 6.7 and the distributed switch is on version 6.x, you will be taking the N-VDS route; if the ESXi host is on version 7.0 with vDS 7.0, the preferred route is the converged vDS.

The converged option is only applicable to vCenter 7.0 and hosts running ESXi 7.0. With a converged vDS, NSX-T networks can be deployed on top of the existing active uplinks, so we don’t need a spare NIC as we did previously.

Installing NSX-T with the converged option requires no dedicated pNIC, as it can use the existing uplinks on your distributed switch.

We have four active 10G adapters on this host, so we could simply create the converged vDS without dedicating an uplink as we did before, but we will stick to the best practices used in production.

Prerequisites

  • Hosts running on ESXi 7.x and vSphere 7.x
  • MTU of 1600 or higher (see the quick check after this list)
  • All vmnics should be attached to the vDS
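
A quick sanity check from the ESXi shell (assuming SSH is enabled on the host) covers the first two prerequisites; the commands below are just the ones I use in my lab:

# Confirm the host is on ESXi 7.x
vmware -vl

# Confirm the host is attached to the distributed switch and check its current MTU
esxcli network vswitch dvs vmware list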

The current configuration is shown below, and we will make some changes to it to suit our use case.


All my uplinks are in use.


All my vmk adapters are here

We will now mark vmnic2 and vmnic3 as unused on our existing vmkernel adapter port groups (Mgmt, vSAN, vMotion) so we can use them later during the NSX configuration.

Adapter    Use                               Uplink
VMNIC0     Active – Mgmt + vSAN + vMotion    vmnic0
VMNIC1     Standby – Mgmt + vSAN + vMotion   vmnic1
VMNIC2     Left for NSX                      vmnic2
VMNIC3     Left for NSX                      vmnic3
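
Before changing the teaming order, it does no harm to confirm from the ESXi shell that all four physical adapters are up and running at 10G:

# List physical NICs with their link state, speed and driver
esxcfg-nics -l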

Our configuration for Mgmt vmk will be

Our configuration for vmotion vmk will be

Our configuration for the vSAN vmk will have Uplink 2 as active, because we need dedicated bandwidth for vSAN, so it is just the reverse of what is on the Mgmt and vMotion vmkernels.

Mgmt

vMotion

vSAN
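
With the teaming changes in place, a quick look at the vmkernel adapters confirms nothing was dropped along the way (vmk numbering varies per host, so treat the output as specific to your environment):

# List vmkernel adapters with their portgroup, IP address and MTU
esxcfg-vmknic -l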

Change the vDS MTU to 9000 as well.
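
To confirm jumbo frames actually work end to end after raising the vDS MTU, the usual test is a don't-fragment ping of 8972 bytes (9000 minus the IP/ICMP headers) between two hosts; vmk2 and the target address below are placeholders for your vSAN vmkernel and a peer host's vSAN IP:

# Jumbo frame test from the vSAN vmkernel to a peer host's vSAN IP
vmkping -I vmk2 -s 8972 -d <peer-vsan-vmk-ip>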

Configuring Transport Node Profile

Go to Fabric > Profiles > Transport Node Profiles > Add Profile.

Installing NSX-T on the vSphere cluster

To apply the transport node profile to our vSphere cluster, go to Fabric > Nodes > Configure NSX.

Choose the transport node profile we created earlier and click Apply

The NSX VIBs are now being pushed to the ESXi hosts.

After a few moments, our NSX installation will show a state of Success.

Back on the ESXi host, we can now see three more vmk adapters have been created:

vmk10 and vmk11, which are the tunnel endpoint (TEP) interfaces for our overlay traffic, and in addition vmk50, which is used by NSX-T internally for container (hyperbus) communication.
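
To see these interfaces and the netstack each one belongs to, you can list them from the ESXi shell (vmk numbering can differ depending on how many adapters already existed on the host):

# List all vmkernel interfaces, including the NSX-created ones and their netstacks
esxcli network ip interface list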

Verify that the NSX VIBs are installed correctly on the host.
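
From the ESXi shell, a simple filter on the installed VIBs is enough for this check (the exact VIB names vary between NSX-T versions):

# List the NSX VIBs pushed during host preparation
esxcli software vib list | grep -i nsx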

Verify a ping to vmk10 on our peer host to check that it is reachable.
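
The TEP interfaces sit in their own TCP/IP netstack (still named vxlan on the host), so the ping has to be sourced from that stack; the peer TEP IP below is a placeholder, and the 1572-byte don't-fragment size also proves the underlay carries the required 1600 MTU:

# Ping the peer host's TEP from vmk10 through the overlay netstack
vmkping ++netstack=vxlan -I vmk10 <peer-tep-ip> -s 1572 -d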

