VMware networking setup for vMotion/iSCSI & VM traffic

VMware ESX/ESXi network setup.

In the following post I will show you a networking setup for VMware servers.
It involves Cisco switches (2960/3750 series) and HP or Dell servers.
I have had these configurations running in production for quite some time now (2+ years) without any issues.

As we know, the networking setup for VMware servers has become much more complicated than the setup we used to have with “classic” physical Linux or Windows boxes.
The classic single uplink connection with a regular VLAN is not enough for VMware anymore.
You must separate the virtual machine traffic from the management traffic, and you must also separate the storage and vMotion traffic.
Although VMware says you can have separate vSwitches for all physical connections with different VLANs, failover to the other physical connections is more complicated than if you have one or two vSwitches. A VMware host needs a minimum of 2 network uplinks for VM traffic and management traffic, but VMware recommends 4 uplinks for the physical servers.

The following picture briefly shows the current setup.

[Diagram: overview of the current setup]

So let’s take a look at the 4-uplink configuration on the VMware ESXi host:

[Screenshot: the four physical uplinks on the ESXi host]

We have all 4 uplinks connected to the same vSwitch. With this configuration it is very easy to create the failover for the management traffic and to separate the storage and vMotion traffic as well. Let’s take a look at the vSwitch properties:

[Screenshot: vSwitch0 properties]
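If you prefer the ESXi shell over the vSphere Client, roughly the same result can be reached with esxcli. This is only a minimal sketch, assuming the switch is the default vSwitch0 and that the four physical adapters show up as vmnic0/vmnic1/vmnic2/vmnic5 at the CLI (the vnicX names in the screenshots are just labels, adjust them to your host):

# list the existing standard vSwitches and their uplinks
esxcli network vswitch standard list

# attach the remaining physical adapters to vSwitch0 (adapter names are examples)
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch0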

Also take a look at the NIC teaming for the vSwitch.
As you can see, all adapters are active on this vSwitch:

[Screenshot: vSwitch0 NIC teaming settings, all adapters active]
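The all-active teaming on the vSwitch itself can also be set from the shell; again just a sketch using the example adapter names from above:

# make all four uplinks active at the vSwitch level (port groups can still override this)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic5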

 

Now take a look at the management uplink settings.
The management network has one active adapter and two standby adapters.
If the physical connection of the active vnic0 adapter fails (switch issue or cable connection issue), the VMkernel will activate one of the standby adapters.
With this setup the management network will always be available and you cannot lose the connection to the VMware box.
[Screenshots: Management Network failover order and VMkernel IP settings]
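A hedged shell equivalent of this override looks something like the following; the port group and adapter names are the defaults/examples, and any uplink not listed as active or standby ends up as unused:

# Management Network: one active uplink, two standby uplinks
esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1,vmnic2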

Now let’s check the vMotion settings.
Here we have an added VMkernel port with vMotion and IP storage enabled, which carries an extra IP address for vMotion.
As you can see, it has one active adapter and three unused adapters. To properly separate this kind of traffic at the kernel level you must tick the failover order override and move down the adapters that you don’t want this VMkernel port to use. The same setting applies to the iSCSI storage.
[Screenshots: vMotion VMkernel failover order and IP settings]
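For reference, a rough esxcli sketch of the same vMotion kernel port; the port group name, vmk1, the IP address and the chosen uplink are all placeholders:

# create the vMotion port group and its VMkernel interface
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.5.92 --netmask=255.255.255.0 --type=static

# pin the port group to a single active uplink (everything not listed becomes unused)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic2

# enable vMotion on the new VMkernel interface (on ESXi 5.x this is usually done with vim-cmd)
vim-cmd hostsvc/vmotion/vnic_set vmk1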

Now take a look at the storage IP kernel settings.
Here we also have an extra VMkernel port added, with its own IP address.
In this setup the other active vSwitch adapters have also been moved to unused, as you can see in the picture.
Without this you won’t be able to properly add the iSCSI software storage. The VMkernel IP setting creates a point-to-point, one-to-one connection to the storage, and therefore only one active adapter should be enabled in any such VMkernel port group. With this setup you can have more than one path to the iSCSI storage, but for that you need to enable the feature in the iSCSI setup.

[Screenshots: storage VMkernel port and iSCSI settings]
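The software iSCSI side can be sketched the same way; vmhba33, vmk2, the IP-Storage port group and the target address below are placeholders, so check the real adapter name with esxcli iscsi adapter list first:

# dedicated storage VMkernel port, pinned to one active uplink
esxcli network vswitch standard portgroup add --portgroup-name=IP-Storage --vswitch-name=vSwitch0
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=IP-Storage
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.6.92 --netmask=255.255.255.0 --type=static
esxcli network vswitch standard portgroup policy failover set --portgroup-name=IP-Storage --active-uplinks=vmnic1

# enable the software iSCSI initiator and bind the VMkernel port to it
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# add the send target and rescan the adapter
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.6.10
esxcli storage core adapter rescan --adapter=vmhba33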

Now let’s take a brief look at the Virtual Machine Port Group settings with regard to the VLAN settings.
You can add new VLANs here and create load balancing and failover for the virtual machines.
I used two of the physical adapters for the virtual machines, and they are activated as vnic5/vnic1 and vnic1/vnic5, opposite to each other.
But if you have 4 or 6 uplink adapters, then you could activate 3-4 adapters for the virtual machines; it’s up to you.
This also depends on how heavily loaded your virtual boxes are: obviously, if the boxes are heavily loaded, it’s better to separate the loads and keep the vMotion and management traffic off those physical uplink connections.
[Screenshot: VM port group with VLAN settings]

[Screenshot: VM port group failover order across the uplinks]
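Scripted, such a VM port group could look roughly like this; the port group name, VLAN 100 and the two uplinks are examples only:

# VM port group tagged with VLAN 100, load balanced over two active uplinks
esxcli network vswitch standard portgroup add --portgroup-name="VM Network VLAN100" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name="VM Network VLAN100" --vlan-id=100
esxcli network vswitch standard portgroup policy failover set --portgroup-name="VM Network VLAN100" --active-uplinks=vmnic5,vmnic1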

 

I know it’s getting a bit confusing, so here is the binding again for the VLANs, the management traffic and the vMotion traffic:
[Screenshot: management VLAN binding]

[Screenshot: vMotion VLAN binding]

The storage traffic is not added to any of these; it is just a one-to-one connection via the VMkernel port group IP address:

[Screenshot: storage VLAN/IP binding]

In this setup I use the same VLAN for vMotion, because it is only used for maintenance, but if you use vMotion heavily then it is better to separate it into a different VLAN. You might as well create a new physical uplink for that traffic, which would let you separate it not just at the VLAN level but at the physical level as well.
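If you do split vMotion off, the only change needed on the host side is the VLAN tag on the vMotion port group (VLAN 300 is just an example and must of course also be allowed on the switch trunk), for instance:

# tag the vMotion port group with its own VLAN
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=300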

And finally, the physical uplink port configuration on the Cisco switch:

interface GigabitEthernet1/0/22
description 10.0.4.92 vnic0
switchport trunk allowed vlan 100,200,300,400
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate
speed 1000
duplex full

interface GigabitEthernet1/0/23
description 10.0.4.92 vnic2
switchport trunk allowed vlan 100,200,300,400
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate
speed 1000
duplex full

The native vlan 999 command is used to change the default untagged VLAN, which is VLAN 1.
With this command you can avoid unnecessary layer 2 traffic towards the VMware server, like flooding and broadcasts.
Also, if you have a system already configured in vCenter, then sometimes you cannot change the management VLAN, because vCenter won’t be able to reach the box anymore: the change ends in an error or the box may get dropped from vSphere. In that case you would need to disconnect the server from vCenter, create a second VMkernel interface in a different IP subnet on a different physical interface than the currently running one, and connect to the box via that VMkernel. With this you can do any major changes to the main interface (native VLAN, VLAN tagging etc.). I have seen it a few times that when I wanted to make such changes I lost the connection to the server, and I needed to either reset the VMkernel management network, roll back the switch configuration, or change the native VLAN on the switch. So you need to be careful with these changes if you cannot physically reach your box for any reason (the server is in a data centre or a different office).
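A rough sketch of that workaround from the ESXi shell, with example names, subnet and VLAN: first add a temporary VMkernel interface on another uplink, reconnect to the host on that address, and only then touch the main management port group:

# temporary management interface in a different subnet, on a different uplink
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt-Temp --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup policy failover set --portgroup-name=Mgmt-Temp --active-uplinks=vmnic2
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=Mgmt-Temp
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.0.9.92 --netmask=255.255.255.0 --type=static

# once you are connected via the temporary interface, retag the main management port group
esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=200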

Now let’s take a look at the Cisco switch side after the native VLAN and trunking configuration (output of show interfaces trunk):

Port        Mode             Encapsulation  Status        Native vlan
Gi1/0/22    on               802.1q         trunking      999

Port        Vlans allowed on trunk
Gi1/0/22    100,200,400

Port        Vlans allowed and active in management domain
Gi1/0/22    100,200,400

 

References:

https://www.vmware.com/files/pdf/support/landing_pages/Virtual-Support-Day-Best-Practices-Virtual-Networking-June-2012.pdf
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2045040

 