Monday, August 5, 2013

VMware vSphere Networking Best Practices

In a virtualized environment, there is a direct correlation between network throughput and CPU performance. To sustain high throughput, ensure that host and VM CPUs are not saturated, and monitor CPU usage on both regularly.
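
For routine monitoring, esxtop from the ESXi Shell (or resxtop against a remote host) covers both views; the keystrokes below are the standard esxtop panels:

    # Interactive monitoring from the ESXi Shell
    esxtop        # press 'c' for the CPU panel (%USED, %RDY per world)
                  # press 'n' for the network panel (MbTX/s and MbRX/s per vmnic)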

To keep workloads from competing, VMware recommends dedicating separate virtual switches to separate physical adapters (or teams of adapters), with each one carrying a different traffic type (Virtual Machine, vMotion, IP Storage, etc.). This design reduces contention between VM traffic and vmkernel traffic. Note that the recommendation applies mainly to environments where 1 Gb adapters still predominate; as datacenters shift to 10 GigE, more modern resource controls such as Network I/O Control should be used instead.
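
As a rough command-line sketch (vSwitch1, vmnic2, and the vMotion port group name are examples, not requirements), a standard vSwitch dedicated to a single traffic type can be built from the ESXi Shell:

    # Create a new standard vSwitch and give it a dedicated physical uplink
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

    # Add a port group for the traffic type this vSwitch will carry (vMotion here)
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion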

Use Next Generation Physical Adapters

By using the latest generation of physical network adapters in ESXi hosts, an administrator can take advantage of a variety of performance and offloading enhancements, including:

TCP Checksum Offload - Checksum calculations for network packets are performed by the network adapter rather than the CPU.

TCP Segmentation Offload (TSO) - The guest or vmkernel hands the adapter TCP segments much larger than the MTU, and the adapter splits them into MTU-sized frames, reducing CPU strain on the VM and the host.

Jumbo Frames - Support for Ethernet frames with an MTU of up to 9,000 bytes, reducing the number of frames that must be transmitted and received.
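
To confirm which of these features the installed adapters actually expose, the driver's reported capabilities can be checked from the ESXi Shell (vmnic0 is an example adapter name, and ethtool availability can vary by build):

    # List physical adapters with their drivers and link speeds
    esxcli network nic list

    # Show the offload features (checksum, TSO, etc.) the driver reports for one adapter
    ethtool -k vmnic0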

Jumbo Frames

Because there is a direct correlation between host and VM CPU load and network throughput and responsiveness, decreasing CPU load improves overall system performance and reduces latency. Enabling jumbo frames network wide can have that effect. By default, the Ethernet MTU (maximum transmission unit) is 1,500 bytes.

The CPU must process every frame that is sent or received, and as network speeds and application I/O increase, that per-frame cost becomes more pronounced. Implementing jumbo frames lets an administrator raise the MTU to as much as 9,000 bytes; larger frames unburden the CPU because fewer frames are needed to move the same amount of data. Note that the physical network must support jumbo frames end to end: network adapter, switch, router, etc.

After jumbo frames have been configured at the physical level, follow the steps below to extend the configuration to your virtual environment:

Enable Jumbo Frames on a vSphere Standard Switch (vSS)

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to the Hosts and Clusters view (Ctrl+Shift+H).
  3. Select the appropriate ESXi host, click the Configuration tab.
  4. Ensure that the vSphere Standard Switch view is selected.
  5. Select Properties (next to the Standard Switch).
  6. On the Properties screen, select the Ports tab.
  7. Under Configuration, select the vSwitch, click Edit.
  8. Under Advanced Properties, adjust the MTU value (range is 1500 to 9000).
  9. Click OK, Close.
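
The same change can be scripted from the ESXi Shell (vSwitch0 and vmk1 below are example names):

    # Raise the MTU on a standard vSwitch
    esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

    # If the jumbo frame traffic is a vmkernel type (vMotion, IP storage),
    # raise the MTU on the VMkernel interface as well
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

    # Confirm the new values
    esxcli network vswitch standard list --vswitch-name=vSwitch0
    esxcli network ip interface list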

Enable Jumbo Frames on a vSphere Distributed Switch (vDS)

  1. Navigate to the Networking view (Ctrl+Shift+N).
  2. Right-click the appropriate vDS, select Edit Settings.
  3. On the vDS Settings screen, ensure that the Properties tab is selected.
  4. Select Advanced.
  5. Adjust the Maximum MTU value.
  6. Click OK.

To enable jumbo frames inside the virtual machine, follow the guest OS-specific documentation.
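
Once the physical and virtual layers are in place, the path can be validated end to end from the ESXi Shell with an oversized, non-fragmenting ping (the target address is an example):

    # 8972 bytes of payload + 28 bytes of IP/ICMP headers = one 9,000-byte frame
    # -d sets the "don't fragment" bit, so the ping fails if any hop lacks jumbo frame support
    vmkping -d -s 8972 192.168.10.20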

VMXNET Virtual Network Adapters

Whenever possible, configure a VMXNET adapter as the virtual network adapter type for virtual machines. Using the latest VMXNET driver can improve performance in several ways: the paravirtualized driver shares a ring buffer between the virtual machine and the vmkernel, supports transmit and interrupt coalescing, and offloads TCP checksum calculations to the physical adapter. These optimizations improve performance by reducing CPU cycles on the host and resident VMs. The VMXNET3 adapter is the latest generation and was designed with performance in mind; in addition to the optimizations above, it provides multiqueue support and IPv6 offloads. Configuring a VMXNET3 adapter requires virtual hardware version 7 or later and a supported guest OS.
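
For reference, the adapter type is recorded in the VM's configuration file; a quick check from the ESXi Shell might look like the following (the datastore path and adapter number are examples):

    # Confirm which virtual NIC type a VM is using
    grep -i 'ethernet0.virtualDev' /vmfs/volumes/datastore1/MyVM/MyVM.vmx
    # A VMXNET3 adapter shows up as:
    # ethernet0.virtualDev = "vmxnet3"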

10 Gigabit Ethernet

To maximize the consolidation ratios of bandwidth-intensive VMs, use a 10 GigE infrastructure. Implementing 10 GigE amplifies the benefits of the performance enhancements listed above, including TSO and jumbo frames. Using 10 GigE adapters also reduces the number of ESXi host slots and physical switch ports required to support intensive VM workloads.

One of the features that 10 GigE adapters enable is NetQueue. With NetQueue, multiple transmit and receive queues are used, so I/O load can be spread across multiple CPUs, increasing throughput and reducing latency.
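
NetQueue is enabled by default on supported adapters. It can be confirmed through the VMkernel boot option shown below; the option and module names are drawn from 5.x-era tooling, so verify them against your build:

    # Check whether NetQueue is enabled at the VMkernel level
    esxcli system settings kernel list -o netNetqueueEnabled

    # Per-NIC queue behavior is typically controlled by driver module parameters
    esxcli system module parameters list -m ixgbe    # ixgbe is an example 10 GigE driver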

Network I/O Control

Network I/O Control (NIOC) was released with vSphere 4.1. NIOC allows an administrator to use network resource pools to control network utilization, extending the familiar shares and limits model to network bandwidth. This flexibility can be extremely valuable in 10 GigE environments. Note that NIOC applies only to outgoing (egress) traffic and is available only on distributed switches.

When NIOC is enabled on a vSphere 5.0 distributed switch, seven (7) predefined network resource pools are created for the following traffic types: vMotion, Virtual Machine, NFS, Management, iSCSI, Fault Tolerance (FT), and Host Based Replication. An administrator can manipulate settings in the system-defined network resource pools or create user-defined network resource pools for even further flexibility.

SplitRx Mode

SplitRx mode is a feature new in ESXi 5.0 that enables multiple CPUs to process network packets arriving on a single queue. It can be beneficial in several workload scenarios, for example when multiple VMs on the same host receive multicast traffic from the same source. SplitRx mode is only supported on VMs configured with the VMXNET3 virtual adapter, and it is enabled or disabled by editing the VM's configuration (.vmx) file: to enable it, set ethernetX.emuRxMode to 1 for the relevant virtual NIC.
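
Since the change is a plain text entry in the .vmx file, it can be appended from the ESXi Shell while the VM is powered off (the path and adapter number below are examples):

    # Enable SplitRx mode for the VM's first virtual NIC (ethernet0)
    echo 'ethernet0.emuRxMode = "1"' >> /vmfs/volumes/datastore1/MyVM/MyVM.vmx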

References

  1. Determine use cases for and configure VMware DirectPath I/O in vSphere 5
  2. Performance Best Practices for VMware vSphere 5.0
  3. Mastering VMware vSphere 5.0 (Chapter 11) – Lowe
