This document provides design guidance for implementing 10 Gigabit Ethernet networking with VMware vSphere 4.0 (including VMware ESXi 4.0 and ESX 4.0 and associated updates) in a Cisco® network environment. The document covers considerations, approaches, and best practices for configuration of the following:
• Virtual network based on the Cisco Nexus® 1000V Switch, VMware vNetwork Standard Switch (vSS), and VMware vNetwork Distributed Switch (vDS).
• Physical network (access and distribution layers), with Cisco Nexus 5000 Series Switches at the access layer.
• Advanced configuration using rate limiting, which is discussed later in this document.
The configurations that follow are designed according to the following design goals:
• Availability: The design should be capable of recovery from any single point of failure in the network outside the VMware ESX or ESXi server. Traffic should continue to flow if a single access or distribution switch, cable, or network interface fails.
• Isolation: Each traffic type should be logically isolated from every other traffic type.
• Performance: The design should provide the capability to impose limits on some traffic types to reduce the effects on other traffic types.
VMware ESX and ESXi Network Adapter Configurations
In 10 Gigabit Ethernet environments, the most common configurations are as follows:
• Two 10 Gigabit Ethernet interfaces (converged network adapter [CNA], network interface card [NIC], or LAN on motherboard [LOM]).
• Two 10 Gigabit Ethernet interfaces (CNA or NIC) plus two Gigabit Ethernet LOM ports.
Although more adapters and configurations are possible, this guide focuses on the most common design scenario, in which all traffic is converged onto two 10 Gigabit Ethernet interfaces. The configuration using an additional two Gigabit Ethernet interfaces for management is a valid design for all virtual switch alternatives and is discussed in the Cisco Nexus 1000V Switch section as a design variant.
Traffic Types in a VMware vSphere 4.0 Environment
A VMware vSphere 4.0 environment involves the following traffic types:
• Management: Management traffic goes through the vswif interface on VMware ESX or the vmkernel management interface on VMware ESXi. This is the port used for all management and configuration and is the port by which VMware ESX or ESXi communicates with VMware vCenter Server. This port generally has very low network utilization, but it should always be available and isolated from other traffic types through a management VLAN.
• VMware VMotion: The vmkernel port is used for migrating a running virtual machine from one VMware ESX or ESXi host to another. With VMware ESX or ESXi 4.0, a single VMware VMotion migration through this port can use up to approximately 2.6 Gbps of network bandwidth, with up to two VMware VMotion migrations running concurrently. This traffic typically is implemented on a separate VLAN specific to VMware VMotion, with no outside communication required.
• Fault-tolerant logging: The vmkernel port for fault-tolerant logging is used to transfer the input network I/O for the fault-tolerant virtual machine plus the read disk traffic to the secondary fault-tolerant virtual machine. Traffic will vary according to the network and storage behavior of the application. End-to-end latency between the fault-tolerant virtual machines should be less than 1 millisecond (ms). This traffic typically is implemented on a separate VLAN specific to fault-tolerant logging, with no outside communication required.
• Internet Small Computer System Interface (iSCSI): The vmkernel port is used for the software iSCSI initiator in VMware ESX or ESXi. In VMware ESX or ESXi 4.0, two iSCSI vmkernel ports can be bonded to allow iSCSI traffic over both physical network interfaces. Traffic varies according to I/O. This traffic typically is implemented on an iSCSI-specific VLAN common to iSCSI initiators and targets, although targets may reside on another VLAN accessible through a Layer 3 gateway.
• Network File System (NFS): The vmkernel port is used for communication with NFS files in VMware ESX or ESXi. Traffic varies according to I/O. This traffic typically is implemented on an NFS-specific VLAN, although filers may reside on another VLAN accessible through a Layer 3 gateway.
• Virtual Machines: Guest virtual machine traffic varies with the number of virtual machines. Virtual machines may be distributed across more than one VLAN and may be subject to different policies defined in port profiles (Cisco Nexus 1000V) or distributed virtual port groups (VMware vDS).
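To illustrate the per-VLAN isolation described above, the following sketch uses the VMware ESX 4.0 service-console commands for a vNetwork Standard Switch. The VLAN IDs, IP addresses, and port group names are illustrative assumptions, not values from this document; the same pattern applies to the iSCSI and NFS vmkernel ports.

```shell
# Hedged sketch: VLAN IDs (20, 30), addresses, and names are illustrative only.
# Create a port group for VMware VMotion on its own VLAN.
esxcfg-vswitch -A VMotion vSwitch0          # add port group "VMotion"
esxcfg-vswitch -v 20 -p VMotion vSwitch0    # tag the port group with VLAN 20
esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 VMotion

# Repeat for fault-tolerant logging on a separate, dedicated VLAN.
esxcfg-vswitch -A FT-Logging vSwitch0
esxcfg-vswitch -v 30 -p FT-Logging vSwitch0
esxcfg-vmknic -a -i 10.0.30.11 -n 255.255.255.0 FT-Logging
```

Because each traffic type lands on its own VLAN-tagged port group, logical isolation is preserved even though all traffic shares the same two 10 Gigabit Ethernet uplinks.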
Cisco Nexus 1000V 10 Gigabit Ethernet Network Design
This section describes two network design approaches when implementing the Cisco Nexus 1000V virtual switch with 10 Gigabit Ethernet network adapters in a VMware vSphere 4.0 environment.
Design Choices: MAC Pinning or Virtual PortChannel?
Network architects can use two different approaches for incorporating the Cisco Nexus 1000V into the data center network environment: virtual PortChannel (vPC) and MAC pinning. Both design approaches provide protection against single-link and physical-switch failures, but they differ in the way that the virtual and physical switches are coupled and the way that the VMware ESX or ESXi server traffic is distributed over the 10 Gigabit Ethernet links.
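As a hedged sketch of how the two approaches differ on the Cisco Nexus 1000V uplink configuration (profile names and VLAN IDs here are illustrative, and exact syntax can vary by software release): MAC pinning requires no PortChannel configuration on the upstream switches, whereas the vPC approach pairs an LACP channel-group on the uplink port profile with a vPC configured across the upstream Cisco Nexus 5000 Series Switches.

```
! Hedged sketch; profile names and VLAN lists are illustrative.
! MAC pinning: each virtual interface is pinned to one uplink;
! no upstream PortChannel configuration is required.
port-profile type ethernet uplink-mac-pinning
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30,40,50,100
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

! vPC alternative: LACP channel to a vPC pair of upstream switches.
port-profile type ethernet uplink-vpc
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30,40,50,100
  channel-group auto mode active
  no shutdown
  state enabled
```

In both cases the port profile is applied to the two 10 Gigabit Ethernet uplinks of each VMware ESX or ESXi host; the choice affects only how traffic is distributed across those links and how tightly the virtual switch is coupled to the physical switches.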