Implement jumbo frames end-to-end in a data center


Configuring Jumbo Frames on a Catalyst switch

Configure in CatOS

Cat6509> (enable) set port jumbo
Usage: set port jumbo <mod/port> <enable|disable>
Cat6509> (enable) set port jumbo 1/1 enable
Jumbo frames enabled on port 1/1.
Cat6509> (enable) 2002 May 29 12:34:35 %PAGP-5-PORTFROMSTP:
Port 1/1 left bridge port 1/1 
2002 May 29 12:34:38 %PAGP-5-PORTTOSTP:Port 1/1 joined bridge port 1/1

Verify in CatOS

Cat6509> (enable) show port jumbo 
Jumbo frames MTU size is 9216 bytes. 
Jumbo frames enabled on port(s) 1/1,9/1.

Configure in Native IOS

7609(config)#int gigabitEthernet 1/1 
7609(config-if)#mtu ? 
  <1500-9216>  MTU size in bytes 

7609(config-if)#mtu 9216

Verify in Native IOS

7609#show interfaces gigabitEthernet 1/1 
GigabitEthernet1/1 is up, line protocol is up (connected) 
  Hardware is C6k 1000Mb 802.3, address is 0007.0d0e.640a (bia 0007.0d0e.640a) 
  MTU 9216 bytes, BW 1000000 Kbit, DLY 10 usec, 
  reliability 255/255, txload 1/255, rxload 1/255
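With the switch MTU raised, a quick end-to-end sanity check from an attached Linux host is a don't-fragment ping sized to fill the jumbo MTU. A minimal sketch, assuming a host MTU of 9000 and a hypothetical neighbor address of 10.1.1.2; the payload is the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header:

```shell
# Largest ICMP payload that exactly fills a 9000-byte MTU:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "ICMP payload: $PAYLOAD bytes"     # prints 8972
# Then, against a hypothetical neighbor on the jumbo-enabled segment:
#   ping -M do -s $PAYLOAD 10.1.1.2     # Linux: -M do sets the DF bit
```

If the full-size ping fails ("message too long") while smaller sizes succeed, some hop in the path is still running a 1500-byte MTU.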


Configuring Jumbo Frames on the Nexus switch

!-- You can enable the Jumbo MTU
!-- for the whole switch by setting the MTU
!-- to its maximum size (9216 bytes) in
!-- the policy map for the default
!-- Ethernet system class (class-default).

switch(config)#policy-map type network-qos jumbo
switch(config-pmap-nq)#class type network-qos class-default
switch(config-pmap-c-nq)#mtu 9216
switch(config)#system qos
switch(config-sys-qos)#service-policy type network-qos jumbo
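To confirm that the policy took effect, the per-class MTU can be checked in the interface queuing information (a sketch; the interface number is an assumption, and the output format varies by platform):

```
switch#show queuing interface ethernet 1/1
```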


Enable Jumbo frames in ESX and ESXi

1. Enable jumbo frames on the virtual switch (set the MTU on the uplinks/physical NICs)

  • For a vSS (standard vSwitch), you need to use the vSphere CLI. For example, this command sets the MTU to 9000 bytes for the vSS named “vswitch0”:
    vicfg-vswitch -m 9000 vswitch0
    Use “vicfg-vswitch -l” to list the vSwitches and their properties.
  • For a vDS (vNetwork Distributed Switch), you can set the MTU via the vSphere Client UI. From the Networking inventory menu, select the vDS and then “Edit Settings”. Set the “Maximum MTU” to the desired value (e.g., 9000 bytes for jumbo frames).

2.  Enable jumbo frames on the vmkernel ports

  • Use the esxcfg-vmknic command to delete and then re-add a vmkernel interface with an MTU of 9000. On ESXi, there seems to be a glitch in creating a vmkernel port on a vDS through the vCLI, so the workaround is to create a vmkernel interface with MTU 9000 on a standard switch and then migrate it over to the vDS through the vSphere Client. You can get the status (name/address/mask/MAC address/MTU) of the vmkernel interfaces with:
    esxcfg-vmknic -l
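The delete-and-recreate sequence in step 2 might look like the following (a sketch; the port group name “VMkernel”, the IP address, and the netmask are hypothetical values for your environment):

```
esxcfg-vmknic -d "VMkernel"
esxcfg-vmknic -a -i 10.1.1.10 -n 255.255.255.0 -m 9000 "VMkernel"
esxcfg-vmknic -l
```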

Configuring MTU in the UCS GUI

1) Configure System Classes

2) Configure the MTU, which is a property of the QoS System Classes



Step 1 In the Navigation pane, click the LAN tab.
Step 2 In the LAN tab, expand LAN > LAN Cloud.
Step 3 Select the QoS System Class node.
Step 4 In the Work pane, click the General tab.
Step 5 Update the following properties for the system class you want to configure to meet the traffic management needs of the system:

Note    Some properties may not be configurable for all system classes.
Name and description of each property:

Enabled check box
    If checked, the associated QoS class is configured on the fabric interconnect and can be assigned to a QoS policy. If unchecked, the class is not configured on the fabric interconnect, and any QoS policies associated with this class default to Best Effort or, if a system class is configured with a CoS of 0, to the CoS 0 system class.
    Note    This field is always checked for Best Effort and Fibre Channel.

CoS field
    The class of service. You can enter an integer value between 0 and 6, with 0 being the lowest priority and 6 being the highest priority. We recommend that you do not set the value to 0 unless you want that system class to be the default system class for traffic if the QoS policy is deleted or the assigned system class is disabled.
    Note    This field is set to 7 for internal traffic and to "any" for Best Effort. Both of these values are reserved and cannot be assigned to any other priority.

Packet Drop check box
    If checked, packet drop is allowed for this class. If unchecked, packets cannot be dropped during transmission. This field is always unchecked for the Fibre Channel class, which never allows dropped packets, and always checked for Best Effort, which always allows dropped packets.

MTU drop-down list
    The maximum transmission unit for the channel. This can be one of the following:
      • An integer between 1500 and 9216. This value corresponds to the maximum packet size.
      • fc: a predefined packet size of 2240.
      • normal: a predefined packet size of 1500.
    Note    This field is always set to fc for Fibre Channel.

Multicast Optimized check box
    If checked, the class is optimized to send packets to multiple destinations simultaneously.
    Note    This option is not applicable to the Fibre Channel class.
Step 6 Click Save Changes.


Enabling a QoS System Class

The Best Effort and Fibre Channel system classes are enabled by default.


Step 1 In the Navigation pane, click the LAN tab.
Step 2 In the LAN tab, expand LAN > LAN Cloud.
Step 3 Select the QoS System Class node.
Step 4 In the Work pane, click the General tab.
Step 5 Check the Enabled check box for the QoS system class that you want to enable.
Step 6 Click Save Changes.


Example CLI configuration of a class-based policy enabling jumbo MTU:

policy-map type network-qos system_nq_policy
  class type network-qos class-platinum
    mtu 9000
    pause no-drop
  class type network-qos class-gold
    mtu 9000
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9000
system qos
  service-policy type network-qos system_nq_policy



Configuring Jumbo Frames on the Nexus 1000v Virtual Distributed Switch

The MTU can be configured only on Ethernet-type (uplink) port profiles.

Per Cisco bug ID CSCtk05901: if you configure an MTU for an Ethernet port profile, your ESX host may generate the following error:

2010 Nov 15 04:35:27 my-n1k %VEM_MGR-SLOT3-1-VEM_SYSLOG_ALERT: vssnet : 
sf_platform_set_mtu: Failed setting MTU for VMW port with portID 33554475.

In this case, the MTU value you have set is not supported by the VEM physical NIC. See your VMware documentation for more information about the MTUs supported by physical NICs.
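Whether the physical NIC actually passes jumbo frames can be checked from the ESX host with a don't-fragment vmkping (a sketch; 10.1.1.20 is a hypothetical vmkernel address on the jumbo-enabled segment, and 8972 is the largest ICMP payload that fits a 9000-byte MTU):

```
vmkping -d -s 8972 10.1.1.20
```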

Creating a System Port Profile

You can use this procedure to configure a system port profile for critical ports.


Before beginning this procedure, you must know or do the following:

  • You are logged in to the CLI in EXEC mode.
  • The VSM is connected to the vCenter Server.
  • You have configured the following:
    - Port admin status is active (no shutdown).
    - Port mode is access or trunk.
    - VLANs that are to be used as system VLANs already exist.
    - VLANs are configured as access VLANs or trunk-allowed VLANs.

A system port profile must be of the Ethernet type because it is used for physical ports. This procedure configures the Ethernet type.

In an installation where multiple Ethernet port profiles are active on the same VEM, it is recommended that they do not carry the same VLAN(s). The allowed VLAN list should be mutually exclusive. Overlapping VLANs can be configured but may cause duplicate packets to be received by virtual machines in the network.

Once a port profile is created, you cannot change its type (Ethernet or vEthernet).

The MTU size you set must be less than or equal to the fixed system jumbomtu size of 9000.

For more information, see the Cisco Nexus 1000V Interface Configuration Guide, Release 4.2(1)SV1(4a).

The MTU configured on an interface takes precedence over the MTU configured on a port profile.



1. config t
2. port-profile type ethernet profilename
3. description profiledescription
4. switchport mode trunk
5. switchport trunk allowed vlan vlan-id-list
6. no shutdown
7. system vlan vlan-id-list
8. (Optional) mtu mtu-size
9. show port-profile [brief | expand-interface | usage] [name profilename]
10. copy running-config startup-config
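Putting steps 1 through 10 together, a system uplink profile with jumbo MTU might look like this (a sketch; the profile name, description, VLAN IDs, and MTU value are hypothetical):

```
config t
port-profile type ethernet system-uplink
  description "Jumbo system uplink"
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  no shutdown
  system vlan 10,20
  mtu 9000
show port-profile name system-uplink
copy running-config startup-config
```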

Not too bad, right?