
Here are my notes for the Networking section of the blueprint, after tons of reading and lab time.  Again – I am heavily relying on the VCAP5-DCA Official Cert Guide (OCG) and the vSphere 5.5 Documentation Center.


Objective 2.1 Implement and Manage Virtual Standard Switch (VSS) Networks

 Create and Manage VSS Components – OCG page 48


# Managing the VSS in the GUI – Options mapped out

VC > Host > Configuration > Networking > vSphere Standard Switch

ALL Available Options:

Networking - Refresh _____________________ Refreshes the Networking View
Networking - Add Networking… ____________ Opens the Add Network Wizard, options below
 - Virtual Machine Portgroup ___________ Choose/create vSwitch, label, vlan-id,
 - VMkernel - choose/create vSwitch ____ Label, vlan-id, mark for vmotion/ft/mgmt, 
                                         ip/ipv6/both, IP assignment
Networking - Properties…_____________________ Checkbox, Enables IPv6 Support on the host system
 vSwitch Port/Portgroup bubbles ___________ Displays the properties: General, Security,
                                         Traffic-Shaping, Failover-LB/NIC
 vSwitch - Remove… _______________ Deletes the vswitch
 vSwitch - Properties…
 * Network Adapters tab
 - Add… _________________ Add an unused physical network adapter (vmnic) to the vswitch.
 - Edit… ________________ Set the NIC speed/duplex settings.
 - Remove…_______________ Unassigns the vmnic from the vswitch
 * Ports Tab
 - Add…____________ Opens the Add Network Wizard [minus the vSwitch selection, same options]
 - Remove…_________ Delete the selected port/portgroup
 - Edit vSwitch… Opens the properties for the selected port/portgroup [4 tabs listed below]
    - General
        Number of Ports ___ Drop-down options: 24, 56, 120, 248, 504, 1016, 2040, 4088
        MTU _______________ 1500 - 9000
    - Security
        Promiscuous Mode ____ Accept - VM adapter receives all traffic on the wire. 
                              Reject - default operation
        MAC Addr Changes ____ Reject - disables rx-vm traffic on init/effective MAC mismatch.
                              SW iSCSI initiator requires Accept.
        Forged Transmits ____ Reject - Host drops tx traffic on init/effective MAC mismatch.
                              Accept - host transmits the frames anyway
    - Traffic Shaping
        Status ______________ Enabled = Applied to each virtual network adapter 
        Avg Bandwidth _______ In Kbits/sec; bits allowed across a port, averaged over time.
        Peak Bandwidth ______ In Kbits/sec; allowed range is 1 to 9223372036854775 Kbits
                              (that is ~1 million terabytes)
        Burst Bandwidth _____ Burst bonus gained when not all allocated bandwidth is used
    - NIC Teaming
        Load Balancing ___________ Dropdown: Originating Virtual Port ID / IP Hash /
                                   Source MAC hash / Explicit failover order
        Network Failover Detect __ Dropdown: Link status only / Beacon probing
        Notify Switches __________ Yes / No
        Failback _________________ Yes / No
         Failover Order __________ NIC Failover Function: Active/Standby/Unused Adapters
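The port-count drop-down values in the General tab look arbitrary, but each one is a power of two minus eight ports the VMkernel reserves for its own use (e.g. 128 configured, 120 selectable). A quick sanity check of that pattern (my observation from the GUI values, not an official formula):

```python
# Each selectable port count is 2**n minus 8 VMkernel-reserved ports, n = 5..12.
options = [2**n - 8 for n in range(5, 13)]
print(options)  # [24, 56, 120, 248, 504, 1016, 2040, 4088]
```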

 - Edit Portgroup/VMKnet… Configurations here override the vSwitch-level configurations.
    - General
        Network Label _______________ Network Name
        Vlan ID _____________________ Specify the VLAN
        VMkernel Int-only settings __ Checkboxes for vMotion, Fault Tolerance Logging,
                                      Management / iSCSI Port Binding / MTU
    - Security
        Promiscuous Mode ____ Accept - VM adapter receives all traffic on the wire. 
                              Reject - default operation
        MAC Addr Changes ____ Reject disables rx-vm traffic on init/effective MAC mismatch.
                              Sw iSCSI initiator requires accept.
        Forged Transmits ____ Reject - Host drops tx traffic on init/effective MAC mismatch.
                              Accept - host transmits the frames anyway
    - Traffic Shaping
        Status __________ Enabled - Applied to each virtual network adapter / Disabled
        Avg Bandwidth ___ In Kbits/sec; bits allowed across a port, averaged over time.
        Peak Bandwidth __ In Kbits/sec; Allowed range is 1 to 9223372036854775 Kbits
                          that is, ~ 1Million Terabytes
        Burst Bandwidth _ Burst bonus gained when not all allocated bandwidth is used
    - NIC Teaming
        Load Balancing ___________ Dropdown: Originating Virtual Port ID / IP Hash / 
                                   Source MAC hash / explicit failover order
        Network Failover Detect __ Dropdown: Link status only / Beacon probing
        Notify Switches __________ Yes / No
        Failback _________________ Yes / No
        Failover Order ___________ Active/Standby/Unused Adapters ; Select vmnic, Move Up / Move Down
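The three shaping parameters above fit the classic token-bucket model: tokens accrue at the average rate, the bucket holds at most the burst size, and drain is capped at the peak rate. Here is a conceptual sketch of that model (my own illustration of the semantics, not ESXi's implementation). As a bonus, the odd-looking peak maximum of 9223372036854775 Kbits is just 2^63 bits expressed in Kbits (rounded down):

```python
class TokenBucketShaper:
    """Toy model of avg/peak/burst traffic shaping (conceptual, not ESXi code)."""

    def __init__(self, avg_kbps, peak_kbps, burst_kb):
        self.avg = avg_kbps * 1000            # tokens (bits) accrued per second
        self.peak = peak_kbps * 1000          # max drain rate while tokens remain
        self.cap = burst_kb * 1024 * 8        # burst bonus capacity, in bits
        self.bucket = self.cap

    def tick(self, seconds=1.0):
        # Idle/underused time accrues a burst bonus, up to the burst size.
        self.bucket = min(self.cap, self.bucket + self.avg * seconds)

    def allowed(self, seconds=1.0):
        # In any interval we may send up to the peak rate, limited by
        # the average allocation plus whatever burst bonus is banked.
        grant = min(self.peak * seconds, self.bucket + self.avg * seconds)
        self.bucket -= max(0, grant - self.avg * seconds)
        return grant

# The GUI's maximum peak of 9223372036854775 Kbits is 2**63 bits:
assert 2**63 // 1000 == 9223372036854775
```

So a port that has been quiet can momentarily exceed its average rate (up to peak) until the banked burst bonus runs out, which matches the "burst bonus gained when not all allocated bandwidth is used" description above.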

 Create and Manage VMkernel Ports on Standard Switches

# Configuration/Management in the GUI (details in first section)
VC > Host > Configuration > Networking

# Managing VMkernel ports in the CLI (commands with sample output)
# Query the tags on a vmknic

 ~# esxcli network ip interface tag get -i vmk4
 Tags: Management, VMotion, faultToleranceLogging

# Query the ipv4 summarized information for all vmkernel interfaces

~ # esxcli network ip interface ipv4 get
Name  IPv4 Address  IPv4 Netmask  IPv4 Broadcast  Address Type  DHCP DNS
----  ------------  ------------  --------------  ------------  --------
vmk0                                              STATIC        false
vmk1                                              STATIC        false
vmk2                                              STATIC        false

# Add a vmkernel interface to a vswitch’s port group

~ # esxcli network ip interface add -i vmk# -p <portgroup-name>

# Set the ipv4 information on an existing vmkernel interface

~ # esxcli network ip interface ipv4 set -i vmk4 -I <ipv4-address> -N <netmask> -t static -P false
~ # esxcli network ip interface ipv4 get
Name  IPv4 Address  IPv4 Netmask  IPv4 Broadcast  Address Type  DHCP DNS
----  ------------  ------------  --------------  ------------  --------
vmk4                                              STATIC        false

# Edit the enabled status & MTU of an existing vmkernel interface; e=enabled , i=interface-name , m=MTU

~ # esxcli network ip interface set -e [true|false] -i vmk# -m 1500

 Configure advanced vSS Settings – OCG Page 66

# Configuration/Management in the GUI (details in first section)
VC > Host > Configuration > Networking

# Managing vSwitches in the CLI (commands with sample output)
# Query all standard vswitch commands

~ # esxcli esxcli command list | grep vswitch.standard
 network.vswitch.standard add
 network.vswitch.standard list
 network.vswitch.standard remove
 network.vswitch.standard set
 network.vswitch.standard.policy.failover get
 network.vswitch.standard.policy.failover set
 network.vswitch.standard.policy.security get
 network.vswitch.standard.policy.security set
 network.vswitch.standard.policy.shaping get
 network.vswitch.standard.policy.shaping set
 network.vswitch.standard.portgroup add
 network.vswitch.standard.portgroup list
 network.vswitch.standard.portgroup remove
 network.vswitch.standard.portgroup set
 network.vswitch.standard.portgroup.policy.failover get
 network.vswitch.standard.portgroup.policy.failover set
 network.vswitch.standard.portgroup.policy.security get
 network.vswitch.standard.portgroup.policy.security set
 network.vswitch.standard.portgroup.policy.shaping get
 network.vswitch.standard.portgroup.policy.shaping set
 network.vswitch.standard.uplink add
 network.vswitch.standard.uplink remove

# Query global settings

~ # esxcli network vswitch standard list
Name: vSwitch0
Class: etherswitch
Num Ports: 1536
Used Ports: 11
Configured Ports: 128
MTU: 1500
CDP Status: listen
Beacon Enabled: false
Beacon Interval: 1
Beacon Threshold: 3
Beacon Required By:
Uplinks: vmnic0
Portgroups: vmk1-iscsi, VM Network, Management Network

# Query vswitch policy details

~ # esxcli network vswitch standard policy failover get -v vSwitch0
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic0
Standby Adapters:
Unused Adapters:

~ # esxcli network vswitch standard policy security get -v vSwitch0
Allow Promiscuous: false
Allow MAC Address Change: true
Allow Forged Transmits: true

~ # esxcli network vswitch standard policy shaping get -v vSwitch0
Enabled: false
Average Bandwidth: -1 Kbps
Peak Bandwidth: -1 Kbps
Burst Size: -1 Kib

# Query vswitch portgroups

~ # esxcli network vswitch standard portgroup list
Name                Virtual Switch  Active Clients  VLAN ID
------------------  --------------  --------------  -------
Management Network  vSwitch0        1               0
My VMK Interface    vSwitch3        1               1234
Prod-201            vSwitch3        1               201
VM Network          vSwitch0        4               0

# Query switch port group policy details [works with failover/security/shaping policies]

~ # esxcli network vswitch standard portgroup policy security get -p 'VM Network'
Allow Promiscuous: true
Allow MAC Address Change: true
Allow Forged Transmits: true
Override Vswitch Allow Promiscuous: true
Override Vswitch Allow MAC Address Change: false
Override Vswitch Allow Forged Transmits: false

# Add a standard vSwitch named uber-vswitch with 2000 ports (configured ports default to 128, maximum 4096)

~ # esxcli network vswitch standard add -P 2000 -v uber-vswitch

# add two uplinks to uber-vswitch

~ # esxcli network vswitch standard uplink add -u vmnic0 -v uber-vswitch
~ # esxcli network vswitch standard uplink add -u vmnic1 -v uber-vswitch

# Set the MTU on uber-vswitch to 9000

~ # esxcli network vswitch standard set -m 9000 -v uber-vswitch

# Add a portgroup named uber-PG to uber-vswitch, configure the pg to tag with Vlan 100

~ # esxcli network vswitch standard portgroup add -p uber-PG -v uber-vswitch
~ # esxcli network vswitch standard portgroup set -p uber-PG -v 100

# Configure iphash policy with disabled switch notifications, and traffic shaping ~100mb on the uber-PG port group

~ # esxcli network vswitch standard portgroup policy failover set -p uber-PG -l iphash -n false
~ # esxcli network vswitch standard portgroup policy shaping set -p uber-PG -e true -b 100000 -k 150000 -t 200000

# About vSwitch NIC Teaming LB Options

explicit ______ Always use the highest-order uplink from the list of active adapters which pass failover criteria.
iphash ________ Route based on hashing the src and destination IP addresses.
mac ___________ Route based on the MAC address of the packet source.
portid ________ Route based on the originating virtual port ID.
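The four policies differ mainly in which key feeds the path selection. A simplified stand-in (the actual hash ESXi uses is internal; this just illustrates key-modulo-uplink-count placement, and the uplink numbers here are hypothetical):

```python
def pick_uplink(policy, n_uplinks, port_id=0, src_mac="", src_ip=0, dst_ip=0):
    """Illustrative key % uplink-count selection (NOT ESXi's actual hash)."""
    if policy == "portid":
        key = port_id                       # stable per virtual port
    elif policy == "mac":
        key = int(src_mac.replace(":", ""), 16)
    elif policy == "iphash":
        key = src_ip ^ dst_ip               # varies per src/dst IP pair
    else:                                   # "explicit": highest-order active uplink
        return 0
    return key % n_uplinks

# With iphash, one VM talking to two different clients can land on
# two different uplinks; portid and mac always pick the same one.
a = pick_uplink("iphash", 2, src_ip=0x0A000001, dst_ip=0x0A000002)
b = pick_uplink("iphash", 2, src_ip=0x0A000001, dst_ip=0x0A000003)
```

This is why only iphash can spread a single virtual adapter's outbound traffic across several active uplinks, as noted under Objective 2.3 below: the key changes per destination, while portid and mac keys are fixed per adapter.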



Objective 2.2 Implement and Manage Virtual Distributed Switch (VDS) Networks

Determine Use Cases for and Apply VMware DirectPath I/O – OCG Page 61


DirectPath I/O “Passthrough”

Use case: Supporting extremely heavy network activity within a VM, when no other methods are sufficient.

 Migrate a vSS Network to a Hybrid or Full vDS Solution – OCG Page 62

#1 Create vDS, don’t migrate hosts or adapters
VC > Networking > Right Click DC > New vSphere Distributed Switch

#2 Prepare destination PortGroups for any existing networks
VC > Networking > vDS > Configuration > New Port Group...

#3 Connect Hosts
VC > Networking > vDS > Add Host…

#4 Select adapters
- Select the physical adapters
- For each VMkernel interface, choose the destination port group prepared in step 2.

#5 Migrate VM networking
- Check “Migrate virtual machine networking”
- Select the Destination port group for each vm-network

#6 Click Finish

 Configure vSS and vDS Settings Using Command Line Tools – OCG Page 80

Not a lot regarding this. Here are the available (mostly read-only) CLI commands for the DVS:

~ # esxcli esxcli command list | grep network.vswitch.dvs
network.vswitch.dvs.vmware.lacp.config get
network.vswitch.dvs.vmware.lacp.stats get
network.vswitch.dvs.vmware.lacp.status get
network.vswitch.dvs.vmware.lacp.timeout set
network.vswitch.dvs.vmware list
network.vswitch.dvs.vmware.vxlan.config.stats get
network.vswitch.dvs.vmware.vxlan.config.stats set
network.vswitch.dvs.vmware.vxlan get
network.vswitch.dvs.vmware.vxlan list
network.vswitch.dvs.vmware.vxlan.network.arp list
network.vswitch.dvs.vmware.vxlan.network.arp reset
network.vswitch.dvs.vmware.vxlan.network list
network.vswitch.dvs.vmware.vxlan.network.mac list
network.vswitch.dvs.vmware.vxlan.network.mac reset
network.vswitch.dvs.vmware.vxlan.network.mtep list
network.vswitch.dvs.vmware.vxlan.network.port list
network.vswitch.dvs.vmware.vxlan.network.port.stats list
network.vswitch.dvs.vmware.vxlan.network.port.stats reset
network.vswitch.dvs.vmware.vxlan.network.stats list
network.vswitch.dvs.vmware.vxlan.network.stats reset
network.vswitch.dvs.vmware.vxlan.stats list
network.vswitch.dvs.vmware.vxlan.stats reset
network.vswitch.dvs.vmware.vxlan.vmknic list
network.vswitch.dvs.vmware.vxlan.vmknic.multicastgroup list
network.vswitch.dvs.vmware.vxlan.vmknic.stats list
network.vswitch.dvs.vmware.vxlan.vmknic.stats reset

 Analyze Command Line Output to Identify vSS and vDS Configuration Details

# Config detail from esxcli

~ # esxcli network vswitch dvs vmware list
Name: grosas-lab-dvs0
VDS ID: 01 2f 16 50 eb 4a 7d 3d-d6 5a 7d 55 05 27 76 5b
Class: etherswitch
Num Ports: 1536
Used Ports: 1
Configured Ports: 512
MTU: 1500
CDP Status: listen
Beacon Timeout: -1
VMware Branded: true
DVPortgroup ID: dvportgroup-77
In Use: false
Port ID: 0

# Config detail from net-dvs

~ # net-dvs -l
switch 01 2f 16 50 eb 4a 7d 3d-d6 5a 7d 55 05 27 76 5b (etherswitch)
 max ports: 1536
 global properties:
 com.vmware.common.version = 0x 3. 0. 0. 0
 propType = CONFIG
 idle timeout = 15 seconds
 active timeout = 60 seconds
 sampling rate = 0
 collector =
 internal flows only = false
 propType = CONFIG
 propType = CONFIG
 propType = CONFIG
 com.vmware.common.alias = grosas-lab-dvs0 , propType = CONFIG
 propType = CONFIG
 com.vmware.etherswitch.mtu = 1500 , propType = CONFIG
 com.vmware.etherswitch.cdp = CDP, listen
 propType = CONFIG
 host properties:
 com.vmware.common.host.portset = DvsPortset-0 , propType = CONFIG
 com.vmware.common.host.volatile.status = green , propType = RUNTIME
 com.vmware.common.portset.opaque = false , propType = RUNTIME
 propType = CONFIG
 port 0:
 com.vmware.common.port.alias = dvUplink1 , propType = CONFIG
 com.vmware.common.port.connectid = 0 , propType = CONFIG
 com.vmware.common.port.volatile.status = free
 com.vmware.common.port.volatile.vlan = VLAN 0
 com.vmware.common.port.portgroupid = dvportgroup-77 , propType = CONFIG
 com.vmware.common.port.block = false , propType = CONFIG
 com.vmware.common.port.dvfilter = filters (num = 0):
 propType = CONFIG
 com.vmware.common.port.ptAllowed = 0x 0. 0. 0. 0
 propType = CONFIG
 load balancing = source virtual port id
 link selection = link state up;
 link behavior = notify switch; best effort on failure; shotgun on failure;
 active =
 standby =
 propType = CONFIG
 com.vmware.etherswitch.port.security = deny promiscuous; deny mac change; allow forged frames
 propType = CONFIG
 com.vmware.etherswitch.port.vlan = Guest VLAN tagging
 ranges = 0-4094
 propType = CONFIG
 com.vmware.etherswitch.port.txUplink = normal , propType = CONFIG
 pktsInUnicast = 0
 bytesInUnicast = 0
 pktsInMulticast = 0
 bytesInMulticast = 0
 pktsInBroadcast = 0
 bytesInBroadcast = 0
 pktsOutUnicast = 0
 bytesOutUnicast = 0
 pktsOutMulticast = 0
 bytesOutMulticast = 0
 pktsOutBroadcast = 0
 bytesOutBroadcast = 0
 pktsInDropped = 0
 pktsOutDropped = 0
 pktsInException = 0
 pktsOutException = 0
 propType = RUNTIME
 propType = CONFIG

 Configure Netflow – OCG Page 68


WC > DVS > Right click > All vCenter Actions - Edit Netflow > Provide collector IP/Port > Give DVS Switch IP Address

– Optional: Active flow export timeout
– Optional: Idle flow export timeout
– Sampling Rate

The sampling rate represents the number of packets that NetFlow drops after every collected packet. A sampling rate of x instructs NetFlow to drop packets in a collected:dropped ratio of 1:x. If the rate is 0, NetFlow samples every packet; that is, it collects one packet and drops none. If the rate is 1, NetFlow collects a packet and drops the next one, and so on.
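In other words, a rate of x keeps 1 packet out of every x+1. A quick sketch of that ratio (my illustration of the documented behavior, not vSphere code):

```python
def netflow_sample(packets, rate):
    """Keep 1 packet, then drop the next `rate` packets (1:rate collected:dropped)."""
    return [p for i, p in enumerate(packets) if i % (rate + 1) == 0]

pkts = list(range(10))
assert netflow_sample(pkts, 0) == pkts             # rate 0: collect everything
assert netflow_sample(pkts, 1) == [0, 2, 4, 6, 8]  # rate 1: every other packet
```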

Determine Appropriate Discovery Protocol – OCG Page 68


Use CDP for Cisco Switches / LLDP for everything else…

WC > DVS > Manage > Settings > Properties > Edit > Advanced > Type: CDP/LLDP | Operation: Listen/Advertise/Both

 Determine Use Cases for, and Configure PVLANs – OCG Page 69


WC > DVS > Manage > Settings > Private VLAN > Edit

– Define the Primary VLAN ID (VLAN Type Promiscuous)
– Define the Secondary VLANs (VLAN Type Community or Isolated)

Use Case: Private VLANs are used to solve VLAN ID limitations and waste of IP addresses for certain network setups.
A private VLAN is identified by its primary VLAN ID. A primary VLAN ID can have multiple secondary VLAN IDs associated with it. Primary VLANs are Promiscuous, so that ports on a private VLAN can communicate with ports configured as the primary VLAN. Ports on a secondary VLAN can be either Isolated, communicating only with promiscuous ports, or Community, communicating with both promiscuous ports and other ports on the same secondary VLAN.
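The reachability rules above boil down to a small decision table. A sketch, assuming each port is described by its secondary VLAN type and ID (the tuple format and helper are hypothetical, not a vSphere API):

```python
def pvlan_can_talk(a, b):
    """a, b: ('promiscuous', None) | ('community', vlan_id) | ('isolated', vlan_id)."""
    type_a, vlan_a = a
    type_b, vlan_b = b
    if "promiscuous" in (type_a, type_b):
        return True                        # promiscuous ports reach everyone
    if type_a == type_b == "community" and vlan_a == vlan_b:
        return True                        # same community talks internally
    return False                           # isolated / cross-secondary traffic blocked

prom = ("promiscuous", None)
assert pvlan_can_talk(("isolated", 201), prom)                       # isolated <-> promiscuous only
assert pvlan_can_talk(("community", 202), ("community", 202))        # same community
assert not pvlan_can_talk(("isolated", 201), ("isolated", 201))      # isolated peers never talk
assert not pvlan_can_talk(("community", 202), ("community", 203))    # different secondaries blocked
```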

 Use Command Line Tools to Troubleshoot and Identify VLAN Configurations – OCG Page 73

# Check Vlan IDs for portgroups

~ # esxcli network vswitch standard portgroup list
 Name                Virtual Switch  Active Clients  VLAN ID
 ------------------  --------------  --------------  -------
 Management Network  vSwitch0        1               0
 My VMK Interface    vSwitch3        1               1234
 Prod-201            vSwitch3        1               300

# Change a Vlan ID on portgroup Prod-201

~ # esxcli network vswitch standard portgroup set -p Prod-201 -v 201



Objective 2.3 Troubleshoot Virtual Switch Solutions

 Understand the NIC Teaming failover types and related physical network settings – OCG Page 74

Edit Teaming and Failover Policy for a vSphere Standard Switch in the vSphere Web Client
Edit the Teaming and Failover Policy on a Standard Port Group in the vSphere Web Client
Edit the Teaming and Failover Policy on a Distributed Port Group in the vSphere Web Client
Edit Distributed Port Teaming and Failover Policies with the vSphere Web Client

Route based on Originating Virtual Port ID
– This is the default policy.
– The vSwitch assigns the VM’s virtual network adapter to a port number and uses the port number to determine which path will be used to route all network I/O sent from that adapter.
– This implementation does not require any changes on the connected physical switches.
– The vSwitch performs a modulo function, where the Port number is divided by the number of NICs in the team, and the remainder indicates the path to place the outbound I/O.
– If the path fails, the outbound I/O is automatically re-routed to a surviving path.
– This policy does not permit outbound data from a single virtual adapter to be distributed across all active paths on the vSwitch.

The Route based on Originating Virtual Port ID algorithm does not consider load into its calculation for traffic placement

Route based on Source MAC Hash
– This policy uses the MAC address of the virtual adapter to select the path, rather than the port number.
– The vSwitch performs a modulo function, where the MAC address is divided by the number of NICs in the team, and the remainder indicates the path to place the outbound I/O.

The Route based on Source MAC Hash algorithm does not consider load into its calculation for traffic placement.

Route based on IP Hash
– This is the only option that permits outbound data from a single virtual adapter to be distributed across all active paths.
– This option requires that the physical switch be configured for IEEE 802.3ad “Link Aggregation”
– The vSwitch must be configured for IP Hash for inbound load balancing.
– The outbound data from each virtual adapter is distributed across the active paths using the calculated IP hash.
– If a virtual adapter is concurrently sending data to two or more clients, the I/O to one client can be placed on one path and the I/O to another client can be placed on a separate path.
– The outbound traffic from a virtual adapter to a specific external client is based on the most significant bits of the IP address of both the virtual adapter and the client. The combined value is used by the vSwitch to place the associated outbound traffic on a specific path.

The Route based on IP Hash algorithm does not consider load into its calculation for traffic placement. But the inbound traffic is truly load balanced by the physical switch.

Route based on Physical NIC Load (DVS Only)
– Factors the load of the physical NIC when determining traffic placement.
– Does not require special settings on the physical switch
– Initially, outbound traffic is placed on a specific path. Activity is monitored.
– When I/O through a specific vmnic adapter reaches a consistent 75% capacity, then one or more virtual adapters are automatically remapped to other paths.
– This is a good choice when Etherchannel on the physical switch is not feasible.
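The rebalancing idea above can be sketched as follows, assuming we can read per-uplink utilization (the 75% trigger comes from the notes; the remap choice here is my simplification of whatever ESXi actually does):

```python
def rebalance(util, vnic_map, threshold=0.75):
    """util: uplink -> fraction of capacity in use; vnic_map: vnic -> uplink.
    When an uplink runs at/above the threshold, move one of its virtual
    adapters to the least-loaded uplink (toy model, not ESXi's algorithm)."""
    hot = [u for u, load in util.items() if load >= threshold]
    for uplink in hot:
        coolest = min(util, key=util.get)
        if coolest == uplink:
            continue                       # everything saturated; nothing to do
        for vnic, assigned in vnic_map.items():
            if assigned == uplink:
                vnic_map[vnic] = coolest   # remap one virtual adapter
                break
    return vnic_map

m = rebalance({"vmnic0": 0.9, "vmnic1": 0.2},
              {"vm1": "vmnic0", "vm2": "vmnic0"})
```

After the call, one VM has been remapped off the hot vmnic0 while the other stays put, which is the behavior the bullet points describe: initial placement, monitoring, then selective remap.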

 Determine and Apply Failover Settings – OCG Page 77


WC > Manage > Networking > Virtual Switches > Edit Settings > Teaming and Failover
WC > DVS > Manage > Ports > Edit Distributed Port Settings

Network Failover Detection

# Link status Only
Relies only on the link status that the network adapter provides.
– Detects removed cables & physical switch port failures.
– Does not detect a physical switch port that is blocked by spanning tree or is misconfigured.
– Does not detect a pulled cable that connects a physical switch to another device.

# Beacon Probing
Sends out and listens for beacon probes on all NICs in the team and uses this information, in addition to link status, to determine link failure. ESX/ESXi sends beacon packets every second.
– Useful with teams of 3 or more NICs; tolerates n-2 failures
– NICs must be in active/active or active/standby, NICs in unused state do not participate in beacon probing.
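Why 3 or more NICs? Each NIC should hear beacon probes from every other NIC in the team, so a NIC that hears nobody while its teammates still hear each other is the broken one. With only two NICs, each side hears silence and neither can tell which link failed. A toy sketch of that inference (my illustration, assuming a simple received-beacon matrix; not how the VMkernel actually tracks it):

```python
def failed_nics(team, heard):
    """heard[nic] = set of teammates whose beacons that NIC received.
    A NIC hearing no teammates is bad -- unless everyone hears silence,
    in which case the culprit cannot be isolated (the 2-NIC ambiguity)."""
    silent = {n for n in team if not heard[n]}
    if len(silent) == len(team):
        return set()                       # total silence: ambiguous
    return silent

# 3-NIC team, vmnic2's link is broken: it hears nobody, others still hear peers.
bad = failed_nics(
    ["vmnic0", "vmnic1", "vmnic2"],
    {"vmnic0": {"vmnic1"}, "vmnic1": {"vmnic0"}, "vmnic2": set()},
)
```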

Notify Switches Yes/No – If Yes, a notification is sent over the network to update the lookup tables on the physical switches.
Set to No for features like Microsoft NLB in unicast mode.

 Configure Explicit Failover to Conform with VMware Best Practices – OCG Page 77

Override switch failover order to manually specify which NICs are Active / Standby / Unused.


Configure Port Groups to Properly Isolate Network Traffic – OCG Page 79

– VMware recommends that each type of network traffic is separated by VLANs.
– Separate VLANs for Management, vMotion, VMs, iSCSI, NAS, VMware HA Heartbeat, Fault Tolerance logging.
– Trunk the VLANs on the physical switch.


Given a Set of Network Requirements, Identify the Appropriate Distributed Switch Technology to Use – OCG Page 81

# VDS features



Switch/Network Discovery [CDP / LLDP]

Network Rollback and Recovery

Port Mirroring
   Switched Port Analyzer[SPAN]
   Remote Switched Port Analyzer [RSPAN]
   Enhanced Remote Switched Port Analyzer (ERSPAN)
Port Security

TCP Segmentation Offload / Jumbo Frames

Single-Root I/O Virtualization (SR-IOV)

Traffic Filtering [ACL]


Configure and Administer vSphere Network I/O Control – OCG Page 83

Conveniently I have blogged about this one, and deployed it in production… and I’m running out of steam.




Use Command Line Tools to Troubleshoot and Identify Configuration Items From an Existing vDS

Already covered under Analyze Command Line Output to Identify vSS and vDS Configuration Details

VDCA550 Objective 1.1 – 1.3 (Implement and Manage Storage) in One Dense Post

Because sharing is caring.  Here are my notes after tons of reading and lab time.  Heavily using the VCAP5-DCA Official Cert Guide (OCG) and the vSphere 5.5 Documentation Center.  Supplementing with blogs and YouTube anywhere my main sources fall short.

Extra special thanks to Chris Wahl for his Study Sheets.  They are helping me tons with managing my time.  I’m using the VDCA550 version.

Objective 1.1 Implement Complex Storage Solutions


VMware DirectPath I/O – OCG Page 101


“VM access to PCI devices”

# Configuring in GUI – Video Demo

# Pre-reqs
– Intel VT-d or AMD IOMMU enabled in BIOS
– Devices connected and marked as available for passthrough
– VM Hardware version 7

# Enabling in the GUI
VC > Host > Configuration > Hardware > Advanced Settings > Configure Passthrough (add a PCI device)
VC > VM > Edit Settings > Add > Add the PCI device.


N-Port Virtualization (NPIV) – OCG Page 99


“WWN at VM level”

# Pre-reqs
– Only on VMs with RDM disks (VMs with regular disks use the WWNs of the host’s HBAs).
– HBA on host must support NPIV
– Fabric switches must be NPIV-aware

# Capabilities & Limitations
– vMotion supported; the VMkernel reverts to the physical HBA if the destination host does not support NPIV.
– Concurrent I/O supported.
– Requires FC switch
– Clones do not retain WWN
– Does not support Storage vMotion
– Disabling and re-enabling NPIV capability on the FC switch while the VM is running can cause the FC link to fail and I/O to stop.

# Configuring in the GUI
VC > VM > Edit Settings > Options Tab > Advanced – Fibre Channel NPIV
WC > VM > Edit Settings > VM Options > Expand FC NPIV triangle > Deselect “Temporarily Disable NPIV for this VM” > Generate new WWN


Raw Device Mappings (RDM) – OCG Page 98


“An RDM allows a VM to directly utilize a LUN”

# Considerations & Limitations
– RDM is not available for directly attached block devices.
– Snapshots are not supported in physical compatibility mode.

# Configuring in GUI
VC > VM > Edit Settings > Hardware – Add > Hard Disk > Type: Raw Device Mappings > Select LUN > Select datastore


Configure vCenter Server Storage Filters (Storage Profiles) – OCG Page 102


“vCenter Server provides storage filters to help you avoid storage device corruption or performance degradation that can be caused by an unsupported use of storage devices.”

# Configuring in the GUI
VC > Administration > vCenter Server Settings > Advanced Settings
WC > VC Server > Manage > Settings > Advanced Settings > Edit

(filters by default are not listed and are TRUE)

Add the key – In the Value box, type False > Add > OK


VMFS re-signaturing – OCG Page 104


“When resignaturing a VMFS copy, ESXi assigns a new UUID and a new label to the copy, and mounts the copy as a datastore distinct from the original.”

# Resignaturing in the GUI

# Checking UUID
esxcli storage vmfs extent list
vmkfstools -P -h [datastoreName]

# Checking UUID in the GUI
VC > Datastores and DS Clusters > Configuration > Datastore Details > Location

# Resignaturing with GUI
VC > Host > Configuration > Storage > Add Storage… > Select Disk/LUN > Select Datastore > Mount options > Assign New Signature
VC > Host > Configuration > Storage > Add Storage… > Select Disk/LUN > Select Datastore > Mount options > Keep Existing Signature.

# Resignaturing with esxcli
esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot mount -l 'datastore-volume-label'
esxcli storage vmfs snapshot resignature -l 'datastore-volume-label'


Understand and apply LUN masking using PSA-related commands – OCG Page 127, Page 191

# Applying LUN Masking


# Changing the Path Selection Plugin for a Storage Array Type Plugin
/vmfs/volumes # esxcli storage nmp satp set -s VMW_SATP_CX -P VMW_PSP_RR
Default PSP for VMW_SATP_CX is now VMW_PSP_RR

# List devices
esxcli storage vmfs extent list

# List paths
esxcli storage nmp path list

# List all claim rules
esxcli storage core claimrule list

# Claim rule based on Fibre Channel transport
esxcli storage core claimrule add -u -P MASK_PATH -t transport -R fc

# Claim rule #333 masking on adapter, channel, target, LUN
esxcli storage core claimrule add -r 333 -P MASK_PATH -t location -A vmhba32 -C 0 -T 0 -L 0

# Load the claim rules into runtime
esxcli storage core claimrule load
esxcli storage core claimrule run

# Reclaim a lun
esxcli storage core claiming reclaim -d <device-id>

# Remove a rule
esxcli storage core claimrule remove -r 333
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba2 -C 0 -T 0 -L 2
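Claim rules are evaluated in ascending rule-number order and the first match wins; that is why a MASK_PATH rule at #333 grabs the path before the default NMP catch-all rule (65535) can. A toy matcher for that ordering (the rule/predicate structure is my invention, not the PSA API):

```python
def claim_owner(rules, path):
    """rules: list of (rule_number, plugin, predicate). Rules are checked
    in ascending number order; the first predicate that matches claims the path."""
    for number, plugin, matches in sorted(rules, key=lambda r: r[0]):
        if matches(path):
            return number, plugin
    return None

rules = [
    (65535, "NMP", lambda p: True),                         # catch-all default rule
    (333, "MASK_PATH", lambda p: p == "vmhba32:C0:T0:L0"),  # our masking rule
]
assert claim_owner(rules, "vmhba32:C0:T0:L0") == (333, "MASK_PATH")  # masked
assert claim_owner(rules, "vmhba2:C0:T0:L2") == (65535, "NMP")       # unmasked
```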

# LUN Masking in the GUI
No GUI method exactly matches the commands above.
– Native Multipathing (NMP) paths can be enabled/disabled
– Path Selection Policy (PSP) can be configured (Fixed, Most Recently Used, Round Robin)

VC > Hosts and Clusters > Host > Configuration > Storage > View Devices > Manage Paths
Web Client > Storage > Datastore > Manage > Settings > Connectivity and Multipathing


Configure iSCSI Port Binding – OCG Page 123

# Configuring iSCSI Port Binding Video Demo

# Adding the Software iSCSI adapter
host > configuration > storage adapters > Add > Select “Add Software iSCSI adapter” > OK

# Adding the iSCSI vmkernel interface
VC > host > configuration > networking > vSphere Standard Switch > Add Networking… VMkernel > Select vSwitch

#Configure the Storage Adapter
VC > Hosts and Clusters > host > configuration > storage adapters > select the iSCSI Software Adapter > Properties


vSphere Flash Read Cache (Not covered in printed OCG – Covered in supplemental Appendix C)


“Performance enhancement of read-intensive applications by providing a write-through cache for virtual disks. It uses the Virtual Flash Resource, which can be built on Flash-based, solid-state drives (SSDs) that are installed locally in the ESXi hosts.”

# Configuring vFRC On the host
Web Client > Hosts & Clusters > Host > Manage > Settings > Virtual Flash > Virtual Flash Resource Management > Add Capacity

# Configure a VM with vFRC
Web Client > VM > Edit Settings > Select/Expand Hard Disk > Enter the Virtual Flash Read Cache reservation > OK

# Configure Host Cache in GUI
WC > Host > Manage > Storage > Host Cache Configuration > Select DS > Allocate space for host cache.


Configure Datastore Cluster – OCG Page 120

# Configure DS Cluster in the GUI
VC > Storage > New DS Cluster > Storage DRS Automation level > Select Runtime settings/IO inclusion > Select Clusters/Hosts > Select Datastores
WC > Right click DC Object > New DS Cluster


Upgrade VMware Storage Infrastructure – OCG Page 115

# Upgrade datastores in the GUI
VC > Datastore > Configuration > Upgrade (Option does not appear if running latest)
WC > Datastore > Manage > Settings > General > Properties (Option does not appear if running latest)



Objective 1.2 Manage Complex Storage Solutions


Analyze I/O workloads to determine storage performance requirements OCG Page 168, 188, 196


# List VM World GID Info
vscsiStats -l

# Collect stats on GID 42155
vscsiStats -s -w 42155 (s to start collection, w to specify the GID)

# Display stats
vscsiStats -p {type} (Type options all, ioLength, seekDistance, outstandingIOs, latency, interarrival)

# Stop all collection
vscsiStats -x

# View host level statistics, examine disk adapter stats
esxtop > d

# View LUN level statistics
esxtop > u

# View VM level disk stats
esxtop > v

* CMDS/s – This is the total number of commands per second, which includes IOPS and other SCSI commands (e.g. reservations and locks). Generally speaking CMDS/s = IOPS unless there are a lot of other SCSI operations/metadata operations such as reservations.
* DAVG/cmd – This is the average response time in milliseconds per command being sent to the storage device.
* KAVG/cmd – This is the amount of time the command spends in the VMKernel.
* GAVG/cmd – This is the response time as experienced by the Guest OS. This is calculated by adding together the DAVG and the KAVG values.

As a general rule DAVG/cmd, KAVG/cmd and GAVG/cmd should not exceed 10 milliseconds (ms) for sustained lengths of time.
There are also the following throughput metrics to be aware of:

* CMDS/s – As discussed above
* READS/s – Number of read commands issued per second
* WRITES/s – Number of write commands issued per second
* MBREAD/s – Megabytes read per second
* MBWRTN/s – Megabytes written per second

The sum of reads and writes equals IOPS, which is the most common benchmark when monitoring and troubleshooting storage performance. These metrics can be monitored at the HBA or Virtual Machine level.
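Those relationships are simple arithmetic, which makes them easy to sanity-check in a monitoring script. A sketch (function and field names are my own, mirroring the esxtop counters above):

```python
def disk_health(davg_ms, kavg_ms, reads_s, writes_s, threshold_ms=10.0):
    """GAVG is the guest-observed latency (DAVG + KAVG); IOPS = reads + writes.
    The 10 ms sustained-latency rule of thumb from the notes is the default threshold."""
    gavg = davg_ms + kavg_ms
    iops = reads_s + writes_s
    return {"GAVG": gavg, "IOPS": iops, "latency_ok": gavg <= threshold_ms}

# Example: 4.0 ms at the device + 1.5 ms in the VMkernel = 5.5 ms guest latency.
h = disk_health(davg_ms=4.0, kavg_ms=1.5, reads_s=300, writes_s=200)
```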


Identify and tag SSD and local devices – page 133


# Identify SSD in the GUI
VC > Host > Configuration > Hardware – Storage > Datastores > Drive Type
WC > Host > Manage > Storage > Storage Devices > Drive Type

# Identify the device to be tagged and its SATP, command & example output

esxcli storage nmp device list (note the SATP)
 Device Display Name: DGC Fibre Channel Disk (naa.6006016015301d00167ce6e2ddb3de11)
 Storage Array Type: VMW_SATP_CX
 Storage Array Type Device Config: {navireg ipfilter}
 Path Selection Policy: VMW_PSP_MRU
 Path Selection Policy Device Config: Current Path=vmhba4:C0:T0:L25
 Working Paths: vmhba4:C0:T0:L25

# Add a PSA claim rule

## By Device Name
esxcli storage nmp satp rule add -s VMW_SATP_CX -d device_name -o enable_ssd

## Add By Vendor / Model
esxcli storage nmp satp rule add -s VMW_SATP_CX -V vendor_name -M model_name -o enable_ssd

# Reclaim the device
esxcli storage core claiming reclaim -d [devicename]

# Check if device is tagged SSD
esxcli storage core device list -d device_name
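The whole tag-and-verify sequence, sketched as one helper; the device ID and SATP are placeholders taken from the nmp device listing above.

```shell
# Tag a device as SSD via a PSA claim rule, reclaim it, then verify the flag.
tag_ssd() {
    dev="$1"    # e.g. naa.6006016015301d00167ce6e2ddb3de11
    satp="$2"   # e.g. VMW_SATP_CX

    esxcli storage nmp satp rule add -s "$satp" -d "$dev" -o enable_ssd || return 1
    esxcli storage core claiming reclaim -d "$dev"
    esxcli storage core device list -d "$dev" | grep "Is SSD"   # expect: Is SSD: true
}
```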


Administer hardware acceleration for VAAI – OCG Page 106


VAAI = vSphere Storage APIs Array Integration

“Hardware-acceleration / hardware offload APIs. Storage primitives that allow the host to offload storage operations”

# Full copy – Array performs copies without having to communicate with the host. Speeds up cloning/svmotion.

# Block zeroing – Array performs zeroing. Speeds up the block-zeroing process when a new virtual disk is created

# Hardware-assisted locking. Enhanced locking. ATS replaces SCSI-2. More VMs per Datastore. More Hosts per LUN

# Configuring in GUI
VC > Host > Configuration > Software – Advanced Settings ; toggle DataMover.HardwareAcceleratedMove, DataMover.HardwareAcceleratedInit, and VMFS3.HardwareAcceleratedLocking ; 0 will disable

# Checking for VAAI Support
VC > Host > Configuration > Storage > Hardware > Datastores View
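Per-device hardware-acceleration status can also be read from the CLI; a small sketch where the `-d` device filter is optional.

```shell
# Print VAAI (hardware acceleration) status, for one device or for all of them.
vaai_status() {
    if [ -n "$1" ]; then
        esxcli storage core device vaai status get -d "$1"
    else
        esxcli storage core device vaai status get
    fi
}

# Example: vaai_status naa.6006016015301d00167ce6e2ddb3de11
```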


Configure and administer profile-based-storage – OCG Page 109

“VM storage policies can be used during VM provisioning to ensure that the virtual disks are placed on proper storage. VM storage policies can be used to facilitate the management of the VM, such as during migrations, to ensure that the VM remains on compliant storage.”

# Configuration in GUI Video Demo

# 1) Enable the feature on the host/cluster
VC > Home > Management – VM Storage Profiles > Enable VM Storage Profiles > Select the Host/Cluster > Click Enable Storage Profiles > Close

# 2) Define User-defined Capabilities
VC > Home > Management – VM Storage Profiles > Manage Storage Capabilities > Add > Name the capability > OK

# 3) Create VM Storage Profile
VC > Home > Management – VM Storage Profiles > Create > Create new VM storage profile > Name the storage profile > Select a defined capability defined in #2 > Click Next > Click Finish

# 4) Assign User-defined Capabilities
VC > Right-click datastore > Assign User-Defined Storage Capability > Select a Storage Capability from the drop-down > click OK

# Test by creating new vm
VC > VMs & Templates > New VM > In storage section, use the drop-down, the view will filter datastore options into compatible/non-compatible options.


Prepare Storage for Maintenance – OCG Page 114


“ Datastore maintenance mode “

# Configuring SDRS MM
VC > Datastore > Right click datastore > Enter SDRS Maintenance mode
WC > Datastore > All vCenter Actions > Enter Storage DRS Maintenance mode


Apply Space Utilization Data to Manage Storage Resources


Provision and Manage Storage Resources According to VM Requirements


# Disk Formats:

■ Lazy-zeroed Thick (default) – Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine. The virtual machine does not read stale data from disk.
– Fast
– File block zeroed on write
– Fully pre-allocated on datastore

■ Eager-zeroed Thick – Space required for the virtual disk is allocated at creation time. In contrast to zeroedthick format, the data remaining on the physical device is zeroed out during creation. It might take much longer to create disks in this format than to create other types of disks.
– Slow – but faster with VAAI
– File block zeroed when disk is created.
– Fully preallocated on datastore.

■ Thin – Thin-provisioned virtual disk. Unlike with the thick format, space required for the virtual disk is not allocated during creation, but is supplied, zeroed out, on demand at a later time.
– Very Fast
– File block is zeroed on write.
– File block is allocated on write.

■ rdm:device – Virtual compatibility mode raw disk mapping.

■ rdmp:device – Physical compatibility mode (pass-through) raw disk mapping.

■ 2gbsparse – A sparse disk with 2GB maximum extent size. You can use disks in this format with hosted VMware products, such as VMware Fusion, Player, Server, or Workstation. However, you cannot power on a sparse disk on an ESXi host unless you first re-import the disk with vmkfstools in a compatible format, such as thick or thin.


Understand Interactions Between Virtual Storage Provisioning and Physical Storage Provisioning

Reference Virtual Disk Format Types – OCG Page 95
Troubleshoot Storage Performance and Connectivity – OCG Page 188


Configure Datastore Alarms – OCG Page 117 (Datastore Alarms) Page 235 (SDRS Alarms)

Create and Analyze Datastore Alarms and Errors to Determine Space Availability – OCG page 169, 188 – 201

# Configuring the alarm in the GUI
VC > Define the scope > Alarms Tab > Definitions > Right-click in the whitespace > New Alarm > Select type Datastore

# Define a trigger

## Datastore alarms support the following triggers:
– Datastore Disk Provisioned (%) >>> Is above / Is below >>> 50, 150, 200, etc (increments of 50)
– Datastore Disk Usage (%) >>> Is above / Is below >>> Defined percentage
– Datastore State to All Hosts >>> Is equal to / Not equal to >>> None / Connected / Disconnected


Objective 1.3 Troubleshoot Complex Storage Solutions


Perform Command-line Configuration of Multipathing Options – OCG Page 188

# Identify LUNs
esxcli storage core device list

# Identify paths
esxcli storage core path list -d device_name

# Get stats on a path
esxcli storage core path stats -d path_name

# Disable a path in the CLI
esxcli storage core path set -p path_name --state=[active/off]


Change a Multipath Policy – OCG Page 132


# Changing the multipath policy in the GUI
VC > Host > Configuration > Hardware – Storage > Datastores View > Right click Ds, Properties > Manage Paths

# Default PSPs Explained
Most Recently Used (MRU) – VMW_PSP_MRU
– Selects path most recently used
– On failure, an alternate path will take over
– On recovery, the original path becomes an alternate

Round Robin (VMware) – VMW_PSP_RR
– Automatic path selection algorithm that rotates through all active paths (active-passive arrays) or all available paths (active-active arrays), distributing load across them.

Fixed (VMware) – VMW_PSP_FIXED
– Host uses the designated preferred path, if configured; otherwise it uses the first working path discovered at boot. An explicitly designated preferred path remains the preferred path even if it becomes inaccessible.
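The same policy change can be made from the CLI; a sketch with a placeholder device ID. Valid PSP names can be listed with `esxcli storage nmp psp list`.

```shell
# Set the path selection policy for a device, then list it to confirm.
set_psp() {
    dev="$1"   # e.g. naa.6006016015301d00167ce6e2ddb3de11
    psp="$2"   # VMW_PSP_MRU, VMW_PSP_RR, or VMW_PSP_FIXED
    esxcli storage nmp device set -d "$dev" -P "$psp" || return 1
    esxcli storage nmp device list -d "$dev"   # check the Path Selection Policy line
}
```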


Troubleshoot Common Storage Issues – OCG Page 188


# Troubleshooting Storage Adapters

# Troubleshooting SSDs

# Troubleshooting Virtual SAN

# Failure to Mount NFS Datastores

# VMkernel Log Files Contain SCSI Sense Codes

vSphere Web Client – How to Enable CDP / LLDP

A quick few steps will enable the vSphere Distributed Switch to participate in LLDP conversations.


vSphere Web Client > Networking.   Drill down and select the DVS.  Click Manage.  Click the Edit distributed switch settings icon.


In the Edit Settings window, Click Advanced.

Discovery protocol options are (disabled), CDP and Link Layer Discovery Protocol.


Operation options:

Listen – This mode gives the vSphere admin access to switch details (assuming the discovery protocol is enabled at the switch port).  The switch does not see LLDP information about the host.  Once the host successfully receives discovery protocol information from the switch, the information will populate under Manage > Settings > Topology > DVUplinks “show details”  (the blue circles with the i)… the vSphere admin is treated to the following details:


Advertise –  The vSphere admin does not see any switch details; instead, this mode lets the switch administrator see host details.  Here is an example of what is seen on the switch:

Switch#show lldp nei eth 1 det

Interface Ethernet1 detected 1 LLDP neighbors:

Neighbor vmnic0/0050.5612.3456, age 58 seconds
Discovered 0:10:58 ago; Last changed 0:10:58 ago
– Chassis ID type: Interface name (6)
Chassis ID : “vmnic0”
– Port ID type: MAC address (3)
Port ID : 0050.5612.3456
– Time To Live: 180 seconds
– Port Description: “port 6684 on dvSwitch lab-dVS (etherswitch)”
– System Name: “lab-esx0.home.lab”
– System Description: “VMware ESX Releasebuild-123456”
– System Capabilities : Bridge
Enabled Capabilities: Bridge

Both – My favorite mode.  Switch admin sees host details.  vSphere admin sees switch details.  Everyone has a beer and is happy.






vSphere 5.1 Networking Improvements – Network Health Check Improvements

In my previous post I mentioned the welcome additions to the esxcli.

Also very much welcomed are operational improvements  to the VDS and to the troubleshooting toolset.

In this release, the VDS gets the following improvements

Network health check

VDS config backup and restore

Management network rollback and recovery

Distributed port – auto expand

MAC address management

LACP support 

BPDU filter

I want to focus on the details surrounding the Network health check improvements 

Per the 5.1 – What’s new – Networking whitepaper  “With Network health check in vSphere 5.1, the VLAN, MTU and Adapter teaming are monitored at 1 minute intervals using probing packets (sent and received via the physical uplink interfaces of the vDS).  Depending on the config on the connected network device, REQ and ACK packets will be received or dropped, indicating a config issue, and displaying a warning in the vSphere client.”

When we open up the vSphere 5.1 client – these new alarms can be found at the Datacenter object (not the DVS object):

vSphere Distributed Switch MTU matched status

vSphere Distributed Switch MTU supported status

vSphere Distributed Switch teaming matched status

vSphere Distributed Switch VLAN trunked status

These new alarms have their trigger details hidden from viewing or editing from within the vSphere client.  The tab displays this message:


Here’s the alarm detail from the PowerCLI cmdlet:

PS C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-AlarmDefinition -Name “*Switch*”| Format-List

Entity : Datacenters
Description : Default alarm to monitor changes in vSphere Distributed Switch vlan trunked status.
Enabled : True
Name : vSphere Distributed Switch vlan trunked status
ExtensionData : VMware.Vim.Alarm
ActionRepeatMinutes : 0
Id : Alarm-alarm-57
Uid : /VIServer=networkdojo\grosas@localhost:443/Alarm=Alarm-alarm-57/

Entity : Datacenters
Description : Default alarm to monitor changes in vSphere Distributed Switch MTU matched status.
Enabled : True
Name : vSphere Distributed Switch MTU matched status
ExtensionData : VMware.Vim.Alarm
ActionRepeatMinutes : 0
Id : Alarm-alarm-58
Uid : /VIServer=networkdojo\grosas@localhost:443/Alarm=Alarm-alarm-58/

Entity : Datacenters
Description : Default alarm to monitor changes in vSphere Distributed Switch MTU supported status.
Enabled : True
Name : vSphere Distributed Switch MTU supported status
ExtensionData : VMware.Vim.Alarm
ActionRepeatMinutes : 0
Id : Alarm-alarm-59
Uid : /VIServer=networkdojo\grosas@localhost:443/Alarm=Alarm-alarm-59/

Entity : Datacenters
Description : Default alarm to monitor changes in vSphere Distributed Switch teaming matched status.
Enabled : True
Name : vSphere Distributed Switch teaming matched status
ExtensionData : VMware.Vim.Alarm
ActionRepeatMinutes : 0
Id : Alarm-alarm-60
Uid : /VIServer=networkdojo\grosas@localhost:443/Alarm=Alarm-alarm-60/

Not quite satisfied, I took a peek at the VMware vSphere 5.1 Documentation Center; there I found the new objects satisfactorily documented.  🙂

On that note – I need to throw in the towel and call it a night.  Happy learning to any and all.

– Gabe


Exciting additions to esxcli with 5.1, plus I found an oddity with esxcli maintenanceMode.

In ESXi 4.1 you can catch the beginnings of esxcli.

The namespace is greatly expanded in ESXi 5.0, showing true commitment by VMware to its standardization going forward.

The update in esxi 5.1 is magnificent.  There are 82 commands added; to my joy 47 of them are in the *network* namespace.  We are even greeted with a brand new primary namespace:

esxcli network 5.0 on the left / esxcli network 5.1 on the right:

With the latest update – maintenance mode operations have also been added to esxcli.

esxcli system maintenanceMode get gives you the maintenance status of the host (Enabled or Disabled)

esxcli system maintenanceMode set provides two command options :

esxcli system maintenanceMode set -e -t 30 should theoretically put our host into maintenance mode, with the timeout to enter maintenance set to 30 seconds.  Oddly enough it doesn’t work.

The command errors stating that the -e option requires a value…

Ran it again, applying some common sense where the directions were lacking:

esxcli system maintenanceMode set -e yes -t 30

And voila! success!

To close the loop, I confirmed the following command removes maintenance mode:

esxcli system maintenanceMode set -e no
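Wrapping the working syntax into a small helper of my own: -e needs an explicit yes/no value, and -t (timeout in seconds) only applies when entering.

```shell
# Enter, exit, or query host maintenance mode via esxcli.
maint_mode() {
    case "$1" in
        enter) esxcli system maintenanceMode set -e yes -t "${2:-30}" ;;
        exit)  esxcli system maintenanceMode set -e no ;;
        *)     esxcli system maintenanceMode get ;;
    esac
}

# Example: maint_mode enter 60 ; maint_mode ; maint_mode exit
```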

On that note – I need to go; more command exploration when I come into some free time.

– Gabe

esxcli namespace tree

esxcli esxcli command list   # prints the full list of commands
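The full command list printed above can be tallied per primary namespace; a small awk sketch (pipe the Namespace/Command table through it after stripping the two header lines).

```shell
# Count commands per primary namespace from "namespace.sub command" lines.
count_namespaces() {
    awk 'NF { split($1, p, "."); n[p[1]]++ } END { for (k in n) print k, n[k] }'
}

# Example (on a host): esxcli esxcli command list | tail -n +3 | count_namespaces
```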

Primary group in the esxcli namespace, designated with @ in the tree
– fcoe
– hardware
– iscsi
– network
– software
– storage
– system
– vm

Namespace Command
————————————————– ———–
esxcli.command list
fcoe.adapter list
fcoe.nic disable
fcoe.nic discover
fcoe.nic list
hardware.bootdevice list
hardware.clock get
hardware.clock set
hardware.cpu.cpuid get
hardware.cpu.global get
hardware.cpu.global set
hardware.cpu list
hardware.memory get
hardware.pci list
hardware.platform get
iscsi.adapter.auth.chap get
iscsi.adapter.auth.chap set
iscsi.adapter.capabilities get
iscsi.adapter.discovery rediscover
iscsi.adapter.discovery.sendtarget add
iscsi.adapter.discovery.sendtarget.auth.chap get
iscsi.adapter.discovery.sendtarget.auth.chap set
iscsi.adapter.discovery.sendtarget list
iscsi.adapter.discovery.sendtarget.param get
iscsi.adapter.discovery.sendtarget.param set
iscsi.adapter.discovery.sendtarget remove
iscsi.adapter.discovery.statictarget add
iscsi.adapter.discovery.statictarget list
iscsi.adapter.discovery.statictarget remove
iscsi.adapter.discovery.status get
iscsi.adapter.firmware get
iscsi.adapter.firmware set
iscsi.adapter get
iscsi.adapter list
iscsi.adapter.param get
iscsi.adapter.param set
iscsi.adapter set
iscsi.adapter.target list
iscsi.adapter.target.portal.auth.chap get
iscsi.adapter.target.portal.auth.chap set
iscsi.adapter.target.portal list
iscsi.adapter.target.portal.param get
iscsi.adapter.target.portal.param set
iscsi.ibftboot get
iscsi.ibftboot import
iscsi.logicalnetworkportal list
iscsi.networkportal add
iscsi.networkportal.ipconfig get
iscsi.networkportal.ipconfig set
iscsi.networkportal list
iscsi.networkportal remove
iscsi.physicalnetworkportal list
iscsi.physicalnetworkportal.param get
iscsi.physicalnetworkportal.param set
iscsi.plugin list
iscsi.session add
iscsi.session.connection list
iscsi.session list
iscsi.session remove
iscsi.software get
iscsi.software set
network.fence list
network.fence.network.bte list
network.fence.network list
network.fence.network.port list
network.firewall get
network.firewall load
network.firewall refresh
network.firewall.ruleset.allowedip add
network.firewall.ruleset.allowedip list
network.firewall.ruleset.allowedip remove
network.firewall.ruleset list
network.firewall.ruleset.rule list
network.firewall.ruleset set
network.firewall set
network.firewall unload
network.ip.connection list
network.ip.dns.search add
network.ip.dns.search list
network.ip.dns.search remove
network.ip.dns.server add
network.ip.dns.server list
network.ip.dns.server remove
network.ip get
network.ip.interface add
network.ip.interface.ipv4 get
network.ip.interface.ipv4 set
network.ip.interface.ipv6.address add
network.ip.interface.ipv6.address list
network.ip.interface.ipv6.address remove
network.ip.interface.ipv6 get
network.ip.interface.ipv6 set
network.ip.interface list
network.ip.interface remove
network.ip.interface set
network.ip.neighbor list
network.ip set
network.nic down
network.nic get
network.nic list
network.nic set
network.nic up
network.vswitch.dvs.vmware list
network.vswitch.standard add
network.vswitch.standard list
network.vswitch.standard.policy.failover get
network.vswitch.standard.policy.failover set
network.vswitch.standard.policy.security get
network.vswitch.standard.policy.security set
network.vswitch.standard.policy.shaping get
network.vswitch.standard.policy.shaping set
network.vswitch.standard.portgroup add
network.vswitch.standard.portgroup list
network.vswitch.standard.portgroup.policy.failover get
network.vswitch.standard.portgroup.policy.failover set
network.vswitch.standard.portgroup.policy.security get
network.vswitch.standard.portgroup.policy.security set
network.vswitch.standard.portgroup.policy.shaping get
network.vswitch.standard.portgroup.policy.shaping set
network.vswitch.standard.portgroup remove
network.vswitch.standard.portgroup set
network.vswitch.standard remove
network.vswitch.standard set
network.vswitch.standard.uplink add
network.vswitch.standard.uplink remove
software.acceptance get
software.acceptance set
software.profile get
software.profile install
software.profile update
software.profile validate
software.sources.profile get
software.sources.profile list
software.sources.vib get
software.sources.vib list
software.vib get
software.vib install
software.vib list
software.vib remove
software.vib update
storage.core.adapter list
storage.core.adapter rescan
storage.core.adapter.stats get
storage.core.claiming autoclaim
storage.core.claiming reclaim
storage.core.claiming unclaim
storage.core.claimrule add
storage.core.claimrule convert
storage.core.claimrule list
storage.core.claimrule load
storage.core.claimrule move
storage.core.claimrule remove
storage.core.claimrule run
storage.core.device.detached list
storage.core.device.detached remove
storage.core.device list
storage.core.device.partition list
storage.core.device set
storage.core.device setconfig
storage.core.device.stats get
storage.core.device.vaai.status get
storage.core.device.world list
storage.core.path list
storage.core.path set
storage.core.path.stats get
storage.core.plugin list
storage.core.plugin.registration add
storage.core.plugin.registration list
storage.core.plugin.registration remove
storage.filesystem automount
storage.filesystem list
storage.filesystem mount
storage.filesystem rescan
storage.filesystem unmount
storage.nfs add
storage.nfs list
storage.nfs remove
storage.nmp.device list
storage.nmp.device set
storage.nmp.path list
storage.nmp.psp.fixed.deviceconfig get
storage.nmp.psp.fixed.deviceconfig set
storage.nmp.psp.generic.deviceconfig get
storage.nmp.psp.generic.deviceconfig set
storage.nmp.psp.generic.pathconfig get
storage.nmp.psp.generic.pathconfig set
storage.nmp.psp list
storage.nmp.psp.roundrobin.deviceconfig get
storage.nmp.psp.roundrobin.deviceconfig set
storage.nmp.satp.generic.deviceconfig get
storage.nmp.satp.generic.deviceconfig set
storage.nmp.satp.generic.pathconfig get
storage.nmp.satp.generic.pathconfig set
storage.nmp.satp list
storage.nmp.satp.rule add
storage.nmp.satp.rule list
storage.nmp.satp.rule remove
storage.nmp.satp set
storage.vmfs.extent list
storage.vmfs.snapshot.extent list
storage.vmfs.snapshot list
storage.vmfs.snapshot mount
storage.vmfs.snapshot resignature
storage.vmfs upgrade
system.boot.device get
system.coredump.network get
system.coredump.network set
system.coredump.partition get
system.coredump.partition list
system.coredump.partition set
system.hostname get
system.hostname set
system.module get
system.module list
system.module load
system.module.parameters list
system.module.parameters set
system.module set
system.process list
system.process.stats.load get
system.process.stats.running get
system.secpolicy.domain list
system.secpolicy.domain set
system.settings.advanced list
system.settings.advanced set
system.settings.kernel list
system.settings.kernel set
system.settings.keyboard.layout get
system.settings.keyboard.layout list
system.settings.keyboard.layout set
system.stats.uptime get
system.syslog.config get
system.syslog.config.logger list
system.syslog.config.logger set
system.syslog.config set
system.syslog mark
system.syslog reload
system.time get
system.time set
system.uuid get
system.version get
system.visorfs get
system.visorfs.ramdisk add
system.visorfs.ramdisk list
system.visorfs.ramdisk remove
system.visorfs.tardisk list
system.welcomemsg get
system.welcomemsg set
vm.process kill
vm.process list
~ #



… to be continued


Snapshot Quiesce

◎ Configuring quiesced snapshots

When creating a snapshot from the VM console, you are presented with the option to “Quiesce” the guest file system.

That’s it for the configuration – a single checkbox in the Take Snapshot dialog.
Easy as pie right?  But..


◎ Hold up; what does Quiesce actually mean?  (I have no clue)

At this point I have to fess up – being a true ESL individual – I’ve never heard the word “Quiesce” before, so the option means nothing to me.  Had to do a little bit of digging.   Here’s what I found:



v. i. 1. To be silent, as a letter; to have no sound.

[imp. & p. p. Quiesced ; p. pr. & vb. n. Quiescing .]

dictionary.reference.com/browse/quiesce   quiesce definition – networking 


 To render quiescent, i.e. temporarily inactive or disabled. For example to quiesce a device (such as a digital modem). It is also a system command in MAX TNT software which is used to “Temporarily disable a modem or DS0 channel”. 

OK – so at this point I get where this is headed.  I’m satisfied with my understanding of the word 😀

I checked the vSphere help file.  Lo and behold! I’m greeted by a perfectly clear explanation:

“Select the Quiesce guest file system (Needs VMware Tools installed) check box to pause running processes on the guest operating system so that file system contents are in a known consistent state when the snapshot is taken. This applies only to virtual machines that are powered on.”

But OCD kicked in.  I had to know…

◎ What else is there to know about Quiescing??  (Tell me more)

“Quiesce: If the <quiesce> flag is 1 or true, and the virtual machine is powered on when the snapshot is taken, VMware Tools is used to quiesce the file system in the virtual machine. Quiescing a file system is a process of bringing the on-disk data of a physical or virtual computer into a state suitable for backups. This process might include such operations as flushing dirty buffers from the operating system’s in-memory cache to disk, or other higher-level application-specific tasks.

Note: Quiescing indicates pausing or altering the state of running processes on a computer, particularly those that might modify information stored on disk during a backup, to guarantee a consistent and usable backup.”

Note: Depending on the guest operating system, the quiescing operation can be done by the sync driver, the vmsync module, or Microsoft’s Volume Shadow Copy (VSS) service

( Tell me more )

VMware products require file systems within a guest operating system to be quiesced prior to a snapshot operation for the purposes of backup and data integrity. VMware products which use quiesced snapshots include, but are not limited to, VMware Consolidated Backup and VMware Data Recovery. 

Virtual machines generating heavy I/O workloads may encounter issues when quiescing prior to a snapshot operation. These issues may be related to the component that does the quiescing or to custom quiescing scripts, as described in the Virtual Machine Backup Guide.

Services which have been known to generate heavy I/O workload include, but are not limited to, Exchange, Active Directory, LDAP, and MS-SQL.

The quiescing operation is done by an optional VMware Tools component called the SYNC driver.

As of ESX 3.5 Update 2, quiescing is also done by Microsoft’s Volume Shadow Copy Service (VSS). VSS is provided by Microsoft in their operating systems as of Windows Server 2003 and Windows XP.
Operating systems which do not have the Volume Shadow Copy Service make use of the SYNC driver for quiescing operations.


◎ Troubleshooting Quiescing with the SYNC driver

A guest operating system may appear to be unresponsive when there is a conflict between the SYNC driver and services generating heavy I/O. If installed, the SYNC driver holds incoming I/O writes while it flushes all dirty data to disk, thus making file systems consistent. Under heavy loads, the delay in I/O can become too long, which affects many time-sensitive applications, including the services that generate the heavy I/O (such as an Exchange Server). If writes issued by these services get delayed for too long, the service may stop and issue error messages.

To avoid this issue, disable the SYNC driver or stop the service generating heavy I/O before taking a snapshot.

Note: The sync driver is only required for legacy versions of Windows such as Windows XP and Windows 2000 which do not include the Microsoft VSS service. Updated versions of VMware Tools will automatically uninstall the SYNC driver.

Disabling the VCB SYNC Driver (LGTO_Sync)

Disabling the SYNC driver allows you to keep the heavy I/O services on-line, but results in snapshots being only crash-consistent.

To disable the VCB SYNC driver:

  1. In Device Manager, click View > Show hidden devices.
  2. Expand Non-Plug and Play Drivers.
  3. Right-click Sync Driver and click Disable.
  4. Click Yes twice to disable the device and restart the computer. 

Stopping services generating heavy I/O

Use the following pre-freeze and post-thaw scripts to take the service generating heavy I/O offline for approximately 60 seconds and then restart it after the snapshot is taken. This approach leaves the service inactive, but keeps the SYNC driver enabled while the snapshot is taken, ensuring application consistency. Using this method, you create a quiesced snapshot of the guest operating system.
This example shuts down Exchange Services prior to a quiescing operation:

@echo off
rem pre-freeze script – stop the Exchange services before the snapshot
net stop MSExchangeSA /yes

@echo off
rem post-thaw script – restart the Exchange services after the snapshot
Net Start MsExchangeSA
Net Start MsExchangeIS
Net Start MsExchangeMTA


And that’s all the time I have right now – I hope this has been informative.










vSphere Help File
vmware KB 101518 – Understanding VM snapshots
vmware KB 5962168 – VM can freeze under load when you take quiesced snapshots 


vStorage APIs for Array Integration ( VAAI )

What is VAAI?

VAAI is a set of APIs and SCSI commands that offload certain I/O-intensive functions from the ESXi host to the storage platform for more efficient performance.
VAAI was introduced in vSphere 4.1 to enable offload of these features:
◎ Full Copy
What is it? Hardware-accelerated copying of data by performing all duplication and migration operations on the array.
Benefits? Faster data movement via Storage vMotion; faster vm creation and deployment from templates; faster vm cloning.  Reduces server CPU cycles, memory, IP and SAN network bandwidth, and storage front-end controller I/O.
◎ Block Zero
What is it?  Hardware-accelerated zero initialization.
Benefits?  Greatly reduces the time for common I/O tasks, such as creating new VMs.  Especially beneficial when creating FT-enabled VMs or when applications perform routine block zeroing.
◎ Hardware-assisted locking
What is it?  Improved locking controls on VMFS.
Benefits?  more VMs per datastore.  Shortened simultaneous block vm boot times.  Faster VM migration.
What is new in 5.0?
Enhancements for environments that use array-based thin provisioning.  Specifically:
◎ Dead Space Reclamation
What is it?  The ability to reclaim blocks on a thin-provisioned LUN on the array when a virtual disk is deleted or migrated to a different datastore.  Historically, the blocks used prior to the migration were still reported as “in use” by the array.
Benefits? More accurate reporting of disk space consumption and reclamation of the unused blocks on the thin LUN.
◎ Out-of-space conditions
What is it?  If a thin-provisioned datastore reaches 100 percent, only the virtual machines that require extra blocks of storage are temporarily paused, allowing admins to allocate additional space to the datastore.  Virtual machines on the datastore that don’t need additional space continue to run.
Benefits?  Prevents some catastrophic scenarios encountered with storage oversubscription in thin-provisioned environments.
Configuring / Verifying VAAI Full Copy/Block Zero
In the vSphere client, Host and Clusters > Configuration Tab > (Software) Advanced Settings > DataMover
Full Copy = DataMover.HardwareAcceleratedMove.  1 = Enabled ; 0 = Disabled
Block Zero = DataMover.HardwareAcceleratedInit.  1 = Enabled; 0 = Disabled
Configuring / Verifying VAAI Hardware-Assisted Locking
In the vSphere client, Host and Clusters > Configuration Tab > (Software) Advanced Settings > VMFS3
Hardware-Assisted Locking = VMFS3.HardwareAcceleratedLocking.  1 = Enabled; 0 = Disabled.
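The same three toggles can be flipped from the CLI; a sketch of mine looping over the advanced option paths named above (1 = enabled, 0 = disabled).

```shell
# Enable or disable all three VAAI primitives via esxcli advanced settings.
set_vaai() {
    val="$1"
    for opt in /DataMover/HardwareAcceleratedMove \
               /DataMover/HardwareAcceleratedInit \
               /VMFS3/HardwareAcceleratedLocking; do
        esxcli system settings advanced set -o "$opt" -i "$val"
    done
}

# Example: set_vaai 0   # disable all three primitives
```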
VAAI Dead Space Reclamation
This one can be a little bit involved.  There are various resources addressing this topic, all are referenced at the end of this post.
(In a nutshell)
Step 1 – Verify Hardware Acceleration (VAAI) is supported 
Host and Clusters > Configuration tab > (Hardware) Storage > Select Datastore, review details (Not supported in my dinky home lab).
Step 2 – Get the NAA id of the device backing the datastore:
~ # esxcli storage vmfs extent list
Example output:
Step 3 – Get VAAI status:
esxcli storage core device list -d naa.60a98000572d54724a346a6170627a52
# esxcli storage core device list -d  naa.60a98000572d54724a346a6170627a52
   Display Name: NETAPP Fibre Channel Disk (naa.60a98000572d54724a346a6170627a52)
   Has Settable Display Name: true
   Size: 51200
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.60a98000572d54724a346a6170627a52
   Vendor: NETAPP
   Model: LUN
   Revision: 8020
   SCSI Level: 4
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is Offline: false
   Is Perennially Reserved: false
   Thin Provisioning Status: yes
   Attached Filters: VAAI_FILTER
   VAAI Status: supported
   Other UIDs: vml.020033000060a98000572d54724a346a6170627a524c554e202020
Step 4 – Check if the array supports the UNMAP primitive for dead space reclamation
esxcli storage core device vaai status get -d naa.60a98000572d54724a346a6170627a52
Step 5 – Run the UNMAP primitive command

Caution – We expect customers to use this primitive during their maintenance window, since running it on a datastore that is in-use by a VM can adversely affect I/O for the VM. I/O can take longer to complete, resulting in lower I/O throughput and higher I/O latency.

A point I would like to emphasize is that the whole UNMAP performance is totally driven by the storage array. Even the recommendation that vmkfstools -y be issued in a maintenance window is mostly based on the effect of UNMAP commands on the array’s handling of other commands.

There is no way of knowing how long an UNMAP operation will take to complete. It can be anywhere from few minutes to couple of hours depending on the size of the datastore, the amount of content that needs to be reclaimed and how well the storage array can handle the UNMAP operation.

To run the command, you should change directory to the root of the VMFS volume that you wish reclaim space from. The command is run as:

vmkfstools -y <% of free space to unmap>

Step 6 – Verify
Verify using esxtop > u > f > o > p; review the DELETE, DELETE_F and MBDEL/s columns.
For this one I recommend reviewing the article put together by Paudie O'Riordan, the last reference at the end of this post.
Out of Space Conditions / Thin Provisioning Stun
I can’t find a setting for this, so I am assuming that if VAAI is supported by the array, the OOS/TPS behavior will apply. I will keep digging on this one. This snippet out of a VMware Community blog clarifies the feature to my satisfaction (at least we know what to expect):
That’s all I got peeps.  Live long and prosper.

VMware Network I/O Control ( NetIOC )

Cliff notes for NetIOC.  You can find a most excellent white paper describing this feature in 25 glorious pages here:  VMware Network I/O Control, Architecture, Performance and Best Practices.

Prerequisites for NetIOC

NetIOC is only supported with the vNetwork Distributed Switch (vDS).

NetIOC Feature Set
NetIOC provides users with the following features:
• Isolation: ensures traffic isolation, so that a given flow will never be allowed to dominate over others, preventing drops and undesired jitter
• Shares: allow flexible networking capacity partitioning to help users deal with overcommitment when flows compete aggressively for the same resources
• Limits: enforce traffic bandwidth limits on the overall vDS set of dvUplinks
• Load-Based Teaming: efficiently uses a vDS set of dvUplinks for networking capacity
NetIOC Traffic Classes
The NetIOC concept revolves around resource pools that are similar in many ways to the ones already existing for CPU and Memory.
NetIOC classifies traffic into six predefined resource pools as follows:
• vMotion
• iSCSI
• FT logging
• Management
• NFS
• Virtual machine traffic

A user can specify the relative importance of a given resource-pool flow using shares that are enforced at the dvUplink level. The underlying dvUplink bandwidth is then divided among resource-pool flows based on their relative shares in a work-conserving way, meaning that unused capacity is redistributed to other contending flows and won’t go to waste. As shown in Figure 1 of the white paper, the network flow scheduler is the entity responsible for enforcing shares and is therefore in charge of the overall arbitration under overcommitment. Each resource-pool flow has its own dedicated software queue inside the scheduler, so that packets from a given resource pool won’t be dropped due to high utilization by other flows.
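The share arithmetic described above can be sketched as follows — a minimal illustration with made-up share values (not vSphere defaults) on a single 10 Gbps dvUplink:

```shell
#!/bin/sh
# Hypothetical shares for three resource-pool flows contending on one uplink.
LINK_MBPS=10000
VM_SHARES=100; VMOTION_SHARES=50; FT_SHARES=50

# All three flows active: bandwidth split in proportion to shares.
TOTAL=$((VM_SHARES + VMOTION_SHARES + FT_SHARES))   # 200 shares in contention
echo "VM traffic: $((LINK_MBPS * VM_SHARES / TOTAL)) Mbps"        # 5000 Mbps
echo "vMotion:    $((LINK_MBPS * VMOTION_SHARES / TOTAL)) Mbps"   # 2500 Mbps

# Work-conserving: if vMotion goes idle, its capacity is redistributed
# to the remaining contending flows instead of going to waste.
TOTAL_ACTIVE=$((VM_SHARES + FT_SHARES))
echo "VM traffic with vMotion idle: $((LINK_MBPS * VM_SHARES / TOTAL_ACTIVE)) Mbps"
```

With vMotion idle, VM traffic's slice grows from 5000 to 6666 Mbps — the scheduler only arbitrates among flows that are actually sending.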
A user can specify an absolute shaping limit for a given resource-pool flow using a bandwidth capacity limiter. As opposed to shares, which are enforced at the dvUplink level, limits are enforced over the whole vDS set of dvUplinks, which means that a flow of a given resource pool will never exceed the given limit for a vDS out of a given vSphere host.
Load-Based Teaming (LBT)
vSphere 4.1 introduces a load-based teaming (LBT) policy that ensures vDS dvUplink capacity is optimized. LBT avoids the situation, possible with other teaming policies, in which some of the dvUplinks in a DV Port Group’s team sit idle while others are completely saturated simply because the teaming policy is statically determined. LBT reshuffles port binding dynamically, based on load and dvUplink usage, to make efficient use of the available bandwidth. LBT only moves ports to dvUplinks configured for the corresponding DV Port Group’s team. Note that LBT does not use shares or limits in its judgment when rebinding ports from one dvUplink to another. LBT is not the default teaming policy in a DV Port Group, so it is up to the user to configure it as the active policy.
LBT will only move a flow when the mean send or receive utilization on an uplink exceeds 75 percent of capacity over a 30-second period, and it will not move flows more often than every 30 seconds.
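The trigger rule above amounts to a simple two-part check, sketched here as a hypothetical helper (not VMware code — just the logic as the white paper describes it):

```shell
#!/bin/sh
# LBT rebinds a port only when BOTH conditions hold:
#  1. mean send or receive utilization on the dvUplink > 75% of capacity
#     over the 30-second measurement window, and
#  2. at least 30 seconds have passed since the last move.
THRESHOLD_PCT=75
WINDOW_SECS=30

should_rebind() {
  mean_util_pct=$1           # mean utilization over the last 30-second window
  secs_since_last_move=$2
  [ "$mean_util_pct" -gt "$THRESHOLD_PCT" ] && \
  [ "$secs_since_last_move" -ge "$WINDOW_SECS" ]
}

should_rebind 80 45 && echo "rebind port to a less-loaded dvUplink"
should_rebind 80 10 || echo "too soon since last move - stay put"
should_rebind 60 45 || echo "utilization below threshold - stay put"
```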

Configuring NetIOC

NetIOC is configured through the vSphere Client in the Resource Allocation tab of the vDS, within the “Home->Inventory->Networking” panel. NetIOC is enabled by clicking “Properties…” on the right side of the panel and then checking “Enable network I/O control on this vDS” in the pop-up box.

Editing NetIOC Settings

That’s all folks.  I’m out.