VDCA550 Objective 1.1 – 1.3 (Implement and Manage Storage) in One Dense Post

Because sharing is caring.  Here are my notes after tons of reading and lab time.  I'm leaning heavily on the VCAP5-DCA Official Cert Guide (OCG) and the vSphere 5.5 Documentation Center, supplementing with blogs and YouTube anywhere my main sources fall short.

Extra special thanks to Chris Wahl for his Study Sheets.  They are helping me tons with managing my time.  I’m using the VDCA550 version.

Objective 1.1 Implement Complex Storage Solutions


VMware DirectPath I/O – OCG Page 101


“VM access to PCI devices”

# Configuring in GUI – Video Demo

# Pre-reqs
– Intel VT-d or AMD IOMMU enabled in BIOS
– Devices connected and marked as available for passthrough
– VM Hardware version 7

# Enabling in the GUI
VC > Host > Configuration > Hardware > Advanced Settings > Configure Passthrough (add a PCI device)
VC > VM > Edit Settings > Add > Add the PCI device.


N-Port Virtualization (NPIV) – OCG Page 99


“WWN at VM level”

# Pre-reqs
– Only on VMs with RDM disks (VMs with regular disks use the WWNs of the host's physical HBAs).
– HBA on host must support NPIV
– Fabric switches must be NPIV-aware

# Capabilities & Limitations
– vMotion supported; the VMkernel reverts to the physical HBA if the destination host does not support NPIV.
– Concurrent I/O supported.
– Requires FC switch
– Clones do not retain WWN
– Does not support Storage vMotion
– Disabling and re-enabling NPIV capability on the FC switch while the VM is running can cause the FC link to fail and I/O to stop.

# Configuring in the GUI
VC > VM > Edit Settings > Options Tab > Advanced – Fibre Channel NPIV
WC > VM > Edit Settings > VM Options > Expand FC NPIV triangle > Deselect “Temporarily Disable NPIV for this VM” > Generate new WWN


Raw Device Mappings (RDM) – OCG Page 98


“An RDM allows a VM to directly utilize a LUN”

# Considerations & Limitations
– RDM is not available for directly attached block devices.
– Snapshots are not supported on physical compatibility mode RDMs.

# Configuring in GUI
VC > VM > Edit Settings > Hardware – Add > Hard Disk > Type: Raw Device Mappings > Select LUN > Select datastore


Configure vCenter Server Storage Filters (Storage Profiles) – OCG Page 102


“vCenter Server provides storage filters to help you avoid storage device corruption or performance degradation that can be caused by an unsupported use of storage devices.”

# Configuring in the GUI
VC > Administration > vCenter Server Settings > Advanced Settings
WC > VC Server > Manage > Settings > Advanced Settings > Edit

(the filter keys are not listed by default; each defaults to TRUE)

Add the key – In the Value box, type False > Add > OK
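The key names themselves are not pre-populated in Advanced Settings. Per the vSphere 5.5 documentation, the four filter keys are:

```
config.vpxd.filter.vmfsFilter                   # VMFS Filter
config.vpxd.filter.rdmFilter                    # RDM Filter
config.vpxd.filter.SameHostAndTransportsFilter  # Same Host and Transports Filter
config.vpxd.filter.hostRescanFilter             # Host Rescan Filter
```

Note that setting hostRescanFilter to False turns off the automatic rescan that vCenter normally triggers after datastore management operations.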


VMFS re-signaturing – OCG Page 104


“When resignaturing a VMFS copy, ESXi assigns a new UUID and a new label to the copy, and mounts the copy as a datastore distinct from the original.”

# Resignaturing in the GUI

# Checking UUID
esxcli storage vmfs extent list
vmkfstools -P -h [datastoreName]
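The extent list output is grep/awk friendly, which makes it easy to pull the UUID for a single datastore. A minimal sketch — the sample output and datastore names below are fabricated; on a live host, replace the heredoc with the real `esxcli storage vmfs extent list`:

```shell
#!/bin/sh
# Sample 'esxcli storage vmfs extent list' output (fabricated for illustration).
extents() {
cat <<'EOF'
Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
-----------  -----------------------------------  -------------  ------------------------------------  ---------
datastore1   521e1c2b-0a63d3e0-54bc-001b21857b10              0  naa.600508b1001c388a18c9d793e73fd707          3
iscsi-ds01   5220aa11-3c99e1f4-9c01-001b21857b10              0  naa.60003ff44dc75adc86e2a6a7e0f2b1c9          1
EOF
}

# Print the VMFS UUID for a given volume name
uuid_of() {
    extents | awk -v ds="$1" '$1 == ds { print $2 }'
}

uuid_of datastore1
```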

# Checking UUID in the GUI
VC > Datastores and DS Clusters > Configuration > Datastore Details > Location

# Resignaturing with GUI
VC > Host > Configuration > Storage > Add Storage… > Select Disk/LUN > Select Datastore > Mount options > Assign New Signature
VC > Host > Configuration > Storage > Add Storage… > Select Disk/LUN > Select Datastore > Mount options > Keep Existing Signature.

# Resignaturing with esxcli
esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot mount -l 'datastore-volume-label'
esxcli storage vmfs snapshot resignature -l 'datastore-volume-label'
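The commands above chain naturally into a helper. A dry-run sketch — ESXCLI defaults to echo here so it only prints the commands (set ESXCLI=esxcli on a live host), and the volume label is a made-up example:

```shell
#!/bin/sh
# Dry-run by default: prints the esxcli commands instead of executing them.
ESXCLI="${ESXCLI:-echo esxcli}"

resignature() {
    label="$1"
    $ESXCLI storage vmfs snapshot list                       # confirm the copy is detected
    $ESXCLI storage vmfs snapshot resignature -l "$label"    # assign a new UUID and mount
}

resignature 'snap-datastore1'
```

(Use `snapshot mount -l` instead of `resignature -l` when you want to keep the existing signature.)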


Understand and apply LUN masking using PSA-related commands – OCG Page 127, Page 191

# Applying LUN Masking


# Changing the Path Selection Plugin for a Storage Array Type Plugin
esxcli storage nmp satp set -s VMW_SATP_CX -P VMW_PSP_RR
Default PSP for VMW_SATP_CX is now VMW_PSP_RR

# List devices
esxcli storage vmfs extent list

# List paths
esxcli storage nmp path list

# List all claim rules
esxcli storage core claimrule list

# Claim rule based on Fibre Channel transport
esxcli storage core claimrule add -u -P MASK_PATH -t transport -R fc

# Claim rule #333 masking on adapter, channel, target, LUN
esxcli storage core claimrule add -r 333 -P MASK_PATH -t location -A vmhba32 -C 0 -T 0 -L 0

# Load the claim rules into runtime
esxcli storage core claimrule load
esxcli storage core claimrule run

# Reclaim a LUN
esxcli storage core claiming reclaim -d device_name

# Remove a rule
esxcli storage core claimrule remove -r 333
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba32 -C 0 -T 0 -L 0
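The mask/unmask lifecycle above is worth scripting so no step gets skipped. A dry-run sketch — ESXCLI defaults to echo so it only prints (set ESXCLI=esxcli on a live host); rule 333 and vmhba32 are the example values from these notes:

```shell
#!/bin/sh
# Dry-run by default: prints the esxcli commands instead of executing them.
ESXCLI="${ESXCLI:-echo esxcli}"

mask_lun() {   # $1 rule number, $2 adapter, $3 channel, $4 target, $5 lun
    $ESXCLI storage core claimrule add -r "$1" -P MASK_PATH -t location -A "$2" -C "$3" -T "$4" -L "$5"
    $ESXCLI storage core claimrule load    # load file rules into runtime
    $ESXCLI storage core claimrule run     # apply runtime rules to paths
}

unmask_lun() { # $1 rule number, $2 adapter, $3 channel, $4 target, $5 lun
    $ESXCLI storage core claimrule remove -r "$1"
    $ESXCLI storage core claimrule load
    $ESXCLI storage core claiming unclaim -t location -A "$2" -C "$3" -T "$4" -L "$5"
    $ESXCLI storage core claimrule run
}

mask_lun 333 vmhba32 0 0 0
```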

# LUN Masking in the GUI
No GUI method exactly matches the commands above.
– Native Multipathing (NMP) paths can be enabled/disabled
– Path Selection Policy (PSP) can be configured (Fixed, Most Recently Used, Round Robin)

VC > Hosts and Clusters > Host > Configuration > Storage > View Devices > Manage Paths
Web Client > Storage > Datastore > Manage > Settings > Connectivity and Multipathing


Configure iSCSI Port Binding – OCG Page 123

# Configuring iSCSI Port Binding Video Demo

# Adding the Software iSCSI adapter
host > configuration > storage adapters > Add > Select “Add Software iSCSI adapter” > OK

# Adding the iSCSI vmkernel interface
VC > host > configuration > networking > vSphere Standard Switch > Add Networking… VMkernel > Select vSwitch

# Configure the Storage Adapter
VC > Hosts and Clusters > host > configuration > storage adapters > select the iSCSI Software Adapter > Properties


vSphere Flash Read Cache (Not covered in printed OCG – Covered in supplemental Appendix C)


“Performance enhancement of read-intensive applications by providing a write-through cache for virtual disks. It uses the Virtual Flash Resource, which can be built on Flash-based, solid-state drives (SSDs) that are installed locally in the ESXi hosts.”

# Configuring vFRC On the host
Web Client > Hosts & Clusters > Host > Manage > Settings > Virtual Flash > Virtual Flash Resource Management > Add Capacity

# Configure a VM with vFRC
Web Client > VM > Edit Settings > Select/Expand Hard Disk > Enter the amount of virtual Flash Read Cache to reserve > OK

# Configure Host Cache in GUI
WC > Host > Manage > Storage > Host Cache Configuration > Select DS > Allocate space for host cache.


Configure Datastore Cluster – OCG Page 120

# Configure DS Cluster in the GUI
VC > Storage > New DS Cluster > Storage DRS Automation level > Select Runtime settings/IO inclusion > Select Clusters/Hosts > Select Datastores
WC > Right click Datacenter object > New DS Cluster


Upgrade VMware Storage Infrastructure – OCG Page 115

# Upgrade datastores in the GUI
VC > Datastore > Configuration > Upgrade (Option does not appear if running latest)
WC > Datastore > Manage > Settings > General > Properties (Option does not appear if running latest)



Objective 1.2 Manage Complex Storage Solutions


Analyze I/O workloads to determine storage performance requirements – OCG Page 168, 188, 196


# List VM World GID Info
vscsiStats -l

# Collect stats on GID 42155
vscsiStats -s -w 42155 (s to start collection, w to specify the GID)

# Display stats
vscsiStats -p {type} (type options: all, ioLength, seekDistance, outstandingIOs, latency, interarrival)

# Stop all collection
vscsiStats -x
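The collect/display/stop cycle above can be chained into one helper. A dry-run sketch — VSCSISTATS defaults to echo so it just prints the commands (set VSCSISTATS=vscsiStats on an ESXi host); GID 42155 is the example ID from above:

```shell
#!/bin/sh
# Dry-run by default: prints each vscsiStats invocation instead of running it.
VSCSISTATS="${VSCSISTATS:-echo vscsiStats}"

collect() {  # $1 = world group ID, $2 = histogram type
    $VSCSISTATS -s -w "$1"        # start collection for this GID
    # ...let the workload run for a representative window...
    $VSCSISTATS -p "$2" -w "$1"   # print the chosen histogram
    $VSCSISTATS -x                # stop all collection
}

collect 42155 latency
```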

# View host level statistics, examine disk adapter stats
esxtop > d

# View LUN level statistics
esxtop > u

# View VM level disk stats
esxtop > v

* CMDS/s – This is the total number of commands per second, which includes IOPS and other SCSI commands (e.g. reservations and locks). Generally speaking, CMDS/s = IOPS unless there are many other SCSI/metadata operations, such as reservations.
* DAVG/cmd – This is the average response time in milliseconds per command being sent to the storage device.
* KAVG/cmd – This is the amount of time the command spends in the VMKernel.
* GAVG/cmd – This is the response time as experienced by the Guest OS. This is calculated by adding together the DAVG and the KAVG values.

As a general rule DAVG/cmd, KAVG/cmd and GAVG/cmd should not exceed 10 milliseconds (ms) for sustained lengths of time.
There are also the following throughput metrics to be aware of:

* CMDS/s – As discussed above
* READS/s – Number of read commands issued per second
* WRITES/s – Number of write commands issued per second
* MBREAD/s – Megabytes read per second
* MBWRTN/s – Megabytes written per second

The sum of reads and writes equals IOPS, which is the most common benchmark when monitoring and troubleshooting storage performance. These metrics can be monitored at the HBA or Virtual Machine level.
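Since GAVG/cmd is just DAVG/cmd plus KAVG/cmd, the 10 ms rule of thumb is easy to script against exported stats. A minimal sketch — the device names and latency numbers below are fabricated sample data:

```shell
#!/bin/sh
# Flag devices whose guest-observed latency (GAVG = DAVG + KAVG) exceeds 10 ms.
# Fabricated sample data: device, DAVG ms, KAVG ms.
stats() {
cat <<'EOF'
deviceA 4.2 0.1
deviceB 22.7 1.4
EOF
}

check_latency() {
    stats | awk '{
        gavg = $2 + $3                              # GAVG/cmd = DAVG + KAVG
        flag = (gavg > 10) ? " <-- investigate" : ""
        printf "%s GAVG=%.1fms%s\n", $1, gavg, flag
    }'
}

check_latency
```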


Identify and tag SSD and local devices – OCG Page 133


# Identify SSD in the GUI
VC > Host > Configuration > Hardware – Storage > Datastores > Drive Type
WC > Host > Manage > Storage > Storage Devices > Drive Type

# Identify the device to be tagged and its SATP, command & example output

esxcli storage nmp device list (note the SATP)
 Device Display Name: DGC Fibre Channel Disk (naa.6006016015301d00167ce6e2ddb3de11)
 Storage Array Type: VMW_SATP_CX
 Storage Array Type Device Config: {navireg ipfilter}
 Path Selection Policy: VMW_PSP_MRU
 Path Selection Policy Device Config: Current Path=vmhba4:C0:T0:L25
 Working Paths: vmhba4:C0:T0:L25

# Add a PSA claim rule

## By Device Name
esxcli storage nmp satp rule add -s VMW_SATP_CX -d device_name -o enable_ssd

## Add By Vendor / Model
esxcli storage nmp satp rule add -s VMW_SATP_CX -V vendor_name -M model_name -o enable_ssd

# Reclaim the device
esxcli storage core claiming reclaim -d device_name

# Check if device is tagged SSD
esxcli storage core device list -d device_name
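To confirm the tag took effect without eyeballing the full device dump, the `Is SSD` line can be pulled out directly. A sketch against fabricated sample output — on a live host, replace the heredoc with the real `esxcli storage core device list -d device_name`:

```shell
#!/bin/sh
# Sample 'esxcli storage core device list -d <device>' output (fabricated).
device_info() {
cat <<'EOF'
naa.6006016015301d00167ce6e2ddb3de11
   Display Name: DGC Fibre Channel Disk (naa.6006016015301d00167ce6e2ddb3de11)
   Size: 102400
   Is SSD: true
   Is Local: false
EOF
}

# Print just the SSD flag (true/false)
is_ssd() {
    device_info | awk -F': ' '/Is SSD/ { print $2 }'
}

is_ssd
```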


Administer hardware acceleration for VAAI – OCG Page 106

Reference: VMware-vSphere-Storage-API-Array-Integration.pdf

VAAI = vSphere Storage APIs Array Integration

“Hardware-acceleration / hardware offload APIs. Storage primitives that allow the host to offload storage operations”

# Full copy – Array performs copies without having to communicate with the host. Speeds up cloning/svmotion.

# Block zeroing – Array performs zeroing. Speeds up the block-zeroing process when a new virtual disk is created

# Hardware-assisted locking. Enhanced locking. ATS replaces SCSI-2. More VMs per Datastore. More Hosts per LUN

# Configuring in GUI
VC > Host > Configuration > Software – Advanced Settings; the relevant keys are DataMover.HardwareAcceleratedMove (full copy), DataMover.HardwareAcceleratedInit (block zeroing), and VMFS3.HardwareAcceleratedLocking (ATS); a value of 0 disables the primitive

# Checking for VAAI Support
VC > Host > Configuration > Storage > Hardware > Datastores View


Configure and administer profile-based-storage – OCG Page 109

“VM storage policies can be used during VM provisioning to ensure that the virtual disks are placed on proper storage. VM storage policies can be used to facilitate the management of the VM, such as during migrations, to ensure that the VM remains on compliant storage.”

# Configuration in GUI Video Demo

# 1) Enable the feature on the host/cluster
VC > Home > Management – VM Storage Profiles > Enable VM Storage Profiles > Select the Host/Cluster > Click Enable Storage Profiles > Close

# 2) Define User-defined Capabilities
VC > Home > Management – VM Storage Profiles > Manage Storage Capabilities > Add > Name the capability > OK

# 3) Create VM Storage Profile
VC > Home > Management – VM Storage Profiles > Create > Create new VM storage profile > Name the storage profile > Select a defined capability defined in #2 > Click Next > Click Finish

# 4) Assign User-defined Capabilities
VC > Right-click datastore > Assign User-Defined Storage Capability > Select a Storage Capability from the drop-down > click OK

# Test by creating new vm
VC > VMs & Templates > New VM > In the storage section, use the drop-down; the view will filter datastore options into compatible/incompatible options.


Prepare Storage for Maintenance – OCG Page 114


“Datastore maintenance mode”

# Configuring SDRS MM
VC > Datastore > Right click datastore > Enter SDRS Maintenance mode
WC > Datastore > All vCenter Actions > Enter Storage DRS Maintenance mode


Apply Space Utilization Data to Manage Storage Resources


Provision and Manage Storage Resources According to VM Requirements


# Disk Formats:

■ Lazy-zeroed Thick (default) – Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine. The virtual machine does not read stale data from disk.
– Fast
– File block zeroed on write
– Fully pre-allocated on datastore

■ Eager-zeroed Thick – Space required for the virtual disk is allocated at creation time. In contrast to zeroedthick format, the data remaining on the physical device is zeroed out during creation. It might take much longer to create disks in this format than to create other types of disks.
– Slow – but faster with VAAI
– File block zeroed when disk is created.
– Fully preallocated on datastore.

■ Thin – Thin-provisioned virtual disk. Unlike with the thick format, space required for the virtual disk is not allocated during creation, but is supplied, zeroed out, on demand at a later time.
– Very Fast
– File block is zeroed on write.
– File block is allocated on write.

■ rdm:device – Virtual compatibility mode raw disk mapping.

■ rdmp:device – Physical compatibility mode (pass-through) raw disk mapping.

■ 2gbsparse – A sparse disk with 2GB maximum extent size. You can use disks in this format with hosted VMware products, such as VMware Fusion, Player, Server, or Workstation. However, you cannot power on a sparse disk on an ESXi host unless you first re-import the disk with vmkfstools in a compatible format, such as thick or thin.
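The thick/thin format names above map straight onto vmkfstools' -d option. A dry-run sketch — VMKFSTOOLS defaults to echo so it only prints the commands; the datastore path and 10G size are made-up examples:

```shell
#!/bin/sh
# Dry-run by default: prints the vmkfstools commands for each disk format.
# On an ESXi host, run with VMKFSTOOLS=vmkfstools to actually create the disks.
VMKFSTOOLS="${VMKFSTOOLS:-echo vmkfstools}"

create_disks() {
    $VMKFSTOOLS -c 10G -d zeroedthick /vmfs/volumes/datastore1/vm1/lazy.vmdk
    $VMKFSTOOLS -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/eager.vmdk
    $VMKFSTOOLS -c 10G -d thin /vmfs/volumes/datastore1/vm1/thin.vmdk
}

create_disks
```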


Understand Interactions Between Virtual Storage Provisioning and Physical Storage Provisioning

Reference Virtual Disk Format Types – OCG Page 95
Troubleshoot Storage Performance and Connectivity – OCG Page 188


Configure Datastore Alarms – OCG Page 117 (Datastore Alarms) Page 235 (SDRS Alarms)

Create and Analyze Datastore Alarms and Errors to Determine Space Availability – OCG page 169, 188 – 201

# Configuring the alarm in the GUI
VC > Define the scope > Alarms Tab > Definitions > Right click in the whitespace > New Alarm > Select type Datastore

# Define a trigger

## Datastore alarms support the following triggers:
– Datastore Disk Provisioned (%) >>> Is above / Is below >>> 50, 150, 200, etc (increments of 50)
– Datastore Disk Usage (%) >>> Is above / Is below >>> Defined percentage
– Datastore State to All Hosts >>> Is equal to / Not equal to >>> None / Connected / Disconnected


Objective 1.3 Troubleshoot Complex Storage Solutions


Perform Command-line Configuration of Multipathing Options – OCG Page 188

# Identify LUNs
esxcli storage core device list

# Identify paths
esxcli storage core path list -d device_name

# Get stats on a path
esxcli storage core path stats -d path_name

# Disable a path in the CLI
esxcli storage core path set -p path_name --state=[active|off]
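Disabling a path is a `set` operation, and re-enabling mirrors the same call. A dry-run sketch — ESXCLI defaults to echo so it only prints, and the path name is a made-up placeholder (pass a real runtime path name on a live host):

```shell
#!/bin/sh
# Dry-run by default: prints the path enable/disable commands.
ESXCLI="${ESXCLI:-echo esxcli}"

disable_path() { $ESXCLI storage core path set -p "$1" --state=off; }
enable_path()  { $ESXCLI storage core path set -p "$1" --state=active; }

disable_path example-path-name
enable_path  example-path-name
```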


Change a Multipath Policy – OCG Page 132


# Changing the multipath policy in the GUI
VC > Host > Configuration > Hardware – Storage > Datastores View > Right click Ds, Properties > Manage Paths

# Default PSPs Explained
Most Recently Used (MRU) – VMW_PSP_MRU
– Selects path most recently used
– On failure, an alternate path will take over
– On recovery, the original path becomes an alternate

Round Robin (VMware) – VMW_PSP_RR
– Automatic path selection algorithm that rotates through all active paths when connecting to active-passive arrays, and through all available paths when connecting to active-active arrays.

Fixed (VMware) – VMW_PSP_FIXED
– Host uses the designated preferred path, if configured; otherwise it uses the first working path discovered at boot. After a failover, the host reverts to the preferred path once it recovers.
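Changing the PSP per device (rather than per SATP, as shown back in the LUN-masking section) is a one-liner. A dry-run sketch with a made-up device ID — ESXCLI defaults to echo; set ESXCLI=esxcli on a live host:

```shell
#!/bin/sh
# Dry-run by default: prints the esxcli commands instead of executing them.
ESXCLI="${ESXCLI:-echo esxcli}"

set_psp() {  # $1 = device ID, $2 = PSP name
    $ESXCLI storage nmp device set -d "$1" -P "$2"
    $ESXCLI storage nmp device list -d "$1"   # verify Path Selection Policy
}

set_psp naa.6006016015301d00167ce6e2ddb3de11 VMW_PSP_RR
```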


Troubleshoot Common Storage Issues – OCG Page 188


# Troubleshooting Storage Adapters

# Troubleshooting SSDs

# Troubleshooting Virtual SAN

# Failure to Mount NFS Datastores

# VMkernel Log Files Contain SCSI Sense Codes
