Objective 3 – Configure and Administer Advanced vSphere Storage

Objective 3.1: Manage vSphere Storage Virtualization

Knowledge

  • Configure FC/iSCSI/FCoE LUNs as ESXi boot devices
    • Each host must have its own boot LUN. If not, image corruption is likely to occur
    • Boot via FC HBA or FCoE CNA
    • FC
      • Follow vendor recommendations
      • Enable and correctly configure the adapter, ensure access to the boot lun
      • Each host must have access to its boot lun only and no others
        • Multiple servers can share a diagnostic partition; use LUN masking so each host sees only what it should (a host-side masking sketch follows the FC steps below).
      • Multipathing to a boot LUN on an active-passive array is NOT supported
  1. Configure SAN components
    1. Cable server and switches appropriately (duh)
    2. Make the ESXi host visible to the SAN (zoning)
    3. Create/assign LUNs.
    4. Document
  2. Configure storage adapter to boot from SAN
    1. Storage adapters must be configured to boot from SAN; refer to vendor documentation (think: BIOS pointed at the SAN).
  3. Set up system to boot from installation media
    1. First boot, install from media.
    2. Install OS on san boot partition
    3. Change the boot order to boot from the FC adapter first. See pg. 52-53 of the vSphere Storage Guide.
      1. Some adapters (Emulex, for example) need their BIOS setting enabled before they can be set in the boot order
      2. Adapters need to be configured as a boot device, with the boot LUN and the target WWPN.
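      • Host-side aside, as a hedged sketch (array-side LUN masking and fabric zoning remain the usual way to isolate boot LUNs): a non-boot LUN can also be hidden from a host with the MASK_PATH plug-in. The rule number, adapter, and path coordinates below are placeholders, not values from these notes.
        # list current rules, then mask one path by its location
        esxcli storage core claimrule list
        esxcli storage core claimrule add -r 200 -t location -A vmhba2 -C 0 -T 0 -L 4 -P MASK_PATH
        esxcli storage core claimrule load
        # re-run claiming for the device so the new rule takes effect (placeholder naa ID)
        esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx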
    • FCoE
      • Most of the config is done through the option ROM of the adapter. The adapter must support FBFT (Intel) or FBPT (VMware-defined)
      • ESXi 5.1 or later
      • Adapter must be FCoE compatible, support FBFT or FBPT, and support the ESXi open FCoE stack
      • Multipathing is not supported at pre-boot.
      • Boot LUN cannot be shared with other hosts
      • If using an Intel 10Gb controller, configure the switch port to enable Spanning Tree Protocol (STP) and turn off switchport trunk native vlan for the VLAN used for FCoE
      1. Configure FCoE parameters
        1. Configure the option ROM of the adapter to include the target, LUN, VLAN ID, etc.
      2. Install and boot from FCoE lun
        1. Change boot order. Install.
        2. Change boot order to boot from FCoE.
      • iSCSI
        • General
        • Review vendor recommendations, check HCL
        • Ensure nic can use iSCSI boot protocol
        • Use static IPs
        • VMFS and boot partitions must be separate
        • Boot lun should only be visible to the host that uses the LUN.
        • Configure a diagnostic partition.
        • Prepare
          • Ensure physical/IP connectivity
          • Configure the storage. Set up LUNs, ACLs, presentation; record the iSCSI name & IP address of the targets
        • Independent hardware iSCSI adapter for SAN boot
          • Set BIOS boot order to the install media
          • During POST enter the QLogic iSCSI HBA menu. Configure host adapter settings. Reboot. Configure iSCSI boot settings from the QLogic firmware menu.
        • Software and dependent hardware iSCSI adapters can use iBFT to boot from SAN.
          • Must have an iSCSI boot-capable network adapter that supports iBFT.
          • Before installing ESXi and booting from the iSCSI SAN, configure the networking and iSCSI boot parameters on the network adapter and enable the adapter for iSCSI boot.
          • Once a host is set up to boot from iBFT iSCSI, you cannot disable the software iSCSI adapter
          • Configure iscsi boot parameters
          • Change boot sequence
          • Install to iscsi target
          • Boot from iscsi target
        • If you set an iBFT gateway, it becomes the system's default gateway.
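        • A minimal post-install check, as a sketch (no host-specific values assumed): once ESXi has booted from the iBFT iSCSI LUN, the boot parameters the NIC handed over can be read from the host.
          # show the iBFT boot values ESXi imported from the adapter firmware
          esxcli iscsi ibftboot get
          # confirm the software iSCSI adapter stayed enabled (it cannot be disabled on iBFT-boot hosts)
          esxcli iscsi software get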
    • Enable/Configure/Disable vCenter Server storage filters
      • Filters are enabled by default and show only compatible storage devices for a particular operation. Turning a filter off (vCenter Server advanced settings, the config.vpxd.filter.* keys) lets you view all devices.
    • Configure/Edit hardware/dependent hardware initiators
      • manage->storage->storage adapters
      • To enable flow control for the host, use the esxcli system module parameters command.
        • By default, flow control is enabled on all network interfaces in VMware ESXi and ESX. This is the preferred configuration.
      • Hardware iSCSI adapters are enabled by default, but for a dependent hardware iSCSI adapter to be functional you must configure networking for iSCSI traffic and associate the adapter with the appropriate VMkernel iSCSI port.
        • This is the same process used to bind network adapters to the software iSCSI adapter when enabling multipathing, except that dependent hardware iSCSI adapters can only be bound to their own physical NICs.
        • For a dependent hardware iSCSI adapter, only one VMkernel adapter associated with the correct physical NIC is available.
        • If you choose to use separate vSphere switches you must connect them to different IP subnets. If you choose to use a single vSwitch with all vmnics added to it, you must override the failover order on the VMkernel adapters so that each has only a single active physical adapter.
        • Physical NICs must be on the same subnet as the iSCSI storage system they connect to
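      • The web client flow above has a CLI equivalent; a hedged sketch (the vmhba/vmk names are placeholders, not values from these notes):
        # identify the dependent hardware iSCSI adapter and the physical NIC it sits on
        esxcli iscsi adapter list
        # bind the VMkernel port that lives on that NIC to the adapter, then confirm
        esxcli iscsi networkportal add -A vmhba33 -n vmk1
        esxcli iscsi networkportal list -A vmhba33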
    • Enable/Disable software iSCSI initiator
      • Right-click, properties ->configure, enable/disable checkbox (c# client)
      • manage->storage->select adapter, under adapter details click the disable button. Reboot the host. (web client)
      • To add a new software iSCSI adapter (can only have one per host), configuration->storage adapters, add.
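      • CLI equivalent for enabling the software initiator, as a sketch (standard esxcli namespace, no host-specific values):
        # enable the software iSCSI adapter and confirm its state
        esxcli iscsi software set --enabled=true
        esxcli iscsi software get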
    • Configure/Edit software iSCSI initiator settings
      • If you change the name of an iSCSI adapter it will be used for new sessions. Existing sessions aren’t altered.
      • When configuring targets, you can only add new targets. To make changes, remove the existing target and re-add it.
    • Configure iSCSI port binding
      • See above. The process for software & dependent hardware iSCSI is very similar, except that the dependent adapter must be associated with the appropriate physical NICs.
      • manage->storage->storage adapters, select adapter->adapter details, network port binding tab->add, select desired adapters that are compliant with the port group policy and click ok. Rescan and Refresh.
      • From page 91 & 95 of the storage guide:
        • iSCSI ports of the array target must reside in the same broadcast domain and IP subnet as the VMkernel adapters.
        • All VMkernel adapters used for iSCSI port binding must reside in the same broadcast domain and IP subnet.
        • All VMkernel adapters used for iSCSI connectivity must reside in the same virtual switch.
        • Port binding does not support network routing.
        • When using separate vSphere switches to connect physical network adapters and VMkernel adapters, make sure that the vSphere switches connect to different IP subnets.
        • If VMkernel adapters are on the same subnet, they must connect to a single vSwitch.
        • If you migrate VMkernel adapters to a different vSphere switch, move associated physical adapters.
        • Do not make configuration changes to iSCSI-bound VMkernel adapters or physical network adapters.

Do not use port binding when any of the following conditions exist:

  • Array target iSCSI ports are in a different broadcast domain and IP subnet.
  • VMkernel adapters used for iSCSI connectivity exist in different broadcast domains, IP subnets, or use different virtual switches.
  • Routing is required to reach the iSCSI array.
    • A warning sign indicates a non-compliant port group policy for an iSCSI-bound VMkernel adapter. The adapter’s port group policy is considered non-compliant in the following cases:
      • The VMkernel adapter is not connected to an active physical network adapter.
      • The VMkernel adapter is connected to more than one physical network adapter.
      • The VMkernel adapter is connected to one or more standby physical adapters.
      • The active physical adapter is changed.
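    • To double-check the bindings after the rescan, a hedged CLI sketch (the adapter name is a placeholder):
      # list VMkernel ports currently bound to the iSCSI adapter, then rescan it
      esxcli iscsi networkportal list -A vmhba33
      esxcli storage core adapter rescan -A vmhba33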
  • Enable/Configure/Disable iSCSI CHAP
    • CHAP settings apply to targets that have not yet been discovered; they do not apply to targets that are already discovered

Tools

+ Objective 3.2: Configure Software-defined Storage

Knowledge

  • Determine the role of storage providers in VSAN
    • VASA providers or storage providers provide storage information and datastore characteristics (aka data services) to vSphere that can be used in VM storage policies
      • A single datastore can offer multiple services
    • The data services surfaced by a storage provider are the building blocks for storage policies. To define a policy, one rule set is required; additional rule sets are optional
      • The relationship between rule sets within a policy is a boolean OR, whereas the rules within a rule set are a boolean AND.
    • vSAN configures and registers a storage provider for each host in the vSAN cluster
      • All hosts have a storage provider, but only one host's SP is active. Storage providers that belong to other hosts are on standby. If the host that currently has the online storage provider fails, another host will bring its provider online.
  • Determine the role of storage providers in VVOLs
    • Virtual volumes are objects exported by a compliant storage system and typically correspond one-to-one with a virtual machine disk and other VM-related files. A virtual volume is created and manipulated out-of-band, not in the data path, by a VASA provider.
      “A VASA provider, or a storage provider, is developed through vSphere APIs for Storage Awareness. The storage provider enables communication between the vSphere stack — ESXi hosts, vCenter server, and the vSphere Web Client — on one side, and the storage system on the other. The VASA provider runs on the storage side and integrates with vSphere Storage Monitoring Service (SMS) to manage all aspects of Virtual Volumes storage. The VASA provider maps virtual disk objects and their derivatives, such as clones, snapshots, and replicas, directly to virtual volumes on the storage system.” (storage guide, pg 216)
    • VVols storage provider = VASA provider. Acts as a storage awareness service, coordinating out-of-band management (in-band I/O goes through protocol endpoints, not the provider)
    • Delivers information from the underlying storage (storage container) so that storage container capabilities can appear in vCenter
    • Delivers information back to the storage container regarding virtual machine storage requirements
    • Vendors are responsible for supplying storage providers that can provide vvols support
    • After you register a storage provider, vCenter discovers all configured storage containers and their capabilities, endpoints, and relevant attributes.
    • Essentially a bi-directional integration engine
    • Every storage provider must be certified by VMware
  • Explain VSAN failure domains functionality
    • “A disk group represents a single failure domain in the Virtual SAN datastore”
  • Configure/Manage VMware Virtual SAN
    • Before considering the number of failures to tolerate, make sure that in each disk group the size of the flash cache is at least 10 percent of the anticipated consumed capacity, without the protection copies.
    • When you reserve capacity for your vSphere HA cluster with an admission control policy, this setting must be coordinated with the corresponding Virtual SAN setting that ensures data accessibility on failures. Specifically, the Number of Failures Tolerated setting in the Virtual SAN rule set must not be lower than the capacity reserved by the vSphere HA admission control setting.
    • One SAS or SATA host bus adapter (HBA), or a RAID controller that is in passthrough or RAID 0 mode is required
    • During the Virtual SAN upgrade from version 5.5 to version 6.0, you can keep the on-disk format version 1.0, but you cannot use many of the new features
    • Each host must have minimum bandwidth dedicated to Virtual SAN.
      • Dedicated 1 Gbps for hybrid configurations
      • Dedicated or shared 10 Gbps for all-flash configurations
    • vSAN licenses must cover the total number of CPUs in the cluster
    • If the number of failures to tolerate is one, virtual machines can use about 50 percent of the raw capacity. If the number of failures to tolerate is two, the usable capacity is about 33 percent. If the number of failures to tolerate is equal to the maximum of three, the usable capacity is about 25 percent.
    • Use read cache reservation only if you must meet a specific, known performance requirement for a particular workload (hybrid only; all-flash uses the cache tier only for write caching)
    • You configure the number of failures to tolerate attribute in the VM storage policies to handle host failures.
      • The number of hosts required for the cluster is equal to 2 * number of failures to tolerate + 1 (e.g., tolerating 1 failure requires 2*1+1 = 3 hosts). The more failures the cluster tolerates, the more capacity hosts are required.
    • If you deploy vCenter Server on the Virtual SAN datastore, you might not be able to use vCenter Server for troubleshooting, if a problem occurs in the Virtual SAN cluster.
    • Virtual SAN uses the teaming and failover policy that is configured on the backing virtual switch for network redundancy only. Virtual SAN does not use NIC teaming for load balancing.
    • If you have multiple hosts in the Virtual SAN cluster, and you use vSphere Update Manager to upgrade the hosts, the default evacuation mode is Ensure Accessibility. If you use this mode and while upgrading Virtual SAN if you encounter a failure, your data will be at risk.
    • The new on-disk format 2.0 exposes your environment to the complete feature set of Virtual SAN.
      • When upgrading a three-host cluster, you must choose the Ensure accessibility evacuation mode. While in this mode, any hardware failure might result in data loss.
      • When working with a three-host cluster or when upgrading Virtual SAN with limited resources, run the RVC command vsan.v2_ondisk_upgrade --allow-reduced-redundancy to allow the virtual machines to operate in a reduced redundancy mode during the upgrade.
      • Review the Virtual SAN guide, page 55, for the upgrade process.
    • When you manually add a new device or host to a vSAN cluster, data is not automatically redistributed. If you move an ESXi host into the vSAN cluster by using a host profile…
    • vSAN will resynchronize data when:
      • Editing a VM storage policy (if you choose to apply it automatically, rather than the default of manual)
      • Restarting a host after a failure
      • Evacuating data by using full data migration
      • Exceeding the utilization of a capacity device: when a device crosses 80% used, a resync is triggered
        • Consider having 30% free capacity at all times
    • Rebalance is a separate activity caused by hardware failures or hosts going into maintenance mode
      • Exceeding the utilization of a capacity device also triggers it: when a device crosses 80% used, a rebalance is triggered
        • Consider having 30% free capacity at all times
      • Manual rebalance can be initiated by using the RVC tool
      • Use RVC to monitor the rebalance activity (see the RVC sketch at the end of this vSAN list)
    • In Virtual SAN, components that have failed can be in absent or degraded state
      • Degraded presumes permanent failure and vSAN starts rebuilding immediately
      • Absent is presumed to be temporary; vSAN starts rebuilding absent components only if they do not come back within a timeout period (default 60 minutes)
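    • RVC sketch for the rebalance/resync items above (run inside an RVC session against vCenter; the cluster path is a placeholder):
      # watch component resync progress
      vsan.resync_dashboard /localhost/DC/computers/VSAN-Cluster
      # kick off a manual proactive rebalance, then check on it
      vsan.proactive_rebalance --start /localhost/DC/computers/VSAN-Cluster
      vsan.proactive_rebalance_info /localhost/DC/computers/VSAN-Cluster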
  • Create/Modify VMware Virtual Volumes (VVOLs)
    • Create new datastore, choose vvol as the storage type and choose the storage provider
    • Prereqs
      • Array must support VVols and integrate with vSphere via a VASA provider
      • The VVols storage provider must be deployed
      • Protocol endpoints, storage containers and storage profiles must be configured on the storage side
      • NTP must be setup and synchronized across the VMware and storage environments
    • Process:
      • Register storage providers for vVols (aka VASA providers)
        • Browse to vCenter in web client
        • Manage, storage providers, register new storage provider
      • Create a virtual datastore
        • Create new datastore, choose vvol as the storage type and choose the storage provider
        • Select the hosts that will access the datastore
      • Review and manage protocol endpoints
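      • Host-side verification sketch (standard esxcli vvol namespace; the output depends on the array):
        # confirm the host sees the VASA provider, storage containers, and protocol endpoints
        esxcli storage vvol vasaprovider list
        esxcli storage vvol storagecontainer list
        esxcli storage vvol protocolendpoint list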
    • vVols capabilities:
      • Supports offloading snapshots, cloning, and Storage DRS operations to the array
      • Can use array functionality like dedup, encryption, replication, compression
      • Supports vMotion, Storage vMotion, snapshots, linked clones, Flash Read Cache, and DRS
      • VAAI can be used with arrays that support it
      • Can support backup software that uses VADP
      • To use iSCSI, you must use the software iSCSI adapter and configure Dynamic Discovery with the IP address of the VVols storage provider
    • vVols Limitations:
      • VVols datastores cannot be part of a datastore cluster
      • A single storage container cannot span different physical arrays.
      • Requires vCenter, so cannot be used on a standalone host
      • Does not support RDMs
      • Host profiles that contain virtual datastores are vCenter specific
      • vVols do not support NFS 4.1
  • Enable/Disable Virtual SAN Fault Domains
    • The Virtual SAN fault domains feature instructs Virtual SAN to spread redundancy components across the servers in separate computing racks.
    • You must define at least three fault domains, each of which might consist of one or more hosts. Fault domain definitions must acknowledge physical hardware constructs that might represent a potential failure domain, for example, an individual computing rack enclosure
      • number of fault domains = 2 * number of failures to tolerate + 1
  • Create Virtual Volumes given the workload and availability requirements
  • Create storage policies appropriate for given workloads and availability requirements
    • Storage policy order:
      • Create/assign tags and/or a storage provider
      • home->policies and profiles -> vm storage profiles-> create new policy
        • Within a rule set, elements are a boolean AND. Between multiple rule sets it is a boolean OR.
        • When vSAN is enabled, a default storage policy is automatically created
      • Assign policy to VM’s and/or storage.
      • [Optional] assign a default policy to a vvol or vsan datastore
        • When you assign a user-defined storage policy as the default policy to a datastore, Virtual SAN automatically removes the association to the default storage policy and applies the settings for the user defined policy on the specified datastore. At any point, you can assign only one virtual machine storage policy as the default policy to the Virtual SAN datastore.
        • You can edit (not recommended), but cannot delete the default vsan storage policy
    • A VM that runs on a virtual datastore must have a VM storage policy
    • vSAN storage policy attributes
      • Number of disk stripes per object: how many capacity devices each replica is striped across (RAID-0-style striping). Default is 1 (VMware recommends not changing).
      • Flash read cache reservation – default 0%
        • Should be used only for very specific performance issues
        • By default vsan dynamically allocates read cache based on demand amongst all objects
      • Number of failures to tolerate.
      • Default 1, max 3.
        • For n failures tolerated, n+1 copies of the virtual machine object are created and 2*n+1 hosts contributing storage are required.
        • If fault domains are configured, 2n+1 fault domains with hosts contributing capacity are required. A host that is not part of any fault domain is considered its own single-host fault domain.
        • NOTE: When creating a new storage policy, if you do not specify any value for Number of failures to tolerate, by default Virtual SAN creates a single mirror copy of the virtual machine objects and tolerates only one failure. However, in the event of multiple component failures your data might be at risk.
        • Force provisioning
          • Object will be deployed even if the vsan datastore does not satisfy the policies
        • Object space reservation
          • Amount (percentage) that should be reserved up front, aka thick provisioned.
    • vSAN default storage policy
      • Number of failures to tolerate = 1
      • Number of stripes per object = 1
      • Flash read cache reservation = 0
      • Object space reservation = 0
      • Force provisioning = no
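      • These defaults can be read from a host with a quick sketch (esxcli vsan namespace):
        # dump the default vSAN policy values per object class
        esxcli vsan policy getdefault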
    • VMware procedure
      • Define a VM storage policy for vVols
        • From web client home, Polices & profiles -> VM storage policies
        • Create new storage policy, select the vCenter
        • Type a name and description
        • Select the VASA storage provider from “Rules based on data services”.
          • Select the appropriate data service and desired values, ensuring that values are within valid ranges
      • Assign the vVols storage policy to the Virtual Machines
        • During VM provisioning, select the appropriate storage policy from the VM Storage Policy dropdown -> select available virtual datastore
        • OPTIONAL change the storage policy for the virtual disk, on the customize hardware page.
      • Change default storage policy for a virtual (vvol or vsan) datastore
        • Browse to datastore->manage->settings->general->edit default storage policy

Tools

+ Objective 3.3: Configure vSphere Storage Multi-pathing and Failover

Knowledge

  • Explain common multi-pathing components
    • iSCSI does not support multipathing when you combine an independent hardware adapter with either software or dependent hardware iSCSI adapters
    • Multipathing between software and dependent adapters within the same host is supported
    • PSA – pluggable storage architecture
      • NMP – native multipathing plugin
        • MPP – third party multipath plugin. Can replace or run in conjunction with NMP
        • In general the NMP supports all storage arrays on the HCL
        • Provides a default path selection algorithm based on the array type
        • NMP determines which SATP to use for a specific storage device and associates the SATP with the physical paths for that device
      • PSP – path selection plugin
        • Specific details for determining which physical path to use for I/O
        • Sub-plugin under the NMP.
          • VMW_PSP_MRU (most recently used)
            • Default policy for most active-passive arrays
          • VMW_PSP_Fixed (fixed path)
            • Default policy for most active-active arrays
            • When using Fixed, the preferred path is marked with an asterisk.
          • VMW_PSP_RR (round robin)
            • Rotates through all active paths when using active-passive or through all available paths when connecting to active-active
            • Default for a number of arrays.
      • SATP – storage array type plugin
        • Specific details of how to handle path failover for a given storage array
        • Array specific options
        • Can work with the array on specifics to allow NMP to handle pathing without needing to be aware of the specific array
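    • To see which of these components are present on a host, a sketch using standard esxcli commands:
      # registered multipathing plug-ins, SATPs, and PSPs
      esxcli storage core plugin list --plugin-class=MP
      esxcli storage nmp satp list
      esxcli storage nmp psp list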
  • Differentiate APD and PDL states
    • vSphere HA can detect connectivity problems arising from PDL or APD by using VM Component Protection (VMCP)
      • PDL (Permanent Device Loss) is an unrecoverable loss of accessibility that occurs when a storage device reports the datastore is no longer accessible by the host. This condition cannot be reverted without powering off virtual machines.
      • APD: all paths down. Typically occurs if a host loses connectivity to a device in an uncontrolled manner, e.g., a failed switch or a disconnection
      • APD (All Paths Down) represents a transient or unknown accessibility loss or any other unidentified delay in I/O processing. This type of accessibility issue is recoverable.
      • Treated as transient
      • Host indefinitely continues to retry issued commands
        • This can result in host becoming unresponsive.
      • When APD is detected, system timer kicks in to try non-VM commands for a time. Default 140 seconds. If device becomes available during this time hosts & VM’s resume normal operations. If it doesn’t recover, host IO stops, but VM IO will continue retrying.
        • Timeout parameter is configurable via: Misc.APDTimeout
      • Devices show “Dead” or “Error”
      • All paths show as dead
      • Datastores are dimmed out.
    • PDL: permanent device loss. Via SCSI sense codes the host can learn from the array that the device is gone permanently, e.g., unintentional removal, ID change, or unrecoverable hardware error.
      • Under a PDL a host stops attempting to reestablish connectivity
      • Operational state of the device changes to “Lost Communication”
      • All paths show as dead
      • Datastores on the device are grayed out.
      • Host automatically removes the PDL device & all paths if no open connections exist or when last connection closes. If it returns it can be discovered, but is treated as a new device.
      • Host terminates all I/O from a VM when PDL is registered. HA can detect this and restart the VM
      • To recover from a PDL & remove the affected device:
        • Power off & unregister all VM’s running on the PDL affected datastore
        • Unmount the datastore
        • Perform a rescan on all ESXi hosts that had access.
          • If the rescan is not successful or the datastore still exists, something either has an open session, active references, or pending I/O to the device. Find the offender to remove the device.
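    • Two hedged CLI sketches for the items above (the device ID is a placeholder): the APD timer is a host advanced option, and open handles on a stuck device can be listed per world.
      # view/adjust the APD timeout (default 140 seconds)
      esxcli system settings advanced list -o /Misc/APDTimeout
      esxcli system settings advanced set -o /Misc/APDTimeout -i 140
      # find which worlds still hold the device open when a PDL cleanup won't complete
      esxcli storage core device world list -d naa.xxxxxxxxxxxxxxxx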
  • Understand the effects of a given claim rule on multipathing and failover
    • Claim rules determine which MPP should claim paths and manage multipathing
    • Pg 187 of the storage guide: “The claim rules are numbered. For each physical path, the host runs through the claim rules starting with the lowest number first. The attributes of the physical path are compared to the path specification in the claim rule. If there is a match, the host assigns the MPP specified in the claim rule to manage the physical path. This continues until all physical paths are claimed by corresponding MPPs, either third-party multipathing plug-ins or the native multipathing plug-in (NMP).
      For the paths managed by the NMP module, a second set of claim rules is applied. These rules determine which Storage Array Type Plug-In (SATP) should be used to manage the paths for a specific array type, and which Path Selection Plug-In (PSP) is to be used for each storage device.”
    • For each storage device the PSP is set based on claim rules.
      • Fixed – preferred path (if configured), otherwise first working path discovered at boot.
      • MRU – most recently used. If path becomes unavailable, chooses a new path. If the original again becomes available, it does not revert back to the original.
      • RR – automatic path selection algorithm rotating through all active paths. Can be used with both active-active and active-passive.
  • Change the Path Selection Policy using the UI
    • Host -> manage->storage->devices->select device->edit multipathing. If fixed, set the preferred path.
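    • The same change can be made from the CLI; a hedged sketch (device and path names below are placeholders):
      # set the PSP for one device and verify it
      esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR
      esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx
      # if the device uses VMW_PSP_FIXED, a preferred path can be pinned instead
      esxcli storage nmp psp fixed deviceconfig set -d naa.xxxxxxxxxxxxxxxx -p vmhba2:C0:T0:L1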
  • Determine required claim rule elements to change the default PSP
    • You add a new PSA claim rule when, for example, you load a new multipathing plug-in (MPP) and need to define which paths this module should claim. You may need to create a claim rule if you add new paths and want an existing MPP to claim them.
    • Adding a claim rule allows you to specify which MPP/NMP will own a device.
      • esxcli storage core claimrule add -r 555 -V XtremIO -t vendor -P NMP
      • After adding a claim rule you have to reload it:
        • esxcli storage core claimrule load
        • esxcli storage core claimrule list
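      • Loading defines the rule; paths that are already claimed keep their current owner until claiming is re-run (a hedged follow-up; the device ID is a placeholder):
        # apply the loaded rules to unclaimed paths, or force a specific device to be re-claimed
        esxcli storage core claimrule run
        esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx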
    • Pg 192 of the storage guide
    • If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED
    • When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched. If no match occurs, NMP selects a default SATP for the device.
    • If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device. The device is claimed by the default SATP based on the device’s transport type.
    • The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if there is no active/optimized path. This path is used until a better path is available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
    • While VMW_PSP_MRU is typically selected for ALUA arrays by default, certain ALUA storage arrays need to use VMW_PSP_FIXED. To check whether your storage array requires VMW_PSP_FIXED, see the VMware Compatibility Guide or contact your storage vendor. When using VMW_PSP_FIXED with ALUA arrays, unless you explicitly specify a preferred path, the ESXi host selects the most optimal working path and designates it as the default preferred path. If the host selected path becomes unavailable, the host selects an alternative available path. However, if you explicitly designate the preferred path, it will remain preferred no matter what its status is.
    • By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices.
    • List Multipathing Claim Rules for the Host
      • Use the esxcli command to list available multipathing claim rules.
      • Claim rules indicate which multipathing plug-in, the NMP or any third-party MPP, manages a given physical path. Each claim rule identifies a set of paths based on the following parameters:
        • Vendor/model strings
        • Transportation, such as SATA, IDE, Fibre Channel, and so on
        • Adapter, target, or LUN location
        • Device driver, for example, Mega-RAID
      • In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
      • Run the esxcli --server=server_name storage core claimrule list --claimrule-class=MP command to list the multipathing claim rules.
        • The Rule Class column in the output describes the category of a claim rule. It can be MP (multipathing plug-in), Filter, or VAAI.
        • The Class column shows which rules are defined and which are loaded. The file parameter in the Class column indicates that the rule is defined. The runtime parameter indicates that the rule has been loaded into your system. For a user-defined claim rule to be active, two lines with the same rule number should exist, one line for the rule with the file parameter and another line with runtime. Several low-numbered rules have only one line with the Class of runtime. These are system-defined claim rules that you cannot modify.
  • Determine the effect of changing PSP on Multipathing and failover
    • To list multipathing modules, run the following command:
      esxcli --server=server_name storage core plugin list --plugin-class=MP
    • This command typically shows the NMP and, if loaded, the MASK_PATH module. If any third-party MPPs have been loaded, they are listed as well.
  • Determine the effects of changing SATP on relevant device behavior
    • You might need to create a SATP rule when you install a third-party SATP for a specific storage array.
    • To change the default SATP, you need to modify claim rules using the CLI
    • esxcli storage nmp satp list
    • Use the esxcli command to list all storage devices controlled by the VMware NMP and display SATP and PSP information associated with each device.
      • esxcli –server=server_name storage nmp device list
    • esxcli storage nmp satp rule add -V XtremIO -s VMW_SATP_DEFAULT_AA -P VMW_PSP_MRU
    • Rerun claiming to have the updated SATP rule take effect
      • esxcli storage core claiming reclaim -d naa.514d34987we
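    • The default PSP for an entire SATP can also be changed, as a sketch (takes effect for devices claimed after a reboot or reclaim):
      # make round robin the default PSP for everything claimed by this SATP
      esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR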
  • Configure/Manage Storage Load Balancing
  • Differentiate available Storage Load Balancing options
  • Differentiate available Storage Multi-pathing Policies
  • Configure Storage Policies

Tools

+ Objective 3.4: Perform Advanced VMFS and NFS Configurations and Upgrades

Knowledge

  • Upgrade VMFS3 to VMFS5
    • *Note: you cannot upgrade datastores from NFS 3 to NFS 4.1
    • VMFS2 must first be upgraded to VMFS3 on an ESXi 4.x host before it can be upgraded to VMFS5 on 6.0
    • Right-click datastore -> upgrade to VMFS5
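    • If the CLI is preferred, the same upgrade can be driven from the host, as a hedged sketch (the datastore label is a placeholder; the upgrade is online and one-way):
      # upgrade an existing VMFS3 volume to VMFS5
      esxcli storage vmfs upgrade -l Datastore01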
  • Compare functionality of newly created vs. upgraded VMFS5 datastores
    • VMFS3 datastores continue to use the MBR format for their storage devices. Consider the following items when you work with VMFS3 datastores:
    • For VMFS3 datastores, the 2TB limit still applies, even when the storage device has a capacity of more than 2TB. To be able to use the entire storage space, upgrade a VMFS3 datastore to VMFS5. Conversion of the MBR format to GPT happens only after you expand the datastore to a size larger than 2TB.
    • When you upgrade a VMFS3 datastore to VMFS5, the datastore uses the MBR format. Conversion to GPT happens only after you expand the datastore to a size larger than 2TB.
    • When you upgrade a VMFS3 datastore, remove from the storage device any partitions that ESXi does not recognize, for example, partitions that use the EXT2 or EXT3 formats. Otherwise, the host cannot format the device with GPT and the upgrade fails.
    • You cannot expand a VMFS3 datastore on devices that have the GPT partition format.
    • Pg 164 of the storage guide
  • Compare and contrast VMFS and NFS datastore properties
    • NFS can’t be used for RDMs or in a VM cluster
    • NFS 4.1 does not support FT
    • Virtual disks provisioned on NFS datastores are thin provisioned by default.
    • When using NFS3 volumes make sure server and folder names are identical across hosts, otherwise they are seen as different datastores.
    • NFS details pg 153 storage guide
    • NFS 4.1 doesn’t support SDRS, SIOC, SRM, or VVols (NFS 3 does)
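    • Mounting both flavors from the CLI, as a sketch (server, export, and datastore names are placeholders):
      # NFS 3 and NFS 4.1 mounts use separate esxcli namespaces
      esxcli storage nfs add -H nas01 -s /export/ds1 -v NFS3-DS1
      esxcli storage nfs41 add -H nas01 -s /export/ds2 -v NFS41-DS2
      esxcli storage nfs list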
  • Extend/Expand VMFS datastores
    • Expand using existing adjacent capacity in an extent
    • Add a new extent to the datastore. A datastore can span up to 32 extents
  • Place a VMFS datastore in Maintenance Mode
  • Select the Preferred Path/Disable a Path to a VMFS datastore
  • Enable/Disable vStorage API for Array Integration (VAAI)
    • Disabled on a per host basis
    • Set all 3 to 0
    • VMFS3.HardwareAcceleratedLocking
    • DataMover.HardwareAcceleratedMove
    • DataMover.HardwareAcceleratedInit
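    • The same three settings from the CLI, per host, as a sketch (set the values back to 1 to re-enable):
      # disable the VAAI primitives by zeroing the three advanced options
      esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0
      esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
      esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0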
  • Given a scenario, determine a proper use case for multiple VMFS/NFS datastores

Tools

+ Objective 3.5: Setup and Configure Storage I/O Control

Knowledge

  • Enable and configure SIOC
    • 1- Enable Storage I/O Control for the datastore.
      • RDM is not supported
      • Datastores with multiple extents are not supported
      • Datastore-> manage-> settings->datastore capabilities, edit. Enable
    • 2 Set the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed for each virtual machine.
      • Edit VM->disk->shares & limits
  • Configure/Manage SIOC
    • SIOC operates on a share-based system, the same as CPU/memory shares: you specify a relative importance
  • Monitor SIOC
    • Browse to datastore->related objects->VMs to see the relative share percentage per machine
    • Datastore->monitor->performance, choose performance to view SIOC activity
  • Given a scenario, determine a proper use case for SIOC
    • From page 50 of the resource guide
      • If you change the congestion threshold setting, set the value based on the following considerations.
        • A higher value typically results in higher aggregate throughput and weaker isolation. Throttling will not occur unless the overall average latency is higher than the threshold.
        • If throughput is more critical than latency, do not set the value too low. For example, for Fibre Channel disks, a value below 20ms could lower peak disk throughput. A very high value (above 50ms) might allow very high latency without any significant gain in overall throughput.
        • A lower value will result in lower device latency and stronger virtual machine I/O performance isolation. Stronger isolation means that the shares controls are enforced more often. Lower device latency translates into lower I/O latency for the virtual machines with the highest shares, at the cost of higher I/O latency experienced by the virtual machines with fewer shares.
        • A very low value (lower than 20ms) will result in lower device latency and isolation among I/Os at the potential cost of a decrease in aggregate datastore throughput.
        • Setting the value extremely high or extremely low results in poor isolation.
  • Compare and contrast the effects of I/O contention in environments with and without SIOC
    • When SIOC is not enabled, equal disk resource will be allocated to all ESXi servers without considering the number of virtual machines running on each of the ESXi servers or their allocated share value. The disk resource allocated to each ESXi server will be distributed within the server based on share value allocation for virtual machine. -FROM: http://www.virtualites.com/2015/12/sioc-storage-io-control_10.html
    • Without SIOC shares only pertain to the host. With SIOC shares are tracked and allocated at the shared datastore.

Tools

  • Administering VMware Virtual SAN
  • vSphere Storage Guide
  • vSphere Resource Management Guide
  • vSphere Client / vSphere Web Client