Objective 1.1 – Implement Complex Storage Solutions

  • Determine use case for and configure VMware DirectPath I/O

www.BuildVirtual.net have a VCAP5-DCA study guide; the linked page explains VMware DirectPath I/O.

Saves me taking a ton of snapshots 🙂
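
Passthrough devices can also be inspected and attached with PowerCLI. A minimal sketch, assuming the host already has the device marked for passthrough and the VM is powered off; 'esx01' and 'MyVM' are placeholder names:

# List PCI devices available for passthrough on the host and take the first one
$device = Get-PassthroughDevice -VMHost 'esx01' -Type Pci | Select-Object -First 1

# Attach that device to the VM (the VM must be powered off)
Add-PassthroughDevice -VM 'MyVM' -PassthroughDevice $device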

  • Determine requirements for and configure NPIV

The vBrownbag video (linked in the Section 1 page above) goes through this process for the VCAP-DCA 5 blueprint.

A written guide can also be found here on BuildVirtual.

A good article on VMware Blogs here.

An article explaining NPIV and NPV, which may be useful, can be found here.
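
NPIV WWNs can also be requested programmatically. Below is a minimal PowerCLI sketch, assuming a placeholder VM named 'MyVM' that already has an RDM attached (NPIV requires RDM disks); the Npiv* property names come from the vSphere API's VirtualMachineConfigSpec:

# Ask vCenter to generate NPIV node and port WWNs for the VM
$vm = Get-VM -Name 'MyVM'
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NpivDesiredNodeWwns = 1            # number of node WWNs to generate
$spec.NpivDesiredPortWwns = 4            # number of port WWNs to generate
$spec.NpivWorldWideNameOp = 'generate'   # alternatives: 'set', 'remove'
$vm.ExtensionData.ReconfigVM($spec)

The generated WWNs still need to be zoned on the fabric and registered on the array, just as with a physical HBA.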

  • Understand use case for Raw Device Mapping

The vBrownbag video (linked in the Section 1 page above) goes through this process for the VCAP-DCA 5 blueprint.

A written guide can also be found here on BuildVirtual.
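
As a quick illustration, a LUN can be attached to a VM as an RDM with PowerCLI's New-HardDisk cmdlet. A sketch, where 'MyVM' and the naa device path are placeholders; use RawPhysical for physical compatibility mode or RawVirtual for virtual compatibility mode:

# Attach a LUN to the VM as a physical-mode RDM
Get-VM -Name 'MyVM' | New-HardDisk -DiskType RawPhysical -DeviceName '/vmfs/devices/disks/naa.600508b4000971fa0000400000540000'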

  • Configure vCenter Server storage filters

The vBrownbag video (linked in the Section 1 page above) goes through this process for the VCAP-DCA 5 blueprint.

A written guide can also be found here on BuildVirtual.
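
The filters themselves are vCenter advanced settings under the config.vpxd.filter namespace, and they all default to true. A minimal PowerCLI sketch for turning one off, here the RDM filter (the same pattern applies to vmfsFilter, hostRescanFilter and SameHostAndTransportsFilter):

# Disable the RDM storage filter on the connected vCenter Server
New-AdvancedSetting -Entity $global:DefaultVIServer -Name 'config.vpxd.filter.rdmFilter' -Value 'false' -Confirm:$false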

  • Understand and apply VMFS re-signaturing

BuildVirtual have covered this in the VCAP5-DCA study guide here.
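
In short, a detected VMFS copy (for example an array snapshot or replica) can either be mounted with its existing signature or resignatured. From the ESXi shell, using esxcli (the volume label below is a placeholder):

# List unresolved VMFS snapshot/replica volumes visible to the host
esxcli storage vmfs snapshot list

# Resignature the copy so it can be mounted alongside the original
esxcli storage vmfs snapshot resignature -l snap_datastore1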

  • Understand and apply LUN masking using PSA-related commands

The vBrownbag video (linked in the Section 1 page above) goes through this process for the VCAP-DCA 5 blueprint.

A written guide can also be found here on BuildVirtual.
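
In outline the process is: add a claim rule that assigns the paths to the MASK_PATH plugin, load the rules, then reclaim the device. A sketch from the ESXi shell, where the rule number, path location values and device ID are placeholders for your environment:

# Check existing rules and pick a free user rule ID (101-65435)
esxcli storage core claimrule list

# Mask LUN 4 behind adapter vmhba33 (example values)
esxcli storage core claimrule add -r 200 -t location -A vmhba33 -C 0 -T 0 -L 4 -P MASK_PATH

# Load the new rule into the VMkernel and apply it to the existing paths
esxcli storage core claimrule load
esxcli storage core claiming reclaim -d naa.600508b4000971fa0000400000540000
esxcli storage core claimrule run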

  • Configure Software iSCSI port binding

The SlideRocket presentation below wouldn't embed, so I've put a link to the short presentation instead.

It's a VMware presentation on iSCSI port binding best practices.

http://app.sliderocket.com:80/app/fullplayer.aspx?id=99be2617-609a-4e32-9b5a-c932ad78e40b

As the blueprint lists configuring software iSCSI port binding, I am making the assumption that the iSCSI software adapter has already been added.

To configure port binding, select the host, open the “Configuration” tab and then select the “Storage Adapters” option under Hardware. Right-click the iSCSI Software Adapter and select “Properties…”.

In the iSCSI Initiator Properties, select the “Network Configuration” tab, click “Add…” and select the port group with the VMkernel adapter that is used for iSCSI.

Only VMkernel adapters that meet the iSCSI port binding requirements, along with the available physical adapters, will be listed.

[Screenshot: selecting a compatible VMkernel adapter to bind]

VMkernel network adapters must have exactly one active uplink and no standby uplinks to be eligible for binding to the iSCSI HBA.

Also note the best practices from the presentation above:

  • Array Target iSCSI ports must reside in the same broadcast domain and IP subnet as the VMkernel port.
  • All VMkernel ports used for iSCSI connectivity must reside in the same broadcast domain and IP subnet.
  • All VMkernel ports used for iSCSI connectivity must reside in the same vSwitch.
  • Currently, port binding does not support network routing.

[Screenshot: VMkernel port bound to the software iSCSI adapter]

The screenshots above show only one bound VMkernel port; in production scenarios you would have at least two.
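
The same binding can also be done from the ESXi shell with esxcli. A short sketch, assuming the software iSCSI adapter is vmhba33 and the iSCSI VMkernel ports are vmk1 and vmk2 (adapter and vmk names will vary per host):

# Bind two compliant VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Verify the bindings
esxcli iscsi networkportal list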

  • Configure and manage vSphere Flash Read Cache

There’s a good piece on Flash Read Cache over at www.yellow-bricks.com

There's not much I can add to it because all I know is what I've learnt reading Duncan Epping's blog 🙂

There's a handy YouTube video embedded below.

If for whatever reason your local SSDs aren't reported as SSDs within vSphere, you will need to create a claim rule to tag them as SSDs. This is covered in Objective 1.2.
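
For completeness, the per-disk cache reservation is exposed in the vSphere 5.5 API as VirtualDiskVFlashCacheConfigInfo, so it can be scripted. A rough, untested sketch via PowerCLI's ExtensionData, assuming a host with a virtual flash resource already configured and a placeholder VM name; the property names are assumptions taken from the API reference:

# Reserve 1 GB of Flash Read Cache on the VM's first hard disk
$vm   = Get-VM -Name 'MyVM'
$disk = Get-HardDisk -VM $vm | Select-Object -First 1
$devSpec = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devSpec.Operation = 'edit'
$devSpec.Device = $disk.ExtensionData
$devSpec.Device.VFlashCacheConfigInfo = New-Object VMware.Vim.VirtualDiskVFlashCacheConfigInfo
$devSpec.Device.VFlashCacheConfigInfo.ReservationInMB = 1024
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange = @($devSpec)
$vm.ExtensionData.ReconfigVM($spec)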

  • Configure Datastore Clusters

A datastore cluster is a collection of datastores with shared resources and a shared management interface. Datastore clusters are to datastores what clusters are to hosts.

When you create a datastore cluster, you can use vSphere Storage DRS to manage storage resources.

NOTE – Datastore clusters are referred to as storage pods in the vSphere API.

Storage DRS provides the following capabilities:

  • Space utilization load balancing – You can set a threshold for space use. When space use on a datastore exceeds the threshold, Storage DRS generates recommendations or performs Storage vMotion migrations to balance space use across the datastore cluster.
  • I/O latency load balancing – You can set an I/O latency threshold for bottleneck avoidance. When I/O latency on a datastore exceeds the threshold, Storage DRS generates recommendations or performs Storage vMotion migrations to help alleviate high I/O load.
  • Anti-affinity rules – You can create anti-affinity rules for virtual machine disks. For example, the virtual disks of a certain virtual machine must be kept on different datastores. By default, all virtual disks for a virtual machine are placed on the same datastore.

SDRS anti-affinity rules are discussed in more detail in Section 3, Objective 3.2 – Implement and Manage Complex DRS Solutions.

A datastore cluster can be created under “Home > Inventory > Datastores and Datastore Clusters” by right-clicking the appropriate datacentre and selecting “New Datastore Cluster”.

In the vSphere Client

Procedure

1. In the Datastores and Datastore Clusters view of the vSphere Client inventory, right-click the Datacenter object and select New Datastore Cluster.
2. Follow the prompts to complete the Create Datastore Cluster wizard.

In the vSphere Web Client

Procedure

1. Browse to Datacenters in the vSphere Web Client navigator.
2. Right-click the datacenter object and select New Datastore Cluster.
3. Follow the prompts to complete the New Datastore Cluster wizard.
4. Click Finish.

During the wizard you have the option to turn off Storage DRS as well as setting the automation level for SDRS recommendations.

There is an Advanced Options button (pictured below) which allows you to make changes to the way the SDRS algorithm makes recommendations. For instance, if you use the IgnoreAffinityRulesForMaintenance advanced option, virtual machine affinity rules will be ignored when a datastore is put into maintenance mode.

[Screenshot: SDRS Advanced Options]

I found a list of advanced options on William Lam's blog, www.virtuallyghetto.com

Within the wizard you have the option to include I/O metrics for SDRS recommendations; with this option enabled, Storage I/O Control is configured on all datastores within the cluster.

Within the wizard you can also select which datastores to include in the cluster. A summary is shown before the datastore cluster is created.

[Screenshot: New Datastore Cluster summary]

Datastore clusters can be created through PowerCLI utilising the New-DatastoreCluster cmdlet:

New-DatastoreCluster -Name 'MyDatastoreCluster' -Location 'MyDatacenter'

When using the New-DatastoreCluster cmdlet, Storage DRS is disabled. To enable Storage DRS and configure its options you would use Set-DatastoreCluster, as shown below.
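
For example (a hedged sketch; the threshold values are just illustrations):

# Enable SDRS in fully automated mode with example thresholds
Get-DatastoreCluster -Name 'MyDatastoreCluster' | Set-DatastoreCluster -SdrsAutomationLevel FullyAutomated -SpaceUtilizationThresholdPercent 80 -IOLatencyThresholdMillisecond 15 -IOLoadBalanceEnabled $true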

  • Upgrade VMware storage infrastructure

VMware VMFS (Virtual Machine File System) is VMware, Inc.’s cluster file system.

There have been several versions of VMFS, corresponding with ESX/ESXi product releases:

  • VMFS1 was used by ESX Server v1.x. It did not feature the cluster filesystem properties and was used only by a single server at a time. VMFS1 is a flat filesystem with no directory structure.
  • VMFS2 is used by ESX Server v2.x and (in a limited capacity) v3.x. VMFS2 is a flat filesystem with no directory structure.
  • VMFS3 is used by ESX Server v3.x and vSphere 4.x. Notably, it introduced a directory structure in the filesystem.
  • VMFS5 is used by vSphere 5.x. Notably, it raises the extent limit to 64 TB and the file size limit to 62 TB, though vSphere versions earlier than 5.5 are limited to VMDKs smaller than 2 TB.
  • VMFS-L is the underlying file system for VSAN 1.0. Leaf-level VSAN objects reside directly on VMFS-L volumes composed from server-side direct-attached storage (DAS). The file system format is optimized for DAS; optimizations include aggressive caching for the DAS use case, a stripped-down lock manager and faster formats.

When ESXi servers are upgraded to a newer version of vSphere, the corresponding datastores are not automatically upgraded; they will remain on the version of VMFS that was current at the time of the original install.

For example, if an ESXi 4 host is upgraded to ESXi 5, the datastore will remain VMFS3 even though the host is now running ESXi 5.

VMFS datastores need to be upgraded to the latest VMFS filesystem manually.
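
Before upgrading, you can quickly check which VMFS version each datastore is on with PowerCLI (FileSystemVersion is a standard property on the objects returned by Get-Datastore):

# List each datastore's name and VMFS version
Get-Datastore | Select-Object Name, FileSystemVersion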

The process is as follows, using the vSphere Client:

In “Home > Inventory > Datastores and Datastore Clusters”, select the VMFS datastore that requires upgrading and select the “Upgrade to VMFS-5…” option.

[Screenshot: Upgrade to VMFS-5… option]

The only caveat to upgrading a VMFS datastore is to ensure that all hosts accessing that datastore are running the correct version of ESXi.

For instance, if a VMFS3 datastore was shared across a mixed ESXi 4 and ESXi 5 host cluster, you could not upgrade the datastore to VMFS5 until all hosts ran ESXi 5.

This process can also be completed by clicking on a host and selecting the “Configuration” tab and then the “Storage” option. The same “Upgrade to VMFS-5…” option exists here.

To upgrade using PowerCLI you can use the following command, where “Datastore1” is replaced by the name of the datastore you wish to upgrade:

Get-Datastore -Name Datastore1 | Upgrade-VmfsDatastore

It can also be done from the ESXi host command line with the following command, where datastore1 is the name of the datastore you wish to upgrade:

vmkfstools -T /vmfs/volumes/datastore1

Upgrading from VMFS-3 to VMFS-5 can be done on-the-fly (virtual machines do not need to be powered-off, suspended, or migrated).

Upgraded VMFS-5 partitions will retain the partition characteristics of the original VMFS-3 datastore, including file block-size, sub-block size of 64K, etc. To take full advantage of all the benefits of VMFS-5, migrate the virtual machines to another datastore(s), delete the existing datastore, and re-create it using VMFS-5.

Note: Increasing the size of an upgraded VMFS datastore beyond 2TB changes the partition type from MBR to GPT. However, all other features/characteristics continue to remain same.

 
