Flexiant Cloud Orchestrator's support of multi-tier storage has two aspects:
- QoS control allows the licensee to control storage QoS (Quality of Service) offered to customers through disk product offers.
- Storage groups allow the licensee to group together storage units of similar performance and make the groups available to different customers and billing entities, potentially at different prices.
VMware is something of a special case in respect of multi-tiered storage; please see the section below on VMware storage for more details.
QoS control allows the licensee to control storage QoS (Quality of Service) offered to customers with definable values of disk I/O operations per second (IOPS), or disk throughput. These parameters can be associated with a disk product offer.
IOPS is the number of read/write cycles a storage device can perform in one second. Throughput is a measurement of how fast data can be transferred to or from the storage device once a read/write operation has begun. In both cases, a higher value means a shorter time taken to complete the read from or write to the storage device. As storage speed is often a bottleneck, with the storage device commonly slower than the RAM or CPU, a shorter time to read from or write to disk will usually mean increased VM performance.
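The interplay of the two metrics can be illustrated with a little arithmetic (the figures below are hypothetical, not FCO defaults): a workload of many small operations is limited by the IOPS value, while a large sequential transfer is limited by throughput.

```python
# Hypothetical illustration of the two storage metrics described above.
# This is plain arithmetic, not an FCO API.

def workload_seconds(ops: int, bytes_per_op: int, iops: int, throughput_bps: int) -> float:
    """Rough lower bound on completion time: the workload can finish no
    faster than either the IOPS limit or the throughput limit allows."""
    time_by_iops = ops / iops
    time_by_throughput = (ops * bytes_per_op) / throughput_bps
    return max(time_by_iops, time_by_throughput)

# 10,000 random 4 KiB reads: IOPS-bound on this hypothetical device.
small_io = workload_seconds(10_000, 4096, iops=500, throughput_bps=100_000_000)
# A single 1 GiB sequential read: throughput-bound on the same device.
large_io = workload_seconds(1, 1 << 30, iops=500, throughput_bps=100_000_000)
print(small_io)  # 20.0 seconds, limited by IOPS
print(large_io)  # about 10.7 seconds, limited by throughput
```

This is why both limits appear in disk product offers: the metric that matters depends on the customer's workload.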
The QoS aspect of multi-tier storage works by allowing the specification of limits for various hypervisor-specific measured values in disk and disk I/O product offers. This means that if a licensee has a storage unit which is particularly "fast" according to one of these metrics, they can charge their customers more to use it. As above, the use of a "faster" storage unit will usually mean increased VM performance.
Virtuozzo (formerly PCS) does not currently support the measurement of values relating to storage unit performance.
Virtuozzo (formerly PCS) only supports one remote storage group for VM disks (which must contain only PStorage storage units) and one local storage group for VM disks, in addition to a remote storage group for images (which must be NFS).
VMware does not support the limiting of IOPS total burstable to values lower than 16.
|Name||Description||Required disk product component||KVM||VMware||Hyper-V|
|IOPS total||Total I/O operations per second||Disk throttling in I/O operations||Y||Y||N|
|IOPS total burstable||Maximum burstable I/O operations per second||Disk throttling in I/O operations||Y||Y||N|
|IOPS (read)||Read I/O operations per second||Disk throttling in I/O operations||Y||N||N|
|IOPS (burstable read maximum)||Maximum burstable read I/O operations per second||Disk throttling in I/O operations||Y||N||N|
|IOPS (write)||Write I/O operations per second||Disk throttling in I/O operations||Y||N||N|
|IOPS (burstable write maximum)||Maximum burstable write I/O operations per second||Disk throttling in I/O operations||Y||N||N|
|Bytes Per Second||Total throughput limit in bytes per second||Disk throttling in bytes||Y||N||N|
|Total burstable max bytes||Total burstable max throughput limit in bytes||Disk throttling in bytes||Y||N||N|
|Bytes Per Second (read)||Read throughput limit in bytes per second||Disk throttling in bytes||Y||N||N|
|Burstable max read bytes||Burstable max read throughput limit in bytes||Disk throttling in bytes||Y||N||N|
|Bytes Per Second (write)||Write throughput limit in bytes per second||Disk throttling in bytes||Y||N||N|
|Burstable max write bytes||Burstable max write throughput limit in bytes||Disk throttling in bytes||Y||N||N|
To enable the limitation of a measurable value supported by your chosen hypervisor, create a disk product using either the Disk Throttling in I/O operations or Disk Throttling in Bytes product component, then create a product offer based on this new product. For information about how to do this, see Creating Products and Creating Product Offers.
Control of, billing for, and access to storage devices works by associating a disk product offer with a particular storage group. Storage groups are groups of one or more storage units in the same cluster and of the same type that are to be billed identically. For more details, see Storage groups. This means that the associated disk product offer allows the customer to select which storage group they would like to have their virtual disk stored in, along with the appropriate billing for that group.
When a VM's disks are created, they will always be placed in the storage group associated with the product offer for that disk; similarly, that product offer will determine the billing for that disk (both for disk space and, where the hypervisor supports it, disk I/O).
If the end user changes the disk configuration in the UI to a new product offer that uses a different storage group, the disk will be copied from its current storage unit to a new storage unit within the storage group associated with the new disk product offer.
It is thus important to ensure that all storage units within any given storage group are of similar performance. Because disk space is allocated among the storage units in the chosen storage group, it is also important in the case of remote storage to ensure that each storage unit within a storage group is available to the same set of nodes.
If a disk product offer is used on a cluster and it refers to a storage group that contains no storage unit on that cluster, creation of the disk will fail. If a disk product offer specifies no storage group, disks will be placed in the default storage group; using an offer with no storage group set on a cluster that has no storage unit in the default storage group will likewise fail. Therefore:
- Where disk product offers refer to a specific storage group, ensure either that the storage group has storage units on each cluster, or that the disk product offer is not available on clusters that do not have a storage unit in that storage group; and
- Where disk product offers do not refer to a specific storage group, ensure either that the default storage group has storage units on each cluster, or that the disk product offer is not available on clusters that do not have a storage unit in the default storage group.
Note that the above placement applies to the creation of servers (VMs) and their disks. It does not apply to the creation of images. Remote storage groups are (optionally) able to contain images; local storage groups cannot contain images. Fetched images will be fetched to a storage group that can contain images (if the default storage group can contain images, it will be used), and propagated from there to all storage groups on the same cluster that require the image.
When running VMware, VMware datastores must have a one-to-one relationship with FCO storage units. To add a VMware datastore as a storage unit, you need only supply the datastore's name.
For a small deployment, each VMware datastore should be connected to every node within the FCO cluster, i.e. there will be a single VMware cluster within the FCO cluster.
For a larger deployment, this configuration may not be desirable; here the FCO cluster may consist of a number of VMware clusters, each containing one or more storage units, and many nodes. Within each cluster, every datastore (i.e. every storage unit) should be accessible to every node. Datastores in one cluster would not normally be available within other clusters. Thus here it is permissible for storage groups to contain some storage units that are accessible to some nodes, and some storage units that are accessible to other nodes. However, if this is the case, for placement to work reliably, each cluster should have at least one storage unit in every storage group; otherwise a user could construct a virtual machine that can never be started, because no node would have access to storage units from the selected storage groups. In such cases, FCO will not start the server.
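The reliability condition above can be checked mechanically. The sketch below (hypothetical names, not an FCO tool) reports every (cluster, storage group) pair where a cluster holds no storage unit from that group:

```python
# Hypothetical sketch of the condition described above: in a multi-cluster
# VMware deployment, every cluster should contain at least one storage unit
# from every storage group, or some VMs may be impossible to place.

def missing_coverage(groups: dict[str, set[str]],
                     clusters: dict[str, set[str]]) -> list[tuple[str, str]]:
    """groups maps group name -> its storage units; clusters maps VMware
    cluster name -> the storage units accessible within it. Returns the
    (cluster, group) pairs that share no storage unit."""
    return [(c, g) for c, units in clusters.items()
                   for g, members in groups.items()
                   if not (units & members)]

groups = {"gold": {"ds1", "ds3"}, "silver": {"ds2"}}
clusters = {"cluster-a": {"ds1", "ds2"}, "cluster-b": {"ds3"}}
print(missing_coverage(groups, clusters))  # [('cluster-b', 'silver')]
```

A VM whose disks use the "silver" offer could not be placed on cluster-b, so that pair would need a storage unit added or the offer restricted.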
When a virtual machine is created, it will use the datastore within the VMware cluster with the most free space. If there is more than one VMware cluster in use, and localisation to a particular VMware cluster is desirable, a sticky key containing the UUID of the VMware cluster should be attached to the nodes in FCO which correspond to the ESXi hosts within the VMware cluster. For information about sticky keys, see How the Dynamic Workload Placement System Works.
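The selection rule above reduces to picking the emptiest datastore in the chosen cluster; a minimal sketch, with hypothetical datastore names:

```python
# Hypothetical sketch of the rule above: within the chosen VMware cluster,
# the datastore (storage unit) with the most free space is used.

def pick_datastore(free_space: dict[str, int]) -> str:
    """free_space maps datastore name -> free bytes; returns the emptiest."""
    return max(free_space, key=free_space.get)

print(pick_datastore({"ds1": 50 << 30, "ds2": 200 << 30, "ds3": 120 << 30}))  # ds2
```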