Various hypervisor-specific parameters can be changed when using the KVM, VMware, and Xen hypervisors. These can be configured when managing a cluster, image, or virtual machine. For information on how to do this, see the following topics:
The hypervisor-specific parameters are applied hierarchically: a parameter set at a higher level applies to all resources below it, unless you specify otherwise. We say the settings are inherited from the layer above. The hierarchy is as follows:
For instance, one hypervisor-specific setting on VMware is the OS type. If you specify the OS type used by an image, you don't need to specify the same parameter for each virtual machine created from that image.
If conflicts arise between parameters, the more specific parameter is used, unless inheritance of parameters is forced using the Lock buttons. For example, if OS type X is specified on a cluster, but OS type Y is specified on an image within that cluster, the parameter set on the image overrides the one set on the cluster, meaning OS type Y is used for all virtual machines created using that image. Virtual machines created using other images within the cluster will use OS type X, unless this setting is overridden. If the Lock button is used when the parameter is set on the cluster, all new and existing virtual machines and images in the cluster will use OS type X, and users will be unable to override the setting on a per-image or per-virtual-machine basis.
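The inheritance and lock rules above can be sketched as follows. This is a minimal illustration only: `resolve` and its layer tuples are hypothetical, not part of any FCO API.

```python
# Hypothetical sketch of hierarchical parameter resolution with Lock
# semantics: the most specific value wins, unless a more general layer
# locks its value, which forces inheritance downwards.

def resolve(layers):
    """Resolve one parameter across (value, locked) pairs ordered from
    most general (e.g. cluster) to most specific (e.g. virtual machine).
    A value of None means the layer does not set the parameter."""
    result = None
    for value, locked in layers:
        if value is not None:
            result = value
            if locked:
                break  # a locked layer overrides everything below it
    return result

# Cluster locks OS type X, image sets Y: the lock wins.
print(resolve([("X", True), ("Y", False)]))   # -> X
# Without the lock, the image's more specific setting wins.
print(resolve([("X", False), ("Y", False)]))  # -> Y
```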
Control of emulated devices
When the Xen4 and KVM hypervisors run virtual machines, they can provide two types of access to virtual devices (network cards and hard disks):
- Emulated devices, where a device that appears to be physical hardware is presented to the guest operating system; and
- Paravirtualised (or PV) devices, where a driver in the guest operating system is used to access the device.
Emulated devices work with all guest operating systems, so they offer the best compatibility. Paravirtualised devices work only in operating systems with the appropriate drivers, but they provide far better performance.
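Under the hood, the distinction corresponds to the disk bus the hypervisor exposes. As an illustration, assuming a KVM host managed via libvirt (these are generic libvirt domain XML fragments, not FCO-generated configuration), the same disk image could be attached either way:

```xml
<!-- Paravirtualised disk: the guest needs a virtio driver to see it. -->
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- Emulated disk: presented to the guest as standard IDE hardware. -->
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='hda' bus='ide'/>
</disk>
```

The virtio device performs far better, but only if the guest has the driver; the IDE device works with any guest.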
To make matters more complicated:
- Certain versions of Microsoft Windows do not support having hard disk drivers for both emulated and paravirtualised devices installed at the same time.
- Older versions of Linux (and all versions of Linux on KVM) will happily boot with both paravirtualised and emulated devices, which can sometimes confuse the operating system as to which it should use.
- Newer versions of Linux on Xen4 will, if they detect they are running on Xen4, unplug access to the emulated disk devices (but not the network devices) at boot.
- Whilst some bootloaders support both paravirtualised and emulated devices, other older bootloaders require emulated devices to boot, even though the operating system uses paravirtualised devices.
- In general, FCO has no way of knowing a priori what the requirements of the booting operating system are, not least because users may install or upgrade the operating system themselves.
On Xen this is handled by Linux unplugging the emulated devices itself, and by installing the appropriate drivers on Windows.
However, when using KVM, the end user may need some control over whether or not emulated devices are offered. If they are offered, they are offered in addition to paravirtualised devices, not in substitution. If the user is running modern software, use of emulated devices is unnecessary; however, on particular servers it may be necessary to turn emulated devices on. The user can control the use of emulated devices using the Hypervisor Settings slider when Managing a Server.
In FCO v3.0 (version 3.0.3 or later), emulated devices are switched off by default. We recommend that you keep this default, as it is the least confusing for end users now that modern software supports paravirtualised devices. The user can still change the setting on a per-server basis.
Prior to version 3.0.3, emulated devices were switched on by default. If you want to switch emulated devices on by default, you can change the default at cluster level using Manage Cluster.
Changing this setting (having emulated devices on by default) causes a number of problems in producing reliable images, and creates user confusion. In particular, if an image uses both paravirtualised and emulated write access to the same disk (even for different partitions), snapshotting will be unreliable when using Universal Storage Support.
Selecting NIC type
When using VMware or Virtuozzo (formerly PCS) as the hypervisor, it is possible to specify which type of network interface card (NIC) is emulated. This ensures that all virtual machines across a cluster use the same type of NIC, which can help ensure live migrations are performed successfully.
The following NIC types can be emulated when VMware is the cluster hypervisor:
The following NIC types can be emulated when Virtuozzo is the cluster hypervisor:
NIC types can only be altered for virtual machines in Virtuozzo clusters. Containers are not affected by this setting.
Selecting operating system type
When using VMware as the hypervisor, it is possible to specify which type of operating system is used on clusters, images, and virtual machines. For a list of the operating system types that can be specified, see this page from VMware's documentation.
The available OS types may depend on the version of vSphere you are using. Flexiant Cloud Orchestrator supports versions 5.0, 5.1, and 5.5 of vSphere.
Setting server virtualisation type
When using Virtuozzo (formerly PCS) as the hypervisor, it is possible to specify whether a server is created based on container or virtual machine technology. Containers differ from virtual machines in that they have no virtualised hardware of their own, instead running on a share of the host server's physical hardware. This potentially allows more containers to be created on each physical server. This setting is available on images and clusters only.
Setting hypervisor-specific parameters
Hypervisor-specific parameters can be controlled on a cluster-wide basis using the nodetemplate.xml system. See Nodeconfig for more details.