
Factors affecting networking configuration

Network configurations will vary according to:

  • The degree of resilience you need.
  • Your choice of hypervisor.
  • The networking modes in use.

The different networks

Flexiant Cloud Orchestrator utilises a number of different networks:

  • The DMZ network: This has a public IP address and is normally located behind a DMZ.
  • The management network: This is on a private IP range and is used to communicate between the management servers (where there is more than one of them), with the storage unit (except where VMware is used as the hypervisor), and with the vSphere management server (where VMware is used as the hypervisor).
  • The node network: This is on a private IP range and is used to communicate between the nodes (other than VMware compute nodes) and the management stack.

    If you are using Virtuozzo (formerly PCS) as the hypervisor, be aware that the Virtuozzo compute nodes can be on a separate IP range from the default node network. Router nodes must, however, be on the default node network; these are only required if you are using VLAN-based networking.

  • The public network: This carries traffic between the compute nodes and router nodes in Public VLAN mode, and between compute nodes in Private VLAN and Interworking modes.
  • The PVIP network: This carries traffic between the compute nodes running PVIP and the upstream router, and between the router nodes and the upstream router.
  • The storage network: Other than on VMware, this is used to move storage traffic between the compute nodes and the SAN, or (for image and disk fetch operations) between the management servers and the SAN.

In addition, there is a network we refer to as the 'External network'; this will be whatever gives Flexiant Cloud Orchestrator external network access. It is generally not connected to any part of Flexiant Cloud Orchestrator; rather, it provides connectivity to the DMZ router (the other side of the relevant firewall) and the upstream router(s) (which are not part of Flexiant Cloud Orchestrator).
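Where you choose to keep these private networks on separate IP ranges, a quick planning check along the following lines can confirm that the candidate ranges do not overlap. This is only an illustrative sketch; the ranges shown are hypothetical examples, not values required by Flexiant Cloud Orchestrator.

    # Illustrative planning check; the example ranges below are hypothetical.
    import ipaddress
    from itertools import combinations

    candidate_ranges = {
        "management": ipaddress.ip_network("10.0.1.0/24"),
        "node":       ipaddress.ip_network("10.0.2.0/24"),
        "storage":    ipaddress.ip_network("10.0.3.0/24"),
    }

    # Report any pair of ranges that overlap before they are assigned.
    for (name_a, net_a), (name_b, net_b) in combinations(candidate_ranges.items(), 2):
        if net_a.overlaps(net_b):
            print(f"Warning: the {name_a} and {name_b} ranges overlap")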

Simple deployments

In a simple deployment with only one management server, it is possible to use a very simple network configuration. Here:

  • There is no DMZ; the management stack relies on its own iptables firewalls, and thus the DMZ network is replaced by the external network, which could also share the same network as the PVIP network.
  • The management, node and storage networks all share the same VLAN.
  • There are no public networks, as only PVIP mode is used.

Note that this is a minimal specification, not necessarily a recommended one.

A schematic for this mode of operation is set out below:

More complex deployments

More complex deployments may require the networks to be split up. A large single-cluster deployment might look like this:

Connecting the management stack

Your Flexiant Cloud Orchestrator management stack will talk to four networks:

  • The DMZ network: This has a public IP address and is normally located behind a DMZ.
  • The node network: This is on a private IP range and is used to communicate with the nodes that it manages.
  • The management network: This is on a private IP range and is used to communicate between the management servers (where there is more than one of them), with the storage unit (except where VMware is used as the hypervisor), and with the vSphere management server (where VMware is used as the hypervisor).
  • The storage network: This is used to allow the management stack to access the SAN (other than under VMware) to write disks or images fetched from the internet.
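As a sanity check before installation, you may wish to confirm that the management server has an address inside each private range it needs to reach. The sketch below illustrates the idea with hypothetical ranges and addresses; the DMZ address is public and is not covered here.

    # Sanity-check sketch only; all ranges and addresses are hypothetical examples.
    import ipaddress

    expected_ranges = {
        "management": ipaddress.ip_network("10.0.1.0/24"),
        "node":       ipaddress.ip_network("10.0.2.0/24"),
        "storage":    ipaddress.ip_network("10.0.3.0/24"),
    }

    # Addresses configured on the management server's interfaces.
    configured_addresses = [
        ipaddress.ip_address("10.0.1.10"),
        ipaddress.ip_address("10.0.2.10"),
        ipaddress.ip_address("10.0.3.10"),
    ]

    for name, net in expected_ranges.items():
        if not any(addr in net for addr in configured_addresses):
            print(f"Missing: no address on the {name} network ({net})")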

Connecting the nodes (All nodes under KVM and Xen4; router nodes under Virtuozzo, VMware, and Hyper-V)

On KVM and Xen4, each node is a diskless server. On Virtuozzo, VMware, and Hyper-V, each router node is a diskless server.

Each node will be attached to three networks (the node network, the public network and the storage network). These networks can share NICs. However, as with the management network, we strongly recommend that you do not run a node on a single NIC, as this can lead to performance problems. In particular, the public network should not share a port with anything else, as this would increase the chance that a denial of service attack could take the node out of service; in practice this produces a minimum requirement of two NICs (one for the public network, and one for the node network).

It is acceptable to share the storage and node networks on a single port, provided appropriate VLAN tagging is used (meaning you would require either two or four ports).

If you want network port resilience, you will need twice as many ports. In that case, you should spread the network ports across network cards so that no two ports on the same network share the same card; this allows you to configure them so that each LAN runs as a bonded pair across at least two physical cards.

The node network must be delivered to the nodes on an untagged port to allow PXE booting. See Network interfaces for nodes for more details.
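The port arithmetic above can be summarised in a few lines. This is just a restatement of the guidance in this section, not part of any Flexiant Cloud Orchestrator tooling; the function name is illustrative.

    # Summary of the NIC-count guidance above (illustrative only).
    def minimum_nics(share_node_and_storage: bool, port_resilience: bool) -> int:
        ports = 1                                    # the public network always gets its own port
        ports += 1 if share_node_and_storage else 2  # node and storage: one VLAN-tagged port or one each
        return ports * 2 if port_resilience else ports  # bonded pairs double the count

    assert minimum_nics(True, False) == 2   # the two-NIC minimum described above
    assert minimum_nics(True, True) == 4    # shared node/storage port, with resilience
    assert minimum_nics(False, False) == 3  # a port per network, no resilience
    assert minimum_nics(False, True) == 6   # a port per network, bonded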

Connecting the nodes (VMware compute nodes)

Flexiant Cloud Orchestrator's management stack requires a direct layer 2 connection to the VMware compute nodes on the node network for the Console service. If Public VLAN mode is utilised, however, the VMware nodes also need a public interface connected to the router nodes.

Connecting the SAN (under KVM and Xen)

The SAN must be connected to the storage and the management network. For instructions on how to do this, see KVM cluster with iSCSI.

Connecting the SAN (under VMware)

No direct connection to the SAN is needed under VMware, save that it must be connected to the vSphere management server and the VMware compute nodes in the normal manner required for a VMware installation. The SAN is managed by the vSphere management server.

Connecting the vSphere Management Server (under VMware)

The vSphere management server must be accessible to the cluster controller (in a single server deployment this will be on the same server as the management plane). We suggest it is connected to the management network, though all that is needed is that the cluster controller can open a TCP connection to it.
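A minimal reachability sketch is shown below, assuming a hypothetical vSphere management server address on the management network; port 443 is assumed because the vSphere API is normally served over HTTPS. All that matters is that the TCP connection from the cluster controller succeeds.

    # Reachability sketch; the host address is a hypothetical example.
    import socket

    VSPHERE_HOST = "10.0.1.50"  # hypothetical management-network address of the vSphere server
    VSPHERE_PORT = 443          # assumed HTTPS port for the vSphere API

    try:
        with socket.create_connection((VSPHERE_HOST, VSPHERE_PORT), timeout=5):
            print("Cluster controller can open a TCP connection to the vSphere management server")
    except OSError as exc:
        print(f"Cannot reach {VSPHERE_HOST}:{VSPHERE_PORT}: {exc}")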

Deploying multiple clusters

A multi-cluster deployment consists of:

  • One management plane
  • Two or more clusters

Each cluster that is deployed requires a similar configuration to the above. You should bear the following points in mind:

  • The management network must stretch between the clusters, or, failing that, secure connectivity must be established between them. If the sites are geographically separated, this can be via a VPN. For more precise details, see below.
  • The management plane may be deployed separately, or as part of one of the clusters.
  • The management plane does not require access to the storage, node, PVIP or public networks of any cluster (just the management network); hence if the management plane is deployed separately, it does not require those networks.
  • The clusters do not require a DMZ network (save for a cluster which is combined with the management plane).

Networking hints for multiple cluster deployments

Each cluster requires external internet connectivity to perform disk and image fetches, among other things. The default gateway is used for this.

On multiple cluster deployments, the control plane requires secure connectivity to each cluster. One obvious way to do this is simply to ensure that the clusters are on the same management network (that's the MGMT network in the diagrams above). If the clusters are all in the same building as the control plane, this is simple to achieve. If the clusters are remote from the control plane, this can be accomplished by Ethernet bridging technologies such as VPLS.

However, it is possible to configure clusters which are separate at the layer 3 level, provided the following are borne in mind:

  • The management network address on the control plane must be reachable over a secure network from the management network address on the cluster, and vice versa.
  • The management network range in the cluster can be overridden by specifying NETWORK_MANAGEMENT_RANGE in /etc/extility/cluster.cfg. This should be set to the range in which the cluster controller's interface on the management network lies.
  • When adding clusters, use the management address of the cluster controller, not the public address.
  • You will therefore almost inevitably require an interface on the management network which is different from the interface carrying the default route.

Do not attempt to give the external IP address of the cluster to the control plane as the cluster's IP address. Firstly, this would make the control channel between the cluster and the control plane insecure. Secondly, the cluster controller will only listen on the management IP address, so clustering will fail.
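To illustrate the range check described above, the sketch below verifies that the cluster controller's management-network address falls within the range given as NETWORK_MANAGEMENT_RANGE in /etc/extility/cluster.cfg. The range and address shown are hypothetical examples.

    # Illustrative check; the range and address are hypothetical example values.
    import ipaddress

    network_management_range = ipaddress.ip_network("10.20.0.0/16")  # value of NETWORK_MANAGEMENT_RANGE
    cluster_controller_mgmt_ip = ipaddress.ip_address("10.20.0.5")   # cluster controller's management address

    if cluster_controller_mgmt_ip in network_management_range:
        print("Cluster controller's management address lies within NETWORK_MANAGEMENT_RANGE")
    else:
        print("Mismatch: adjust NETWORK_MANAGEMENT_RANGE or use the management address when adding the cluster")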
