
On platforms where VMware is the hypervisor, it is possible to virtualise router nodes, thus avoiding the need for extra hardware.

  • Virtualised router nodes may take longer than usual to load initially, because each node PXE boots.
  • It is recommended that the switch used to communicate with customer VLANs and the upstream network is a VMware Distributed vSwitch. For information on how to set up and configure a Distributed vSwitch, see Configuration Customisations.

If the ESXi host running the virtualised router node has a pair of physical NICs for redundancy purposes, ARP traffic can be duplicated, causing MAC addresses to appear on the wrong bridge ports; this happens because the nodes use Linux bridges. To resolve the issue, completely remove the extraneous NIC from the ESXi host (setting it to stand-by is not enough), or consult the following external site for other potential solutions: http://communities.vmware.com/thread/262520.
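
A minimal sketch of the first workaround, using esxcli on the ESXi host; the vSwitch and uplink names (vSwitch0, vmnic1) are assumptions, and this applies to a standard vSwitch (for a Distributed vSwitch, remove the uplink through vCenter instead):

    # List the vSwitches and the uplinks currently attached to them.
    esxcli network vswitch standard list

    # Remove the extraneous uplink entirely (stand-by is not sufficient).
    esxcli network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch0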

Network configuration

The network diagram below outlines a suggested configuration for a platform using virtualised router nodes. Virtual resources are shown in blue, and physical resources in grey/black. For more details about the symbols used, see the Key section below.

In this example, the following configuration of VLANs is used:

  • Customer VLANs are on VLANs 200-4095.
  • Node management is on VLAN 20.
  • Upstream router connectivity is on VLAN 30.
  • Public access to vSphere server is on VLAN 10.

Key

The following symbols are used in the network diagram:

  • Upstream router
  • Switch
  • Distributed vSwitch
  • VLAN
  • LAN
  • Virtual trunk
  • Trunk
  • ESXi host
  • Customers' virtual machines (shown below the ESXi host they are running on)
  • Virtualised router node
  • vSphere server
  • LAN numbers
  • VLAN numbers

Installation instructions

To set up virtualised router nodes, the router nodes must be configured appropriately in both Flexiant Cloud Orchestrator and vSphere. To do this, perform the following steps:

  1. In vSphere, create a virtual machine for each router node to be virtualised, with the following settings: 

    We suggest running virtualised router nodes on ESXi hosts that only run infrastructure VMs, so that resources such as network bandwidth and CPU are not shared with customer VMs.

     

    • Guest Operating System = Ubuntu Linux (64-bit).
    • RAM = at least 4GB.
    • CPUs = at least 2.
    • Hard Drive = none. The router node will PXE boot and thus does not require a hard drive. 
    • 3 x network adapters of type VMXNET3 
      • 1 for node management (eth0). This should be connected to the cluster control server's NETWORK_NODE_RANGE network.
      • 1 for upstream router connection (eth1). This should be connected to the upstream router's NETWORK_PVIP_RANGE network via BGP/ARC/OSPF.
      • 1 for customer VMs (internal VLANs) (eth2). This should be connected to the automatically configured customer VLANs, which have IDs 200-4095 by default. 

        The port group for the customer VMs network should be set to VLAN ID 'All (4095)'.

        Alternatively, a virtualised router node can be created with only 2 network adapters by running the upstream router connection in access mode and pairing it with the internal VLANs NIC (eth2 in the example above). 
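
    As an optional alternative to creating the VM in the vSphere client, the sketch below uses govc (the govmomi CLI); it is illustrative only, and the VM name and port group names are assumptions:

    # Assumes GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD (and, where needed,
    # GOVC_DATASTORE/GOVC_RESOURCE_POOL) are already exported.
    # Create the VM with no hard drive, 2 vCPUs, 4GB RAM, powered off, and the
    # node management NIC (eth0) on an assumed "node-mgmt" port group.
    govc vm.create -g ubuntu64Guest -c 2 -m 4096 -on=false \
        -net "node-mgmt" -net.adapter vmxnet3 router-node-01

    # Add the upstream router NIC (eth1) and the customer VLANs NIC (eth2);
    # "upstream" and "customer-vlans" are assumed port group names.
    govc vm.network.add -vm router-node-01 -net "upstream" -net.adapter vmxnet3
    govc vm.network.add -vm router-node-01 -net "customer-vlans" -net.adapter vmxnet3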

  2. In vSphere, ensure that the following security settings are enabled for the NICs used to communicate with internal VLANs and the upstream router (eth1 and eth2 in the example above):
    • Promiscuous Mode: Accept
    • Forged Transmits: Accept
    • MAC Address Changes: Accept
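
    If a standard vSwitch is in use, the same security policy can also be applied from the ESXi shell; for a Distributed vSwitch, set the equivalent policy on the relevant port groups in vCenter. A brief sketch, assuming a vSwitch named vSwitch0:

    # Show the current security policy.
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0

    # Allow promiscuous mode, MAC address changes and forged transmits.
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
        --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true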
       
  3. Log in to the console of the Flexiant Cloud Orchestrator cluster control server (in single cluster deployments this is normally the Flexiant Cloud Orchestrator management server) and edit /etc/extility/nodeconfig/nodetemplate.xml, replacing the contents with the following: 

    <node>
    	<networks>
    		<network type="ip">
    			<name>eth0</name>
    			<interface>eth0</interface>
    			<function>node</function>
    		</network>
    		<network type="ip">
    			<name>eth0:1</name>
    			<interface>eth0:1</interface>
    			<function>storage1</function>
    		</network>
    		<network type="ip">
    			<name>eth1:0</name>
    			<interface>eth1:0</interface>
    			<function>PVIP</function>
    		</network>
    		<network type="ip">
    			<name>eth2</name>
    			<interface>eth2</interface>
    			<function>public</function>
    		</network>
    		<network type="bridge">
    			<name>pvip-bridge</name>
    			<interfaces>
    				<interface>eth1</interface>
    			</interfaces>
    		</network>
    	</networks>
    </node>
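
    After saving the file, it can be worth confirming that the XML is well formed before the nodes PXE boot. A quick check, assuming xmllint (from the libxml2-utils package) is available on the cluster control server:

    # Prints nothing and exits with status 0 if the file parses cleanly.
    xmllint --noout /etc/extility/nodeconfig/nodetemplate.xml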



  4. Add each virtualised router node to Flexiant Cloud Orchestrator. For information on how to do this, see Adding Router Nodes. When specifying the MAC address for the router node, use the MAC address of the node management NIC (eth0 in the example above). Start the router nodes once they have been added and they will PXE boot.
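
    The MAC address is shown in the VM's settings in vSphere; alternatively, if govc (the govmomi CLI) happens to be available, a sketch like the following can list it from a shell (the VM name is an assumption):

    # Lists the VM's network adapters, including their MAC addresses.
    govc device.info -vm router-node-01 'ethernet-*'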

  5. Install VMware tools using the node payload system. For information on how to do this, see Node Payload System.

  6. Place the VMware tools packages in /etc/extility/nodeconfig/payload/ROUTER/. The packages are available from http://packages.vmware.com/tools/esx/latest/ubuntu/dists/precise/main/binary-amd64/index.html. The following packages are required:

    root@example:/etc/extility/nodeconfig/payload# ls vmware-tools-*
    vmware-tools-core_9.0.1-2_amd64.deb
    vmware-tools-esx-nox_9.0.1-2_amd64.deb
    vmware-tools-foundation_9.0.1-2_all.deb
    vmware-tools-guestlib_9.0.1-2_amd64.deb
    vmware-tools-libraries-nox_9.0.1-2_amd64.deb
    vmware-tools-plugins-autoupgrade_9.0.1-2_amd64.deb
    vmware-tools-plugins-deploypkg_9.0.1-2_amd64.deb
    vmware-tools-plugins-guestinfo_9.0.1-2_amd64.deb
    vmware-tools-plugins-hgfsserver_9.0.1-2_amd64.deb
    vmware-tools-plugins-powerops_9.0.1-2_amd64.deb
    vmware-tools-plugins-timesync_9.0.1-2_amd64.deb
    vmware-tools-plugins-vix_9.0.1-2_amd64.deb
    vmware-tools-plugins-vmbackup_9.0.1-2_amd64.deb
    vmware-tools-services_9.0.1-2_amd64.deb


    The smallest file (vmware-tools-esx-nox_9.0.1-2_amd64.deb) is a meta package for all the others. 
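
    Before the router nodes next boot, the payload directory can be sanity-checked from the cluster control server; a small sketch using standard Debian tooling:

    # Confirm the .deb files are in place for the ROUTER payload.
    ls -l /etc/extility/nodeconfig/payload/ROUTER/vmware-tools-*.deb

    # Spot-check that one of the packages is a valid Debian archive.
    dpkg-deb --info /etc/extility/nodeconfig/payload/ROUTER/vmware-tools-core_9.0.1-2_amd64.deb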

  7. If required, the success of this operation can be verified in one of the following ways:
    • Using the command dpkg -l | grep vmware on the node console once the node has booted.
    • In vSphere, viewing the properties dialog for the virtual machine and checking whether there is an entry reading "VMware Tools installed".

     

  8. If the interface status within the router node is UNKNOWN, place the following script in /etc/extility/nodeconfig/payload/ROUTER/ on the cluster control server, adjusting interface names as required:

    root@example:/etc/extility/nodeconfig/payload# cat down-up-interfaces.sh
    #!/bin/bash
    # Cycle the upstream and customer VLAN interfaces so that their status is
    # reported correctly, and record that the script has run.
    echo "downing/upping eth1 and eth2" > /tmp/down-up-script
    /sbin/ifconfig eth1 down
    /sbin/ifconfig eth1 up
    /sbin/ifconfig eth2 down
    /sbin/ifconfig eth2 up

    Ensure the script is executable using the command chmod +x down-up-interfaces.sh. You can check that the script ran successfully using the command cat /tmp/down-up-script on the node console.
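
    The state that the node currently reports for each interface can be read directly on the node console; a brief check, assuming the same interface names as in the script above:

    # Prints the kernel's operational state for each interface
    # (for example "up", "down" or "unknown").
    cat /sys/class/net/eth1/operstate /sys/class/net/eth2/operstate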
