The nodeconfig service works using a node template, an XML file, which specifies which interfaces on the node are used for what purpose. Node templates are located in the directory /etc/extility/nodeconfig on the management server running the nodeconfig service.

A default template for nodes (nodetemplate.xml.sample) is supplied, which is installed as /etc/extility/nodeconfig/nodetemplate.xml.

We recommend that each node has identical hardware, and has its network interfaces in the same configuration. In practice, this may not be possible. To alleviate this problem, each node may form part of a config group of nodes with similar hardware.

When a node boots, it supplies its MAC address to the nodeconfig system, which uses the database to translate this into an IP address. The nodeconfig system then loads the appropriate node template, trying the following files in order:

  1. /etc/extility/nodeconfig/nodetemplate.xml-IPADDRESS
  2. /etc/extility/nodeconfig/nodetemplate.xml-MACADDRESS
  3. /etc/extility/nodeconfig/nodetemplate.xml-CONFIGGROUP
  4. /etc/extility/nodeconfig/nodetemplate.xml

Thus, if the MAC address is aa:bb:cc:dd:ee:ff, the IP address is 10.157.128.10, and the node is in config group 3, then the following files will be tried in order (note that letters in MAC addresses must be in lower case):

  1. /etc/extility/nodeconfig/nodetemplate.xml-10.157.128.10
  2. /etc/extility/nodeconfig/nodetemplate.xml-aa:bb:cc:dd:ee:ff
  3. /etc/extility/nodeconfig/nodetemplate.xml-3
  4. /etc/extility/nodeconfig/nodetemplate.xml

This same fallback system is used to control routing protocol configuration.

The nodeconfig system also provides the Node Payload System.

If the node configuration for any given node is changed, the node will need to be rebooted for the change to take effect.

This topic describes the following:

  • Format of the node template file
  • Changing Hypervisor Functionality
  • Enabling Multipath
  • Changing MTU settings

Format of the node template file

Node template files are located on the cluster controller at the locations given above.

Two sample files are provided, /etc/extility/nodeconfig/nodetemplate.xml.sample and /etc/extility/nodeconfig/nodetemplate.xml.vlans.sample.

The first of these, set out below, shows a simple configuration for a dual-ethernet node where:

  • The node network and storage network IPs share the same untagged VLAN on eth0
  • The PVIP network is on eth1 untagged, and the VLANs are tagged within eth1

This is the simplest usable configuration. The XML is set out below:

nodetemplate.xml.sample
<node>
        <networks>
                <network type="ip">
                        <name>eth0</name>
                        <interface>eth0</interface>
                        <function>node</function>
                </network>
                <network type="ip">
                        <name>eth0:1</name>
                        <interface>eth0:1</interface>
                        <function>storage1</function>
                </network>
                <network type="ip">
                        <name>eth1:0</name>
                        <interface>eth1:0</interface>
                        <function>pvip</function>
                </network>
                <network type="ip">
                        <name>eth1</name>
                        <interface>eth1</interface>
                        <function>public</function>
                </network>
                <network type="bridge">
                        <name>pvip-bridge</name>
                        <interfaces>
                                <interface>eth1</interface>
                        </interfaces>
                </network>
        </networks>
</node>

As you can see, within the networks element there are four 'ip' networks and one bridge. For each ip network, the name element gives the name of the interface, the interface element identifies the physical interface concerned, and the function element sets out what the interface is to be used for (this controls its numbering). Available functions are:

  • node
  • storage1 and storage2 (the latter being used for redundancy)
  • pvip
  • public

As the virtual router runs inside a container, it needs a bridge containing the appropriate interface to reach the PVIP network; that is what the pvip-bridge entry (the network element of type "bridge") provides.

A more complex example is set out below:

nodetemplate.xml.vlans.sample
<node>
        <networks>
                <network type="bond">
                        <name>bond0</name>
                        <interfaces>
                                <interface>eth0</interface>
                        </interfaces>
                        <function>node</function>
                </network>
                <network type="bond">
                        <name>bond1</name>
                        <interfaces>
                                <interface>eth1</interface>
                        </interfaces>
                        <function>public</function>
                </network>
                <network type="bond">
                        <name>bond2</name>
                        <interfaces>
                                <interface>eth2</interface>
                        </interfaces>
                </network>
                <network type="vlan">
                        <interface>bond2</interface>
                        <tag>111</tag>
                        <function>storage1</function>
                </network>
                <network type="vlan">
                        <interface>bond2</interface>
                        <tag>112</tag>
                        <function>storage2</function>
                </network>
                <network type="vlan">
                        <interface>bond1</interface>
                        <tag>102</tag>
                        <function>pvip</function>
                </network>
                <network type="bridge">
                        <name>pvip-bridge</name>
                        <interfaces>
                                <interface>bond1.102</interface>
                        </interfaces>
                </network>
        </networks>
</node>

This shows bonded ethernet interfaces being created (here with only one interface in each, though adding further interfaces to each bond's interfaces element would be simple, as shown below), called bond0, bond1 and bond2. The storage interfaces are on VLANs on bond2 (eth2) with tags 111 and 112, and the PVIP network is VLAN tagged with tag 102 on bond1. The public VLANs are on bond1 and the node network is on bond0. As bond0 contains eth0, that is permissible.
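
For instance, assuming the node had an additional physical NIC available (a hypothetical eth3 is used here purely for illustration), bond2 could be given a second member simply by adding it to the interfaces element:

<network type="bond">
        <name>bond2</name>
        <interfaces>
                <interface>eth2</interface>
                <interface>eth3</interface>
        </interfaces>
</network>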

See Network interfaces for nodes for details on network interface naming, and Physical Networking Configuration for details on which interfaces should connect to which networks.

Changing Hypervisor Functionality

This is an experimental, unsupported feature. Use it at your own risk.

On the KVM and Xen4 hypervisors, it is possible to change the characteristics of the hypervisor using the nodetemplate.xml file. Currently the only parameter that can be changed is which NIC device is emulated. The format of the entry is as follows:

<node>
    ...
    <hypervisor_data>
        <nicmodel>NICTYPE</nicmodel>
    </hypervisor_data>
    ...
</node>

The value of NICTYPE determines the model of network interface card that is emulated and presented to the guest. If no model is specified, an rtl8139 card is used.

Not all guest operating systems will support all emulation types. Some emulated NICs may perform better than others. Some may not be reliable.

On KVM, valid values of NICTYPE are:

  • i82551
  • i82557b
  • i82559er
  • ne2k_pci
  • ne2k_isa
  • pcnet
  • rtl8139
  • e1000
  • smc91c111
  • lance
  • mcf_fec

On Xen4, valid values of NICTYPE are:

  • ne2k_pci
  • ne2k_isa
  • pcnet
  • rtl8139
  • e1000
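
For example, to present an e1000 card to guests (e1000 appears in both the KVM and Xen4 lists above), the entry might look like this:

<node>
    ...
    <hypervisor_data>
        <nicmodel>e1000</nicmodel>
    </hypervisor_data>
    ...
</node>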

If you use multiple nodetemplate.xml files, then you will almost certainly want to ensure that the NIC emulation is consistent between them, or the presented NIC type will differ between nodes. Apart from causing user confusion, this is likely to stop live migration from working.

Enabling Multipath

If you are using iSCSI attached storage (either directly integrated or using our USS feature) you may wish to enable multipath. Multipath is disabled by default. To enable multipath, add the following element to nodetemplate.xml:

<node>
    ...
    <multipath>1</multipath>
    ...
</node>

You will need to reboot each node for this change to take effect.

Multipath is incompatible with local storage; do not attempt to use the two together.

Changing MTU settings

If you have configured OSPF to use an MTU other than Flexiant Cloud Orchestrator's default value of 1500, you can modify the MTU for an interface using the syntax outlined below, replacing VALUE with the appropriate MTU. 

In /etc/extility/nodeconfig/nodetemplate.xml:

<node>
...
	<network>
	...
		<mtu>VALUE</mtu> 
	...
	</network>
...
</node>

In /etc/extility/nodeconfig/evrtemplate.xml:

<evr>
...
	<interface>
	...
		<mtu>VALUE</mtu> 
	...
	</interface>
...
</evr>
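
For example, assuming you wanted the node network interface from the simple sample above to carry an MTU of 9000 (an illustrative value; substitute whatever MTU your OSPF configuration actually uses), the corresponding entry in nodetemplate.xml would look something like this:

<network type="ip">
        <name>eth0</name>
        <interface>eth0</interface>
        <function>node</function>
        <mtu>9000</mtu>
</network>

As with other node configuration changes, each node will need to be rebooted for the new MTU to take effect.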