Networking
XCP-ng uses Open vSwitch at its core and supports many of its features.
Even if one NIC can be enough for your host, having a dedicated NIC for storage is really important to get consistent performance (especially if you use shared storage like iSCSI or NFS).
Concepts
This section describes the general concepts of networking in XCP-ng. For a deeper dive, check the Network Architecture section.
XCP-ng creates a network for each physical NIC during installation. When you add a server to a pool, the default networks are merged. This ensures that all physical NICs with the same device name are attached to the same network, allowing VMs to run seamlessly on any host of the pool.
Typically, you add a network to create a new external network (bridge), set up a new VLAN using an existing NIC, or create a NIC bond.
You can configure four different types of networks in XCP-ng:
- Default networks are associated with a physical network interface. Also called "external networks", they provide a bridge between a virtual machine and the physical network interface connected to the network, enabling the virtual machine to reach resources available through the server's physical NIC.
- Bonded networks create a bond between two or more NICs to create a single, high-performing channel between the virtual machine and the network.
- Private networks are used to connect VMs internally, without sending traffic outside the host.
- Global Private Networks extend the single server private network concept to allow VMs on different pools and/or hosts to communicate with each other by using the XOA SDN controller.
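As a hedged sketch of how the simplest of these looks from the xe CLI: a single-server private network is just a network with no PIF attached. The script below is a dry run that only prints the commands (the name private0 is an example):

```shell
#!/bin/sh
# Dry-run sketch: print (not run) the xe commands for a single-server
# private network. A private network is simply a network with no PIF.
CREATE="xe network-create name-label=private0"
# Listing PIFs for it should return nothing, confirming it is internal.
CHECK="xe pif-list network-name-label=private0"
echo "$CREATE"
echo "$CHECK"
```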
Network objects
XCP-ng uses three types of server-side software objects to represent networking entities. These objects are:
- A PIF, which is a way to connect outside of a host. PIF objects have a name, a description, a UUID, the parameters of the NIC they represent, and the network and server they are connected to. PIFs can represent:
  - A physical NIC
  - A VLAN on top of a physical NIC
  - A bond of multiple NICs
  - A tunnel interface (GRE/VXLAN)
- A VIF, which represents a virtual NIC on a virtual machine. VIF objects have a name and description, a UUID, and the network and VM they are connected to. They can be:
  - A PV driver-backed device
  - An emulated device
- A network, which is a virtual Ethernet switch on a host. Network objects have a name and description, a UUID, and the collection of VIFs and PIFs connected to them.
The xe CLI, Xen Orchestra, and XCP-ng Center allow you to configure networking options. You can control the NIC used for management operations, and create advanced networking features such as VLANs and NIC bonds.
Networks
Each XCP-ng server has one or more networks, which are virtual Ethernet switches. Networks that are not associated with a PIF are considered internal. Internal networks can be used to provide connectivity only between VMs on a given XCP-ng server, with no connection to the outside world. Networks associated with a PIF are considered external. External networks provide a bridge between VIFs and the PIF connected to the network, enabling connectivity to resources available through the PIF's NIC.
MTUs
Definition
The Maximum Transmission Unit (MTU) represents the largest packet size, measured in bytes, that a network layer (such as Ethernet or TCP/IP) can transmit without fragmenting the data. It effectively sets the upper limit for the payload size of a single network packet.
Typical MTU values
In most Ethernet networks, the default MTU is 1500 bytes, though this value can differ depending on the infrastructure. For instance, high-performance environments often use Jumbo frames, which support MTU sizes up to 9000 bytes to reduce overhead and improve throughput. When a packet exceeds the MTU, it is split into smaller fragments, which can increase latency and processing overhead.
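To make the fragmentation overhead concrete, here is a back-of-the-envelope calculation (IPv4 with a 20-byte header and no options; the 9000-byte payload is an arbitrary example):

```shell
#!/bin/sh
# Fragments needed to carry a 9000-byte IPv4 payload over a 1500-byte MTU link.
MTU=1500
IP_HDR=20                                   # IPv4 header without options
DATA=9000                                   # payload bytes to send
PER_FRAG=$(( (MTU - IP_HDR) / 8 * 8 ))      # fragment data is 8-byte aligned
FRAGS=$(( (DATA + PER_FRAG - 1) / PER_FRAG ))
echo "$FRAGS fragments of up to $PER_FRAG bytes each"
```

Each of those 7 fragments repeats the IP header, which is exactly the kind of overhead jumbo frames are meant to reduce.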
MTUs and virtualized environments
In virtualized environments like XCP-ng, maintaining consistent MTU settings across physical and virtual network interfaces is critical. Mismatched MTU values can lead to connectivity problems or degraded performance, particularly in storage or high-bandwidth scenarios.
For example, if a virtual machine (VM) in XCP-ng communicates over a network with a non-standard MTU, administrators should ensure the VM's network interface MTU aligns with the physical network's configuration to prevent packet loss or inefficiencies.
Support in XCP-ng
Non-standard MTUs (such as jumbo frames) are not supported on management interfaces. Using them can lead to serious issues, including failed pool member joins or unexpected network outages.
Non-standard MTUs are supported on storage interfaces. However, jumbo frames are unnecessary for most modern workloads, and can introduce more issues than benefits. Use them with caution.
While these limitations may be addressed in future updates, stick to default MTU settings on management interfaces to ensure operational stability.
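If you do enable jumbo frames on a storage network, the xe-level knob is the network's MTU parameter. The sketch below is a dry run that only prints the commands; the storage name and the network UUID are placeholders:

```shell
#!/bin/sh
# Dry-run sketch: print the xe commands giving a storage network a 9000-byte MTU.
# Never do this on the management network (see the warning above).
NEW='xe network-create name-label=storage MTU=9000'
SET='xe network-param-set uuid=<network UUID> MTU=9000'   # for an existing network
echo "$NEW"
echo "$SET"
# Note: PIFs/VIFs on an existing network must be unplugged and replugged
# for a changed MTU to take effect.
```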
VLANs
VLANs, as defined by the IEEE 802.1Q standard, allow a single physical network to support multiple logical networks. XCP-ng hosts support VLANs in multiple ways.
VLANs for VMs
Switch ports configured as 802.1Q VLAN trunk ports can be used with XCP-ng VLAN features to connect guest virtual network interfaces (VIFs) to specific VLANs. In this case, XCP-ng server performs the VLAN tagging/untagging functions for the guest, which is unaware of any VLAN configuration.
XCP-ng VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified VLAN tag. You can connect XCP-ng networks to the PIF representing the physical NIC to see all traffic on the NIC. Alternatively, connect networks to a PIF representing a VLAN to see only the traffic with the specified VLAN tag. You can also connect a network such that it only sees the native VLAN traffic, by attaching it to VLAN 0.
Using VLANs for logical network isolation is easy: create a new network with a VLAN ID, and all virtual interfaces created on this network will transparently have their traffic tagged with this VLAN. No configuration is needed inside your VM.
First, go to the Xen Orchestra menu, "New", then "Network":

Then, select the pool where you want to create this network, and fill in the required fields: physical interface, name, description, and the VLAN number:

Finally, click on "Create network":
That's it!
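The same result can be obtained from the xe CLI with xe vlan-create. The sketch below is a dry run that only prints the command; both UUIDs are placeholders you would obtain from xe network-create and xe pif-list:

```shell
#!/bin/sh
# Dry-run sketch: print the xe command tagging a network with VLAN 42.
NET_UUID='<new network UUID>'   # from: xe network-create name-label=vlan42
PIF_UUID='<physical PIF UUID>'  # from: xe pif-list (pick the untagged PIF)
echo "xe vlan-create network-uuid=$NET_UUID pif-uuid=$PIF_UUID vlan=42"
```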
Bonds
It's the same as the previous section: just check "Bonded network" and select multiple PIFs in the interface selector. You can use VLANs or not, it doesn't matter!
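From the xe CLI, the equivalent is xe bond-create. This dry-run sketch only prints the commands; the UUIDs are placeholders, and the bond mode shown (active-backup) is one example among balance-slb, active-backup, and lacp:

```shell
#!/bin/sh
# Dry-run sketch: print the xe commands that bond two NICs into one network.
CREATE='xe network-create name-label=bond0'
BOND='xe bond-create network-uuid=<network UUID> pif-uuids=<PIF1 UUID>,<PIF2 UUID> mode=active-backup'
echo "$CREATE"
echo "$BOND"
```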
Manage physical NICs
Add a new NIC
Once a NIC is physically installed, in Xen Orchestra, go to your host's networking tab and click refresh.

This can also be done on the command line. After physically installing a new NIC, run the xe pif-scan command on the host to have this NIC added as an available PIF.
xe pif-scan host-uuid=<HOST UUID>
Check the new NIC's UUID:
xe pif-list
Plug the new NIC:
xe pif-plug uuid=<NIC UUID>
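The three steps above can be chained: --minimal makes xe print bare UUIDs that are easy to capture in a variable (assuming a single host, so host-list returns one UUID). This dry-run sketch only prints the commands; eth2 is an example device name:

```shell
#!/bin/sh
# Dry-run sketch: print a chained version of scan -> find PIF -> plug.
S1='xe pif-scan host-uuid=$(xe host-list --minimal)'   # single-host assumption
S2='PIF=$(xe pif-list device=eth2 --minimal)'
S3='xe pif-plug uuid=$PIF'
printf '%s\n' "$S1" "$S2" "$S3"
```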
Renaming NICs
In a pool, all NICs across your hosts should match up exactly. So if your management is NIC 0 and your 10Gbit storage interface is NIC 4 on host 1, it should be the same on host 2.
If for some reason the NIC order between hosts doesn't match up, you can fix it with the interface-rename command.
These commands are meant to be done on non-active interfaces. Typically this will be done directly after install, before even joining a pool.
interface-rename --help
This will display all available options.
interface-rename --list
This will display the current interface mapping/assignments.
Interfaces you wish to rename need to be downed first:
ifconfig eth4 down
ifconfig eth8 down
The most common use will be an update statement like the following:
interface-rename --update eth4=00:24:81:80:19:63 eth8=00:24:81:7f:cf:8b
This example sets the MAC address for eth4 and eth8, switching them in the process.
The XAPI database needs the old PIFs removed. First list your PIFs for the affected NICs:
xe pif-list
xe pif-forget uuid=<uuid of eth4>
xe pif-forget uuid=<uuid of eth8>
Reboot the host to apply these settings.
The interfaces by their new names need to be re-enabled:
ifconfig eth4 up
ifconfig eth8 up
The new interfaces need to be introduced to the PIF database:
xe host-list
Make note of the host uuid. Then introduce the interfaces:
xe pif-introduce device=eth4 host-uuid=<host uuid> mac=<mac>
xe pif-introduce device=eth8 host-uuid=<host uuid> mac=<mac>
By renaming/updating interfaces like this, you can ensure all your hosts have the same interface order.
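As a recap, the whole sequence can be condensed into one dry-run script (it only prints each step; the MACs are the example values from above and the PIF/host UUIDs are placeholders):

```shell
#!/bin/sh
# Dry-run recap of the renaming sequence; prints each step without running it.
N=0
for STEP in \
  'ifconfig eth4 down' \
  'ifconfig eth8 down' \
  'interface-rename --update eth4=00:24:81:80:19:63 eth8=00:24:81:7f:cf:8b' \
  'xe pif-forget uuid=<old eth4 PIF UUID>' \
  'xe pif-forget uuid=<old eth8 PIF UUID>' \
  'reboot' \
  'ifconfig eth4 up' \
  'ifconfig eth8 up' \
  'xe pif-introduce device=eth4 host-uuid=<host UUID> mac=00:24:81:80:19:63' \
  'xe pif-introduce device=eth8 host-uuid=<host UUID> mac=00:24:81:7f:cf:8b'
do
  N=$((N + 1))
  echo "$N. $STEP"
done
```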
Remove a physical NIC
Before removing a physical NIC, ensure that no VMs are using the interface. Shut down the host, physically remove the NIC, and boot. After booting, the PIF needs to be removed. You can do it this way:
xe pif-forget uuid=<PIF UUID>
The <PIF UUID> can be obtained with either xe pif-list or with Xen Orchestra. This command only needs to be run once on the pool.
SDN controller
An SDN controller is provided by a Xen Orchestra plugin. Thanks to that, you can enjoy advanced network features.
GRE/VXLAN tunnels
Private networks (using tunnels) are very handy when you want to securely access resources that are not on the same physical network.
So we want a network that is:
- reachable by all the hosts in a pool or even between different pools!
- unreachable by anything outside the network
- reactive when the pool changes (new host, host ejected, PIF unplugged, etc.)
That's exactly what you can have thanks to the XO SDN controller (here via GRE tunnels):

To create a private network, go in Xen Orchestra, New/Network and select "Private Network":

Encryption
To be able to encrypt the networks, the openvswitch-ipsec package must be installed on all the hosts:
yum install openvswitch-ipsec --enablerepo=xcp-ng-testing
systemctl enable ipsec
systemctl enable openvswitch-ipsec
systemctl start ipsec
systemctl start openvswitch-ipsec
More information is available in the official XO documentation for the SDN controller.
OpenFlow Rules
This section describes the new implementation (currently available in BETA). For the previous implementation, see the Xen Orchestra documentation.
This is still in BETA. Do not use in production!
xcp-ng-xapi-plugins >= 0.15.0 is required. To check the version, run yum info xcp-ng-xapi-plugins.
Using Open vSwitch OpenFlow rules, you can set up traffic rules limiting some network accesses directly at the hypervisor vswitch level. No need for an additional layer of firewalling or filtering setup or equipment.
There are 2 ways to configure OpenFlow rules:
- Through Xen Orchestra's web UI (currently only available for per VIF rules)
- Using xo-cli, as explained in the Xen Orchestra documentation
For debugging purposes, internals are documented in the sdncontroller.py plugin repository.
Common errors
TLS connection issue
The error would look like this:
Client network socket disconnected before secure TLS connection was established
It means the TLS certificate on the host, used to identify an SDN controller, doesn't match the plugin's. To solve it:
- unload the SDN Controller plugin
- in the plugin config, set the override-certs option to on (it will allow the plugin to uninstall the existing certificate before installing its own)
- load the plugin
The issue should be fixed.
Static routes
Sometimes you need to add extra routes to an XCP-ng host. It can be done manually with an ip route add 10.88.0.0/14 via 10.88.113.193 (for example). But it won't persist after a reboot.
To properly create persistent static routes, first create your Xen network interface as usual. If you already created this network previously, get its UUID with xe network-list. You're typically looking for the interface you have a management IP on, something like xapi0 or xapi1. If you're not sure which one it is, run ifconfig and find the interface that has the IP address this static route traffic will exit from. Then get that interface's UUID using the previous xe network-list command.
Now insert the UUID into the example command below. Also change the IPs to what you need, using the following format: <network>/<netmask>/<gateway IP>. For example, our previous ip route add 10.88.0.0/14 via 10.88.113.193 translates into:
xe network-param-set uuid=<network UUID> other-config:static-routes=10.88.0.0/14/10.88.113.193
You must restart the toolstack on the host for the new route to be added!
You can check the result with route -n afterwards to see if the route is now present. If you must add multiple static routes, they must be in one command, with the routes separated by commas. For example, to add both 10.88.0.0/14 via 10.88.113.193 and 10.0.0.0/24 via 192.168.1.1, you would use this:
xe network-param-set uuid=<network UUID> other-config:static-routes=10.88.0.0/14/10.88.113.193,10.0.0.0/24/192.168.1.1
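Since the comma-separated format is easy to get wrong, here is a small pure-shell helper (the route values are the examples from above; the network UUID is a placeholder) that joins individual <network>/<netmask>/<gateway> entries into the value expected by other-config:static-routes:

```shell
#!/bin/sh
# Join space-separated route specs into the comma-separated static-routes value.
ROUTES='10.88.0.0/14/10.88.113.193 10.0.0.0/24/192.168.1.1'
VALUE=$(echo "$ROUTES" | tr ' ' ',')
echo "xe network-param-set uuid=<network UUID> other-config:static-routes=$VALUE"
```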
Removing static routes
To remove static routes you have added, insert the same network UUID from before into the command below:
xe network-param-remove uuid=<network UUID> param-key=static-routes param-name=other-config
A toolstack restart is needed as before.
XAPI might not remove the already-installed route until the host is rebooted. If you need to remove it ASAP, you can use ip route del 10.88.0.0/14 via 10.88.113.193. Check that it's gone with route -n.
Full mesh network
This page describes how to configure a three-node meshed network (see Wikipedia), which is a very cheap way to build a 3-node HA cluster that can host a Ceph cluster or similar clustered solutions requiring 3 nodes for full high availability.
A meshed network requires no physical network switches: the 3 physical nodes are interlinked with each other using multiple network interfaces.
Example with 3 nodes, each with 3 NICs: 1 for the WAN connection and 2 to interlink with the remaining 2 nodes:

Right now, the only known-to-work option is the bridge network backend, but hopefully in the future it will be possible to set up a meshed network using Open vSwitch as well (should you know how, please update this wiki).
Using bridge backend
These steps require rebooting all 3 nodes multiple times. They also require the bridge network backend instead of Open vSwitch, which results in the loss of some functionality and is not commercially supported.