Xen Project 4.5 Feature List
Latest revision as of 16:09, 5 March 2015

x86 Hypervisor-Specific Updates

On the x86 side, development has focused on improving performance on various fronts:

  • The HPET code has been modified to provide timer values faster and at better resolution.
  • Memory is scrubbed in parallel on bootup, giving a huge boot-time improvement on large-scale machines (1TB of RAM or more).
  • PVH initial domain support for Intel has been added; Linux can now run as a PVH dom0. PVH is an extension to the classic Xen Project Paravirtualization (PV) that uses the hardware virtualization extensions available on modern x86 server processors. Requiring no support other than the hypervisor itself, a PVH kernel boots as the first guest and takes on the responsibilities of the initial domain, known as dom0. This means the Xen Project Hypervisor can take advantage of contemporary hardware features like virtual machine extensions (VMX) to significantly expedite execution of the initial domain: instead of asking the hypervisor to handle certain operations, dom0 can execute them natively without compromising security. For more background, Virtualization Spectrum is an excellent introduction to PVH.
  • Lower interrupt latency for PCI passthrough on large-scale machines (more than 2 sockets).
  • Multiple IOREQ servers for guests – a technique that assigns several QEMU instances to one domain. This speeds up guest operation by letting multiple backends (QEMUs) handle different emulated devices.
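Several of these x86 features are controlled from the hypervisor boot line. As a sketch, the fragment below uses the dom0pvh and bootscrub option names from the xen-command-line documentation; the GRUB entry layout, file paths and memory sizes are illustrative assumptions, not defaults:

```
# Illustrative GRUB2 stanza booting Xen with a PVH dom0
multiboot /boot/xen.gz dom0pvh=1 dom0_mem=4096M,max:4096M
# dom0pvh=1 boots the initial domain as PVH.
# If boot-time scrub time is still a concern, scrubbing can be
# disabled entirely with: bootscrub=false
module /boot/vmlinuz root=/dev/sda1 ro
module /boot/initrd.img
```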

We also expanded support for:

  • Soft affinity for VCPUs (aka NUMA affinity) – Xen has had NUMA-aware scheduling since 4.3. Xen 4.5 builds on that to make it more general, and useful on non-NUMA systems too. It is now possible for the sysadmin to define an arbitrary set of physical CPUs on which vCPUs prefer to run, and Xen will try as hard as possible to honor this preference.
  • Security improvements: guest introspection expansion. There is an excellent video of this on Youtube, as well as the presentation (also part of the video). VM introspection using Intel EPT / AMD RVI hardware virtualization functionality builds on the Xen Project Hypervisor's memory inspection APIs introduced in 2011. This addresses a number of security issues from outside the guest OS without relying on functionality that can be rendered unreliable by advanced malware. The approach works by auditing access to sensitive memory areas in guests, using hardware support, unobtrusively and with minimal overhead; control software running within a dedicated VM can then allow or deny attempts to access sensitive memory based on policy and security heuristics. You can also find an excellent introduction to VM introspection here.
  • Serial support for debug purposes. This covers PCIe cards (Oxford ones) and newer Broadcom ones found on blades.
  • Real Time Scheduler - improved multi-core support allows users to predict timing and performance of VMs. Video at Youtube and presentation at Linux Foundation and blog.
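As a sketch of how soft affinity is expressed, the cpus_soft key is the Xen 4.5 xl.cfg name for this setting; the vCPU count and CPU range below are illustrative:

```
# xl.cfg fragment: the 4 vCPUs prefer pCPUs 0-3 (e.g. NUMA node 0),
# but may still be scheduled elsewhere under load
vcpus     = 4
cpus_soft = "0-3"
```

The preference can also be changed at runtime with xl vcpu-pin, which in 4.5 accepts an optional soft-affinity argument after the hard one (e.g. xl vcpu-pin guest all - 0-3, where - leaves hard affinity untouched).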

Intel Hypervisor-Specific Updates

  • Broadwell Supervisor Mode Access Prevention. The LWN article has an excellent explanation of it – but a short summary is that it restricts the kernel from accessing the user-space pages. This feature in Xen also added alternative assembler support to patch the hypervisor during run-time (so that we won’t be running these operations on older hardware).
  • Haswell Cache QoS Monitoring aka Intel Resource Director Technology is “a new area of architecture extension that seeks to provide better information and control of applications running on Intel processors. The first few features we will cover, documented in the Software Developers’ Manual, are to monitor application thread LLC usage, to provide a means of directing such usage and provide more information on the amount of memory traffic out of the LLC.” (from xen-devel)
  • SandyBridge (vAPIC) extensions. Xen 4.3 added support for VT-d Posted Interrupts. Xen 4.5 adds extensions that let PVHVM guests take advantage of them: instead of using the vector callback, the guest can utilize the vAPIC to lower its VMEXIT overhead, leading to lower interrupt latency and performance improvements for I/O-intensive workloads in PVHVM guests.

AMD Hypervisor-Specific Updates

  • Fixes in the microcode loading.
  • Data Breakpoint Extensions and masking MSR support for Kabini, Kaveri and later parts. This allows the administrator “to specify cpuid masks to help with cpuid levelling across a pool of hosts” (from the xen-command-line manual).
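CPUID levelling of this sort is driven from the hypervisor command line. The cpuid_mask_* option names below appear in the xen-command-line manual; the mask values are arbitrary examples chosen for illustration:

```
# Xen boot parameters clearing selected CPUID feature bits so that
# every host in a migration pool advertises a common baseline
cpuid_mask_ecx=0x001fffff cpuid_mask_ext_ecx=0x000001ff
```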

ARM Hypervisor-Specific Updates

The ARM ecosystem operates differently from the x86 world: ARM licensees design new chipsets and features, and OEMs manufacture platforms based on these specifications. OEMs designing ARM-based platforms determine what they need on the SoC (that is, the System on Chip), so they can selectively enable or disable functionality they consider important (or unimportant). ARM provides the Intellectual Property (IP) and standards from which OEMs can further specialize and optimize. The list of features that the Xen Project Hypervisor supports on ARM is therefore not tied to a specific platform, but rather to the functionality SoCs provide.

New updates include:

  • Support for guests with more than 1TB of RAM.
  • The Generic Interrupt Controller (GIC) v3 is supported in Xen 4.5. v3 is very important because it introduces support for Message Signaled Interrupts (MSI), emulation of GICv3 for guests and, most importantly, support for more than 8 CPUs. Many of the new features are not used by Xen yet, but the driver is on par with v2.
  • Power State Coordination Interface 0.2 (PSCI) – important in embedded environments where power consumption needs to be kept to the absolute minimum. It allows us to power CPUs down and up, suspend them, etc.
  • UEFI booting. On ARM64 servers both U-Boot and UEFI can be used to boot the OS.
  • IOMMU support (SMMUv1). For isolation between guests, ARM platforms can come with an IOMMU chipset based on the SMMU specification.
  • Super Pages (2MB) support in Xen. Using super pages for the guest pseudo-physical to physical translation tables significantly improves overall guest performance.
  • Passthrough – the PCI passthrough features did not make it on time, but doing passthrough of MMIO regions did. In the ARM world it is quite common to have no PCIe devices and to only access devices using MMIO regions. As such this feature allows us to have driver domains be in charge of network or storage devices.
  • Interrupt latency reduction: no maintenance interrupts. Please see Stefano’s slides (recording on YouTube).
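On ARM, the hypervisor and dom0 kernel are described to the boot firmware through the device tree. The fragment below follows the /chosen bindings documented in docs/misc/arm/device-tree/booting.txt in the Xen tree; the load address, size and command lines are illustrative assumptions:

```
chosen {
    xen,xen-bootargs  = "console=dtuart dtuart=serial0";
    xen,dom0-bootargs = "console=hvc0 root=/dev/mmcblk0p2 rw";

    module@0 {
        /* dom0 kernel image, loaded by U-Boot or UEFI */
        compatible = "xen,linux-zimage", "xen,multiboot-module";
        reg = <0x80080000 0x00800000>;  /* load address, size */
    };
};
```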

With these new features, the following motherboards are now supported in Xen Project Hypervisor 4.5:

  • AMD Seattle
  • Broadcom 7445D0 A15
  • Midway (Calxeda)
  • Vexpress (ARM Ltd.)
  • OMAP5, OMAP6, DRA7 (Texas Instruments)
  • Exynos5250 (Exynos 5 Dual) and Exynos 5410 (Odroid-XU) (Samsung SoC for Arndale and various smartphones and tablets)
  • SunXI (AllWinner), aka A20/A21, CubieTruck, CubieBoard
  • Mustang (Applied Micro-X-Gene, the ARMv8 SoC)
  • McDivitt aka HP Moonshot cartridge (Applied Micro X-Gene)

The Xen Project also maintains this list of ARM boards that work with Xen Project software.

Toolstack Updates

Xen Project software now uses a C-based toolstack, xl (built on libxl), replacing the obsolete Python-based toolstack xend. Users are not affected by this move, since xm and xl offer feature parity, and the more modern architecture is easier to maintain. In fact, the move greatly simplifies managing Xen instances, as other toolstacks such as libvirt are also C-based and less complex. libvirt and XAPI now use libxl as well. For more background on this move, check out our new hands-on tutorial XM to XL: A Short, but Necessary, Journey.

Additional toolstack changes include:

  • VM Generation ID. This allows Windows Server 2012 (and later) Active Directory domain controllers to be migrated.
  • Remus initial support – provides high availability by checkpointing guest state at high frequency. Also see COLO - Coarse Grain Lock Stepping and COLO - Coarse Grain Lock Stepping SLES.
  • Libxenlight (libxl) JSON infrastructure support. This allows libxenlight to use JSON to communicate with other toolstacks.
  • Libxenlight now keeps track of domain configuration. It uses the JSON infrastructure to do so, bringing feature parity with xend.
  • Systemd support. One source base now contains the systemd unit files, which the various distributions can use instead of each generating its own.
  • oxenstored scalability improvements: the fd limit was raised from 1024 to 4096, and select() was replaced with poll().
  • pygrub updates to support latest GRUB versions / distros.

On the libvirt side, changes include:

  • PCI/SR-IOV passthrough, including hot{un}plug
  • Migration support
  • Improved concurrency through job support in the libxl driver – no more locking entire driver when modifying a domain
  • Improved domxml-{to,from}-native support, e.g. for converting between xl config format and libvirt domXML
  • PV console support
  • Improved qdisk support
  • Support for <interface type=’network’> – allows using libvirt-managed networks in the libxl driver
  • Support PARAVIRT and ACPI shutdown flags
  • Support PARAVIRT reboot flag
  • Support for domain lifecycle event configuration, e.g. on_crash, on_reboot, etc
  • A few improvements for ARM
  • Lots of bug fixes
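As an example of the new <interface type='network'> support, a libvirt domXML fragment for a libxl guest might look like the following; the network name default and the MAC address are assumptions for illustration:

```
<interface type='network'>
  <!-- attach the guest NIC to a libvirt-managed network -->
  <source network='default'/>
  <mac address='00:16:3e:1a:b2:c3'/>
</interface>
```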

QEMU Updates

Xen Project 4.5 will ship with QEMU v2.0 and SeaBIOS v1.7.5 with the following updates:

  • Bigger PCI MMIO hole in QEMU via the mmio_hole parameter in the guest config, which allows configuring the size of the MMIO hole below 4GB. This lets users pack more legacy PCI devices for passthrough into a guest when using qemu-upstream.
  • QEMU is now built for ARM providing backend support for framebuffer (VNC).
  • HVM guest direct kernel boot support (on x86).
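Both the mmio_hole and direct-kernel-boot features surface as xl guest-config keys. The sketch below assumes mmio_hole takes a size in MiB (per the xl.cfg documentation) and uses illustrative file paths:

```
# HVM guest using qemu-upstream with an enlarged MMIO hole
builder              = "hvm"
device_model_version = "qemu-xen"
mmio_hole = 2048          # MMIO hole below 4GB (assumed MiB), leaving
                          # room for passed-through legacy PCI devices
# Direct kernel boot (x86 HVM): skip the guest bootloader entirely
kernel  = "/boot/vmlinuz-guest"
ramdisk = "/boot/initrd-guest"
cmdline = "root=/dev/xvda1 ro"
```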


The 4.5 release also takes advantage of new features in Linux and FreeBSD such as PVH support.

Also See

Source Download

The sources are located in the git tree or one can download the tarball:

  • xen: with a recent enough git, just pull the proper tag (RELEASE-4.5.0) from the main repo directly:
  • git clone -b RELEASE-4.5.0 git://xenbits.xen.org/xen.git

With an older git version (and/or if that does not work, e.g., complaining with a message like this: Remote branch RELEASE-4.5.0 not found in upstream origin, using HEAD instead), do the following:

tarball: the 4.5.0 tarball and its signature are available for download.