Xen Project 4.4 Feature List
High Level Features
See this table for a comparison of the feature sets of different Xen Project releases. Compatibility information can be found in the following two tables: Host Operating Systems and Guest Operating Systems.
Note that Linux distributions and other operating systems will upgrade to Xen Project 4.4 according to their own release schedules.
Improved Flexibility in Driver Domains
Linux driver domains used to rely on udev events to launch backends for guests. With Xen Project 4.4, the dependency on udev is replaced by a custom daemon built on top of libxl, which provides greater flexibility for running user-space backends inside driver domains. For example, this allows driver domains to use Qdisk backends, which was not possible with udev.
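For example, a guest disk can be served by a backend running in a driver domain by naming that domain in the xl disk specification (the domain name and image path below are illustrative):

```
# Fragment of an xl guest configuration (names and paths are examples).
# 'backend=' directs the block backend to run in the named driver domain
# instead of dom0; 'backendtype=qdisk' selects the qemu-based backend.
disk = [ 'backend=storage-dom,backendtype=qdisk,format=qcow2,vdev=xvda,target=/var/lib/xen/images/guest.qcow2' ]
```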
Event Channel Scalability Improvements
Event channels are para-virtualized interrupts. These were previously limited to either 1024 or 4096 channels per domain. Domain 0 needs several event channels for each guest VM (for network/disk backends, qemu etc.). This limited the total number of VMs to around 300-500 (depending on VM configuration).
The FIFO-based event channel ABI allows for over 100,000 event channels and has improved fairness and multiple priorities. The increased limit allows for more VMs, which benefits large systems and cloud operating systems such as MirageOS, ErlangOnXen, OSv, HalVM.
The new ABI requires guest support (which will be available in Linux 3.14).
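The old limits follow from the 2-level event channel ABI, which uses a word-sized bitmap of word-sized bitmaps, so the maximum number of channels is the square of the machine word size:

```shell
# 2-level event channel ABI: limit = BITS_PER_LONG * BITS_PER_LONG
echo $((64 * 64))   # 4096 channels on a 64-bit hypervisor
echo $((32 * 32))   # 1024 channels on a 32-bit hypervisor
```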
Experimental Support for PVH Mode for Guests
PVH mode combines the best elements of HVM and PV into a mode which allows Xen Project to take advantage of many of the hardware virtualization features without needing the overhead of simulating devices of a physical computer. This will allow for increased efficiency, as well as reduced footprint in Linux and FreeBSD going forward.
More information on PVH: see https://www.linux.com/news/enterprise/systems-management/658784-the-spectrum-of-paravirtualization-with-xen-part-2 and http://blog.xenproject.org/index.php/2014/01/31/linux-3-14-and-pvh/
Intel Nested Virtualization declared "Tech Preview"
Nested virtualization provides virtualized hardware virtualization extensions to guests. This allows you to run Xen Project, KVM, VMware, or Hyper-V inside a guest for debugging or deployment testing. It also enables Windows 7's "XP Compatibility mode". Nested virtualization is not yet ready for production use, but has made significant gains in functionality and reliability, and is now ready to be declared a "tech preview". Please try it out and report any issues you find.
More information on nested virtualization: see Xen nested
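Nested virtualization is enabled per guest in the xl configuration; it requires an HVM guest with hardware-assisted paging:

```
# HVM guest configuration fragment enabling nested virtualization
builder = "hvm"
nestedhvm = 1   # expose hardware virtualization extensions to the guest
hap = 1         # hardware-assisted paging is required for nested HVM
```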
Improved Support for SPICE
SPICE is a protocol for virtual desktops which allows a much richer connection than display-only protocols like VNC. Xen Project 4.4 adds support for additional SPICE functionality, including vdagent, clipboard sharing, and USB redirection.
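The new SPICE functionality is controlled through xl configuration options; a fragment might look like the following (host and port values are examples):

```
# HVM guest configuration fragment enabling SPICE features
spice = 1
spicehost = "0.0.0.0"
spiceport = 6000
spicevdagent = 1            # enable the vdagent channel
spice_clipboard_sharing = 1 # requires spicevdagent
spiceusbredirection = 4     # number of USB redirection channels
```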
GRUB 2 Support of Xen Project PV Images (External)
In the past, Xen Project software required a custom implementation of GRUB called pvgrub. The upstream GRUB 2 (see http://www.gnu.org/software/grub/) project now has a build target which will construct a bootable PV Xen image. This ensures 100% GRUB 2 compatibility for pvgrub going forward.
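Building the Xen PV target from upstream GRUB 2 sources looks roughly like this (a sketch; exact options and paths may vary between GRUB releases):

```
# configure GRUB 2 for the Xen PV platform and build it
./configure --with-platform=xen --target=x86_64
make
# produce a standalone PV GRUB image, usable as a guest's kernel= image
grub-mkstandalone -O x86_64-xen -o grub-x86_64-xen /boot/grub/grub.cfg
```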
Indirect Descriptors for Block PV Protocol (Linux)
Modern storage devices work much better with larger chunks of data. Indirect descriptors have allowed the size of each individual request to triple, greatly improving I/O performance when running on fast storage technologies like SSD and RAID. This support is available in any guest running Linux 3.11 or higher (regardless of Xen Project version).
Improved kexec Support
kexec allows a running Xen Project host to be replaced with another OS without rebooting. This is primarily used to execute a crash environment to collect information after a Xen Project hypervisor or dom0 crash.
The existing functionality has been extended to:
- Allow tools to load images without requiring dom0 kernel support (which does not exist in upstream kernels).
- Improve reliability when used from a 32-bit dom0.
kexec-tools 2.0.5 or later is required.
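A typical setup reserves memory for the crash kernel on the hypervisor command line and then loads it with kexec-tools (paths and parameters below are examples):

```
# Xen command line: reserve memory for the crash kernel, e.g.
#   crashkernel=256M
# then load the crash kernel into the reserved region from dom0:
kexec -p /boot/vmlinuz-crash --initrd=/boot/initrd-crash.img \
      --append="root=/dev/sda1 single irqpoll"
```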
Improved XAPI and Mirage OS support in Xen Project environment
XAPI and Mirage OS are sub-projects within the Xen Project written in OCaml. Both are also used in XenServer (see http://xenserver.org/) and rely on the Xen Project OCaml language bindings to operate well. These language bindings have had a major overhaul, resulting in much better compatibility between XAPI, Mirage OS, and Linux distributions going forward.
Experimental Support for Guest EFI boot
EFI is the new booting standard that is replacing the BIOS. Some operating systems only boot with EFI, and some features, like SecureBoot, only work with EFI. Xen Project 4.4 adds experimental support for booting guests using EFI.
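Guest EFI boot uses the OVMF firmware. Assuming the tools were built with OVMF support, an HVM guest selects it in its xl configuration:

```
# HVM guest configuration fragment selecting EFI firmware
builder = "hvm"
bios = "ovmf"   # boot the guest via EFI (OVMF) instead of a legacy BIOS
```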
Improved Integration between GlusterFS and Xen Project
A blog post describing how to set up an iSCSI target with GlusterFS can be found on the Gluster blog.
Improved ARM support for Xen Project Hypervisor
A number of new features have been implemented:
- 64 bit Xen on ARM now supports booting guests
- Physical disk partitions and LVM volumes can now be used to store guest images, using the xen-blkback PV driver
- Significant stability improvements across the board
- ARM/multiboot booting protocol design and implementation in Xen Project and U-boot
- PSCI support in Xen Project
- Dom0 can perform DMA even on hardware without an IOMMU
- ARM and ARM64 ABIs in Xen Project are declared stable and maintained for backwards compatibility
- Significant usability improvements, such as automatic creation of guest device trees and improved handling of host DTBs.
- The process of adding new hardware platforms to Xen Project on ARM has been vastly simplified, making it easier for hardware and embedded vendors to port Xen on ARM to their boards.
- Xen on ARM now supports the Arndale board, Calxeda ECX-2000 (aka Midway), Applied Micro X-Gene Storm, TI OMAP5 and Allwinner A20/A30 boards.
- ARM server class hardware (Calxeda Midway) has been introduced in the Xen Project OSSTest automated testing framework.
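For instance, a guest image stored on an LVM volume can be exported to an ARM guest through the PV block driver with a standard phy: disk line (the volume group and volume names are examples):

```
# xl guest configuration fragment: back xvda with an LVM volume
disk = [ 'phy:/dev/vg0/guest-root,xvda,w' ]
```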
Early microcode loading
The hypervisor can now update CPU microcode early in the boot process. The microcode binary blob can be provided either as a standalone multiboot payload or as part of the initial kernel (dom0) ramdisk (initrd). To take advantage of this, use the latest version of dracut with the --early-microcode parameter, and specify ucode=scan on the Xen command line. For details, see the dracut manpage and http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
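Putting this together, the steps look roughly as follows (paths are examples; the dracut flag and Xen option are as described above):

```
# rebuild the initramfs with an early-microcode prepend
dracut --early-microcode --force /boot/initrd-$(uname -r).img $(uname -r)
# then boot Xen with microcode scanning enabled, e.g. in the GRUB entry:
#   multiboot /boot/xen.gz ucode=scan ...
```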
- Updated qemu to 1.6
- Updated SeaBIOS
You can find Xen Project 4.4 documentation in the following locations:
- Xen Project 4.4 Release Notes
- Xen Project 4.4 Man Pages
- Articles and tutorials related to new functionality in Xen Project 4.4
We want to thank the many contributors to Xen Project 4.4. For a complete list of contributions, see the Xen Project 4.4 Acknowledgements.
Xen Project 4.4 (and update releases) can be downloaded from the 4.4 Download Archives.