Xen 4.5 RC3 test instructions
- 1 What needs to be tested
- 2 Installing
- 3 Known issues
- 4 Building xen-source with bison 2.4.1 and 3.0.2 can fail
- 5 xl built under Gentoo seg-faults when doing PCI passthrough
- 6 Test instructions
- 7 Reporting Bugs (& Issues)
- 8 Reporting success
What needs to be tested
- Making sure that Xen 4.5 compiles and installs properly on different software configurations; particularly on distros
- Making sure that Xen 4.5, along with appropriately up-to-date kernels, work on different hardware.
For more ideas about what to test, please see Testing Xen.
- xen: with a recent enough git, just pull the proper tag (4.5.0-rc3) from the main repo directly:
git clone -b 4.5.0-rc3 git://xenbits.xen.org/xen.git
With an older git version (or if that does not work, e.g., it complains with a message like: Remote branch 4.5.0-rc3 not found in upstream origin, using HEAD instead), do the following:
git clone git://xenbits.xen.org/xen.git ; cd xen ; git checkout 4.5.0-rc3
- tarball: here is a 4.5.0 RC3 tarball (and its signature)
- RPMs: Michael Young has graciously provided temporary Xen 4.5.0-rc3 RPMs. They are at Koji temporary build. You can use a scratch downloader to fetch all the RPMs; the command line would be:
./download-scratch.py -t 8289186. Since 'virt-manager', 'virsh' and 'qemu-system-x86' are all built against Xen 4.4, you will have dependency problems. You can use xen-compat-libs if you want to reinstall the old Xen libraries.
Building xen-source with bison 2.4.1 and 3.0.2 can fail
The work-around is to use ./configure BISON=/bin/true
xl built under Gentoo seg-faults when doing PCI passthrough
- Remove any old versions of the Xen toolstack and userspace binaries (including
- Download and install the most recent Xen 4.5 RC, as described above. Make sure to check the
README for changes in required development libraries and procedures. Some particular things to note:
- Since Xen 4.4 the default installation path has changed to
/usr/local. Take extra care when removing any old versions to allow for this.
Once you have a Xen 4.5 RC installed, check that you can install a guest and use it in the ways you normally would, i.e. that your existing guest configurations, scripts, etc. still work.
In particular, if you were using the (deprecated) xm/XEND toolstack, note that it is now REMOVED; please try your normal use cases with the XL toolstack. The XL page has some information on the differences between XEND and XL.
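As a quick sanity check with the XL toolstack, a minimal PV guest configuration along the following lines can be used; every value here (the guest name, disk path, bridge, and memory size) is a placeholder and must be adapted to your system:

```
# /etc/xen/testguest.cfg -- illustrative values only
name = "testguest"
memory = 1024
vcpus = 1
bootloader = "pygrub"
disk = [ "phy:/dev/vg0/testguest,xvda,w" ]
vif = [ "bridge=xenbr0" ]
```

Then start it with xl create /etc/xen/testguest.cfg and confirm it appears in the output of xl list.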
Specific RC3 things
- systemd integration works
- pygrub works with older guests
- Passthrough of legacy PCIe devices works now
- EFI booting works even when Fibre Channel cards modify the EFI memory map during loading.
Specific ARM Test Instructions
To allow auto-translated domains to directly access specific hardware I/O memory pages belonging to a device that is not IOMMU-protected, use the iomem configuration option, whose usage is described below.
iomem=[ "IOMEM_START,NUM_PAGES[@GFN]", "IOMEM_START,NUM_PAGES[@GFN]", ... ]
IOMEM_START is a physical page number. NUM_PAGES is the number of pages, beginning with IOMEM_START, to allow access to. GFN specifies the guest frame number at which the mapping will start in the domU's address space. If GFN is not specified, the mapping is performed using IOMEM_START as the start in the domU's address space, i.e. a 1:1 mapping by default. All of these values must be given in hexadecimal.
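For example, the following entry (with purely illustrative addresses) grants the guest access to 2 pages of I/O memory starting at machine page 0x3f000, mapped into the guest at frame 0x20000; dropping the @20000 part would instead map the pages 1:1 at 0x3f000:

```
iomem = [ "3f000,2@20000" ]
```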
Specific x86 Test Instructions
Xen 4.4 added support for running certain PV guests in PVH mode. This requires the operating system to support a subset of the PV ABI; as such, only two exist:
- Linux 3.18-rc3 and later (The previous versions of Linux had an ABI violation so they do not work),
- FreeBSD guests (see the FreeBSD guest wiki page).
- FreeBSD initial domain support from Roger's branch (based on a stable/10 snapshot).
In Xen 4.5 we also added support for running these guests as the initial domain (dom0). Unfortunately the work to make this function on AMD did not make it, so it only works on Intel. To use it, an extra parameter on the Xen command line is required: dom0pvh=1.
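On a GRUB2-based dom0 this usually means appending the option to the hypervisor (not the kernel) command line; a sketch, where the memory option and file paths are illustrative only and your distro's GRUB layout may differ:

```
# /etc/default/grub -- illustrative
GRUB_CMDLINE_XEN_DEFAULT="dom0pvh=1 dom0_mem=2048M"
```

Then regenerate the GRUB configuration (e.g. grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot into the Xen entry.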
Xen 4.3 and later can be built as EFI binaries. Xen 4.5 can be built as an EFI binary on ARM as well.
Instructions on how to build Xen as an EFI binary and boot it can be found here.
If you are using Fedora, you can install the RPMs mentioned above and use this test-case:
- Make sure that the Xen packages install correctly and run. The RC3 spec file reverts back to the custom systemd configurations.
- QA Testcases for PV guests
Note that the RPMs mentioned above conflict with the 'virt' type tools in Fedora 21 (as those are built against Xen 4.4). One workaround, after installing the RPMs, is:
cd /usr/lib64
ln -s libxenlight.so.4.5 libxenlight.so.4.4
ln -s libxenctrl.so.4.5 libxenctrl.so.4.4
After that restart libvirtd:
systemctl restart libvirtd
Launching guests might still not work with libvirt; in that case you can export the configuration to the native format and use xl to launch the guests:
[root@localhost ~]# virsh -c xen:/// dumpxml F21-PV-32 > F21-PV-32.xml
[root@localhost ~]# virsh -c xen:/// domxml-to-native xen-xm F21-PV-32.xml
name = "F21-PV-32"
uuid = "3c2c560f-61d3-42d3-9152-77f13de80686"
maxmem = 1024
memory = 1024
vcpus = 1
bootloader = "/usr/bin/pygrub"
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
vif = [ "mac=00:16:3e:ad:cc:c6,bridge=xenbr0,script=vif-bridge" ]
disk = [ "phy:/dev/g/F21-PV-32,xvda,w" ]
[root@localhost ~]# virsh -c xen:/// domxml-to-native xen-xm F21-PV-32.xml > F21-PV-32.xm
[root@localhost ~]# xl create F21-PV-32.xm
Parsing config from F21-PV-32.xm
libxl: warning: libxl_bootloader.c:415:bootloader_disk_attached_cb: bootloader='/usr/bin/pygrub' is deprecated; use bootloader='pygrub' instead
libvirt is usually shipped by the distro. You will need the libvirt-daemon-driver-xen package to manage your Xen instances. If you are building from scratch, follow the Libvirt compiling HOWTO.
For instructions on how to install guests, please visit: Guest install using libvirt
Reporting Bugs (& Issues)
- Use Freenode IRC channel #xentest to discuss questions interactively
- Report any bugs / missing functionality / unexpected results.
- Please put [TestDay] into the subject line
- Also make sure you specify the RC number you are using
- Make sure to follow the guidelines on Reporting Bugs against Xen.
We would love it if you could report successes by e-mailing
email@example.com, preferably including:
- Hardware: Please at least include the processor manufacturer (Intel/AMD). Other helpful information might include specific processor models, amount of memory, number of cores, and so on
- Software: If you're using a distro, the distro name and version would be the most helpful. Other helpful information might include the kernel that you're running, or other virtualization-related software you're using (e.g., libvirt, xen-tools, drbd, &c).
- Guest operating systems: If running a Linux version, please specify whether you ran it in PV or HVM mode.
- Functionality tested: High-level would include toolstacks, and major functionality (e.g., suspend/resume, migration, pass-through, stubdomains, &c)
The following template might be helpful. If you are using Xen 4.5.0-RC3 for testing, please make sure you state that!
Subject: [TESTDAY] Test report

* Hardware:
* Software:
* Guest operating systems:
* Functionality tested:
* Comments:
Subject: [TESTDAY] Test report

* Hardware:
Dell 390's (Intel, dual-core) x15
HP (AMD, quad-core) x5
* Software:
Ubuntu 10.10, 11.10
Fedora 17
* Guest operating systems:
Windows 8
Ubuntu 12.10, 11.10 (HVM)
Fedora 17 (PV)
* Functionality tested:
xl
suspend/resume
pygrub
* Comments:
Windows 8 booting seemed a little slower than normal. Other than that, great work!