This page is designed to bring a little methodology to the testing of Xen. Hopefully it will be useful as a basic check-list for people testing release candidates (RCs), as well as something which suggests new tests that people may want to try in order to round out their testing. It should also be useful for developers working on an automated test system, to get an idea of what the coverage should be.
Theory
In general, when we think about testing, we are thinking about testing functionality: if I perform a specific action, does it do what I expect it to do, or something else? For instance, if I create a VM, does the VM boot, or does it crash?
What makes testing so complicated is that there are so many different variables that can affect whether functionality works or not. For instance, the guest may boot on Intel but not on AMD hardware; or it may work if you're using a qdisk backend but not a blktap backend.
Additionally, there are different kinds of tests which can be performed. A bit of functionality might work for a quick test, but fail when more complicated test cases are used; or it may fail only after a long period of heavy usage. Or the functionality might succeed, but perform very slowly.
So we need to think about whether specific functionality works when the system is in a particular state; and we need to think about what type of test is being done.
The goal for this document is to help enumerate some of the important aspects of state, functionality, and type of test, to help give people ideas for new tests that can be done, and hopefully expand the test coverage.
Types of testing
- Smoke test: just do something quickly to see if it works
- Normal functional test: try to use it the way you expect it to work
- Stress test: automated heavy use over a long period of time
- Corner cases: try to use it in ways you don't expect it to be used
- Performance test: test it while specifically measuring the performance
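The "automated heavy use" style of test can be sketched as a simple shell loop that repeatedly creates and destroys a guest. This is only a sketch: the guest name "testvm", its config path, and the iteration count are hypothetical, and the toolstack command is a parameter so the loop can be dry-run on a machine without Xen.

```shell
#!/bin/sh
# Sketch of an automated stress test: cycle a guest through create/destroy
# many times. "testvm" and /etc/xen/testvm.cfg are hypothetical names.
stress_cycle() {
    xl_cmd=$1; cfg=$2; name=$3; n=$4
    i=0
    while [ "$i" -lt "$n" ]; do
        "$xl_cmd" create "$cfg" || return 1
        sleep 1                    # in a real test, wait long enough for the guest to boot
        "$xl_cmd" destroy "$name" || return 1
        i=$((i + 1))
    done
}

# On a real Xen host you might run:
#   stress_cycle xl /etc/xen/testvm.cfg testvm 100
```

A loop like this will often shake out resource leaks (memory, event channels, loop devices) that a single create/destroy cycle never hits.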
System State / Configuration
By "system state", we mean any aspect of the system that may affect the functionality being tested. This includes the kinds of things in the list below.
These variables are normally also important to include in your bug reports.
- Intel / AMD
- Model of processor
- Number of sockets / cores
- NUMA topology
- Amount of memory
- Platform: IOMMU, motherboard type, &c
- Devices: GPU, NIC, disk, &c
- Domain 0 distro
- Debian, Fedora, Ubuntu, Arch, Alpine, NetBSD
- Domain 0 kernel
- distro, mainline most recent, mainline stable
- blktap drivers?
- Network config
- Linux bridging / openvswitch
- Network driver domain (see Driver_Domain)
- XSM policy
- Amount of memory / vcpus
- Guest OS
- PV / PVHVM (for Linux / NetBSD guests)
- qemu version (qemu-xen-traditional / qemu-xen / straight qemu)
- Emulated / PV devices (disk, network)
- Disk backend
- Location: LVM / file / NFS / iSCSI
- Type: qdisk / blkback / blktap
- Format: raw, file, phys, qcow, &c
- PV bootloader
- pygrub, pvgrub, xenpvnetboot
- USB pass-through
- Spice / qxl
When you do your testing, consider changing the different variables outside what you normally use, and trying to do as complete a set of tests in that new state as you can.
For a more thorough look at guest config options, see the xl.cfg manpage.
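To make some of the variables above concrete, here is a minimal example of a guest config in xl.cfg syntax. All names, paths, and sizes here are illustrative only (the guest name "testvm", the LVM volume, and the bridge name are assumptions); consult the xl.cfg manpage for the authoritative option list.

```
# Hypothetical HVM guest config (e.g. /etc/xen/testvm.cfg); values are
# illustrative, chosen to show which config options drive the variables above.
name = "testvm"
type = "hvm"                        # vary: "pv" / "pvh" for the guest type
memory = 2048                       # amount of memory
vcpus = 2                           # number of vcpus
disk = [ "format=raw, vdev=xvda, access=rw, target=/dev/vg0/testvm" ]
vif = [ "bridge=xenbr0" ]           # Linux bridging; point at an openvswitch bridge to vary
device_model_version = "qemu-xen"   # or "qemu-xen-traditional"
```

Changing one line at a time in a config like this is an easy way to walk through the state space systematically.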
Functionality
Most of this functionality can be found by looking at the xl manpage.
- Build / install
- create, list, shutdown, reboot, destroy, suspend/resume, migrate, pause, unpause, console
- cd-insert, cd-eject, button-press, vcpu-pin, vcpu-set, domid, domname, rename, trigger, sysrq, info, dmesg, top, config-update
- PCI attach/detach/list
- Network attach/detach/list
- Block attach/detach/list
- PoD, mem-max, mem-set
- Page sharing
- Hypervisor swap
- alternate block / network scripts
- Scheduling parameters
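A quick functional pass over the basic lifecycle commands in the list above can be scripted as follows. As before, this is a sketch: "testvm" and its config path are hypothetical, and the toolstack command is a parameter so the sequence can be dry-run without a Xen host.

```shell
#!/bin/sh
# Sketch of a normal functional test covering several lifecycle commands.
# "testvm" and /etc/xen/testvm.cfg are hypothetical names.
lifecycle_test() {
    xl_cmd=$1; cfg=$2; name=$3
    "$xl_cmd" create "$cfg"    || return 1
    "$xl_cmd" list             || return 1
    "$xl_cmd" pause "$name"    || return 1
    "$xl_cmd" unpause "$name"  || return 1
    "$xl_cmd" reboot "$name"   || return 1
    "$xl_cmd" shutdown "$name" || return 1
}

# On a real Xen host you might run:
#   lifecycle_test xl /etc/xen/testvm.cfg testvm
```

Even a short scripted sequence like this catches regressions that ad-hoc manual testing tends to miss, because it always exercises the same commands in the same order.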