The following lists some steps that are commonly necessary after installing Xen on a new host.
For a Xen System to boot correctly, the Xen hypervisor needs to be started before your Dom0 Linux (or BSD) kernel. This means that the bootloader needs to load and boot Xen first and then chain-boots the Dom0 kernel. Typically, your bootloader configuration gets automatically updated by your chosen distro: you install your Dom0 distro, then your Xen packages, which should update your bootloader configuration accordingly.
However, this merely sets up a number of defaults for Xen. If you want to use specific start-up options for Xen - for a list see Xen Command Line Options - you will need to manually change the bootloader options in the appropriate configuration file (e.g. /boot/grub/grub.conf for GRUB).
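On distributions that use GRUB 2, hypervisor options are usually set in /etc/default/grub rather than by editing the generated configuration directly. A Debian/Ubuntu-style sketch (the option values are illustrative, and the exact variable name may differ on your distro):

```
# /etc/default/grub (GRUB 2, Debian-style)
# Example Xen options; see Xen Command Line Options for the full list
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M dom0_max_vcpus=2"
```

After editing, regenerate the bootloader configuration (e.g. with update-grub) so the change takes effect on the next boot.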
- Command line options for specific versions of Xen are documented in the respective Man Pages
Boot Managers for Xen VMs
Xen comes with two boot managers for Xen virtual machines.
- PvGrub: PV-GRUB (ParaVirtual GRUB) is a boot manager for Xen virtual machines. It is a safer (as in designed to be more secure - see Securing Xen) and more efficient alternative to PyGrub for booting domU images: unlike PyGrub, it runs an adapted version of the GRUB boot loader inside the created domain itself, and uses the regular domU facilities to read the disk mounted as the root directory, fetch files from the network, and so on. It then loads the PV kernel and chain-boots it.
- PyGrub: enables you to start Linux domUs with a kernel that lies inside the domU's filesystem instead of one stored in the filesystem of the dom0. This makes management easier - each domU manages its own kernel and initrd, so you can use the guest's built-in package manager to update them instead of having to track and update kernels stored in your dom0. It also allows easy migration of HVM Linux guests - there is no need to extract the installed kernel and initrd.
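As a sketch, a minimal xl guest configuration using PyGrub might look like the following (the guest name, disk path, and bridge name are illustrative assumptions):

```
# /etc/xen/example-pv.cfg - illustrative guest configuration
name       = "example-pv"
bootloader = "pygrub"     # boot the kernel found inside the guest's own filesystem
memory     = 512
vcpus      = 1
disk       = [ 'phy:/dev/vg0/example-pv,xvda,w' ]
vif        = [ 'bridge=xenbr0' ]
```

The guest is then started with xl create /etc/xen/example-pv.cfg; PyGrub reads the guest's own GRUB configuration from its disk and boots the kernel it finds there.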
You can find information about setting up a host serial console on the Xen Serial Console page.
When running Xen on a host it is normal to require some particular setup of the network stack in order to provide a way for Virtual Machines to access the network. The most common of these is to use bridging.
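As a sketch, a Debian-style bridge definition might look like the following (the interface and bridge names are assumptions; see the Xen networking documentation for your distro's equivalent):

```
# /etc/network/interfaces - bridge the physical NIC so domU vifs can attach
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0    # enslave the physical interface to the bridge
```

Guests then reference the bridge in their configuration, e.g. vif = [ 'bridge=xenbr0' ].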
System-wide Xen configuration
The file /etc/xen/scripts/hotplugpath.sh is used by Xen throughout its code for system configuration. This set of variables is used at build / run time by a few scripts and by C code, so the Xen build system generates a shared set of base files: one which can be used by scripts (shell, Python), and another which can be used by C code.
- config/xen-environment-header.in - used to generate config/xen-environment-header
- config/xen-environment-scripts.in - used to generate config/xen-environment-scripts
The target files are generated by running ./configure. The above two files are currently only in a pending series of patches (this notice will be removed once they are upstream).
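As a sketch of how the generated scripts file can be consumed (the variable names and values here are illustrative assumptions; the real file is produced by ./configure):

```shell
# Create a stand-in for the generated config/xen-environment-scripts file
# (illustrative contents only)
cat > xen-environment-scripts <<'EOF'
XEN_SCRIPT_DIR="/etc/xen/scripts"
XEN_LOCK_DIR="/var/lock"
EOF

# A hotplug or init script can then source the shared definitions
. ./xen-environment-scripts
echo "hotplug scripts live in ${XEN_SCRIPT_DIR}"
```

C code would instead consume the values from the generated header file, so both worlds share one set of path definitions.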
Xen supports multiple operating systems, but on different OSes kernel modules are at times used to help support Xen. If a kernel requires modules, the list of required modules is maintained in an OS-specific file and used by the build system to expand the init scripts, ensuring that the required modules are loaded. The list of general modules is kept in the build system under:
On Linux this is config/Linux.modules. It is used to expand the set of modules in /etc/init.d/xencommons: the build system substitutes it into tools/hotplug/Linux/init.d/xencommons.in wherever @LOAD_MODULES@ is found. Please note that this change is not yet merged; right now the list of modules is kept statically in the xencommons script, but a general place is being defined so that the same set of modules can be shared with systemd.
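The substitution can be sketched as follows (the file names mirror the text, but the module names and the substitution command itself are illustrative assumptions, not the actual build-system rule):

```shell
# A stand-in for config/Linux.modules (module names are illustrative)
cat > Linux.modules <<'EOF'
xen-evtchn
xen-gntdev
EOF

# A stand-in for tools/hotplug/Linux/init.d/xencommons.in
cat > xencommons.in <<'EOF'
#!/bin/sh
# Load the modules Xen needs on this OS:
@LOAD_MODULES@
EOF

# Turn each module name into a modprobe line and splice it in
# where the @LOAD_MODULES@ placeholder appears
modules=$(sed 's/^/modprobe /' Linux.modules)
awk -v repl="$modules" '$0 == "@LOAD_MODULES@" { print repl; next } { print }' \
    xencommons.in > xencommons
```

The resulting xencommons script then contains one modprobe line per listed module, so the same module list can later feed a systemd unit as well.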