
Xen

Product
Developers: Linux Foundation
Last Release Date: 2015/10/14
Technology: Virtualization

Overview

Xen is a virtual machine monitor (VMM), or hypervisor. It works both in paravirtualized mode and in hardware-assisted virtualization (HVM) mode, relying on hardware features of the processor, so it is not tied to any specific operating system and can be installed directly on the hardware, in so-called bare-metal fashion. It can run a large number of virtual machines simultaneously on a single physical machine without consuming significant computing resources.

Xen is one of several server hypervisors now widespread on the x86 architecture. More than 10 million users work with it. Other popular hypervisors are VMware vSphere, Microsoft Hyper-V, and KVM, which, like Xen, is open source.


The Xen hypervisor is a cross-platform, open-source solution. Formerly owned by Citrix and today run by the Linux Foundation, Xen is one of the most popular virtualization platforms.

In principle, a hypervisor is a process that separates operating systems and applications from the underlying hardware; in this role it acts as a virtual machine manager.

Applications

Virtual machine technology extends what the hardware can do, offering:

  • virtual machine performance comparable to that of a real machine;
  • migration of running virtual machines between physical machines;
  • excellent hardware support (the majority of Linux device drivers are supported);
  • the ability to sandbox, and to reload, device drivers.

Technology

For guest systems, Xen is a completely transparent environment. To boot an operating system, the hypervisor offers prepared block devices and network interfaces. With this design the hypervisor neither hides itself from guest operating systems nor emulates existing hardware, instead openly requesting the drivers needed to boot. The approach has a powerful advantage: the guest operating system retains high performance[1].

Operating systems on the Xen platform boot with a reduced level of privilege. Instead of executing privileged operations on its own, the guest operating system issues a request to the hypervisor (a hypercall) to perform the required operation.
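This request-to-the-hypervisor pattern can be sketched as a toy model. The following Python is purely illustrative: the class and method names are invented for this sketch and do not reflect Xen's actual hypercall interface.

```python
# Toy model of the paravirtual idea: instead of performing a privileged
# operation directly, a guest asks the hypervisor to do it (a "hypercall").
# All names here are invented for illustration; this is not Xen code.

class ToyHypervisor:
    def __init__(self, total_pages):
        self.free_pages = set(range(total_pages))   # pages not yet handed out
        self.owner = {}                              # page -> guest name

    def hypercall_alloc_page(self, guest_name):
        """Privileged operation: grant a physical page to a guest."""
        if not self.free_pages:
            raise MemoryError("no free pages")
        page = self.free_pages.pop()
        self.owner[page] = guest_name
        return page

class ToyGuest:
    def __init__(self, name, hypervisor):
        self.name = name
        self.hv = hypervisor
        self.pages = []

    def need_memory(self):
        # A paravirtualized kernel is modified to call the hypervisor here
        # rather than manipulating page tables itself.
        self.pages.append(self.hv.hypercall_alloc_page(self.name))

hv = ToyHypervisor(total_pages=4)
g1, g2 = ToyGuest("guest1", hv), ToyGuest("guest2", hv)
g1.need_memory()
g2.need_memory()
print(sorted(hv.owner.values()))   # each granted page has exactly one owner
```

The point of the sketch is only the control flow: the hypervisor stays the sole owner of the privileged resource, and guests obtain it exclusively by request.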

Paravirtualization achieves very high performance even on platforms that are notoriously hard to virtualize, such as x86. The distinctive requirement of this approach is that the operating system kernel must be adapted before it can run on Xen. Adapting a kernel to Xen is much like porting it to a new platform, but considerably simpler because the virtual hardware closely resembles real hardware. Although the kernel must explicitly support Xen, user applications and libraries run unchanged.

As virtualization grew in popularity, the makers of CPUs and chipsets began actively promoting hardware virtualization. This gave rise to Intel VT (code-named Vanderpool) and AMD Secure Virtual Machine (code-named Pacifica). Thanks to hardware-level virtualization support, Xen gained the ability to run unmodified operating systems, even ones like Microsoft Windows that cannot be modified because of closed source code and license restrictions.

Features of Xen

Xen has two especially significant features: paravirtualization and a minimal hypervisor code base.

Paravirtualization makes it possible to run several virtual machines on the same physical hardware simultaneously, while retaining performance almost identical to that of a real, non-virtualized machine.

The basic principle of paravirtualization is to prepare guest operating systems by making small modifications to their kernels before running them in the virtualized environment. This restricts the choice of systems, since an operating system whose source code can be modified is required. The restriction applies to both host and guest systems.

The second distinctive feature of Xen is the small amount of hypervisor code, achieved by moving most control functions outside the hypervisor. The developers left only the following functions inside it:

  • management of RAM and of the processor's time stamp counter (TSC),
  • control of interrupts and DMA (direct memory access),
  • the real-time timer.

All other functionality, from managing network settings to creating and removing virtual machines, lives in the so-called control domain.

This distribution of functionality increases the overall fault tolerance of the virtualization system: a failure in a component outside the hypervisor affects only the failed component, without harming the operation of the rest of the system.
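To illustrate this split, creating a guest is driven entirely from the control domain via a plain-text configuration file. A minimal paravirtualized guest configuration might look like the following sketch; every name, path and value here is an invented example, not taken from the text above.

```
# /etc/xen/guest1.cfg -- illustrative example; all names and paths are hypothetical
name   = "guest1"
memory = 1024                        # MB of RAM for the guest
vcpus  = 2
kernel = "/boot/vmlinuz-guest-xen"   # a Xen-aware guest kernel
disk   = ["phy:/dev/vg0/guest1,xvda,w"]
vif    = ["bridge=xenbr0"]
```

The control domain would then start the guest with `xl create /etc/xen/guest1.cfg`; the hypervisor itself is involved only in scheduling the new domain and managing its memory and interrupts.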

Adoption

Xen supports more platforms every day. Linux and NetBSD are supported. A FreeBSD port is currently being tested and will soon be officially released (it is already available in the FreeBSD SVN repository). Ports to other operating systems, such as Plan 9, are also in progress. Official Xen ports are expected for all of these operating systems (as has already happened for NetBSD).

Several commercial products for server consolidation have been created on the basis of Xen.

Development History

2003: The first public release

Xen began as a research project at the University of Cambridge led by Ian Pratt, who went on to found the company XenSource. The company supported development of the open-source version (xen) and in parallel sold commercial versions of the software, called XenServer and XenEnterprise.

The first public release of Xen took place in 2003.

2007: Citrix buys XenSource

In October 2007, Citrix acquired XenSource and renamed its products:

  • XenExpress became "XenServer Express Edition" (the embedded version of the hypervisor was renamed "XenServer OEM Edition")
  • XenServer became "XenServer Standard Edition"
  • XenEnterprise became "XenServer Enterprise Edition"

They were later renamed XenServer (Free), Essentials for XenServer Enterprise, and Essentials for XenServer Platinum.

On October 22, 2007, Citrix completed the merger with XenSource, and the open-source project moved to the website http://www.xen.org/.

2009: Open-sourcing announced

On October 21, 2009, Citrix announced that its commercial versions of XenServer would become fully open and publicly available. Simon Crosby, chief engineer of Citrix's virtualization division, said: "XenServer is 100% free, and also shortly fully open sourced. There is no revenue from it at all." While the XenServer versions are free, XenCenter (the centralized management software) is sold by Citrix under a proprietary license.

2012: Release of Xen 4.2.0

In September 2012 the Xen.org community announced the release of an upgraded version of the hypervisor, Xen 4.2.0, with several improvements. Recall that many public and private cloud infrastructures, in particular the Amazon AWS cloud, are built on the Xen hypervisor. Preparing this release took 18 months and 300 thousand lines of code, contributed by 43 organizations and 124 independent professionals.

  • Xen 4.1 introduced xl, a new hypervisor toolstack for managing virtual machines, but the xend daemon remained the default. The default daemon is now xl (with support for the SPICE protocol), which offers more options for controlling virtual machines. The libxl library was also polished in terms of functionality and reliability.
  • Support for large compute nodes.

Xen 4.2 nodes now support up to 4095 physical host processors, up to 512 guest vCPUs, and up to 5 TB of memory (for a 64-bit hypervisor). EFI boot support was added. Significant performance improvements raised the consolidation ratio for VDI workloads. In addition, a CPU pool can now be created automatically for a NUMA node, with vCPUs placed more intelligently on the corresponding NUMA nodes, and multiple PCI segments are supported, which also benefits the performance of large compute nodes.
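For context, everyday management through the xl toolstack looks like the following illustrative session. The domain names are invented examples and the `xl list` output is abridged, not captured from a real system.

```
# xl list                          # show running domains
Name        ID   Mem VCPUs  State  Time(s)
Domain-0     0  2048     4  r----    120.3
guest1       1  1024     2  -b---     10.7

# xl create /etc/xen/guest2.cfg    # start a new guest from its config file
# xl console guest2                # attach to the guest's console
# xl migrate guest1 otherhost      # live-migrate a running guest
```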

  • Improved security.

The XSM/FLASK stack (security modules) was substantially reworked, and security policies became clearer and simpler to adapt to an organization's requirements.

  • Documentation.

The structure and content of the documentation for Xen 4.2 were significantly reworked.

Contributions to the Xen 4.2 code by organizations and individual professionals (those who contributed more than 1% of the total lines of code are shown):

2013

Citrix transfers the Xen hypervisor to the Linux Foundation

At the Linux Foundation Collaboration Summit in April 2013, it was announced that further development and support of the Xen hypervisor would be handled by the non-profit Linux Foundation instead of Citrix. Citrix believes the Xen project will gain broader support as a result. Amazon Web Services, AMD, CA Technologies, Cisco, Google, Intel, Oracle, Samsung and Verizon had already announced their participation in the project.

Release of Xen 4.3

The Linux Foundation released the new version of the Xen hypervisor, 4.3, in the first days of the month, the organization's press service reported on July 11, 2013.

Xen 4.3 was in development for more than 9 months; 90 professionals from 27 organizations and 25 independent developers took part.

Many features of the new version will be used in large public clouds:

  • The hypervisor scheduler is now NUMA-aware, which improves the overall performance of the platform.

  • Hosts with up to 16 TB of physical RAM are now supported.

  • Support for the open switch Open vSwitch.

  • The limit of 300 virtual processors (vCPUs) per host has been lifted, and the hypervisor has been tested with 750 vCPUs.

  • Support for the MWAIT extension, which optimizes power consumption by the host processor.

2015: Xen 4.6.0

On October 14, 2015, the release of the free hypervisor Xen 4.6.0 was announced. More than 2 thousand changes went into Xen 4.6[2].

XPDS15 - Xen 4.6 and Beyond (2015)

Main changes in Xen 4.6:

  • libxc/libxl received a completely new live migration implementation (Migration v2) that accounts for the features of the different layers of the Xen software stack; it is more reliable and extensible and better supports next-generation infrastructures and the work planned for future hypervisor releases;
  • the Remus tools for building high-availability configurations were reworked and are now based on Migration v2;
  • libxl gained the ability to cancel asynchronous operations that have already been initiated, letting the user safely cancel long-running parallel operations, take full advantage of libvirt, and simplify integration with cloud orchestration stacks;
  • support for the SPICE/QXL protocol was improved;
  • support for AHCI disk controllers was added;
  • the Xenalyze tool for analyzing hypervisor trace buffers, useful for optimization and debugging, was added to the main code base;
  • support for new features of Linux kernel releases 3.18 through 4.3 was implemented, including Xen SCSI backend and frontend support, vPMU support, improved mmap performance, and the ability to address more than 512 GB in the P2M for paravirtualized guests;
  • experimental support for PVH Dom0/DomU on FreeBSD was added; the FreeBSD-specific classical i386 PV port and the idle blkfront/back extensions were removed; blkfront indirect descriptor support was added; work continues on running ARM32- and ARM64-based guests on FreeBSD;
  • the subsystem for handling memory-related events was reworked into a new VM event subsystem supporting the ARM and x86 architectures. The VM event subsystem can intercept any VM-specific events, such as accesses to memory and registers, which makes it possible to build applications that inspect and monitor guest systems;
  • support for vTPM 2.0 (Virtual Trusted Platform Module), implemented by Intel and Bitdefender, was added;
  • grant table scalability was increased considerably, which in some configurations doubled the overall throughput of the host's virtual network subsystem and substantially improved the performance of I/O drivers;
  • the locking mechanism was made more efficient, improving large configurations that run hundreds or thousands of virtual environments on one host;
  • support for the unused SEDF scheduler was discontinued;
  • mini-OS was split out of the code base into a separate source tree and will be developed as a separate project;
  • Intel implemented a number of new x86-specific technologies for Xen:
    • an alternative P2M framework with new VM introspection and protection capabilities;
    • Intel Page Modification Logging (PML) for tracking which memory pages are touched during live migration;
    • per-VM allocation of the L3 cache;
    • mechanisms for monitoring memory bandwidth;
    • tools for profiling the hypervisor;

  • the virtual NUMA implementation for guest systems running in HVM mode was brought to full functionality;
  • a large batch of ARM-related improvements landed: the number of virtual CPUs supported on ARM64 platforms was increased from 8 to 128;
  • support for passing through access to non-PCI devices was added;
  • support for ARM GICv2 guests on GICv3 hardware;
  • support for a 32-bit user environment on 64-bit guest systems;
  • support for OVMF;
  • support for the ARM platforms Renesas R-Car Gen2, ThunderX, Huawei hip04-d04 and Xilinx ZynqMP SoC;
  • the Raisin project was introduced, providing tools for building and packaging working Xen configurations by rebuilding from source and fetching all necessary dependencies, such as Grub and Libvirt;
  • a continuous integration system for testing the Xen code together with the OpenStack components was put into operation;
  • the quality level of Xen support in OpenStack was raised from level C to level B.

2016: Release of Xen 4.7.0

On June 24, 2016, the free hypervisor Xen 4.7 was released, with 1622 changes. AMD, ARM, Bitdefender, Bosch, Broadcom, Citrix, Fujitsu, Huawei, Intel, Linaro, Netflix, NSA, Oracle, Red Hat and SUSE took part in developing the release[3].

Main changes in Xen 4.7:

  • Patches can now be applied on the fly (Live Patching), without restarting the hypervisor. The new mechanism is suitable for fixing about 90% of hypervisor vulnerabilities. The implementation includes a LIVEPATCH_SYSCTL call added to the hypervisor, the xen-livepatch utility for loading a patch, and tools for creating patches (the hypervisor is built with and without the fix, and from the difference a module is created that applies the change to the running system);
  • Individual hypervisor features can be removed by changing KCONFIG settings, making it possible to build minimal hypervisors for embedded systems and Internet of Things (IoT) devices, or to disable potentially vulnerable hypervisor subsystems;
  • The performance and reliability of the Virtual Machine Introspection (VMI) interface were optimized; Intel EPT and AMD RVI allow hardware virtualization mechanisms to be used to control access to security-critical regions of memory and to block possible attacks. The new security tool Bitdefender Hypervisor Introspection, part of XenServer 7, is built on VMI;
  • Work continued on moving parts of Dom0 into separate environments to eliminate a single point of failure. The xenstored daemon, which is responsible for managing hypervisor settings, has been able to run in a separate "xenstored stub domain" virtual machine since Xen 4.2; in 4.7 the process of creating such a virtual machine is significantly simplified, and xenstored can be restarted without disrupting the operation of Dom0;
  • A new command line interface for managing PVUSB devices in guest systems was added. Both a PVUSB backend working at the kernel level and a QEMU-based option are supported;
  • Hot plugging of QEMU disk backends and USB devices into guest systems running in HVM mode is supported, so drives can be attached and removed without restarting the guest. A soft reset function is also implemented for HVM;
  • Virtual machine migration was improved: environments can now be transferred between hosts with different hardware. The COLO (COarse-grained LOck-stepping) manager was integrated, which raises performance by eliminating the creation of redundant state snapshots (checkpoints). The COLO Block Replication and COLO Proxy additions are being developed separately and will be included once accepted by the QEMU project;
  • Xen was adapted to new kinds of workloads and applications. The 512 GB RAM limit for paravirtualized guest systems was lifted, which, combined with the remaining limit of 512 vCPUs per VM, makes Xen suitable for building large-scale data processing systems and running DBMSes that keep data in RAM;
  • The Credit2 scheduler, now almost ready for production use, was improved. A command was added for regrouping the run queues of executing tasks and balancing load between CPU cores, separate processors, and NUMA nodes. Extended tuning options allow more aggressive load-balancing schemes that are optimal for mid-size systems (for example, good performance is shown with Hyper-Threading); for larger systems, pinning of CPUs and vCPUs (hard affinity) is supported;
  • The RTDS real-time scheduler, which guarantees CPU resources to a guest system, was improved. In the new release the scheduler moved from a time-quantum distribution model to an event-driven architecture, which reduced scheduler overhead, improved performance on embedded systems, and raised the quality of real-time task execution. Settings can now be defined for individual vCPUs;
  • Infrastructure for per-CPU read/write locking was added, increasing the speed of read-intensive operations. For example, switching to the new locking raised the data transfer rate between virtual machines from 15 to 48 Gbit/s on a dual-processor Haswell-EP server;
  • Support for ARM-based systems was expanded: booting on ARM hosts with ACPI 6.0 was added; compatibility with the PSCI 1.0 interface (Power State Coordination Interface) was provided; the vGIC-v3 (Virtual Generic Interrupt Controller version 3) implementation was brought into line with the specification; and direct access to wallclock time through a shared memory page was added;
  • New features of Intel Xeon processors are used: the VT-d Posted Interrupts mechanism, which provides hardware acceleration for interrupt virtualization; CDP (Code and Data Prioritization), which isolates code and data in the shared L3 cache to improve cache efficiency on multi-tenant systems; VMX TSC Scaling, which simplifies migration between machines whose CPUs run at different frequencies; and the Memory Protection Keys memory-isolation mechanism.

Notes