Article ID: 112588, created on Oct 21, 2011, last review on Feb 17, 2016

  • Applies to:
  • Virtuozzo 6.0
  • Virtuozzo containers
  • Virtuozzo hypervisor

Information

This article describes how CPU limits are applied to containers and virtual machines on Virtuozzo nodes.

CPU limits in megahertz are available for the following products:

  • Virtuozzo
  • Virtuozzo containers for Linux 4.7 and 4.6
  • Virtuozzo hypervisor 5
  • Virtuozzo containers for Windows 4.6

Note: The scheme is also valid for other versions of Virtuozzo hypervisor and Virtuozzo containers, except that those versions do not support the CPU limit in megahertz.

There are several limits that determine the maximum CPU power a container can consume and the priority of its CPU share (sample commands combining these parameters follow this list):

  • cpus - the number of CPU cores on the host that can be used simultaneously to run a container's processes. This limit also determines how many virtual CPUs are displayed inside the container or virtual machine. Hyper-threading-enabled and multi-core processors are counted as in /proc/cpuinfo.

    NOTE: on the hardware node, however, container threads can be executed on any available core. To bind container threads to specific host cores, use the cpumask parameter.

  • cpulimit - the total share of CPU power that a container can consume. Each physical CPU core counts as 100%, so the total power of a hardware node equals the number of CPUs multiplied by 100%. This limit can also be set in megahertz. A megahertz limit does not change the CPU frequency inside a container or virtual machine; it has the same meaning as a CPU limit in percent. The only difference is that the total CPU power of the hardware node is calculated as the number of CPUs multiplied by their frequency, and the container's share is the configured megahertz limit divided by that total.

  • cpuunits - the weight, or priority, of a container's tasks relative to other containers. This value is relative: containers with the same cpuunits value have the same priority for their tasks, regardless of the value itself.
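
As an illustration, the commands below combine these parameters. This is a minimal sketch: container ID 101 and the specific values are arbitrary examples, and the exact option syntax may vary between versions (see `man vzctl` on your node):

    # Give container 101 two virtual CPUs, cap it at 150% of one core's power,
    # and double its relative scheduling weight (1000 is the usual default):
    vzctl set 101 --cpus 2 --cpulimit 150% --cpuunits 2000 --save

    # Bind the container's threads to host cores 0 and 1:
    vzctl set 101 --cpumask 0,1 --save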

The limits are applied in the following way (a worked conversion example follows this list):

  1. Assume that a node has N CPU cores with frequency CPUFREQ. The total CPU power of the hardware node is N*100% or N*CPUFREQ;
  2. A container is assigned the cpus=vCPUS limit;
  3. The container is assigned a CPU limit either in percent or in megahertz:
    1. In percent: CPULIMITp;
    2. In megahertz: CPULIMITm, which is converted to CPULIMITp=100%*CPULIMITm/CPUFREQ;
  4. Regardless of the CPU limit, a single virtual CPU cannot consume more than 100% of one physical CPU core's power. Therefore, the amount of physical CPU power that a single virtual CPU can consume is min(100%, CPULIMITp);
  5. Taken together, the virtual CPUs cannot consume more than the CPU limit: if each virtual CPU consumes USAGE[vCPU#] of power, then sum(USAGE[vCPUS])<=CPULIMITp.
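
For example, the sketch below reproduces the megahertz-to-percent conversion from step 3 for a node with 2000 MHz cores; the values are arbitrary and the script only restates the arithmetic above:

    # CPULIMITp = 100% * CPULIMITm / CPUFREQ
    CPUFREQ=2000      # per-core frequency of the node, MHz
    CPULIMITM=1000    # requested limit, MHz
    CPULIMITP=$(( 100 * CPULIMITM / CPUFREQ ))
    echo "${CPULIMITM} MHz on ${CPUFREQ} MHz cores = ${CPULIMITP}%"   # prints 50%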

The table below demonstrates how various limits are applied on two different hardware nodes:

Node 1: 4 cores @ 2 GHz (CPU 1-4), total power: 400% or 8000 MHz
Node 2: 4 cores @ 1 GHz (CPU 1-4), total power: 400% or 4000 MHz

                                                        Node 1                    Node 2
command                                           cpus  cpulimit,%  cpulimit,MHz  cpulimit,%  cpulimit,MHz
vzctl set CTID --save --cpus 1 --cpulimit 100%    1     100         2000          100         1000
vzctl set CTID --save --cpus 2 --cpulimit 100%    2     100         2000          100         1000
vzctl set CTID --save --cpus 1 --cpulimit 50%     1     50          1000          50          500
vzctl set CTID --save --cpus 2 --cpulimit 50%     2     50          1000          50          500
vzctl set CTID --save --cpus 1 --cpulimit 1000m   1     50          1000          100         1000
vzctl set CTID --save --cpus 1 --cpulimit 2000m   1     100         2000          200         2000
vzctl set CTID --save --cpus 4 --cpulimit 100%    4     100         2000          100         1000
vzctl set CTID --save --cpus 4 --cpulimit 2000m   4     100         2000          200         2000
vzctl set CTID --save --cpus 4 --cpulimit 250%    4     250         5000          250         2500
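
To check what a given command actually configured, the settings can be read back as sketched below. Container ID 101 is an arbitrary example, and the field names passed to -o are an assumption based on vzlist in Virtuozzo containers for Linux (run `vzlist --help` on your version for the supported fields):

    # Set a limit in megahertz (the "m" suffix) and read the settings back.
    # On a node with 2000 MHz cores this equals 2000/2000 = 100%:
    vzctl set 101 --cpus 2 --cpulimit 2000m --save
    vzlist -o ctid,cpus,cpulimit,cpuunits 101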

Search Words

load, cpuunit, high load, cpulimit, cpu limit, cpuunits

