util-vserver:Cgroups
Kernel configuration
When configuring your kernel for cgroups with util-vserver, you must make sure CONFIG_CGROUP_NS (CGroup Namespaces) is unset for the time being.
CGroup Namespaces are a different approach to namespaces than that used by Linux vServer, and are not currently supported.
From what I gathered in sched-design-CFS.txt [1], this is simply done by adjusting cpu.shares. Just do:
echo '512' > /dev/cgroup/<guest name>/cpu.shares
The share of CPU a guest gets is equal to its cpu.shares value divided by the sum of the cpu.shares of all the guests. For example:
vserver guest 1 => 512
vserver guest 2 => 512
vserver guest 3 => 2048
vserver guest 4 => 512

So you have a total of 3584 cpu shares (2048 + 512 + 512 + 512), then you get:

vserver guest 1 => 512 / 3584 = 14% cpu
vserver guest 2 => 512 / 3584 = 14% cpu
vserver guest 3 => 2048 / 3584 = 57% cpu
vserver guest 4 => 512 / 3584 = 14% cpu
Note that this is fair scheduling: it will not enforce a HARD limit (as far as I know).
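To see how the shares work out on a live system, here is a small sketch (it assumes the cgroup hierarchy is mounted on /dev/cgroup with one directory per running guest, as in the examples on this page) that totals the cpu.shares of all guests and prints each guest's resulting slice:

total=0
for f in /dev/cgroup/*/cpu.shares; do
    total=$((total + $(cat "$f")))
done
for f in /dev/cgroup/*/cpu.shares; do
    s=$(cat "$f")
    # share of CPU under full load = guest shares / sum of all shares
    printf '%s: %s shares -> %d%% of CPU\n' "$(dirname "$f")" "$s" $((100 * s / total))
done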
To make these settings persistent, use the "cgroup" configuration directory. You can apply defaults to all vservers or choose different settings for each guest:
- /etc/vservers/.defaults/cgroup: this directory contains settings applied to all guests when they start
- /etc/vservers/<guestname>/cgroup: this directory contains settings for that guest when it starts
Example:
mkdir /etc/vservers/.defaults/cgroup
mkdir /etc/vservers/<guestname>/cgroup
echo '2048' > /etc/vservers/<guestname>/cgroup/cpu.shares
# List of CPUs
echo 1 > /etc/vservers/<guestname>/cgroup/cpuset.cpus
# NUMA nodes
echo 1 > /etc/vservers/<guestname>/cgroup/cpuset.mems
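After the guest is (re)started the value should show up in the mounted hierarchy; a quick check (assuming the cgroup filesystem is mounted on /dev/cgroup):

vserver <guestname> restart
cat /dev/cgroup/<guestname>/cpu.shares
cat /dev/cgroup/<guestname>/cpuset.cpus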
Note that /etc/vservers is just an example path; on my Aqueos install I use /usr/local/etc/vservers, but /etc/vservers seems to be the default for classic installs.
Regards, Ghislain.
Cgroup and CFS based CPU hard limiting that replaces sched_hard
References
You can find documentation about CFS hard limiting in Documentation/scheduler/sched-cfs-hard-limits.txt in your kernel source tree.
Requirements
This feature is currently available in patch-2.6.31.2-vs2.3.0.36.15.diff and is still in the testing phase as of this patch set, so report any bugs to the mailing list.
To get the hard limit set up on every vserver start you need a recent util-vserver package. It worked for me with 0.30.216-pre2864 (download from the util-vserver prereleases).
Before trying to set up limits for a guest you should mount the cgroup filesystem:
[ -d /dev/cgroup ] || mkdir /dev/cgroup
mount -t cgroup -ocpu none /dev/cgroup
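A quick way to check that the mount worked and that the cpu controller is actually available (nothing vserver-specific here):

grep cgroup /proc/mounts
cat /proc/cgroups
ls /dev/cgroup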
Configuration
Example for an upper bound of 2/5 (or 40%) of all the CPU power that a guest/cgroup can use:
# force CFS hard limit (only needed for older kernel versions)
# echo 1 > /etc/vservers/<guestname>/cgroup/cpu.cfs_hard_limit
# time assigned to the guest (in microseconds): 200000 = 0.2 sec
echo 200000 > /etc/vservers/<guestname>/cgroup/cpu.cfs_runtime_us
# in each specified period (in microseconds): 500000 = 0.5 sec
echo 500000 > /etc/vservers/<guestname>/cgroup/cpu.cfs_period_us
This is a hard limit: think of it as a ceiling on the CPU resources the cgroup can use.
If you set both a CPU share AND a hard limit the system will work fine, but the hard limit takes priority over CPU share scheduling: shares still distribute the CPU, but each cgroup gets an upper bound it cannot cross, even if the share you gave it would entitle it to more.
The hard limit feature adds 3 cgroup files for the CFS group scheduler:
- cfs_runtime_us: Hard limit for the group in microseconds.
- cfs_period_us: Time period in microseconds within which the hard limit is enforced.
- cfs_hard_limit: The control file to enable or disable hard limiting for the group.
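As with cpu.shares, these files can also be written for a running guest directly through the mounted hierarchy. Here is a sketch, assuming the cgroup filesystem is mounted on /dev/cgroup and the guest is already running; the numbers (1/4 of the CPU power) are just an example:

# allow 100000 us of runtime in every 400000 us period, i.e. 25% of the CPU
echo 100000 > /dev/cgroup/<guestname>/cpu.cfs_runtime_us
echo 400000 > /dev/cgroup/<guestname>/cpu.cfs_period_us
# read the values back to confirm
cat /dev/cgroup/<guestname>/cpu.cfs_runtime_us /dev/cgroup/<guestname>/cpu.cfs_period_us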
Using cgroup to enforce memory limits
In Linux-VServer patch version vs2.3.0.36.29, memory limiting via cgroup was introduced. To use it you need the following config lines in your kernel build, in addition to the ones mentioned above for cgroup CPU limits (a quick way to verify them is shown after the list):
- CONFIG_RESOURCE_COUNTERS=y
- CONFIG_CGROUP_MEM_RES_CTLR=y
- CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y
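If your kernel exposes its configuration, you can verify these options like this (the build tree path below is only a placeholder):

# if CONFIG_IKCONFIG_PROC is enabled:
zgrep -E 'RESOURCE_COUNTERS|CGROUP_MEM_RES_CTLR' /proc/config.gz
# or check the .config of the kernel build tree:
grep -E 'RESOURCE_COUNTERS|CGROUP_MEM_RES_CTLR' /usr/src/linux/.config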
Make sure /dev/cgroup is mounted with -o ...,memory to be able to use this feature. The following files let you adjust the memory limits of a running vserver (create them in /etc/vservers/<vserver-name>/cgroup/ to make them permanent):
- memory.memsw.limit_in_bytes: the total memory+swap limit of your cgroup context
- memory.limit_in_bytes: the memory (RAM) limit
Values are stored in bytes. When writing to those files you can use the suffixes K, M and G.
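For example, to cap a guest at 512M of RAM and 1G of memory+swap (the numbers and the <guestname> placeholder are only illustrative), set the values in the config directory for the next start and, if you want them to take effect immediately, in the mounted hierarchy as well:

# applied the next time the guest starts
echo 512M > /etc/vservers/<guestname>/cgroup/memory.limit_in_bytes
echo 1G > /etc/vservers/<guestname>/cgroup/memory.memsw.limit_in_bytes
# applied to the running guest right away (memory controller must be mounted, see above)
echo 512M > /dev/cgroup/<guestname>/memory.limit_in_bytes
echo 1G > /dev/cgroup/<guestname>/memory.memsw.limit_in_bytes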
Note: cgroup memory limits are intended to replace rss.soft and rss.hard some time in the future. As of vs2.3.0.36.29 the tools top and free do not show the limited memory amount assigned to the guest.
For a deeper understanding check out Documentation/cgroups/memory.txt of your kernel source tree.
Real world Examples of Scheduling
This section is for working and tested examples you have put in place. Please add the vserver patch, base kernel version, and util-vserver release for each example you put here (use vserver-info).
Ben's install on Debian Lenny
I used the kernels from [2], described at [3]. I've done this on a few versions; it works for 2.6.31.7 with patch vs2.3.0.36.27 on amd64, and also for 2.6.31.11 with patch vs2.3.0.36.28. I used the stock Lenny util-vserver, patched as described below. The kernel config is critically important, with specific cgroup options necessary in order to get cgroups working in this way. Check the configs for the [4] kernels to see which ones I used.
Getting Lenny Ready
There's a very old version of util-vserver in Lenny; it needs the following patch applied before it will set up the cgroups properly. It basically only adds one line:
--- /usr/lib/util-vserver/vserver.suexec.orig	2008-12-12 22:56:25.000000000 -0600
+++ /usr/lib/util-vserver/vserver.suexec	2009-08-20 02:11:42.000000000 -0500
@@ -22,7 +22,8 @@ test -z "$is_stopped" -o "$OPTION_INSECU
     exit 1
 }
 generateOptions "$VSERVER_DIR"
-addtoCPUSET "$VSERVER_DIR"
+addtoCPUSET "$VSERVER_DIR"
+attachToCgroup "$VSERVER_DIR"
 
 user=$1
 shift
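One way to apply it (the patch file name and location are just an example): save the diff above to a file and point patch at the installed script explicitly, which avoids any -p path-stripping issues:

cd /usr/lib/util-vserver
cp vserver.suexec vserver.suexec.orig
patch vserver.suexec < /root/vserver-suexec-cgroup.patch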
Next I added a correctly mounted cgroup file system on /dev/cgroup/.
$ mkdir /dev/cgroup
$ mount -t cgroup vserver /dev/cgroup
For util-vserver to do the right thing, this directory needed to be added too:
$ mkdir /etc/vservers/.defaults/cgroup
Sharing out the CPU between guest servers
I have a few test guests hanging around that I play with, called onetime, twotime, threetime, fourtime and fivetime. In order to set the shares for each guest I did this:
mkdir /etc/vservers/fivetime/cgroup/ /etc/vservers/fourtime/cgroup/ /etc/vservers/threetime/cgroup/ /etc/vservers/twotime/cgroup/ /etc/vservers/onetime/cgroup/
echo "512" > /etc/vservers/fivetime/cgroup/cpu.shares
echo "1024" > /etc/vservers/fourtime/cgroup/cpu.shares
echo "1024" > /etc/vservers/threetime/cgroup/cpu.shares
echo "1536" > /etc/vservers/twotime/cgroup/cpu.shares
echo "1024" > /etc/vservers/onetime/cgroup/cpu.shares
Then I started the guests. When the system was loaded (I used one instance of cpuburn on each guest, not advised but a useful test) each should have got the following percentage of CPU.
Guest Name | cpu.shares given | percentage of CPU
---|---|---
fivetime | 512 | 10%
fourtime | 1024 | 20%
threetime | 1024 | 20%
twotime | 1536 | 30%
onetime | 1024 | 20%
This didn't quite happen, as each process could migrate between the CPUs. When I fixed every guest to use only one of the available CPUs (see below for how I did this) the percentage of processing time allotted to each guest was pretty much exact! Each process was given exactly its designated percentage of time according to vtop.
Dishing out different processor sets to different guest servers
The "cpuset" for each guest is the subset of CPUs which it is permitted to use. I found out the number of CPUs available on my system by doing this:
$ cat /dev/cgroup/cpuset.cpus
This gave me the result 0-1, meaning that the overall set for my cgroups consists of CPUs 0 and 1 (for a quad core system one would expect the result 0-3, or for quad core with HT, 0-7). I stopped my guests, then specified a cpuset containing only CPU 0 for each of them:
$ echo "0" > /etc/vservers/onetime/cgroup/cpuset.cpus $ echo "0" > /etc/vservers/twotime/cgroup/cpuset.cpus $ echo "0" > /etc/vservers/threetime/cgroup/cpuset.cpus $ echo "0" > /etc/vservers/fourtime/cgroup/cpuset.cpus $ echo "0" > /etc/vservers/fivetime/cgroup/cpuset.cpus
On restarting the guests, I could see (using vtop) that these guests were only using CPU 0 (the column "Last used cpu (SMP)" needs to be on in vtop in order to see this). This setup isn't particularly useful, but it did allow me to check that the cpu.shares I specified for my guests were working as expected.
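The effective cpuset of each running guest can also be read straight out of the mounted hierarchy, which makes for a quick sanity check (again assuming /dev/cgroup is the mount point):

for g in onetime twotime threetime fourtime fivetime; do
    echo "$g: $(cat /dev/cgroup/$g/cpuset.cpus)"
done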
Doing this to servers live
The parameters in the last two sections can be changed while the guests are running. For example, to move the guest "threetime" so that it could use both CPUs I did this:
$ echo "0-1" > /dev/cgroup/threetime/cpuset.cpus
The processes running on threetime were instantly allocated cycles on both CPUs. Then:
$ echo "1" > /dev/cgroup/threetime/cpuset.cpus
Shifts them all to CPU 1. One can change where cycles are allocated with impunity. The same with CPU shares:
$ echo "4096" > /dev/cgroup/threetime/cpu.shares
Gave threetime a much bigger slice of the processors when it was under load.
NOTE: The range "0-1" is not the only way of specifying a set of CPUs; I could have used "0,1". On bigger systems with, say, 8 CPUs one could use "0-2,4,5", which would be the same as "0,1,2,4,5" or "0-2,4-5".
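So on such an 8-CPU box, pinning a guest to that set while it runs would simply be (guest name as before, just an example):

$ echo "0-2,4,5" > /dev/cgroup/threetime/cpuset.cpus
$ cat /dev/cgroup/threetime/cpuset.cpus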
Making sure all of this gets set up after a reboot
This process will make sure /dev/cgroup is present at boot and correctly mounted:
- patch util-vserver (see above)
- mkdir /etc/vservers/.defaults/cgroup
- mkdir /lib/udev/devices/cgroup (this means /dev/cgroup is created early in the boot process)
- add the following line to /etc/fstab
vserver /dev/cgroup cgroup defaults 0 0
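After the next reboot a quick check confirms the hierarchy came up by itself:

mount | grep cgroup
ls /dev/cgroup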