[Bug 578930] Re: Lucid qemu-kvm: ksmd default config is CPU hog
Nullzone
578930 at bugs.launchpad.net
Thu Aug 19 03:04:38 BST 2010
Hello Thierry,
I will try to help by sharing some information gathered from my lab server, as you requested:
- Environment:
10 x Ubuntu 10.04 x86_64 with several services (SMTP, MySQL, Oracle, DB2, TeamSpeak servers, Tomcats, bind, etc.)
1 x Windows 7 x86_64
3 x Debian Lenny x86_64 (iscsitarget, pacemaker cluster testing)
RAM assigned to the VMs: 3 VMs with 128 MB, 6 with 256 MB, 2 with 512 MB, 1 with 768 MB, 2 with 1 GB
SMP: 5 VMs configured with 2 vCPUs, the rest with 1 vCPU
- Hardware:
1 x Intel(R) Core(TM)2 Quad CPU @ 2.66GHz
8 GB RAM DDR2 800 MHz / 25 ns
---------------------------------------------
* Summary:
Useful information:
CPU % usage becomes stable, and memory has been fully examined and released, after the 3rd full scan (full_scans = 3)
CPU % usage is especially high between the 1st and 2nd scans (when ksm merges pages while the second scan is already running)
Memory starts being released once the 1st scan has completed and the 2nd is in progress (so after a few minutes)
pages_to_scan    sleep_millisecs    avg ksmd CPU    time for a full scan
100 (default)    20 (default)       ~11.67%         3m35.209s
60               40                 ~3.44%          11m50.734s
100              50                 ~4.21%          8m55.533s
60               60                 ~2.00%          17m46.094s
100              120                ~1.67%          21m26.285s
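For what it's worth, the scan times above follow almost directly from the two knobs: ksmd scans pages_to_scan pages and then sleeps for sleep_millisecs, so its scan rate is at most pages_to_scan * 1000 / sleep_millisecs pages per second (less while it is busy merging). The defaults give ~5000 pages/s and 100/120 gives ~833 pages/s, a 6x difference, which lines up with the 3m35s vs 21m26s full-scan times. A throwaway helper of mine to print the rate for the current settings (assumes 4 KiB pages):

#!/bin/bash
# Print an upper bound on ksmd's scan rate from the current settings.
KSM=/sys/kernel/mm/ksm
PAGES=$(cat $KSM/pages_to_scan)
MSEC=$(cat $KSM/sleep_millisecs)
RATE=$(( PAGES * 1000 / MSEC ))                 # pages per second, at most
echo "ksmd scans at most ${RATE} pages/s (~$(( RATE * 4 / 1024 )) MiB/s with 4 KiB pages)"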
Memory optimization was similar in all the tests (compare the meminfo / landscape-sysinfo output before and after each run):
RAM available (reminder): 8 GB
Memory saved thanks to ksm: 14-16% of the full available memory (~1.1-1.3 GB of additional free memory); a rough cross-check from the KSM counters is sketched just below.
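If I read the KSM counters right, pages_sharing counts the extra mapping sites that are backed by an already-shared page, i.e. roughly the number of pages saved, so the saving can be estimated without meminfo (a quick sketch of mine, assuming 4 KiB pages):

#!/bin/bash
# Rough KSM saving: pages_sharing ~= pages that no longer need their own copy.
SHARING=$(cat /sys/kernel/mm/ksm/pages_sharing)
echo "~$(( SHARING * 4 / 1024 )) MiB saved by ksm"

With the counters from the first run (pages_sharing = 287118) this gives ~1121 MiB, right in the 1.1-1.3 GB range above.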
Conclusions:
ksm is good, but it needs some tweaks so that it does not become more of a problem than a help.
I'd consider turning ksm off in /etc/default/qemu-kvm for desktop versions of Ubuntu (available but off by default).
I'd consider switching the default sleep_millisecs to 120, or even a bit higher, in the Ubuntu server version. More conservative... (the exact values I'd suggest are sketched below).
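For completeness, this is roughly what I would apply on a server until the package defaults change (my suggested values, nothing that is shipped today; it could equally go into /etc/rc.local or whatever knobs /etc/default/qemu-kvm ends up exposing):

#!/bin/sh
# Conservative KSM settings suggested above (my values, not packaged defaults).
echo 120 > /sys/kernel/mm/ksm/sleep_millisecs
echo 100 > /sys/kernel/mm/ksm/pages_to_scan
echo 1   > /sys/kernel/mm/ksm/run     # use 0 here instead to disable ksm (desktop case)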
-------------------------------------------
root at core:/sys/kernel/mm/ksm# echo 2 > /sys/kernel/mm/ksm/run
root at core:/sys/kernel/mm/ksm# sleep 30
root at core:/sys/kernel/mm/ksm# landscape-sysinfo
System load: 1.02 IP address for lo: xxx
Usage of /home: 66.1% of 2.68TB IP address for eth1: xxx
Memory usage: 60% IP address for eth1:0: xxx
Swap usage: 0% IP address for bond0: xxx
Processes: 199 IP address for br0: xxx
Users logged in: 1
root at core:/sys/kernel/mm/ksm# echo 1 > /sys/kernel/mm/ksm/run ; time /tmp/tmp.bash
real 3m35.209s
user 0m0.070s
sys 0m0.130s
[... /tmp/tmp.bash
#!/bin/bash
# Poll until ksmd reports that the first full scan has finished.
while true
do
    SALIDA=$(cat /sys/kernel/mm/ksm/full_scans)   # SALIDA = "output"
    if [ "$SALIDA" = "1" ]
    then
        break
    else
        sleep 5
    fi
done
.... yes, ugly, but enough for a script written in 10 seconds ...]
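If anyone wants to reuse it, a slightly more robust variant would wait for full_scans to advance past whatever value it had when the timing started, instead of assuming it comes back as 1 (just an idea, not what I actually ran):

#!/bin/bash
# Wait until ksmd finishes one more full scan than it had completed at start.
KSM=/sys/kernel/mm/ksm
START=$(cat $KSM/full_scans)
while [ "$(cat $KSM/full_scans)" -le "$START" ]
do
    sleep 5
done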
[...waiting for a couple of additional scans...]
root at core:/sys/kernel/mm/ksm# cat full_scans
3
root at core:/sys/kernel/mm/ksm# ps axuw | grep ksm | grep -v grep
root 53 10.7 0.0 0 0 ? SN 17:51 12:08 \_ [ksmd]
(Cycle/scan time is 3m35s ~= 215 s; sampling every 2 s, so I'll use "pidstat -p <pid> 2 215/2", i.e. about 108 samples)
root at core:/sys/kernel/mm/ksm# pidstat -p 53 2 108
Linux 2.6.32-24-server (core) 18/08/10 _x86_64_ (4 CPU)
[...]
Average: 53 0,00 11,67 0,00 11,67 - ksmd
root at core:/sys/kernel/mm/ksm# cat full_scans
4
root at core:/sys/kernel/mm/ksm# cat max_kernel_pages
511518
root at core:/sys/kernel/mm/ksm# cat pages_shared
42045
root at core:/sys/kernel/mm/ksm# cat pages_sharing
287118
root at core:/sys/kernel/mm/ksm# cat pages_to_scan
100
root at core:/sys/kernel/mm/ksm# cat pages_unshared
622240
root at core:/sys/kernel/mm/ksm# cat pages_volatile
97214
root at core:/sys/kernel/mm/ksm# cat run
1
root at core:/sys/kernel/mm/ksm# cat sleep_millisecs
20
root at core:/sys/kernel/mm/ksm# cat /proc/meminfo
MemTotal: 8194212 kB
MemFree: 1000880 kB
Buffers: 432396 kB
Cached: 2865740 kB
SwapCached: 0 kB
[...]
root at core:/sys/kernel/mm/ksm# landscape-sysinfo
System load: 6.44 IP address for lo: xxx
Usage of /home: 66.1% of 2.68TB IP address for eth1: xxx
Memory usage: 44% IP address for eth1:0: xxx
Swap usage: 0% IP address for bond0: xxx
Processes: 199 IP address for br0: xxx
Users logged in: 1
------------------------------------------------------------
root at core:/sys/kernel/mm/ksm# echo 2 > /sys/kernel/mm/ksm/run
root at core:/sys/kernel/mm/ksm# sleep 30
root at core:/sys/kernel/mm/ksm# echo 40 > /sys/kernel/mm/ksm/sleep_millisecs
root at core:/sys/kernel/mm/ksm# echo 60 > /sys/kernel/mm/ksm/pages_to_scan
root at core:/sys/kernel/mm/ksm# echo 1 > /sys/kernel/mm/ksm/run ; time /tmp/tmp.bash
real 11m50.734s
user 0m0.270s
sys 0m0.390s
[...waiting for a couple of additional scans...]
root at core:/sys/kernel/mm/ksm# cat full_scans
3
root at core:/sys/kernel/mm/ksm# pidstat -p 53 2 355
Linux 2.6.32-24-server (core) 19/08/10 _x86_64_ (4 CPU)
00:05:57 PID %usr %system %guest %CPU CPU Command
00:05:59 53 0,00 0,00 0,00 0,00 0 ksmd
00:06:01 53 0,00 4,00 0,00 4,00 0 ksmd
[...]
00:17:45 53 0,00 1,50 0,00 1,50 0 ksmd
00:17:47 53 0,00 0,50 0,00 0,50 0 ksmd
Average: 53 0,00 3,44 0,00 3,44 - ksmd
root at core:/sys/kernel/mm/ksm# cat full_scans
4
root at core:/sys/kernel/mm/ksm# cat /proc/meminfo
MemTotal: 8194212 kB
MemFree: 1027900 kB
Buffers: 478248 kB
Cached: 2684208 kB
SwapCached: 1020 kB
------------------------------------------------------------
root at core:/sys/kernel/mm/ksm# echo 2 > /sys/kernel/mm/ksm/run
root at core:/sys/kernel/mm/ksm# echo 50 > /sys/kernel/mm/ksm/sleep_millisecs
root at core:/sys/kernel/mm/ksm# echo 100 > /sys/kernel/mm/ksm/pages_to_scan
root at core:/sys/kernel/mm/ksm# cat /proc/meminfo
MemTotal: 8194212 kB
MemFree: 59480 kB
Buffers: 485952 kB
Cached: 2636984 kB
SwapCached: 1120 kB
[...]
root at core:/sys/kernel/mm/ksm# echo 1 > run; time /tmp/tmp.sh
real 8m55.533s
user 0m0.180s
sys 0m0.380s
[...waiting for a couple of additional scans...]
root at core:/sys/kernel/mm/ksm# cat full_scans
3
root at core:/sys/kernel/mm/ksm# cat /proc/meminfo
MemTotal: 8194212 kB
MemFree: 1026668 kB
Buffers: 482376 kB
Cached: 2642836 kB
SwapCached: 1316 kB
root at core:/sys/kernel/mm/ksm# pidstat -p 53 2 267
Linux 2.6.32-24-server (core) 19/08/10 _x86_64_ (4 CPU)
00:40:40 PID %usr %system %guest %CPU CPU Command
00:40:42 53 0,00 1,50 0,00 1,50 2 ksmd
00:40:44 53 0,00 11,00 0,00 11,00 2 ksmd
[...]
00:49:32 53 0,00 6,00 0,00 6,00 2 ksmd
00:49:34 53 0,00 6,00 0,00 6,00 2 ksmd
Average: 53 0,00 4,21 0,00 4,21 - ksmd
root at core:/sys/kernel/mm/ksm# cat full_scans
4
------------------------------------------------------------
root at core:/sys/kernel/mm/ksm# echo 2 > /sys/kernel/mm/ksm/run
root at core:/sys/kernel/mm/ksm# echo 60 > /sys/kernel/mm/ksm/sleep_millisecs
root at core:/sys/kernel/mm/ksm# echo 60 > /sys/kernel/mm/ksm/pages_to_scan
root at core:/sys/kernel/mm/ksm# cat /proc/meminfo
MemTotal: 8194212 kB
MemFree: 63684 kB
Buffers: 501664 kB
Cached: 2606948 kB
SwapCached: 1316 kB
[...]
root at core:/sys/kernel/mm/ksm# echo 1 > run; time /tmp/tmp.sh
real 17m46.094s
user 0m0.400s
sys 0m0.540s
[...waiting for a couple of additional scans...]
root at core:/sys/kernel/mm/ksm# cat full_scans
3
root at core:/sys/kernel/mm/ksm# pidstat -p 53 2 533
Linux 2.6.32-24-server (core) 19/08/10 _x86_64_ (4 CPU)
01:48:30 PID %usr %system %guest %CPU CPU Command
01:48:32 53 0,00 0,00 0,00 0,00 1 ksmd
01:48:34 53 0,00 3,00 0,00 3,00 2 ksmd
[...]
02:06:14 53 0,00 0,50 0,00 0,50 2 ksmd
02:06:16 53 0,00 3,50 0,00 3,50 2 ksmd
Average: 53 0,00 2,00 0,00 2,00 - ksmd
root at core:/sys/kernel/mm/ksm# cat full_scans
4
root at core:/sys/kernel/mm/ksm# cat /proc/meminfo
MemTotal: 8194212 kB
MemFree: 938949 kB
Buffers: 525132 kB
Cached: 2615487 kB
SwapCached: 1184 kB
------------------------------------------------------------
root at core:/sys/kernel/mm/ksm# echo 2 > /sys/kernel/mm/ksm/run
root at core:/sys/kernel/mm/ksm# echo 120 > /sys/kernel/mm/ksm/sleep_millisecs
root at core:/sys/kernel/mm/ksm# echo 100 > /sys/kernel/mm/ksm/pages_to_scan
root at core:/sys/kernel/mm/ksm# cat /proc/meminfo
MemTotal: 8194212 kB
MemFree: 58696 kB
Buffers: 464916 kB
Cached: 2637064 kB
SwapCached: 1184 kB
[...]
root at core:/sys/kernel/mm/ksm# landscape-sysinfo
System load: 2.44 IP address for lo: xxx
Usage of /home: 66.1% of 2.68TB IP address for eth1: xxx
Memory usage: 64% IP address for eth1:0: xxx
Swap usage: 0% IP address for bond0: xxx
Processes: 204 IP address for br0: xxx
Users logged in: 1
root at core:/sys/kernel/mm/ksm# echo 1 > run; time /tmp/tmp.sh
real 21m26.285s
user 0m0.520s
sys 0m0.990s
[...waiting for a couple of additional scans...]
root at core:/sys/kernel/mm/ksm# cat full_scans
2
root at core:/sys/kernel/mm/ksm# pidstat -p 53 2 643
Linux 2.6.32-24-server (core) 19/08/10 _x86_64_ (4 CPU)
03:17:21 PID %usr %system %guest %CPU CPU Command
03:17:23 53 0,00 2,00 0,00 2,00 0 ksmd
03:17:25 53 0,00 0,00 0,00 0,00 0 ksmd
[...]
03:38:45 53 0,00 1,50 0,00 1,50 0 ksmd
03:38:47 53 0,00 0,00 0,00 0,00 0 ksmd
Average: 53 0,00 1,67 0,00 1,67 - ksmd
root at core:/sys/kernel/mm/ksm# cat full_scans
3
root at core:/sys/kernel/mm/ksm# pidstat -p 53 2 643
Linux 2.6.32-24-server (core) 19/08/10 _x86_64_ (4 CPU)
03:38:47 PID %usr %system %guest %CPU CPU Command
03:38:49 53 0,00 0,00 0,00 0,00 0 ksmd
03:38:51 53 0,00 0,50 0,00 0,50 0 ksmd
[...]
04:00:13 53 0,00 0,00 0,00 0,00 0 ksmd
Average: 53 0,00 1,67 0,00 1,67 - ksmd
root at core:/sys/kernel/mm/ksm# cat full_scans
4
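In case anyone wants to repeat this on different hardware, the whole per-setting procedure above could be scripted roughly like this (only a sketch: it assumes pidstat from sysstat is installed and it looks up ksmd's PID instead of hard-coding 53 like I did):

#!/bin/bash
# Re-run the measurements above for a list of "pages_to_scan sleep_millisecs" pairs.
KSM=/sys/kernel/mm/ksm
KSMD_PID=$(pgrep -x ksmd)

for CONF in "100 20" "60 40" "100 50" "60 60" "100 120"
do
    set -- $CONF
    PAGES=$1 ; MSEC=$2

    echo 2 > $KSM/run                    # unmerge everything and start clean
    sleep 30
    echo $MSEC  > $KSM/sleep_millisecs
    echo $PAGES > $KSM/pages_to_scan

    START=$(cat $KSM/full_scans)
    echo 1 > $KSM/run
    T0=$(date +%s)
    while [ "$(cat $KSM/full_scans)" -le "$START" ]; do sleep 5; done
    T1=$(date +%s)
    echo "pages_to_scan=$PAGES sleep_millisecs=$MSEC full_scan=$(( T1 - T0 ))s"

    # Sample ksmd's CPU for roughly one more full scan, 2 s per sample;
    # the last line printed by pidstat is the average.
    pidstat -p $KSMD_PID 2 $(( (T1 - T0) / 2 )) | tail -1
done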
--
Lucid qemu-kvm: ksmd default config is CPU hog
https://bugs.launchpad.net/bugs/578930