building a list of KVM workloads

Ahmed Kamal kim0 at ubuntu.com
Wed Sep 14 14:23:25 UTC 2011


On 09/14/2011 04:14 PM, Ahmed Kamal wrote:
> On Wed 14 Sep 2011 03:44:10 PM EET, Serge E. Hallyn wrote:
>> Quoting jurgen.depicker at let.be (jurgen.depicker at let.be):
>>>> From: "Serge E. Hallyn"<serge.hallyn at canonical.com>
>>>> To: ubuntu-server<ubuntu-server at lists.ubuntu.com>
>>>> Cc: jurgen.depicker at let.be, Mark Mims<mark.mims at canonical.com>
>>>> Date: 13/09/2011 17:07
>>>> Subject: Re: building a list of KVM workloads
>>>>
>>>> Thanks, guys.  Unfortunately I'm having a harder time thinking through
>>>> how to properly classify these by characteristics.  Here is an
>>>> inadequate attempt:
>>>>
>>>>    * source code hosting (github, gitosis, etc)
>>>>      - characteristics?
>>>>    * checkpointable (i.e. Mark's single point backup gitosis vms)
>>>>      - qcow2 or qed based for snapshotting?
>>>>    * web hosting
>>>>      - characteristics?
>>>>    * Network performance (hard to generalize)
>>>>      - vpn
>>>>      - various application layers/tiers
>>>>      - characteristics?
>>>>    * db hosting
>>>>      - characteristics?
>>>>    * desktop virtualization
>>>>      - ideally, using spice?
>>> Yes, but I haven't tried yet since the installation is not 'standard' yet.
>>> http://www.linux-kvm.com/content/spice-ubuntu-wiki-available
>>>
>>>>      - should survive unexpected host reboots?
>>> This is something REALLY important which, as far as I know, is better
>>> managed on Red Hat too :-(.  I nearly died when I accidentally typed
>>> 'reboot' in the wrong terminal (after which I installed molly-guard
>>> everywhere) and then noticed there was no clean shutdown of the
>>> guests;
>>
>> Note that as of very recently, all your libvirt-managed VMs at least
>> should cleanly shut down before the host finishes shutting down.
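For reference, the libvirt-guests helper that ships with libvirt is one way to
get that behaviour; whether the recent change uses it is an assumption on my
part.  A minimal sketch of its configuration, with the file path and values as
examples only:

    # /etc/default/libvirt-guests  (path varies by distribution/release)
    ON_BOOT=ignore          # leave autostart to libvirtd's own autostart flag
    ON_SHUTDOWN=shutdown    # ask each running guest to shut down cleanly
    SHUTDOWN_TIMEOUT=120    # seconds to wait before giving up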
>>
>> Though I was thinking more of using caching and journaled filesystems,
>> and perhaps even the fs on the host.
>>
>>> even worse: that reboot corrupted some of the running Windows VMs...
>>> I did some research on that, but didn't find time to properly synthesize
>>> it and implement the stuff I found (basically, the init scripts used in
>>> Red Hat, as far as I remember).
>>> https://exain.wordpress.com/2009/05/22/auto-shutdown-kvm-virtual-machines-on-system-shutdown/
>>>
>>> http://www.linux-kvm.com/content/stop-script-running-vms-using-virsh
>>> https://help.ubuntu.com/community/KVM/Managing#Suspend%20and%20resume%20a%20Virtual%20Machine 
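The linked scripts boil down to something like the following hand-rolled
sketch; the timeout and the parsing of virsh list are my own assumptions, not
the packaged solution:

    #!/bin/sh
    # Sketch: cleanly shut down all running libvirt guests before host shutdown.
    TIMEOUT=120
    for vm in $(virsh list | awk '/running/ {print $2}'); do
        virsh shutdown "$vm"
    done
    # Wait until the guests are gone or the timeout expires.
    while [ "$TIMEOUT" -gt 0 ] && virsh list | grep -q running; do
        sleep 5
        TIMEOUT=$((TIMEOUT - 5))
    done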
>>>
>>>
>>>>    * windows workloads ?
>>>>      - characteristics?
>>>>
>>>> I'll probably put these up on the wiki soon so we can all edit, but in
>>>> the meantime if you have any suggestions for improving the grouping or
>>>> filling in characteristics, please speak up.
>>>
>>> I noticed that most of my load is due to CPU wait: disk I/O, I guess.
>>> Most troubles with too much 'wait' are due to Windows VMs.
>>>
>>> All my VMs use qcow2.  There is an option, when you create the disk 
>>> images
>>
>> When running kvm by hand, I almost always use raw.  The vm-tools I use
>> very frequently use qcow2.  It's worth publicizing some (new) measurements
>> of performance with qcow, qed, and raw (both a raw backing file and a raw
>> LVM partition), on which we can base recommendations for these workloads.
>>
>> Any votes for which benchmark to use?
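Whichever benchmark wins, the candidate backing stores are cheap to set up.
A sketch, with the sizes, file names, and volume group name (vg0) purely as
placeholders:

    # File-backed images in each format
    qemu-img create -f raw   bench-raw.img   10G
    qemu-img create -f qcow2 bench-qcow2.img 10G
    qemu-img create -f qed   bench-qed.img   10G
    # Raw LVM logical volume (assumes a volume group called vg0)
    lvcreate -L 10G -n bench-raw-lv vg0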
>>
>> Likewise, something like kernbench on two identical VMs, one with swap and
>> one without, would be interesting.  Heck, memory and SMP configurations,
>> smp=1/2/4/8 and -m 256/512/1024, would be interesting.  Though we'll ignore
>> what I've used before, having -m 4096 and doing all work in tmpfs :)  That
>> was nice and quick.
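A rough sketch of that sweep, assuming a prepared guest image called bench.img
with kernbench already installed in it; how the results get collected back out
is left open:

    # Sweep SMP and memory configurations; -snapshot keeps the image pristine.
    for cpus in 1 2 4 8; do
        for mem in 256 512 1024; do
            kvm -smp "$cpus" -m "$mem" -snapshot -nographic \
                -drive file=bench.img,if=virtio
            # ... boot, run kernbench in the guest, record the results ...
        done
    done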
>>
>> Finally, virtio and fake scsi might have some different effects on the
>> usual filesystems, so maybe we should compare xfs, jfs, ext4, ext3, and
>> ext2 with each.
>>
>> That's getting to be a lot of things to measure, especially without an
>> automated system to do system install/setup/test/compile-results <cough>,
>> but heck, we'll see if I end up re-writing one :)
>>
>>> manually, to 'preallocate', which is supposed to increase performance
>>> a lot: -o preallocation=metadata.
>>> From "KVM I/O slowness on RHEL 6":
>>> http://www.ilsistemista.net/index.php/virtualization/11-kvm-io-slowness-on-rhel-6.html
>>>
>>
>> Hm, this would be worth measuring and publicizing as a part of this.  I
>> always choose not to preallocate, and use cache=none.  Just how much
>> does performance change (in both directions) when I do preallocate, or
>> use cache=writeback?
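For anyone reproducing this, the two knobs in question look roughly like the
following; image names and size are arbitrary:

    # qcow2 with and without metadata preallocation
    qemu-img create -f qcow2 -o preallocation=metadata disk-prealloc.qcow2 20G
    qemu-img create -f qcow2 disk-sparse.qcow2 20G
    # cache policy is set per drive on the kvm command line
    kvm -drive file=disk-prealloc.qcow2,if=virtio,cache=none        # ... rest of options
    kvm -drive file=disk-prealloc.qcow2,if=virtio,cache=writeback   # ... rest of options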
>>
>>> So, if you are using Red Hat Enterprise Linux or Fedora Linux as the host
>>> operating system for your virtualization server and you plan to use the
>>> QCOW2 format, remember to manually create preallocated virtual disk files
>>> and to use a “none” cache policy (you can also use a “write-back” policy,
>>> but be warned that your guests will be more prone to data loss).
>>> If you can confirm this article, then I guess this should be a default
>>> option when creating disk images from the GUI VM Manager.
>>
>> If we can tie the results for certain configurations to particular 
>> workloads,
>> then we could perhaps go a bit further.
>>
>> thanks,
>> -serge
>>
>

I haven't been following this thread closely, but my understanding is
that we're after testing KVM in lots of different situations? If that is
something that can benefit from community contribution, perhaps someone
can start a matrix of the workload testing needed, and I can bang some
drums to try to get interested members to test. Hopefully the matrix will
include the kvm CLI options mentioned, or whatever else is needed to
make testing easy.

Cheers



