Precise: collapse generic and server into one flavour

Tim Gardner tim.gardner at canonical.com
Fri Oct 14 15:13:37 UTC 2011


On 10/14/2011 03:28 PM, Andy Whitcroft wrote:
> On Thu, Oct 13, 2011 at 11:13:47AM -0700, John Johansen wrote:
>
>>> config PREEMPT_NONE
>>>          bool "No Forced Preemption (Server)"
>>>          help
>>>            This is the traditional Linux preemption model, geared towards
>>>            throughput. It will still provide good latencies most of the
>>>            time, but there are no guarantees and occasional longer delays
>>>            are possible.
>>>
>>>            Select this option if you are building a kernel for a server or
>>>            scientific/computation system, or if you want to maximize the
>>>            raw processing power of the kernel, irrespective of scheduling
>>>            latencies.
>>>
>>> config PREEMPT_VOLUNTARY
>>>          bool "Voluntary Kernel Preemption (Desktop)"
>>>          help
>>>            This option reduces the latency of the kernel by adding more
>>>            "explicit preemption points" to the kernel code. These new
>>>            preemption points have been selected to reduce the maximum
>>>            latency of rescheduling, providing faster application reactions,
>>>            at the cost of slightly lower throughput.
>>>
>>>            This allows reaction to interactive events by allowing a
>>>            low priority process to voluntarily preempt itself even if it
>>>            is in kernel mode executing a system call. This allows
>>>            applications to run more 'smoothly' even when the system is
>>>            under load.
>>>
>>>            Select this if you are building a kernel for a desktop system.
>>>
>> This isn't necessarily bad for a server either.  It's been a few years
>> since I really looked at the scheduler choices, so it's worth looking into
>> again, but voluntary preempt didn't have nearly as much overhead associated
>> with it as full preempt.
>
> Perhaps we could do some comparative testing with these two; we did some
> timings before for HZ, IIRC.  John, was it you who did the HZ comparisons?
>
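One way to do that comparison would be a small wakeup-latency probe along
the lines of cyclictest, built once and run under similar load on kernels
configured with each preemption model.  A minimal sketch; the 500us period,
the iteration count, and the lack of real-time priority are arbitrary
choices, not taken from any existing test:

/*
 * Sleep until an absolute deadline, then record how late the wakeup is.
 * Build with: gcc -O2 -o wakeup wakeup.c -lrt
 */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000L
#define PERIOD_NS      500000L          /* 500us wakeup period (arbitrary) */
#define LOOPS           10000

static long ts_diff_ns(const struct timespec *a, const struct timespec *b)
{
        return (a->tv_sec - b->tv_sec) * NSEC_PER_SEC +
               (a->tv_nsec - b->tv_nsec);
}

int main(void)
{
        struct timespec next, now;
        long lat, max = 0, sum = 0;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (i = 0; i < LOOPS; i++) {
                /* advance the absolute deadline by one period */
                next.tv_nsec += PERIOD_NS;
                if (next.tv_nsec >= NSEC_PER_SEC) {
                        next.tv_nsec -= NSEC_PER_SEC;
                        next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                clock_gettime(CLOCK_MONOTONIC, &now);
                lat = ts_diff_ns(&now, &next);  /* how far past the deadline */
                sum += lat;
                if (lat > max)
                        max = lat;
        }
        printf("avg %ld ns, max %ld ns over %d wakeups\n",
               sum / LOOPS, max, LOOPS);
        return 0;
}

Comparing the avg/max numbers between a PREEMPT_NONE and a PREEMPT_VOLUNTARY
build, with something like a kernel compile running in the background, would
give us a rough feel for whether the difference still matters.
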
>>>> Please research the possibilities and let me know if I've overlooked any
>>>> reasons to _not_ do this. Note that there will be knock-on effects in
>>>> meta packages, upgrade issues from 10.04 to 12.04, and upgrade issues
>>>> from 11.10 to 12.04.
>>>>
>>>> As an alternative we could drop the virtual flavour altogether and make
>>>> the appropriate changes to support virtual in the server flavour.
>>>
>>> We do keep getting requests to add support for drivers to the -virtual
>>> flavor which are already included in the -server flavor.  So I could see
>>> us wanting to fold in the -virtual flavor to -server.  One of the issues
>>> I do see here is with regards to size and what kinds of pushback we'd
>>> see because of it.  Also, we support a -virtual i386 flavor which we'd
>>> have to fold into the -generic i386 flavor as there is no -server flavor
>>> for i386.  My question here is: are we able to support an arch-flavor-
>>> specific update/upgrade path, i.e. virtual.amd64 -> server.amd64 but
>>> virtual.i386 -> generic.i386?
>>>
>> Yeah, size is the big concern.  There are people trying to run some really
>> tiny VMs, but at the same time we have the conflicting desire of people
>> always wanting more modules.
>>
>> In some ways -virtual's requirements call out for split packaging: an
>> absolutely minimal kernel plus an extra modules package of some sort.
>
> We do now have the linux-image-extra-XXX-virtual package which holds the
> non-core modules.  So I do think a split is helpful, and that is hard to
> achieve without a separate flavour for -virtual.  As virtual is a
> separate flavour from -server, that obviously doesn't preclude commonising
> -generic and -server.
>
> -apw
>

With the changes in the scheduler we might be hard pressed to tell the
difference between preempt and voluntary. I think it's worth a try to
commonize generic and server on amd64, especially now that normal
developer workloads are isolated in their own cgroup.
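
For reference, that per-session grouping is visible from userspace; assuming
it comes from CONFIG_SCHED_AUTOGROUP, each task reports its group in
/proc/<pid>/autogroup.  A minimal sketch that just prints the current task's
group:

/* Illustrative check only: prints e.g. "/autogroup-123 nice 0". */
#include <stdio.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/self/autogroup", "r");

        if (!f) {
                /* file is absent when CONFIG_SCHED_AUTOGROUP is not enabled */
                perror("/proc/self/autogroup");
                return 1;
        }
        if (fgets(line, sizeof(line), f))
                printf("running in %s", line);
        fclose(f);
        return 0;
}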

rtg
-- 
Tim Gardner tim.gardner at canonical.com



