Is this possible?

Peter Silva peter at bsqt.homeip.net
Thu Oct 6 11:26:50 UTC 2016


On Thu, Oct 6, 2016 at 5:09 AM, Bob <ubuntu-qygzanxc at listemail.net> wrote:
> ** Reply to message from Peter Silva <peter at bsqt.homeip.net> on Wed, 5 Oct 2016
> 20:27:08 -0400
>
>> On Wed, Oct 5, 2016 at 4:24 PM, Bob <ubuntu-qygzanxc at listemail.net> wrote:
>> > ** Reply to message from Peter Silva <peter at bsqt.homeip.net> on Wed, 5 Oct 2016
>> > 07:46:09 -0400
>> >
>> >> "swap is maxed out"

> I agree that overloading a system is bad but it seems many people here advocate
> setting swap to zero or memory size.  I am used to a large swap size to allow
> for peak memory usage and that is how I set up my system.  My current swap size
> is 4 times my memory size and consider that a bit small.  I have never tracked
> max swap usage so I don't know what it has been but current swap usage is 60mb.
> I have some long running number cruncher programs but limit the number running
> to the number of cores.  I have not noticed any performance problems using
> several CLI and/or GUI programs while everything else is running.

I agree that running with no swap is a bad idea; you just waste memory.
How much swap to allocate isn't really a fixed amount, though.  It is
device and workload dependent.  The faster your swap device is relative
to your memory speed, the more swap capacity makes sense.  If your swap
lives on a disk that is 100x slower than memory, perhaps only 2x the
memory size makes sense.  If you have an SSD that is only 10x slower
than memory, then a larger capacity like 4x might make sense.
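
As a rough sketch of that heuristic in Python (the breakpoints and the
function name are made up to match the examples above, not taken from
any tool or policy):

    # Illustrative only: a faster swap device justifies provisioning
    # more swap; a very slow one does not.
    def suggested_swap_multiple(slowdown_vs_ram):
        """Rough swap size, as a multiple of RAM, for a swap device
        that is 'slowdown_vs_ram' times slower than main memory."""
        if slowdown_vs_ram >= 100:   # spinning-disk territory
            return 2
        elif slowdown_vs_ram >= 10:  # SSD territory
            return 4
        else:                        # very fast swap device
            return 4                 # diminishing returns past this

    print(suggested_swap_multiple(100))  # -> 2
    print(suggested_swap_multiple(10))   # -> 4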

Workload-wise, you don't want to thrash, where multiple processes fight
one another for memory pages.  In the mainframe/batch world, each job
had to declare how much memory it would use, and jobs that went over
were killed.  In that environment the job queue structure (which
controls how many jobs run at once) was set up to limit the total
amount of memory allocated at any one time.  Among users who really
planned their work, it was routine to plan for about 1.5 to 2x physical
memory overcommitment across jobs: any less, with the job mixes we had,
and there was (expensive) memory sitting free; any more, and there was
a risk of thrashing.
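
A minimal sketch of that admission idea, assuming a made-up scheduler
and an illustrative 1.5x overcommit factor (none of the names here come
from a real batch system):

    # Jobs declare memory up front; a job only starts if the total
    # declared memory of running jobs stays under OVERCOMMIT * RAM.
    PHYSICAL_RAM_GB = 64
    OVERCOMMIT = 1.5          # the 1.5-2x planning figure above

    running = []              # list of (job_name, declared_gb)

    def can_admit(declared_gb):
        committed = sum(gb for _, gb in running)
        return committed + declared_gb <= OVERCOMMIT * PHYSICAL_RAM_GB

    def submit(name, declared_gb):
        if can_admit(declared_gb):
            running.append((name, declared_gb))
            print(f"started {name} ({declared_gb} GB declared)")
        else:
            print(f"queued {name}: would exceed {OVERCOMMIT}x RAM")

    submit("cruncher-1", 40)
    submit("cruncher-2", 40)
    submit("cruncher-3", 40)   # waits: 120 GB declared > 96 GB limit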

This was on a system with virtual memory but no demand paging.  Demand
paging likely raises the overcommitment that makes sense.  It also
helped that the machine was always full, so users who wanted to run had
an incentive to pick the memory size that most closely matched their
job, to get the maximum number of run slots.

Job profiles matter too.  Consider a job with a lot of code that runs
only at the beginning: once that phase is over, all of that executable
code becomes useless and can be paged out.  A different example would
be a large "in-memory" database where much of the data is rarely
accessed; with a large amount of swap it will work fine in the
low-usage case.  In that case one could write explicit out-of-core
handling so the cold data is reached faster, but that trades ease of
programming against ultimate performance.  If the performance is
sufficient, it is perhaps not worth the human time to optimize.
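
As a hedged illustration of that trade-off (the data and the /tmp path
are placeholders, not a real workload): option A keeps everything in an
ordinary Python dict and lets swap absorb the cold pages, while option
B uses the standard shelve module to explicitly keep the data on disk.

    import shelve

    # Option A: plain dict; rarely-touched entries can simply be
    # paged out to swap by the kernel.
    hot_and_cold = {f"key-{i}": "x" * 1024 for i in range(100_000)}

    # Option B: explicit out-of-core storage; only touched entries
    # occupy RAM, at the cost of slower access and more code.
    with shelve.open("/tmp/cold_store") as db:
        for i in range(100_000):
            db[f"key-{i}"] = "x" * 1024
        print(db["key-42"][:8])

Option A is the "ease of programming" side of the trade; option B is
the "code it explicitly" side.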

So in practice the number can be as little as 1x, and I'd say the upper
bound is likely 4x; between those two extremes, as they say, YMMV.



