Is this possible?
Joel Rees
joel.rees at gmail.com
Thu Oct 6 09:55:32 UTC 2016
On Thu, Oct 6, 2016 at 6:09 PM, Bob <ubuntu-qygzanxc at listemail.net> wrote:
> ** Reply to message from Peter Silva <peter at bsqt.homeip.net> on Wed, 5 Oct 2016
> 20:27:08 -0400
>
>> On Wed, Oct 5, 2016 at 4:24 PM, Bob <ubuntu-qygzanxc at listemail.net> wrote:
>> > ** Reply to message from Peter Silva <peter at bsqt.homeip.net> on Wed, 5 Oct 2016
>> > 07:46:09 -0400
>> >
>> >> "swap is maxed out"
>> >>
>> uh... if that's true, it doesn't matter how much or what kind of CPU
>> you have, it will crawl and die from time to time. Your machine is
>> sitting in I/O wait.
>> >
>> > true
>> >
>> >
>> When "swap is maxed out" the kernel will kill
>> processes randomly (the OOM Killer); you cannot expect a PC with its
>> memory (including swap space) entirely full to run correctly.
>> >
> If this is what Linux does, that is a very bad design.  I would never have
> thought the system would do that.
>> >
>>
>> OK, I said randomly; I meant that loosely, in the sense that the
>> user is unlikely to understand what is being killed or why. An
>> explanation of the algorithm is here: https://linux-mm.org/OOM_Killer
>
> Thanks for the link.
>
>
>> It isn't bad design; it's completely normal, and a logical consequence
>> of an aggressively modern virtual memory system.  Detailed
>> explanation here:
>>
>> http://www.linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html
>
> I still think it is a bad design.
In mainframe land, the sysop doesn't let a bunch of poorly designed
applications run together. (And the sysop may, in fact, arbitrarily
kill low priority processes, especially those that seem to be
misbehaving.)
> I come from many many years of mainframe experience and am fairly new to Linux
> but I still think it is a bad design.
Welcome to the cutting edge, where the most popular applications are
written by programmers who have never heard of, or thought about,
designing an application to degrade gracefully.
And much of the OS, especially "utility" class tools, is written by
programmers who just finally heard the magic words yesterday. (And we
shall not mention systemd. And, no, MSWindows is not any better about
this.)
>> Example: you start up two processes from the same executable. They
>> start out the same, so they share memory; then one process needs to
>> write to a memory page, so that page cannot be shared anymore and the
>> OS needs to allocate a new one. Nobody malloc'd anything, and startup
>> was way faster because nothing was copied just because two processes
>> were using it. Copy-on-write...
>>
>> Example: when you do a malloc and nothing is written to the memory
>> yet, it may just succeed (as long as the memory would fit within
>> process and/or virtual memory limits). When the process actually
>> writes to it, ahh... then it needs to really exist, but if you don't
>> actually have the memory (and/or swap) available... (b)OOM.
>>
>> People are better off not overloading their systems, and never
>> encountering OOM, but Linux is actually as smart as possible given a
>> really poor situation.
>
> I agree that overloading a system is bad but it seems many people here advocate
> setting swap to zero or memory size. I am used to a large swap size to allow
> for peak memory usage and that is how I set up my system.  My current swap size
> is 4 times my memory size and I consider that a bit small.  I have never tracked
> max swap usage so I don't know what it has been, but current swap usage is 60 MB.
> I have some long running number cruncher programs but limit the number running
> to the number of cores. I have not noticed any performance problems using
> several CLI and/or GUI programs while everything else is running.
>
Definitely concur with that.
Twice RAM is my minimum, and I usually go with five times.
Of course, if I actually have a process set that consistently has as
much swap as RAM allocated, I know I'm in for a thrashing.
(Unless most of that belongs to a class of program that keeps a lot of
inactive, not-currently-referenced state around -- all the active
data sits in the top several GB of the heap, and the heap grows and
shrinks as the program parses its way through a data set, leaving a
lot of stuff at the bottom of the heap that won't be accessed for a
while -- or similar patterns.)
But most PC software is "architected" by engineers who wouldn't have a
clue about what I just said. PC software is more than a little like
pulp fiction or pop music. Not inherently bad, just a different style.
And you need to take a break from it periodically. :)
> --
> Robert Blair
>
>
> The inherent vice of capitalism is the unequal sharing of the blessings. The inherent blessing of socialism is the equal sharing of misery. -- Winston Churchill
>
--
Joel Rees
I'm imagining I'm a novelist:
http://joel-rees-economics.blogspot.com/2016/04/economics-101-novel-rough-draft-index.html