Can someone throw bright light on Linux Memory Allocation

Rashkae ubuntu at tigershaunt.com
Thu Aug 7 14:32:47 UTC 2008


MirJafar Aki wrote:
> Hello,
> 
> I have installed Ubuntu on a machine with the following
> configuration.
> 
> RAM :  2 GB
> Machine:  Intel Core 2 Duo
> OS:  Ubuntu 64 bit.
> Disk:  250 GB.
> 
> 
> I have attached a simple program to test memory allocation on Linux, and
> something strange is happening.
> 
> 1.    If I invoke the large_memory function, the maximum
>        size I am allowed is 245 GB.
> 
> 2.    If I use less than 245 GB (say, 244 GB), then the
>        second stage also passes.  Where is the OS
>        allocating 244 GB from?
> 
> 3.   Strangely enough, if I allocate the memory in chunks,
>       I can get to 3817 GB before I get a segmentation fault
>       (and buf == NULL is never reached).
> 
> 4.   Stranger still, the Google heap profiler and the
>       Valgrind heap profiler add to my uneasiness
>       and report that the 3817 GB allocation is successful.
> 
> What is happening? Please help.
> 
> Mir
> 
> 

It's been a while since I read up on this, so my terminology will be
way off (and some of my information may be out of date).

The short of it: Linux doesn't actually commit memory when you call
malloc.  You can malloc more memory than you have available, and the
failure only comes later, when your application tries to fill that
memory and the system runs out.  This is a deliberate design decision
(known as memory overcommit), made for performance and/or other
esoteric reasons beyond my ken.  I also had some sample code for an app
that would malloc a large block and then try to fill it; in theory the
app would end gracefully if the memory proved unavailable.  In practice,
bad things tend to happen to the whole system once you hit OOM.

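For what it's worth, the policy is tunable.  A quick way to check it
(not from the original thread; paths are the standard procfs ones):

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# 2 = strict accounting against CommitLimit
cat /proc/sys/vm/overcommit_memory

# The kernel's commit bookkeeping, used in strict mode:
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```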



More information about the ubuntu-users mailing list