Bytes | Software Development & Data Engineering Community

Memory size?

Hi folks,

how can I determine the total main-memory size and the size of free
memory available in bytes?

I tried to use mallinfo() from malloc.h - resulting in some strange values
on Windows (Cygwin, gcc 3.3.1) and always 0 on Linux (2.4.22, gcc
3.3.2).

Thanks in advance,
Jörg

Nov 14 '05 #1
Joerg Schwerdtfeger writes:
how can I determine the total main-memory size and the size of free
memory available in bytes?

I tried to use mallinfo() from malloc.h - resulting some strange values
in Windows (cygwin, gcc 3.3.1) and always 0 in Linux (2.4.22, gcc
3.3.2).


Use the API for your OS. Two OSes then two different solutions.

Be sure to find out what free memory *means* too, on a paging machine.
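Neither of those two OS-specific solutions is actually shown in the thread. A
minimal sketch of what they might look like, assuming GlobalMemoryStatusEx()
on the Windows side and glibc's sysconf() extensions on the POSIX side (none
of this is standard C, and _SC_PHYS_PAGES/_SC_AVPHYS_PAGES are glibc
extensions rather than POSIX requirements):

```c
/* Hedged sketch, not standard C: query total/available physical RAM
   through the platform API, as the post above suggests. */
#ifdef _WIN32
#include <windows.h>

unsigned long long total_ram_bytes(void) {
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof ms;
    GlobalMemoryStatusEx(&ms);
    return ms.ullTotalPhys;
}

unsigned long long avail_ram_bytes(void) {
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof ms;
    GlobalMemoryStatusEx(&ms);
    return ms.ullAvailPhys;
}
#else
#include <unistd.h>

unsigned long long total_ram_bytes(void) {
    long pages = sysconf(_SC_PHYS_PAGES);    /* glibc extension */
    long psize = sysconf(_SC_PAGESIZE);
    if (pages < 0 || psize < 0)
        return 0;                            /* not supported here */
    return (unsigned long long)pages * (unsigned long long)psize;
}

unsigned long long avail_ram_bytes(void) {
    long pages = sysconf(_SC_AVPHYS_PAGES);  /* glibc extension */
    long psize = sysconf(_SC_PAGESIZE);
    if (pages < 0 || psize < 0)
        return 0;
    return (unsigned long long)pages * (unsigned long long)psize;
}
#endif
```

As the post says, the "available" figure is already stale by the time the
call returns, so treat it as a hint, not a promise.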
Nov 14 '05 #2
In <c9*************@news.t-online.com> "Joerg Schwerdtfeger" <sc*******@gmx.de> writes:
how can I determine the total main-memory size and the size of free
memory available in bytes?
What do you need this information for? Do you realise that the
information about the free memory is extremely volatile and, by the time
you get a chance to use it, it may already be incorrect?
I tried to use mallinfo() from malloc.h - resulting some strange values
in Windows (cygwin, gcc 3.3.1) and always 0 in Linux (2.4.22, gcc
3.3.2).


No such function in the standard C library, so whatever results you got
were the correct ones (your program invoked undefined behaviour by
calling a function that was neither defined by it nor part of the standard
C library).

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #3
On Thu, 27 May 2004 15:34:17 +0200, in comp.lang.c , "Joerg Schwerdtfeger"
<sc*******@gmx.de> wrote:
Hi folks,

how can I determine the total main-memory size and the size of free
memory available in bytes?


There's no standard way to do that. You may not be permitted to by your OS.
It might not even be a meaningful question, in a multiuser virtual memory
system.

Ask in a group dedicated to your compiler and/or OS.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>
Nov 14 '05 #4
Dan Pop wrote:
how can I determine the total main-memory size and the size of free
memory available in bytes?

What do you need this information for? Do you realise that the
information about the free memory is extremely volatile and, by the
time you get a chance to use it, it may already be incorrect?


I need a data structure with exactly 2^i elements, where i is an element
of IN. With two conditions: 1. the structure must fit in the heap, 2.
the number of elements should be the maximum possible amount. I. e. if
1024 MB main memory is present and about 300 MB is in use, and
sizeof(element)=4 byte, the program should allocate memory for 2^27
Elements (512 MB).

Joerg

Nov 14 '05 #5
"Joerg Schwerdtfeger" <sc*******@gmx.de> wrote in message
news:c9*************@news.t-online.com...
Dan Pop wrote:
how can I determine the total main-memory size and the size of free
memory available in bytes?

What do you need this information for? Do you realise that the
information about the free memory is extremely volatile and, by the
time you get a chance to use it, it may already be incorrect?


I need a data structure with exactly 2^i elements,


So define or allocate one. If your implementation cannot
handle the definition, it will (should) emit a diagnostic.

If your implementation's memory allocation function (e.g. 'malloc()')
cannot succeed, it will return NULL.
where i is an element
of IN.
What is 'IN'?

With two conditions: 1. the structure must fit in the heap,
C does not define 'heap'.
2.
the number of elements should be the maximum possible amount.
The C language cannot determine this in advance. All you can
do is try various sizes until it fails (and note that on a
multitasking system, this 'maximum possible amount' can easily
vary widely each time you try). What you're asking about is
in the domain of your operating system, not the C language.
I. e. if
1024 MB main memory is present and about 300 MB is in use,
Those are platform and OS issues, not C issues.
and
sizeof(element)=4 byte, the program should allocate memory for 2^27
Elements (512 MB).


This *calculation* can be easily done with C. *BUT*: the values
of 'main memory size' and 'memory in use' cannot be determined
with standard C. You'll need to use operating system specific
features. Check your documentation, and/or ask about this
in a forum about your platform.
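The calculation Mike refers to is indeed simple; a sketch follows. The
budget_bytes figure must come from OS-specific code, which standard C cannot
supply, and the helper name is mine, not from the thread:

```c
#include <stddef.h>

/* Given a byte budget and an element size, return the largest i such
   that 2^i elements fit in the budget, i.e. i = floor(log2(budget/size)).
   Returns 0 both for "one element fits" and "nothing fits"; a caller
   that cares must check budget_bytes >= elem_size separately. */
unsigned max_power_of_two_elems(unsigned long long budget_bytes,
                                size_t elem_size)
{
    unsigned long long n;
    unsigned i = 0;

    if (elem_size == 0)
        return 0;
    n = budget_bytes / elem_size;   /* elements that fit at all */
    while ((n >>= 1) != 0)          /* i = floor(log2(n)) */
        i++;
    return i;
}
```

With the OP's numbers (roughly 724 MB free, 4-byte elements) this yields
i = 27, i.e. 2^27 elements occupying 512 MB, matching his example.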

-Mike
Nov 14 '05 #6


Joerg Schwerdtfeger wrote:

Dan Pop wrote:
how can I determine the total main-memory size and the size of free
memory available in bytes?

What do you need this information for? Do you realise that the
information about the free memory is extremely volatile and, by the
time you get a chance to use it, it may already be incorrect?


I need a data structure with exactly 2^i elements, where i is an element
of IN. With two conditions: 1. the structure must fit in the heap, 2.
the number of elements should be the maximum possible amount. I. e. if
1024 MB main memory is present and about 300 MB is in use, and
sizeof(element)=4 byte, the program should allocate memory for 2^27
Elements (512 MB).

Joerg


So why not just malloc() that amount? If the malloc returns NULL, you do
not have enough memory to solve the task.
--
Fred L. Kleinschmidt
Boeing Associate Technical Fellow
Technical Architect, Common User Interface Services
M/S 2R-94 (206)544-5225
Nov 14 '05 #7
Joerg Schwerdtfeger wrote:

how can I determine the total main-memory size and the size of
free memory available in bytes?

I tried to use mallinfo() from malloc.h - resulting some strange
values in Windows (cygwin, gcc 3.3.1) and always 0 in Linux
(2.4.22, gcc 3.3.2).


I can't answer for Linux or Cygwin, but for DJGPP you can use
nmalloc.zip and the malldbg module. mallinfo.h specifies the
interface, and an info source file nmalloc.txh documents it all.
Available at:

<http://cbfalconer.home.att.net/download/nmalloc.zip>

For all I know the source might function under Cygwin or Linux,
compiled with gcc, but that has not been verified. It sticks
quite closely to standard C, but has some deviations. To use it
you will have to link the malloc.o and malldbg modules before the
standard library.

Bear in mind that it will not show main-memory size etc. in these
days of virtual memory. It will show you how much you have used,
and how much you have freed.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #8
In <c9*************@news.t-online.com> "Joerg Schwerdtfeger" <sc*******@gmx.de> writes:
Dan Pop wrote:
how can I determine the total main-memory size and the size of free
memory available in bytes?

What do you need this information for? Do you realise that the
information about the free memory is extremely volatile and, by the
time you get a chance to use it, it may already be incorrect?


I need a data structure with exactly 2^i elements, where i is an element
of IN. With two conditions: 1. the structure must fit in the heap, 2.
the number of elements should be the maximum possible amount. I. e. if
1024 MB main memory is present and about 300 MB is in use, and
sizeof(element)=4 byte, the program should allocate memory for 2^27
Elements (512 MB).


Are you sure you understand what virtual memory is and how it works?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #9
In <40***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
Joerg Schwerdtfeger wrote:

how can I determine the total main-memory size and the size of
free memory available in bytes?
I tried to use mallinfo() from malloc.h - resulting some strange
values in Windows (cygwin, gcc 3.3.1) and always 0 in Linux
(2.4.22, gcc 3.3.2).


I can't answer for Linux or Cygwin, but for DJGPP you can use
nmalloc.zip and the malldbg module. mallinfo.h specifies the
interface, and an info source file nmalloc.txh documents it all.
Available at:

<http://cbfalconer.home.att.net/download/nmalloc.zip>

For all I know the source might function under Cygwin or Linux,
compiled with gcc, but that has not been verified. It sticks
quite closely to standard C, but has some deviations. To use it
you will have to link the malloc.o and malldbg modules before the
standard library.

Bear in mind that it will not show main-memory size etc. in these
days of virtual memory. It will show you how much you have used,
and how much you have freed.


I.e. something completely different from what the OP wants to know...

But who cares, as long as it gives you the opportunity to post yet another
piece of self advertising ;-)

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #10
"Fred L. Kleinschmidt" wrote:
Joerg Schwerdtfeger wrote:
.... snip ...
I need a data structure with exactly 2^i elements, where i is an
element of IN. With two conditions: 1. the structure must fit in
the heap, 2. the number of elements should be the maximum
possible amount. I. e. if 1024 MB main memory is present and
about 300 MB is in use, and sizeof(element)=4 byte, the program
should allocate memory for 2^27 Elements (512 MB).


So why not just malloc() that amount? if the malloc returns NULL,
you do not have enough memory to solve the task.


Won't work. He can't do what he wants in portable standard C.
The reason being that many systems will happily return a valid
pointer when usage exceeds main memory, and then thrash data in and
out of disk storage when accessed. You detect all this as the
program slows to a crawl and the disk access light gets brighter.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #11

"CBFalconer" <cb********@yahoo.com> wrote in message
news:40***************@yahoo.com...
"Fred L. Kleinschmidt" wrote:
Joerg Schwerdtfeger wrote:
... snip ...
I need a data structure with exactly 2^i elements, where i is an
element of IN. With two conditions: 1. the structure must fit in
the heap, 2. the number of elements should be the maximum
possible amount. I. e. if 1024 MB main memory is present and
about 300 MB is in use, and sizeof(element)=4 byte, the program
should allocate memory for 2^27 Elements (512 MB).
So why not just malloc() that amount? if the malloc returns NULL,
you do not have enough memory to solve the task.


Won't work. He can't do what he wants in portable standard C.
The reason being that many systems will happily return a valid
pointer when usage exceeds main memory, and then thrash data in an
out of disk storage when accessed.


So it depends upon what you mean by 'memory'. Does 'virtual'
(e.g. disk-paged) memory qualify? 'malloc()' doesn't depend
upon how/where the storage comes from, it simply either succeeds
or fails.
You detect all this as the
program slows to a crawl and the disk access light gets brighter.


Yes, there'd likely be a performance issue with 'virtual' memory schemes.

-Mike
Nov 14 '05 #12
"Joerg Schwerdtfeger" <sc*******@gmx.de> wrote:
I need a data structure with exactly 2^i elements, where i is an element
of IN. With two conditions: 1. the structure must fit in the heap, 2.
the number of elements should be the maximum possible amount. I. e. if
1024 MB main memory is present and about 300 MB is in use, and
sizeof(element)=4 byte, the program should allocate memory for 2^27
Elements (512 MB).


So... even disregarding the existence of virtual memory, what should
happen if two of these programs are executed at _exactly_ the same time?

Richard
Nov 14 '05 #13
On Thu, 27 May 2004, Joerg Schwerdtfeger wrote:

JS>Dan Pop wrote:
JS>
JS>>>how can I determine the total main-memory size and the size of free
JS>>>memory available in bytes?
JS>> What do you need this information for? Do you realise that the
JS>> information about the free memory is extremely volatile and, by the
JS>time
JS>> you get a chance to use it, it may already be incorrect?
JS>
JS>I need a data structure with exactly 2^i elements, where i is an element
JS>of IN. With two conditions: 1. the structure must fit in the heap, 2.
JS>the number of elements should be the maximum possible amount. I. e. if
JS>1024 MB main memory is present and about 300 MB is in use, and
JS>sizeof(element)=4 byte, the program should allocate memory for 2^27
JS>Elements (512 MB).

Find out what the pointer size of your machine is (16, 32, 64), say w.
Then do a loop like:

    w = w - 1 - log2_of_item_size;
    ptr = NULL;
    while (w > 0) {
        ptr = malloc((size_t)1 << w);   /* shift left, not right */
        if (ptr != NULL)
            break;
        w--;
    }

This should give you what you want for some definition of 'available'. On
a POSIX system you should be able to use all the memory you got. But beware
that overcommitting systems like BSD and Linux may signal your program
when they run out of paging space (or just freeze, as I observed on a
Linux system).
Nov 14 '05 #14
In article <40***************@yahoo.com>,
CBFalconer <cb********@worldnet.att.net> wrote:
Won't work. He can't do what he wants in portable standard C.
The reason being that many systems will happily return a valid
pointer when usage exceeds main memory, and then thrash data in an
out of disk storage when accessed.


It's worse than that. Many operating systems overcommit memory,
returning a valid pointer from malloc() and then killing the program
when it tries to access too much of it.
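On such a system the only way to find out whether the memory is really there
is to write to it, which is exactly the moment an overcommitting kernel may
kill the process. A hedged sketch of the idiom (the helper is hypothetical,
not something the thread defines):

```c
#include <stdlib.h>
#include <string.h>

/* On an overcommitting system, a successful malloc() may only reserve
   address space; writing to every page is what forces the kernel to
   commit real storage -- and is also the point where the process can
   be killed instead of seeing a clean failure. */
int allocate_and_commit(size_t nbytes, void **out)
{
    void *p = malloc(nbytes);
    if (p == NULL)
        return 0;         /* refused up front: the well-behaved case */
    memset(p, 0, nbytes); /* touch every page now rather than later */
    *out = p;
    return 1;
}
```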

-- Richard
Nov 14 '05 #15
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
In article <40***************@yahoo.com>,
CBFalconer <cb********@worldnet.att.net> wrote:
Won't work. He can't do what he wants in portable standard C.
The reason being that many systems will happily return a valid
pointer when usage exceeds main memory, and then thrash data in an
out of disk storage when accessed.


It's worse than that. Many operating systems overcommit memory,
returning a valid pointer from malloc() and then killing the program
when it tries to access too much of it.


Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory), is there _any_ good reason for this practice? I'd have
thought simply telling your user that he can't have that much memory is
preferable to pretending that he can, and then unceremoniously dumping
him in it when he tries to use it. I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
successfully?

Richard
Nov 14 '05 #16
Richard Bos writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
In article <40***************@yahoo.com>,
CBFalconer <cb********@worldnet.att.net> wrote:
Won't work. He can't do what he wants in portable standard C.
The reason being that many systems will happily return a valid
pointer when usage exceeds main memory, and then thrash data in an
out of disk storage when accessed.


It's worse than that. Many operating systems overcommit memory,
returning a valid pointer from malloc() and then killing the program
when it tries to access too much of it.


Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory), is there _any_ good reason for this practice? I'd have
thought simply telling your user that he can't have that much memory is
preferable to pretending that he can, and then unceremoniously dumping
him in it when he tries to use it. I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
succesfully?


Until I see the name of the offending OS, I will take this to be an urban
legend.
Nov 14 '05 #17
On Tue, 1 Jun 2004, osmium wrote:

o>Richard Bos writes:
o>
o>> ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
o>>
o>> > In article <40***************@yahoo.com>,
o>> > CBFalconer <cb********@worldnet.att.net> wrote:
o>> >
o>> > >Won't work. He can't do what he wants in portable standard C.
o>> > >The reason being that many systems will happily return a valid
o>> > >pointer when usage exceeds main memory, and then thrash data in an
o>> > >out of disk storage when accessed.
o>> >
o>> > It's worse than that. Many operating systems overcommit memory,
o>> > returning a valid pointer from malloc() and then killing the program
o>> > when it tries to access too much of it.
o>>
o>> Quite apart from this rendering any C implementation on that platform
o>> unconforming (after all, if malloc() succeeds, the Standard says you own
o>> that memory), is there _any_ good reason for this practice? I'd have
o>> thought simply telling your user that he can't have that much memory is
o>> preferable to pretending that he can, and then unceremoniously dumping
o>> him in it when he tries to use it. I mean, under OSes like that, what's
o>> the use of all our precautions of checking that malloc() returned
o>> succesfully?
o>
o>Until I see the name of the offending OS, I will take this to be an urban
o>legend.

The original BSD VM did overcommitting and so do most BSD derivatives. Try
googling for 'overcommit site:freebsd.org' for discussions.

harti
Nov 14 '05 #18
Richard Bos wrote:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
CBFalconer <cb********@worldnet.att.net> wrote:
Won't work. He can't do what he wants in portable standard C.
The reason being that many systems will happily return a valid
pointer when usage exceeds main memory, and then thrash data in
and out of disk storage when accessed.


It's worse than that. Many operating systems overcommit memory,
returning a valid pointer from malloc() and then killing the
program when it tries to access too much of it.


Quite apart from this rendering any C implementation on that
platform unconforming (after all, if malloc() succeeds, the
Standard says you own that memory), is there _any_ good reason for
this practice? I'd have thought simply telling your user that he
can't have that much memory is preferable to pretending that he
can, and then unceremoniously dumping him in it when he tries to
use it. I mean, under OSes like that, what's the use of all our
precautions of checking that malloc() returned succesfully?


It is a matter of practicality. It allows the use of 'copy on
write' algorithms, for example, and the large economies they
provide. Another example is the program that wants a sparse
array, and implements it by mallocing a monster array. Only the
portions that are actually used are really assigned memory.

Under most normal circumstances the program will never encounter
any difficulties. However such events as filling of disc space
can prevent actually assigning virtual memory. Simply pausing the
program can lead to indefinite postponement and other nasties,
with nobody aware of any problem.
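The sparse-array idiom described above can be sketched as follows (sizes are
illustrative; on a lazily allocating system only the touched pages consume
real storage):

```c
#include <stdlib.h>

#define SPARSE_SLOTS (16u * 1024u * 1024u) /* 64 MB of ints, mostly unused */

/* Reserve a large zeroed region and touch only two slots; with lazy
   allocation the untouched pages cost little more than address space. */
int sparse_demo(void)
{
    int *a = calloc(SPARSE_SLOTS, sizeof *a);
    int sum;

    if (a == NULL)
        return -1;
    a[0] = 1;
    a[SPARSE_SLOTS - 1] = 2;   /* only these pages get committed */
    sum = a[0] + a[SPARSE_SLOTS - 1];
    free(a);
    return sum;
}
```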

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Nov 14 '05 #19

Harti Brandt wrote:
| On Tue, 1 Jun 2004, osmium wrote:
|
| o>Richard Bos writes:
| o>
| o>> ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
| o>>
| o>> > In article <40***************@yahoo.com>,
| o>> > CBFalconer <cb********@worldnet.att.net> wrote:
| o>> >
| o>> > >Won't work. He can't do what he wants in portable standard C.
| o>> > >The reason being that many systems will happily return a valid
| o>> > >pointer when usage exceeds main memory, and then thrash data in an
| o>> > >out of disk storage when accessed.
| o>> >
| o>> > It's worse than that. Many operating systems overcommit memory,
| o>> > returning a valid pointer from malloc() and then killing the program
| o>> > when it tries to access too much of it.
| o>>
| o>> Quite apart from this rendering any C implementation on that platform
| o>> unconforming (after all, if malloc() succeeds, the Standard says you own
| o>> that memory), is there _any_ good reason for this practice? I'd have
| o>> thought simply telling your user that he can't have that much memory is
| o>> preferable to pretending that he can, and then unceremoniously dumping
| o>> him in it when he tries to use it. I mean, under OSes like that, what's
| o>> the use of all our precautions of checking that malloc() returned
| o>> succesfully?
| o>
| o>Until I see the name of the offending OS, I will take this to be an urban
| o>legend.
|
| The orginal BSD VM did overcommitting and so do most BSD derivates. Try
| to google for 'overcommit site:freebsd.org' for discussions.
|
| harti

Linux as well by default, so this is no urban legend.

Ross
Nov 14 '05 #20
osmium <r1********@comcast.net> wrote:
Richard Bos writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
> In article <40***************@yahoo.com>,
> CBFalconer <cb********@worldnet.att.net> wrote:
>
> >Won't work. He can't do what he wants in portable standard C.
> >The reason being that many systems will happily return a valid
> >pointer when usage exceeds main memory, and then thrash data in an
> >out of disk storage when accessed.
>
> It's worse than that. Many operating systems overcommit memory,
> returning a valid pointer from malloc() and then killing the program
> when it tries to access too much of it.


Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory), is there _any_ good reason for this practice? I'd have
thought simply telling your user that he can't have that much memory is
preferable to pretending that he can, and then unceremoniously dumping
him in it when he tries to use it. I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
succesfully?

Until I see the name of the offending OS, I will take this to be an urban
legend.


Linux does exactly this. It makes for fun debugging.

--
Alex Monjushko (mo*******@hotmail.com)
Nov 14 '05 #21
In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
osmium <r1********@comcast.net> wrote:
Richard Bos writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
> It's worse than that. Many operating systems overcommit memory,
> returning a valid pointer from malloc() and then killing the program
> when it tries to access too much of it.

Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory), is there _any_ good reason for this practice? I'd have
thought simply telling your user that he can't have that much memory is
preferable to pretending that he can, and then unceremoniously dumping
him in it when he tries to use it. I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
succesfully?

Until I see the name of the offending OS, I will take this to be an urban
legend.


Linux does exactly this. It makes for fun debugging.


echo 0 > /proc/sys/vm/overcommit_memory

Linux represents a perpetual battleground for standards-compliant-pedants
vs. get-the-job-done-pragmatists. Linus's biggest successes are when
he manages to satisfy both simultaneously; cases like this are second
place, when each can configure the system to their liking at a whim.
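A program that cares could check the policy itself instead of relying on the
admin; a hedged sketch that simply reads the proc file Larry mentions (the
file exists only on Linux, so the function reports -1 elsewhere):

```c
#include <stdio.h>

/* Return the current value of /proc/sys/vm/overcommit_memory
   (0, 1 or 2 on recent Linux kernels), or -1 if it cannot be read
   (non-Linux system, or /proc not mounted). */
int overcommit_policy(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int v = -1;

    if (f == NULL)
        return -1;
    if (fscanf(f, "%d", &v) != 1)
        v = -1;
    fclose(f);
    return v;
}
```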

- Larry
Nov 14 '05 #22
Larry Doolittle <ld******@recycle.lbl.gov> wrote:
In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
osmium <r1********@comcast.net> wrote:
Richard Bos writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
> It's worse than that. Many operating systems overcommit memory,
> returning a valid pointer from malloc() and then killing the program
> when it tries to access too much of it.

Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory), is there _any_ good reason for this practice? I'd have
thought simply telling your user that he can't have that much memory is
preferable to pretending that he can, and then unceremoniously dumping
him in it when he tries to use it. I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
succesfully?
Until I see the name of the offending OS, I will take this to be an urban
legend.


Linux does exactly this. It makes for fun debugging.

echo 0 > /proc/sys/vm/overcommit_memory

Linux represents a perpetual battleground for standards-compliant-pedants
vs. get-the-job-done-pragmatists. Linus's biggest successes are when
he manages to satisfy both simultaneously; cases like this are second
place, when each can configure the system to their liking at a whim.


This is pointless from the vendor perspective. If I ship a program,
I want to have control over the semantics of memory allocation. The
end-user is not likely to be sufficiently informed to make the decision
for me. I fail to see the pragmatism.

--
Alex Monjushko (mo*******@hotmail.com)
Nov 14 '05 #23
Alex Monjushko<mo*******@hotmail.com> writes:
Larry Doolittle <ld******@recycle.lbl.gov> wrote:
In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
echo 0 > /proc/sys/vm/overcommit_memory

Linux represents a perpetual battleground for standards-compliant-pedants
vs. get-the-job-done-pragmatists. Linus's biggest successes are when
he manages to satisfy both simultaneously; cases like this are second
place, when each can configure the system to their liking at a whim.


This is pointless from the vendor perspective. If I ship a program,
I want to have control over the semantics of memory allocation. The
end-user is not likely to be sufficiently informed to make the decision
for me. I fail to see the pragmatism.


What happens when two different vendor perspectives clash, each having
choosen to go their own, distinct way and stretching the poor user to
fit their own conception of "pragmatism"?

- Giorgos

Nov 14 '05 #24
Giorgos Keramidas <ke******@ceid.upatras.gr> wrote:
Alex Monjushko<mo*******@hotmail.com> writes:
Larry Doolittle <ld******@recycle.lbl.gov> wrote:
In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
echo 0 > /proc/sys/vm/overcommit_memory

Linux represents a perpetual battleground for standards-compliant-pedants
vs. get-the-job-done-pragmatists. Linus's biggest successes are when
he manages to satisfy both simultaneously; cases like this are second
place, when each can configure the system to their liking at a whim.


This is pointless from the vendor perspective. If I ship a program,
I want to have control over the semantics of memory allocation. The
end-user is not likely to be sufficiently informed to make the decision
for me. I fail to see the pragmatism.


What happens when two different vendor perspectives clash, each having
chosen to go their own, distinct way and stretching the poor user to
fit their own conception of "pragmatism"?


That's just the point, isn't it? One program should not be able to force
the allocation strategy for other programs, nor require the user to do
so. If I know my program will need what it allocates, I should be able
to depend on malloc() behaving Standard-conformingly; this need not and
should not influence the way other programs get their memory.

Richard
Nov 14 '05 #25
In <2i************@uni-berlin.de> "osmium" <r1********@comcast.net> writes:
Richard Bos writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
> In article <40***************@yahoo.com>,
> CBFalconer <cb********@worldnet.att.net> wrote:
>
> >Won't work. He can't do what he wants in portable standard C.
> >The reason being that many systems will happily return a valid
> >pointer when usage exceeds main memory, and then thrash data in an
> >out of disk storage when accessed.
>
> It's worse than that. Many operating systems overcommit memory,
> returning a valid pointer from malloc() and then killing the program
> when it tries to access too much of it.


Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory), is there _any_ good reason for this practice? I'd have
thought simply telling your user that he can't have that much memory is
preferable to pretending that he can, and then unceremoniously dumping
him in it when he tries to use it. I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
succesfully?


Until I see the name of the offending OS, I will take this to be an urban
legend.


An incomplete list includes AIX, Digital Unix in lazy swap allocation mode
and Linux. Unlike the other systems, Linux checks that the allocated
memory is available at the time when the allocation request is made (which
of course, doesn't guarantee that it will still be available when the
program actually needs to use it, but, at least, requests exceeding the
system's capabilities are immediately rejected).

There are perfectly good reasons for this strategy and, on platforms where
they have a choice, users prefer the unsafe mode. Here are some examples:

1. Most applications overallocate memory. Back when swap space was a
limited resource (the whole disk had less than 1 GB), it was quite
easy to run out of virtual memory after starting only a few
applications, although most of it was *unused* (but allocated).
Switching to lazy swap allocation mode made an impressive difference.

2. Sparse arrays can be handled as ordinary arrays. The unused parts of
the array don't consume any resource except virtual memory address
space.

3. Large buffers come for free: the unused parts don't waste any
resources.

From a pragmatic point of view, a system running out of (virtual) memory
becomes unusable, anyway. Lazy swap space allocation delays this moment,
sometimes by a significant factor. Which is why users prefer it.

Of course, on a high reliability server, lazy swap space allocation may
not be an acceptable option.

OTOH, the sizes of the current disks make the issue far less important
than it was a decade ago.

As for the conformance of the C implementations on such systems, I can
find no requirement that the one and only program that needs to be
correctly translated and executed *must* contain malloc and friends calls
;-)

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #26
In article <40****************@news.individual.net>, Richard Bos wrote:
Giorgos Keramidas <ke******@ceid.upatras.gr> wrote:
Alex Monjushko<mo*******@hotmail.com> writes:
>Larry Doolittle <ld******@recycle.lbl.gov> wrote:
>> In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
>> echo 0 > /proc/sys/vm/overcommit_memory
>>
>> Linux represents a perpetual battleground for standards-compliant-pedants
>> vs. get-the-job-done-pragmatists. Linus's biggest successes are when
>> he manages to satisfy both simultaneously; cases like this are second
>> place, when each can configure the system to their liking at a whim.
>
> This is pointless from the vendor perspective. If I ship a program,
> I want to have control over the semantics of memory allocation. The
> end-user is not likely to be sufficiently informed to make the decision
> for me. I fail to see the pragmatism.

#!/bin/sh
if [ `uname -s` = "Linux" -a `cat /proc/sys/vm/overcommit_memory` != 0 ]; then
echo "This program won't run on a Linux system with"
echo "an over-commit memory policy."
echo "As root, run the shell command:"
echo " echo 0 > /proc/sys/vm/overcommit_memory"
echo "and then try running this program again"
exit 1
fi
echo "run your program here"
What happens when two different vendor perspectives clash, each having
chosen to go their own, distinct way and stretching the poor user to
fit their own conception of "pragmatism"?


That's just the point, isn't it? One program should not be able to force
the allocation strategy for other programs, nor require the user to do
so. If I know my program will need what it allocates, I should be able
to depend on malloc() behaving Standard-conformingly; this need not and
should not influence the way other programs get their memory.


The distinction is normally only important when programs are real
memory pigs. At that level, it is probably useful to suggest that
the user buy two machines, and segregate applications. If the
problem is theoretical rather than actual,
echo 0 > /proc/sys/vm/overcommit_memory ,
give yourself a huge swap space, and be done with it. There is no way
a program's proper functionality can depend on memory overcommitment.

The lkml has thrashed through this territory innumerable times. The
only relevant observation here is that Linux _has_ a standard-conforming
mode, which can be set by the admin, and tested for by a mortal user or
program.

- Larry
Nov 14 '05 #27
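The test Larry's wrapper script performs can also be done from inside a C program. A Linux-specific sketch (the /proc path does not exist elsewhere, so the function reports failure rather than guessing):

```c
#include <stdio.h>

/* Return the current Linux overcommit policy (0, 1 or 2), or -1 if it
 * cannot be determined (non-Linux system, /proc not mounted, ...).
 * The /proc path is Linux-specific; this is not portable C. */
int overcommit_policy(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int mode = -1;

    if (f == NULL)
        return -1;
    if (fscanf(f, "%d", &mode) != 1)
        mode = -1;
    fclose(f);
    return mode;
}
```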
Giorgos Keramidas <ke******@ceid.upatras.gr> wrote:
Alex Monjushko<mo*******@hotmail.com> writes:
Larry Doolittle <ld******@recycle.lbl.gov> wrote:
In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
echo 0 > /proc/sys/vm/overcommit_memory

Linux represents a perpetual battleground for standards-compliant-pedants
vs. get-the-job-done-pragmatists. Linus's biggest successes are when
he manages to satisfy both simultaneously; cases like this are second
place, when each can configure the system to their liking at a whim.
This is pointless from the vendor perspective. If I ship a program,
I want to have control over the semantics of memory allocation. The
end-user is not likely to be sufficiently informed to make the decision
for me. I fail to see the pragmatism.

What happens when two different vendor perspectives clash, each having
chosen to go their own, distinct way and stretching the poor user to
fit their own conception of "pragmatism"?


You misunderstood. I want to have control over the semantics of
memory allocation in /my/ program. I don't want my program to
force these semantics for other programs or vice-versa.

For what it's worth, in some cases, I have found it useful to
explicitly use all allocated memory right off the bat, to make
sure that I would be able to safely use it later.

--
Alex Monjushko (mo*******@hotmail.com)
Nov 14 '05 #28
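"Using all allocated memory right off the bat" can be packaged into a small wrapper: touching every byte forces a lazily allocating system to commit the pages at allocation time, so a failure happens here rather than at some arbitrary later point. A sketch, not a cure-all (on some systems the toucher itself may be killed rather than told):

```c
#include <stdlib.h>
#include <string.h>

/* Allocate and immediately touch every byte, so that a lazily
 * allocating system has to commit the pages right away.  memset is
 * the simplest portable way to fault in every page. */
void *malloc_touched(size_t size)
{
    void *p = malloc(size);
    if (p != NULL)
        memset(p, 0, size);    /* fault in (and zero) every page */
    return p;
}
```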
Larry Doolittle <ld******@recycle.lbl.gov> wrote:
In article <40****************@news.individual.net>, Richard Bos wrote:
Giorgos Keramidas <ke******@ceid.upatras.gr> wrote:
Alex Monjushko<mo*******@hotmail.com> writes:
>Larry Doolittle <ld******@recycle.lbl.gov> wrote:
>> In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
>> echo 0 > /proc/sys/vm/overcommit_memory
>>
>> Linux represents a perpetual battleground for standards-compliant-pedants
>> vs. get-the-job-done-pragmatists. Linus's biggest successes are when
>> he manages to satisfy both simultaneously; cases like this are second
>> place, when each can configure the system to their liking at a whim.
>
> This is pointless from the vendor perspective. If I ship a program,
> I want to have control over the semantics of memory allocation. The
> end-user is not likely to be sufficiently informed to make the decision
> for me. I fail to see the pragmatism.
#!/bin/sh
if [ `uname -s` = "Linux" -a `cat /proc/sys/vm/overcommit_memory` != 0 ]; then
echo "This program won't run on a Linux system with"
echo "an over-commit memory policy."
echo "As root, run the shell command:"
echo " echo 0 > /proc/sys/vm/overcommit_memory"
echo "and then try running this program again"
exit 1
fi
echo "run your program here"


Right, and break it for everybody else? Suppose that another
program has this:

#!/bin/sh
if [ `uname -s` = "Linux" \
-a `cat /proc/sys/vm/overcommit_memory` -eq 0 ]; then
echo "This program won't run on a Linux system without"
echo "an over-commit memory policy."
echo "As root, run the shell command:"
echo " echo 1 > /proc/sys/vm/overcommit_memory"
echo "and then try running this program again"
exit 1
fi
echo "run your program here"

Not nice, is it?

A global memory management policy with an esoteric default just
does not make much sense to me.

--
Alex Monjushko (mo*******@hotmail.com)
Nov 14 '05 #29

In article <40****************@news.individual.net>, rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
It's worse than that. Many operating systems overcommit memory,
returning a valid pointer from malloc() and then killing the program
when it tries to access too much of it.
Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory),


[We just hashed this out - again - in February. And before that in
January. Google for the thread if you like; searching for "lazy
allocation" should do it.]

It doesn't guarantee that you can use it. If it did, all implementa-
tions on virtual-memory OSes would potentially be non-conforming,
since the OS could lose backing store (eg due to disk failure)
between the time that malloc succeeded and the program tried to use
it.
is there _any_ good reason for this practice?
Sparse arrays, for one.
I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
successfully?


In Unix OSes with lazy allocation, malloc may still fail for other
reasons - most notably because a ulimit has been reached.

More generally, programs may fail *at any time* due to problems in
the environment - and running out of virtual storage capacity counts
as one of those. That doesn't absolve programs from performing
normal error detection and handling.

--
Michael Wojcik mi************@microfocus.com

[After the lynching of George "Big Nose" Parrot, Dr. John] Osborne
had the skin tanned and made into a pair of shoes and a medical bag.
Osborne, who became governor, frequently wore the shoes.
-- _Lincoln [Nebraska] Journal Star_
Nov 14 '05 #30
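Michael's point stands even with lazy allocation: malloc() can still return NULL (a ulimit reached, address space exhausted), so the usual check is never wasted. A minimal checked-allocation wrapper of the familiar xmalloc() kind:

```c
#include <stdio.h>
#include <stdlib.h>

/* Checked allocation: report the failure and back out cleanly
 * instead of marching on and dereferencing a null pointer. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory allocating %lu bytes\n",
                (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

A real application would release file locks and checkpoint state before exiting, as discussed elsewhere in the thread.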
Alex Monjushko wrote:
>Larry Doolittle <ld******@recycle.lbl.gov> wrote:
>> echo 0 > /proc/sys/vm/overcommit_memory

A global memory management policy with an esoteric default just
does not make much sense to me.


Just FYI, the default is, at least on the version I run, not to
overcommit. (Except, AFAICS, under unlikely circumstances with
one architecture.)

--
++acr@,ka"
Nov 14 '05 #31
Da*****@cern.ch (Dan Pop) writes:

|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on the AIX.

There are some users, particularly those doing mathematical calculations
in research centers, for whom it might not be a real hindrance -- in
fact, it might even make life a very little bit simpler. But there are
a lot of users whose programs have to work correctly, and back out
correctly if the resources aren't there to support it. None of my
customers would ever knowingly have accepted such behavior.

--
James Kanze
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
Nov 14 '05 #32
Larry Doolittle <ld******@recycle.lbl.gov> writes:

|> In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
|> > osmium <r1********@comcast.net> wrote:
|> >> Richard Bos writes:
|> >>> ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
|> >>> > It's worse than that. Many operating systems overcommit
|> >>> > memory, returning a valid pointer from malloc() and then
|> >>> > killing the program when it tries to access too much of it.

|> >>> Quite apart from this rendering any C implementation on that
|> >>> platform unconforming (after all, if malloc() succeeds, the
|> >>> Standard says you own that memory), is there _any_ good reason
|> >>> for this practice? I'd have thought simply telling your user
|> >>> that he can't have that much memory is preferable to pretending
|> >>> that he can, and then unceremoniously dumping him in it when he
|> >>> tries to use it. I mean, under OSes like that, what's the use of
|> >>> all our precautions of checking that malloc() returned
|> >>> successfully?

|> >> Until I see the name of the offending OS, I will take this to be
|> >> an urban legend.

|> > Linux does exactly this. It makes for fun debugging.

AIX used to, and can still be made to do so as well.

|> echo 0 > /proc/sys/vm/overcommit_memory

Does this turn lazy commit on, or off? A quick check on my Linux box
(Mandrake 10.0, default installation) shows that there is such a file,
and it contains 0. Has Mandrake corrected something, or is this the
default, or...

|> Linux represents a perpetual battleground for
|> standards-compliant-pedants vs. get-the-job-done-pragmatists.
|> Linus's biggest successes are when he manages to satisfy both
|> simultaneously; cases like this are second place, when each can
|> configure the system to their liking at a whim.

The real problem here is what is the job that needs getting done: a
program that works reliably, or one that pushes the limit, working most
of the time, failing unaccountably on rare occasions, but being able on
the average to handle bigger data sets than it otherwise could.

For 99% of commercial applications, the program has to work reliably,
and the programs don't have to deal with large data sets (at least not
in memory). None of my customers would ever knowingly accept
overcommitting. They want to be sure that the job gets done, or that
the program backs out cleanly, freeing such resources as file locks, if
the resources aren't present. Customers like these were the commercial
pressure which forced IBM to change the default mode for the AIX.

As to configurability: it is worthless on a processor level. The AIX
still retains the ability to overcommit, but it only does so if a
specific shell variable is set in the process. So you have to
explicitly ask for it, and one process asking for it doesn't affect
other processes.

--
James Kanze
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
Nov 14 '05 #33
James Kanze <ka***@gabi-soft.fr> wrote:
Da*****@cern.ch (Dan Pop) writes:

|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on the AIX.

There are some users, particularly those doing mathematical calculations
in research centers, for whom it might not be a real hindrance -- in
fact, it might even make life a very little bit simpler. But there are
a lot of users whose programs have to work correctly, and back out
correctly if the resources aren't there to support it. None of my
customers would ever knowingly have accepted such behavior.


Neither would I, and nor would my users. Telling them that I'm very
sorry, but their last hour's entered text is completely lost because
their system doesn't have a big enough disk is simply not acceptable.

Richard
Nov 14 '05 #34
mw*****@newsguy.com (Michael Wojcik) wrote:
In article <40****************@news.individual.net>, rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
Quite apart from this rendering any C implementation on that platform
unconforming (after all, if malloc() succeeds, the Standard says you own
that memory),


[We just hashed this out - again - in February. And before that in
January. Google for the thread if you like; searching for "lazy
allocation" should do it.]

It doesn't guarantee that you can use it. If it did, all implementa-
tions on virtual-memory OSes would potentially be non-conforming,
since the OS could lose backing store (eg due to disk failure)
between the time that malloc succeeded and the program tried to use
it.


And as I said back then, that's no argument - failing hardware can
render _everything_ unconforming if you allow it to count. After all, at
any moment a cosmic ray could flip a bit in your program's memory and
turn that valid double you held there into a trap representation.
is there _any_ good reason for this practice?


Sparse arrays, for one.


Rare enough that one should not make _all_ programs unsafe because of
it. OTOH, a facility whereby a program that uses directly allocated
sparse arrays could tell the OS that _it_ can tolerate over-committing
would be useful.
I mean, under OSes like that, what's
the use of all our precautions of checking that malloc() returned
successfully?


In Unix OSes with lazy allocation, malloc may still fail for other
reasons - most notably because a ulimit has been reached.

More generally, programs may fail *at any time* due to problems in
the environment - and running out of virtual storage capacity counts
as one of those.


Yes, but those are unavoidable. This is by design - if the system didn't
over-commit, this would be one less unnecessary worry for the user.

Richard
Nov 14 '05 #35
On Wed, 2 Jun 2004, Alex Monjushko wrote:

AM>Larry Doolittle <ld******@recycle.lbl.gov> wrote:
AM>> In article <40****************@news.individual.net>, Richard Bos wrote:
AM>>> Giorgos Keramidas <ke******@ceid.upatras.gr> wrote:
AM>>>
AM>>>> Alex Monjushko<mo*******@hotmail.com> writes:
AM>>>> >Larry Doolittle <ld******@recycle.lbl.gov> wrote:
AM>>>> >> In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
AM>>>> >> echo 0 > /proc/sys/vm/overcommit_memory
AM>>>> >>
AM>>>> >> Linux represents a perpetual battleground for standards-compliant-pedants
AM>>>> >> vs. get-the-job-done-pragmatists. Linus's biggest successes are when
AM>>>> >> he manages to satisfy both simultaneously; cases like this are second
AM>>>> >> place, when each can configure the system to their liking at a whim.
AM>>>> >
AM>>>> > This is pointless from the vendor perspective. If I ship a program,
AM>>>> > I want to have control over the semantics of memory allocation. The
AM>>>> > end-user is not likely to be sufficiently informed to make the decision
AM>>>> > for me. I fail to see the pragmatism.
AM>
AM>> #!/bin/sh
AM>> if [ `uname -s` = "Linux" -a `cat /proc/sys/vm/overcommit_memory` != 0 ]; then
AM>> echo "This program won't run on a Linux system with"
AM>> echo "an over-commit memory policy."
AM>> echo "As root, run the shell command:"
AM>> echo " echo 0 > /proc/sys/vm/overcommit_memory"
AM>> echo "and then try running this program again"
AM>> exit 1
AM>> fi
AM>> echo "run your program here"
AM>
AM>Right, and break it for everybody else? Suppose that another
AM>program has this:
AM>
AM>#!/bin/sh
AM>if [ `uname -s` = "Linux" \
AM> -a `cat /proc/sys/vm/overcommit_memory` -eq 0 ]; then
AM> echo "This program won't run on a Linux system without"
AM> echo "an over-commit memory policy."
AM> echo "As root, run the shell command:"
AM> echo " echo 1 > /proc/sys/vm/overcommit_memory"
AM> echo "and then try running this program again"
AM> exit 1
AM>fi
AM>echo "run your program here"
AM>
AM>Not nice, is it?
AM>
AM>A global memory management policy with an esoteric default just
AM>does not make much sense to me.

Just make your malloc() touch all pages it allocates, and make this
behaviour settable via an environment variable.

harti
Nov 14 '05 #36
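Harti's suggestion, sketched: consult the environment once, and touch every allocated page only when asked to. The variable name MALLOC_TOUCH_PAGES is invented for this illustration.

```c
#include <stdlib.h>
#include <string.h>

/* Page-touching allocation, enabled by an environment variable.
 * MALLOC_TOUCH_PAGES is a made-up name for this sketch. */
void *malloc_maybe_touched(size_t size)
{
    static int touch = -1;
    void *p;

    if (touch < 0)      /* consult the environment only once */
        touch = (getenv("MALLOC_TOUCH_PAGES") != NULL);
    p = malloc(size);
    if (p != NULL && touch)
        memset(p, 0, size);     /* force every page to be committed */
    return p;
}
```

This leaves the choice with each process rather than with a system-wide policy, which is the AIX PSALLOC approach mentioned earlier in the thread.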
On Wed, 2 Jun 2004, Michael Wojcik wrote:

MW>
MW>In article <40****************@news.individual.net>, rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
MW>> ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
MW>>
MW>> > It's worse than that. Many operating systems overcommit memory,
MW>> > returning a valid pointer from malloc() and then killing the program
MW>> > when it tries to access too much of it.
MW>>
MW>> Quite apart from this rendering any C implementation on that platform
MW>> unconforming (after all, if malloc() succeeds, the Standard says you own
MW>> that memory),
MW>
MW>[We just hashed this out - again - in February. And before that in
MW>January. Google for the thread if you like; searching for "lazy
MW>allocation" should do it.]
MW>
MW>It doesn't guarantee that you can use it. If it did, all implementa-
MW>tions on virtual-memory OSes would potentially be non-conforming,
MW>since the OS could lose backing store (eg due to disk failure)
MW>between the time that malloc succeeded and the program tried to use
MW>it.

I don't think that this is a valid argument since POSIX doesn't address
faulty hardware. I don't think that there is a place in POSIX that
explicitly says that hardware must be non-faulty, but I'd say this should
be clear.

MW>
MW>> is there _any_ good reason for this practice?
MW>
MW>Sparse arrays, for one.
MW>
MW>> I mean, under OSes like that, what's
MW>> the use of all our precautions of checking that malloc() returned
MW>> successfully?
MW>
MW>In Unix OSes with lazy allocation, malloc may still fail for other
MW>reasons - most notably because a ulimit has been reached.

The difference is that you can usually handle malloc() returning NULL.
In the case of running out of swap space after malloc() has returned
a non-NULL value this is generally harder. Last year it took us an entire
month to find out why www.berlioz.de was periodically frozen to the point
that only the power switch helped (this was a 2-CPU Linux running apache).
The problem was that the apaches consumed all memory and swap space and
the system could not get out of this situation. It now runs Sun Solaris
with apache, and there have been no problems so far. Other systems, instead
of freezing, start to more or less randomly kill processes, but it is not
so easy for a well-behaved process (one that is just trying to use the
malloc()ed space)
to react to this.

MW>More generally, programs may fail *at any time* due to problems in
MW>the environment - and running out of virtual storage capacity counts
MW>as one of those. That doesn't absolve programs from performing
MW>normal error detection and handling.

According to POSIX this is not an environmental condition.

Note, that I'm not arguing that overcommitting is bad, just that your
arguments are not really good.

harti
Nov 14 '05 #37
About lazy memory allocation:

In article <40***************@news.individual.net> rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
James Kanze <ka***@gabi-soft.fr> wrote:
Da*****@cern.ch (Dan Pop) writes:
|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on the AIX.


Neither would I, and nor would my users. Telling them that I'm very
sorry, but their last hour's entered text is completely lost because
their system doesn't have a big enough disk is simply not acceptable.


I know that on my desktop the X server had regular crashes until we
switched off lazy memory allocation.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #38
In <40***************@news.individual.net> rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
James Kanze <ka***@gabi-soft.fr> wrote:
Da*****@cern.ch (Dan Pop) writes:

|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on the AIX.

There are some users, particularly those doing mathematical calculations
in research centers, for whom it might not be a real hindrance -- in
fact, it might even make life a very little bit simpler. But there are
a lot of users whose programs have to work correctly, and back out
correctly if the resources aren't there to support it. None of my
customers would ever knowingly have accepted such behavior.


Neither would I, and nor would my users. Telling them that I'm very
sorry, but their last hour's entered text is completely lost because
their system doesn't have a big enough disk is simply not acceptable.


You must be really dense if you still haven't figured out that, in most
cases, the programmer doesn't have control over the system behaviour.
It's either the implementor, or the sysadmin or even the user that
controls this aspect of the execution environment. And statically
allocated memory is affected just as well as dynamically allocated
memory.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #39
In <Pine.GSO.4.60.0406030935260.5115@zeus> Harti Brandt <br****@dlr.de> writes:
On Wed, 2 Jun 2004, Alex Monjushko wrote:

AM>Larry Doolittle <ld******@recycle.lbl.gov> wrote:
AM>> In article <40****************@news.individual.net>, Richard Bos wrote:
AM>>> Giorgos Keramidas <ke******@ceid.upatras.gr> wrote:
AM>>>
AM>>>> Alex Monjushko<mo*******@hotmail.com> writes:
AM>>>> >Larry Doolittle <ld******@recycle.lbl.gov> wrote:
AM>>>> >> In article <2i************@uni-berlin.de>, Alex Monjushko wrote:
AM>>>> >> echo 0 > /proc/sys/vm/overcommit_memory
AM>>>> >>
AM>>>> >> Linux represents a perpetual battleground for standards-compliant-pedants
AM>>>> >> vs. get-the-job-done-pragmatists. Linus's biggest successes are when
AM>>>> >> he manages to satisfy both simultaneously; cases like this are second
AM>>>> >> place, when each can configure the system to their liking at a whim.
AM>>>> >
AM>>>> > This is pointless from the vendor perspective. If I ship a program,
AM>>>> > I want to have control over the semantics of memory allocation. The
AM>>>> > end-user is not likely to be sufficiently informed to make the decision
AM>>>> > for me. I fail to see the pragmatism.
AM>
AM>> #!/bin/sh
AM>> if [ `uname -s` = "Linux" -a `cat /proc/sys/vm/overcommit_memory` != 0 ]; then
AM>> echo "This program won't run on a Linux system with"
AM>> echo "an over-commit memory policy."
AM>> echo "As root, run the shell command:"
AM>> echo " echo 0 > /proc/sys/vm/overcommit_memory"
AM>> echo "and then try running this program again"
AM>> exit 1
AM>> fi
AM>> echo "run your program here"
AM>
AM>Right, and break it for everybody else? Suppose that another
AM>program has this:
AM>
AM>#!/bin/sh
AM>if [ `uname -s` = "Linux" \
AM> -a `cat /proc/sys/vm/overcommit_memory` -eq 0 ]; then
AM> echo "This program won't run on a Linux system without"
AM> echo "an over-commit memory policy."
AM> echo "As root, run the shell command:"
AM> echo " echo 1 > /proc/sys/vm/overcommit_memory"
AM> echo "and then try running this program again"
AM> exit 1
AM>fi
AM>echo "run your program here"
AM>
AM>Not nice, is it?
AM>
AM>A global memory management policy with an esoteric default just
AM>does not make much sense to me.

Just make your malloc() touch all pages it allocates, and make this
behaviour settable via an environment variable.


Doesn't help much if the program crashes while doing that.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #40
In <m2************@thomas-local.gabi-soft.fr> James Kanze <ka***@gabi-soft.fr> writes:
Da*****@cern.ch (Dan Pop) writes:

|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on the AIX.
Do you know for a fact that the change was due to customer complaints?
There are some users, particularly those doing mathematical calculations
in research centers, for whom it might not be a real hinderness -- in
fact, it might even make like a very little bit simpler.
I selected lazy swap allocation on my old Digital Unix box for reasons
that have exactly zilch to do with mathematical calculations and
everything to do with the fact that far too many programs allocate plenty
of memory they never use and eager swap allocation severely reduced the
usability of my system.
But there are
a lot of users whose programs have to work correctly, and back out
correctly if the resources aren't there to support it. None of my
customers would ever knowingly have accepted such behavior.


Only someone who has experimented with both strategies can make an
*informed* choice.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #41

In article <m2************@thomas-local.gabi-soft.fr>, James Kanze <ka***@gabi-soft.fr> writes:
Da*****@cern.ch (Dan Pop) writes:
|> There are perfectly good reasons for this strategy and, on platforms
|> where they have a choice, users prefer the unsafe mode.

They prefer it so much that IBM was forced to abandon it as the default
mode on the AIX.


Oh, really? When was that? The AIX 5.1L documentation still says:

The operating system uses the PSALLOC environment variable to
determine the mechanism used for memory and paging space allocation.
If the PSALLOC environment variable is not set, is set to null, or is
set to any value other than early, the system uses the default late
allocation algorithm.[1]

Perhaps you're thinking of the newer tree-style allocator, which is now
the default, versus the "3.1" buddy-system allocator.

1. http://publibn.boulder.ibm.com/doc_l....htm#HDRA8F021

--
Michael Wojcik mi************@microfocus.com

This record comes with a coupon that wins you a trip around the world.
-- Pizzicato Five
Nov 14 '05 #42
rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:

|> James Kanze <ka***@gabi-soft.fr> wrote:

|> > Da*****@cern.ch (Dan Pop) writes:

|> > |> There are perfectly good reasons for this strategy and, on
|> > |> platforms where they have a choice, users prefer the unsafe
|> > |> mode.

|> > They prefer it so much that IBM was forced to abandon it as the
|> > default mode on the AIX.

|> > There are some users, particularly those doing mathematical
|> > calculations in research centers, for whom it might not be a real
|> > hindrance -- in fact, it might even make life a very little bit
|> > simpler. But there are a lot of users whose programs have to work
|> > correctly, and back out correctly if the resources aren't there to
|> > support it. None of my customers would ever knowingly have
|> > accepted such behavior.

|> Neither would I, and nor would my users. Telling them that I'm very
|> sorry, but their last hour's entered text is completely lost because
|> their system doesn't have a big enough disk is simply not
|> acceptable.

You don't get to tell them. The way it worked with the AIX was that
your program core dumped. My users like that even less.

--
James Kanze
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
Nov 14 '05 #43
James Kanze <ka***@gabi-soft.fr> wrote:
rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:

|> Neither would I, and nor would my users. Telling them that I'm very
|> sorry, but their last hour's entered text is completely lost because
|> their system doesn't have a big enough disk is simply not
|> acceptable.

You don't get to tell them. The way it worked with the AIX was that
your program core dumped. My users like that even less.


My users know where to find me; I'm an in-house programmer. Believe me,
I'd have to tell them how, why, and what I think I'm going to do about
it - _and_ who is about to re-enter all that data.
Although, to be fair, usually I would be able to put the blame on
Windows. All too often, that would be justified, too :-/

Richard
Nov 14 '05 #44
In <m2************@lns-th2-13-82-64-68-115.adsl.proxad.net> James Kanze <ka***@gabi-soft.fr> writes:
rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:

|> Neither would I, and nor would my users. Telling them that I'm very
|> sorry, but their last hour's entered text is completely lost because
|> their system doesn't have a big enough disk is simply not
|> acceptable.

You don't get to tell them. The way it worked with the AIX was that
your program core dumped. My users like that even less.


IIRC, AIX actually sent a signal to the program, when this happened.
If you didn't catch it and take whatever measures were appropriate,
you have only yourself to blame.

The typical behaviour of other OSs using lazy swap allocation
was to kill *other* processes, in order to gain memory for the one
needing it. So, when I started losing xterms, I knew that I had to
terminate my netscape session ASAP ;-)

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #45

In article <ca**********@sunnews.cern.ch>, Da*****@cern.ch (Dan Pop) writes:

IIRC, AIX actually sent a signal to the program, when this happened.
If you didn't catch it and take whatever measures were appropriate,
you have only yourself to blame.


Correct. When paging space runs low, AIX sends SIGDANGER to every
user process. A few seconds later, if memory consumption has not
been reduced, it begins to send SIGKILL to those processes which *did
not catch SIGDANGER* and are consuming the most virtual memory. It
only begins killing processes which received SIGDANGER and caught it
if the low-memory condition continues.

Processes can also poll for low-paging-space conditions using the
psdanger system call.

In other words, there's ample opportunity for every program to detect
the low-memory condition and take appropriate action, like
checkpointing its state and perhaps releasing some memory. The
latter is more feasible for some programs than for others, of course,
but most AIX programs can trivially return some memory to the system
using the mallopt function with the M_DISCLAIM command, which tells
the malloc subsystem to return freed pages to the OS (via the
disclaim system call). This is completely transparent to the
application - they remain mapped in the app's address space, and will
be reallocated from the VMM pool if and when they're needed to
satisfy future malloc requests.

This is all in the AIX documentation. Much of it is right in the
malloc man page.

Frankly, I'd be pretty suspicious of any application that runs for an
hour without checkpointing, as in Richard's example. Low virtual
memory is hardly the only condition which might interrupt it.

--
Michael Wojcik mi************@microfocus.com

I will shoue the world one of the grate Wonders of the world in 15
months if Now man mourders me in Dors or out Dors
-- "Lord" Timothy Dexter, _A Pickle for the Knowing Ones_
Nov 14 '05 #46
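The SIGDANGER mechanism described above boils down to a handler that sets a flag which the program polls at convenient points. SIGDANGER is AIX-specific, not ISO C, so the sketch below substitutes SIGUSR1 where it is missing; on AIX itself the #define would not trigger.

```c
#include <signal.h>

#ifndef SIGDANGER
#define SIGDANGER SIGUSR1   /* stand-in on systems without SIGDANGER */
#endif

static volatile sig_atomic_t low_memory = 0;

/* The handler only sets a flag; checkpointing and releasing memory
 * (e.g. mallopt(M_DISCLAIM) on AIX) happen later, from normal code. */
static void danger_handler(int sig)
{
    (void)sig;
    low_memory = 1;
}

/* Install the handler; returns 0 on success, -1 on failure. */
int watch_for_danger(void)
{
    return signal(SIGDANGER, danger_handler) == SIG_ERR ? -1 : 0;
}
```

A process that installs such a handler is, per the AIX documentation quoted above, spared in the first round of low-memory kills.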
mw*****@newsguy.com (Michael Wojcik) wrote:
> In article <ca**********@sunnews.cern.ch>, Da*****@cern.ch (Dan Pop) writes:
> > IIRC, AIX actually sent a signal to the program, when this happened.
> > If you didn't catch it and take whatever measures were appropriate,
> > you have only yourself to blame.
>
> Correct. When paging space runs low, AIX sends SIGDANGER to every
> user process.

That's not an ISO C signal, so an ISO C program cannot be expected to
catch it.

> Processes can also poll for low-paging-space conditions using the
> psdanger system call.

Which is not an ISO C function.

> In other words, there's ample opportunity for every program to detect
> the low-memory condition and take appropriate action,

Provided they are not ISO C programs.

In other words, programs portable nowhere else but to AIX have a
fighting chance. Correct, portable, ISO C programs are the first to be
dumped unceremoniously in the dunghill. Is this supposed to make me feel
happier about it?

> Frankly, I'd be pretty suspicious of any application that runs for an
> hour without checkpointing, as in Richard's example. Low virtual
> memory is hardly the only condition which might interrupt it.

So would I, but
- we are not all in a position where we can avoid using programs like M$
  Word, whose checkpoint files I have never found very useful (unlike,
  say, those written by WordPerfect);
- losing only ten minutes' work is also irritating, if it happens often
  enough. For some people, "often enough" is "twice", and sometimes with
  good reason.

Richard
Nov 14 '05 #47
In <40****************@news.individual.net> rl*@hoekstra-uitgeverij.nl (Richard Bos) writes:
> mw*****@newsguy.com (Michael Wojcik) wrote:
> > In article <ca**********@sunnews.cern.ch>, Da*****@cern.ch (Dan Pop) writes:
> > > IIRC, AIX actually sent a signal to the program, when this happened.
> > > If you didn't catch it and take whatever measures were appropriate,
> > > you have only yourself to blame.
> >
> > Correct. When paging space runs low, AIX sends SIGDANGER to every
> > user process.
>
> That's not an ISO C signal, so an ISO C program cannot be expected to
> catch it.

Are you earning your living by writing ISO C programs? Are your
customers happy when you tell them that half of their specification cannot
be implemented in ISO C and that the program's interactive I/O sucks
because this is the best one can do in ISO C?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #48
# Which is not an ISO C function.
#
# > In other words, there's ample opportunity for every program to detect
# > the low-memory condition and take appropriate action,
#
# Provided they are not ISO C programs.
#
# In other words, programs portable nowhere else but to AIX have a
# fighting chance. Correct, portable, ISO C programs are the first to be
# dumped unceremoniously in the dunghill. Is this supposed to make me feel
# happier about it?

What a whiner. Until we finish the Cray-INF with unlimited virtual memory,
how best to deal with running out of resources? Unceremoniously kill
everyone because that way we get uniform behaviour? Or give programs
an opportunity for a graceful exit, if they choose to exploit it?

--
SM Ryan http://www.rawbw.com/~wyrmwif/
Who's leading this mob?
Nov 14 '05 #49
On Thu, 10 Jun 2004 15:55:17 -0000, in comp.lang.c , SM Ryan
<wy*****@tango-sierra-oscar-foxtrot-tango-dot-charlie-oscar-mike.fake.org>
wrote:
> # Which is not an ISO C function.
> #
> # > In other words, there's ample opportunity for every program to detect
> # > the low-memory condition and take appropriate action,
> #
> # Provided they are not ISO C programs.
> #
> # In other words, programs portable nowhere else but to AIX have a
> # fighting chance. Correct, portable, ISO C programs are the first to be
> # dumped unceremoniously in the dunghill. Is this supposed to make me feel
> # happier about it?
>
> What a whiner.

Not.

> Until we finish the Cray-INF with unlimited virtual memory,
> how best to deal with running out of resources?

In platform-specific ways, which are utterly offtopic here.

> Unceremoniously kill
> everyone because that way we get uniform behaviour? Or give programs
> an opportunity for a graceful exit, if they choose to exploit it?

I've seen both. And many other possibilities.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>
----== Posted via Newsfeed.Com - Unlimited-Uncensored-Secure Usenet News==----
http://www.newsfeed.com The #1 Newsgroup Service in the World! >100,000 Newsgroups
---= 19 East/West-Coast Specialized Servers - Total Privacy via Encryption =---
Nov 14 '05 #50

