Bytes IT Community

Questions about malloc()

Greetings.
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.
Joseph Casey.

Nov 14 '05 #1
50 Replies



"Joseph Casey" <jo******@indigo.ie> wrote in message
news:e3*******************@news.indigo.ie...
Greetings.
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.


Coz it screws up the heap. The 'heap' is a collection of coherent
data structures keeping track of free memory. If you call free twice on the
same block, the internal administration gets messed up. The details (of
course) depend heavily on the actual implementation, and hence on the libc
you have.

For an insight on heaps, consult Knuth TAoCP.
Nov 14 '05 #2


Joseph Casey wrote:
Greetings.
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.
Joseph Casey.


Yes, it sounds funny... but it's actually undefined by the standard!

According to the standard:
Description - Free function

The free function causes the space pointed to by ptr to be
deallocated, that is, made available for further allocation. If ptr
is a null pointer, no action occurs. Otherwise, if the argument does
not match a pointer earlier returned by the calloc , malloc , or
realloc function, or if the space has been deallocated by a call to
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
free or realloc , the behavior is undefined.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Ravi

Nov 14 '05 #3

Joseph Casey wrote:
Greetings.
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.


Imagine that both you and your landlord are forgetful
and unskilled at keeping good records. Since you intend
to be away on vacation on the day when your rent is due,
you give your landlord a check for the month's rent before
you leave. But it slips your mind (you're forgetful,
remember? REMEMBER?), and upon your return you give him
another check and an apology for the late payment.

A few days later, your bank sends you an overdraft
notice and socks you with a fee. Your landlord complains
about the bounced check, socks you with a bad-check fee,
and threatens eviction unless you can come up with the
rent Right Now. He won't accept your check (of course),
but fortunately you've made a big profit by selling a
stale cheese sandwich on eBay and you happen to have a
wad of cash in your pocket.

You've now paid the rent three times, paid two penalty
fees, and loused up your relationships with your landlord
and your bank. Now, if the landlord kept better records
this wouldn't have happened -- but (switching from the
analogy back to Standard C) the memory "landlord" isn't
required to be that careful. In fact, many "landlords"
do only the minimum amount of record-keeping, in pursuit
of greater efficiency. They're counting on you to remember
whether you have or haven't paid the rent, whether you have
or haven't free()d the memory. Sharpen your memory or
suffer the consequences.

--
Er*********@sun.com

Nov 14 '05 #4

>I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.
Joseph Casey.


Because the standard says it can.

Also, it would probably slow down malloc() and free() considerably
to require that they accept repeated calls, or any old garbage
pointer.

Gordon L. Burditt
Nov 14 '05 #5

Joseph Casey <jo******@indigo.ie> writes:
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?


The simple answer is that free()ing the same pointer twice invokes
undefined behavior because the standard says it invokes undefined
behavior. ("Undefined behavior" means that the standard imposes no
requirements on what happens; it can quietly do nothing, it can crash
your program, or it can make demons fly out your nose.)

As for *why* it's undefined behavior, basically the system is getting
revenge because you lied to it. By calling free(some_ptr), you're
telling the system, "Here's a pointer to a chunk of allocated memory.
I'm done with it. You can have it back now." By calling
free(some_ptr) again, you're telling the system the same thing -- but
now the chunk of memory is no longer allocated, and since you already
told the system that you were done with it, you have no right to do
anything with it. C tends to assume that you (the programmer) know
what you're doing, so it doesn't necessarily spend much extra effort
checking for your mistakes; it just takes your word for it.

The most dangerous thing about undefined behavior is that it can show
up as whatever you might naively expect. On the second call to
free(some_ptr), the system is *allowed* to say, "You've already freed
this pointer; I'll just pretend you didn't try to free it again". In
particular, it's allowed to do this during testing, so you never
detect the bug, and then blow up in your face when you're doing a demo
for an important customer.

One possible scenario is that, between the first and second calls to
free(some_ptr), you call malloc() in some other part of your program,
and it happens to re-use the chunk of memory you freed. On the second
call to free(some_ptr), some_ptr happens to point to a chunk of memory
that's being used somewhere else. The system assumes you know what
you're doing, and makes that chunk available for re-use. Later, yet
another part of your program grabs that chunk of memory *again* and
writes its own data to it, clobbering your carefully crafted dynamic
data structure. Note that the part of your program where the symptoms
appear can be very distant from the part where you caused the problem.
Hilarity ensues.
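That scenario can be sketched in a few lines (function name and sizes are
just for illustration; the fatal second free() is shown as a comment rather
than executed, since whether the allocator reuses the block is entirely up
to the implementation):

```c
#include <stdlib.h>
#include <string.h>

int double_free_demo(void)
{
    char *some_ptr = malloc(16);
    if (some_ptr == NULL)
        return -1;
    strcpy(some_ptr, "mine");

    free(some_ptr);              /* first free: fine */

    char *other = malloc(16);    /* may reuse the block just released */
    if (other == NULL)
        return -1;
    strcpy(other, "theirs");

    /* free(some_ptr); */        /* second free: undefined behavior --
                                    if the allocator reused the block,
                                    this would release memory that
                                    `other` still owns */
    free(other);
    return 0;
}
```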

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #6


"Ravi Uday" <ra******@gmail.com> wrote in message
news:41**************@gmail.com...

Joseph Casey wrote:
Greetings.
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.
Joseph Casey.


Yes, it sounds funny... but it's actually undefined by the standard!

According to the standard:
Description - Free function

The free function causes the space pointed to by ptr to be
deallocated, that is, made available for further allocation. If ptr
is a null pointer, no action occurs.


[snip]

Which brings up the question: Why did they not originally implement 'free'
to set the pointer to NULL after deallocating the memory? Other than in some
obscure algorithms, I don't see much use to the programmer in having a
pointer that points to non-existent memory.
Nov 14 '05 #7



Method Man wrote:
"Ravi Uday" <ra******@gmail.com> wrote in message
news:41**************@gmail.com...
Joseph Casey wrote:
Greetings.
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.
Joseph Casey.


Yes, it sounds funny... but it's actually undefined by the standard!

According to the standard:
Description - Free function

The free function causes the space pointed to by ptr to be
deallocated, that is, made available for further allocation. If ptr
is a null pointer, no action occurs.

[snip]

Which brings up the question: Why did they not originally implement 'free'
to set the pointer to NULL after deallocating the memory? Other than some
obscure algorithms, I don't see much use to the programmer of having a
pointer that points to non-existant memory.


How do you propose you do this?
Maybe
void free(void **p);
?
That looks invalid.

--
Al Bowers
Tampa, Fl USA
mailto: xa******@myrapidsys.com (remove the x to send email)
http://www.geocities.com/abowers822/

Nov 14 '05 #8

"Method Man" <a@b.c> writes:
[...]
Which brings up the question: Why did they not originally implement 'free'
to set the pointer to NULL after deallocating the memory? Other than some
obscure algorithms, I don't see much use to the programmer of having a
pointer that points to non-existant memory.


Because it can't. The pointer argument to free(), like all function
arguments, is passed by value; the function can't modify it. (A
free() function that takes a void** rather than a void* couldn't
handle arbitrary pointer types.)
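A small illustration of the pass-by-value point (names here are mine, not
the standard's): assigning to a pointer parameter only rebinds the callee's
local copy, which is exactly why free() could never null out the caller's
variable.

```c
#include <stddef.h>

/* Assigning to the parameter changes the callee's copy only. */
static void null_it(int *p)
{
    p = NULL;            /* the caller's pointer is untouched */
}

int caller_pointer_survives(void)
{
    int x = 42;
    int *q = &x;
    null_it(q);          /* q is passed by value, like free()'s argument */
    return q == &x;      /* still points at x */
}
```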

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #9


"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
"Method Man" <a@b.c> writes:
[...]
Which brings up the question: Why did they not originally implement 'free' to set the pointer to NULL after deallocating the memory? Other than some obscure algorithms, I don't see much use to the programmer of having a
pointer that points to non-existant memory.


Because it can't. The pointer argument to free(), like all function
arguments, is passed by value; the function can't modify it. (A
free() function that takes a void** rather than a void* couldn't
handle arbitrary pointer types.)


Yea, I suppose not. It could return a NULL:

p = free(p);

I just think free()'ing p and setting p = 0 should be handled in one step
since the operations are closely related and it avoids programmer error.
Nov 14 '05 #10

Method Man wrote:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
"Method Man" <a@b.c> writes:
[...]
Which brings up the question: Why did they not originally implement
'free'
to set the pointer to NULL after deallocating the memory? Other than
some
obscure algorithms, I don't see much use to the programmer of having a
pointer that points to non-existant memory.


Because it can't. The pointer argument to free(), like all function
arguments, is passed by value; the function can't modify it. (A
free() function that takes a void** rather than a void* couldn't
handle arbitrary pointer types.)

Yea, I suppose not. It could return a NULL:

p = free(p);

I just think free()'ing p and setting p = 0 should be handled in one step
since the operations are closely related and it avoids programmer error.


There's a wider problem: What makes you think that `p'
is the only pointer to the memory being free()d? If there
are other copies lying around, how are you going to find
them and zap them? What are you going to do with things
like `free(find_dead_area())'?

IMHO, dodges like `#define FREE(p) (void)(free(p), (p)=0)'
are unattractive because they pretend to solve a problem but
really don't. They encourage an unwarranted relaxation of
vigilance. YMMV.
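For what it's worth, here is roughly what such a macro buys and what it
doesn't (the demo function is invented for illustration): it makes a
repeated FREE() on the *same variable* harmless, because free(NULL) is a
no-op, but any other copy of the original pointer is still dangling --
which is precisely the bottom-up half-measure being criticized.

```c
#include <stdlib.h>

#define FREE(p) (void)(free(p), (p) = NULL)

int free_macro_demo(void)
{
    char *p = malloc(8);
    if (p == NULL)
        return -1;
    FREE(p);             /* releases the block and nulls this pointer */
    FREE(p);             /* now a no-op: free(NULL) does nothing      */
    return p == NULL;    /* 1 -- but any alias of the original value
                            elsewhere would still be dangling         */
}
```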

--
Er*********@sun.com
Nov 14 '05 #11

There's a wider problem: What makes you think that `p'
is the only pointer to the memory being free()d? If there
are other copies lying around, how are you going to find
them and zap them? What are you going to do with things
like `free(find_dead_area())'?

IMHO, dodges like `#define FREE(p) (void)(free(p), (p)=0)'
are unattractive because they pretend to solve a problem but
really don't. They encourage an unwarranted relaxation of
vigilance. YMMV.


So what!
There are other ways of getting infected, so why bother with condoms...

Chqrlie.
Nov 14 '05 #12

Charlie Gordon wrote:
There's a wider problem: What makes you think that `p'
is the only pointer to the memory being free()d? If there
are other copies lying around, how are you going to find
them and zap them? What are you going to do with things
like `free(find_dead_area())'?

IMHO, dodges like `#define FREE(p) (void)(free(p), (p)=0)'
are unattractive because they pretend to solve a problem but
really don't. They encourage an unwarranted relaxation of
vigilance. YMMV.

So what !
There are other ways of getting infected so why bother with condoms...

Chqrlie.

The real point is: if you don't want to be bothered with integer
overflows, buffer overruns and system call failures, don't use 'C',
which will always be fast and dangerous.

Robert
Nov 14 '05 #13

Robert Harris wrote:
The real point is: if you don't want to be bothered with integer
overflows, buffer overruns and system call failures, don't use 'C',
which will always be fast and dangerous.


And another point is missed. The real point is, was, and always will be:
Engage your brain! Everything else is mere prattle.
Nov 14 '05 #14

"Method Man" <a@b.c> wrote:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
Because it can't. The pointer argument to free(), like all function
arguments, is passed by value; the function can't modify it. (A
free() function that takes a void** rather than a void* couldn't
handle arbitrary pointer types.)
Yea, I suppose not. It could return a NULL:

p = free(p);

I just think free()'ing p and setting p = 0 should be handled in one step
since the operations are closely related


No, they're not, except in the case of programmers who don't trust
themselves to keep their valid and invalid pointers straight.
and it avoids programmer error.


It avoids paying proper attention to your bookkeeping, which in turn
tricks people into believing that using a pointer is always safe,
because it will "always" be either valid or null. Such slackers are
guaranteed to sooner or later dereference a null pointer on a system
which doesn't trap on that, or to dereference a pointer which is wild
through broken pointer arithmetic rather than because it has been freed,
or to set a pointer to null because it's been freed, and then to use a
non-null copy of the original pointer, or...
You get my drift, I hope. _You_ need to pay attention. You cannot trust
your programs to such tricks, because there's always a situation where
they don't work, and in the mean time your vigilance has been put to
sleep.

Richard
Nov 14 '05 #15

"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:41**************@news.individual.net...
"Method Man" <a@b.c> wrote:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
Because it can't. The pointer argument to free(), like all function
arguments, is passed by value; the function can't modify it. (A
free() function that takes a void** rather than a void* couldn't
handle arbitrary pointer types.)


Yea, I suppose not. It could return a NULL:

p = free(p);

I just think free()'ing p and setting p = 0 should be handled in one step
since the operations are closely related


No, they're not, except in the case of programmers who don't trust
themselves to keep their valid and invalid pointers straight.
and it avoids programmer error.


It avoids paying proper attention to your bookkeeping, which in turn
tricks people into believing that using a pointer is always safe,
because it will "always" be either valid or null. Such slackers are
guaranteed to sooner or later dereference a null pointer on a system
which doesn't trap on that, or to dereference a pointer which is wild
through broken pointer arithmetic rather than because it has been freed,
or to set a pointer to null because it's been freed, and then to use a
non-null copy of the original pointer, or...
You get my drift, I hope. _You_ need to pay attention. You cannot trust
your programs to such tricks, because there's always a situation where
they don't work, and in the mean time your vigilance has been put to
sleep.


BS.

I call it defensive programming: it is certainly not perfect, but your
arguments for not doing it are stupid.
Using the same logic, you would require that:
- code should never be indented; this puts the programmer's attention to program
structure at rest, and he will sooner or later get caught by a misaligned
unbraced block.
- variables should not be named consistently, because the programmer will expect
certain semantics associated with certain names and misunderstand perfectly
correct programs.
- comments should not be used, as they may contain mistakes that the compiler
cannot spot.
- all integer constants should be written in octal: C doesn't support BCD
arithmetic, and neither should alert programmers.
- spacing is a nuisance in C source code; it hides the beauty of concise C, as in
a+++b, sum/*n;
- write all programs with OCCC in mind, to keep potential maintainers in shape.

--
Chqrlie.
Nov 14 '05 #16

> And another point is missed. The real point is, was, and always will be:
Engage your brain! Everything else is mere prattle.


Ouch! Aren't you asking for a bit much? :D
Nov 14 '05 #17

Charlie Gordon wrote:
There's a wider problem: What makes you think that `p'
is the only pointer to the memory being free()d? If there
are other copies lying around, how are you going to find
them and zap them? What are you going to do with things
like `free(find_dead_area())'?

IMHO, dodges like `#define FREE(p) (void)(free(p), (p)=0)'
are unattractive because they pretend to solve a problem but
really don't. They encourage an unwarranted relaxation of
vigilance. YMMV.

So what !
There are other ways of getting infected so why bother with condoms...


Because condoms don't provide 100% protection. If you
need 100% protection, the solution is to refrain from, er,
screwing around. And if you don't screw around, you don't
need condoms.

What it comes down to, I think, is a difference in
attitude about how to write programs that work. I've been
pondering how to express my ideas about The Right Way (tm)
to do things, and my misgivings about piecemeal approaches
like zeroing the argument to free(). Here's my poor best
at trying to explain my convoluted self:

Programs aren't life-like, in the sense that their
correctness[1] isn't a matter of probability[2]. An
incorrect program is incorrect even if it hasn't actually
failed yet; the error is latent, ready to cause a failure[3]
when the circumstances are right (or wrong, depending on
your point of view). In managing dynamic memory, omitting
to keep proper track of which pointers are "live" and which
have "died" is an error. You must have a means to tell
whether a pointer is or isn't current before you try to
use[4] it -- and if you have such a means, you don't need
any special value in the pointer itself.

Now, the "means" could perfectly well rest on the
assertion "All non-NULL pointers in my program are valid."
But clobbering just the one pointer handed to free() is
not enough to maintain the assertion: You need additional
effort and additional mechanisms to find and clobber all
the other copies that may be lying around. It's often
the case, for example, that the argument to free() is
the argument to free()'s caller, so zapping free()'s
argument is only the beginning of the job. A clear-the-
pointers scheme is far beyond the capabilities of the
simplistic dodges like the one I illustrated; it must be
driven from the top down rather than from the bottom up.

Bottom-up schemes aren't good enough for serious use.
They can even be harmful: by preventing failures in a
chunk of erroneous code 99% of the time, they can make
it less likely that the error will be exposed in testing.
They are the Typhoid Marys of programming: asymptomatic
and yet deadly. And that's why I don't like 'em.

Notes:

[1] I'm using "correctness" in the weak sense: A "correct"
program is one that doesn't "go down in flames" by doing
something like following stale pointers. Such a program
may nevertheless compute the value of pi as -42 or state
that the square root of 3 is 9 or otherwise contravene its
specification, but that's not the kind of "correctness"
I'm writing about. "Well-behaved" might have been a better
word than "correct," but I'm not up for a rewrite.

[2] This isn't meant to rule out probabilistic computation
methods. A program that seeks a solution probabilistically
and sometimes fails to find one but "plays nice" and reports
the failure in a controlled manner is still correct in the
sense of [1], and in some stronger senses as well.

[3] "Failure" as in "going off the rails." A program can
produce incorrect results without "failing;" this is the
flip side of [1].

[4] Note that simply examining the value of an invalid
pointer constitutes a "use," even if the pointer is not
dereferenced. Clearing free()d pointers can avoid this
particular problem, but you've got to get 'em all or it's
of no use.

--
Er*********@sun.com

Nov 14 '05 #18

In article <co**********@news1brm.Central.Sun.COM>
Eric Sosman <er*********@sun.com> wrote:
What it comes down to, I think, is a difference in
attitude about how to write programs that work. I've been
pondering how to express my ideas about The Right Way (tm)
to do things, and my misgivings about piecemeal approaches
like zeroing the argument to free(). Here's my poor best
at trying to explain my convoluted self:
[much snippage]
Bottom-up schemes aren't good enough for serious use.
They can even be harmful: by preventing failures in a
chunk of erroneous code 99% of the time, they can make
it less likely that the error will be exposed in testing.
They are the Typhoid Marys of programming: asymptomatic
and yet deadly. And that's why I don't like 'em.


At the same time, having low-level routines that "don't fail"
(or at least "don't make things worse") *can* sometimes be
valuable. As always, it comes down to a complex set of cost-benefit
equations. We (the editorial "we" here) would like our programs
to be correct, but when they do fail, we would also like them to
be debuggable.

For this sort of reason, I do not mind:

free(ptr), ptr = NULL;

but in many cases it does not really add much to debuggability.
It *does* help if (but only if) you can establish an invariant:
"ptr is either NULL, or a valid pointer" -- and that depends greatly
on the context in which "ptr" appears.
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #19

Joseph Casey wrote:
Greetings.
I have read that the mistake of calling free(some_ptr) twice on
malloc(some_data) can cause program malfunction. Why is this?
With thanks.
Joseph Casey.


Please pardon the late reply Joseph. Several valid replies follow
your original post last Sunday or Monday morning. I'm never sure
where to jump in on threads like this so I chose the top.

Someone many years ago (who?) chose to implement malloc()/free() in
a truly minimalist fashion. Minimalism has a long and credible
history in C but I think malloc()/free() is poorly designed.

It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing. If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.

The List might also remember the size of the allocation and so
permit the much wanted 'size_t size(void *m);' which would return
the amount of memory allocated at a particular address, or zero if
we can't find m in the List.

Also, because the List remembers ALL *alloc() calls, we can support
yet another function, freeall(), which will free all allocated memory.

This is usenet and anybody can respond with anything. I know that
well. But these are not idle ramblings. I have implemented all of
this here at home and it works perfectly. My question in all this is
"Why wasn't it done this way the first time?".
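A rough sketch of the scheme being described (the names my_malloc, my_free,
my_size and my_freeall are of course the home-grown interface, not the
standard's; for brevity the header trick below also ignores strict
max_align_t alignment):

```c
#include <stdlib.h>

/* One header per live allocation; the payload follows the header. */
struct node {
    struct node *next;
    size_t size;
};

static struct node *live = NULL;

void *my_malloc(size_t n)
{
    struct node *h = malloc(sizeof *h + n);
    if (h == NULL)
        return NULL;
    h->size = n;
    h->next = live;
    live = h;
    return h + 1;                 /* hand out the payload, keep the header */
}

/* Look the pointer up in the List; a 'wild' (or already-freed) pointer
   is simply not found, and nothing happens. */
void my_free(void *p)
{
    struct node **pp;
    for (pp = &live; *pp != NULL; pp = &(*pp)->next) {
        if ((void *)(*pp + 1) == p) {
            struct node *dead = *pp;
            *pp = dead->next;
            free(dead);
            return;
        }
    }
}

/* The much-wanted size query: 0 if p isn't in the List. */
size_t my_size(void *p)
{
    struct node *h;
    for (h = live; h != NULL; h = h->next)
        if ((void *)(h + 1) == p)
            return h->size;
    return 0;
}

/* Free everything the List remembers. */
void my_freeall(void)
{
    while (live != NULL)
        my_free(live + 1);
}
```

Note that the lookup makes every my_free() and my_size() a linear scan of
all live allocations, which hints at the cost argument raised in the
replies below.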

--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #20

In article <4-********************@comcast.com>,
Joe Wright <jo********@comcast.net> wrote:
Someone many years ago (who?) chose to implement malloc()/free() in
a truly minimalist fashion. Minimalism has a long and credible
history in C but I think malloc()/free() is poorly designed.

It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing.
[snip more thoughts]
This is usenet and anybody can respond with anything. I know that
well. But these are not idle ramblings. I have implemented all of
this here at home and it works perfectly. My question in all this is
"Why wasn't it done this way the first time?".
Most likely because it's easier to implement a non-minimalist solution
on top of a minimalist one when you need the extra functionality than to
implement a minimalist solution on top of a non-minimalist one when you
really do need minimalism (or want to work from a minimalist starting
point to construct a slightly different non-minimalist version).

You now have my_malloc, my_realloc, my_free, my_freeall, my_alloced_size,
and other assorted useful stuff. Go ahead and use them. It's what
you need, it will make you more effective, and other people who need a
slightly different non-minimal solution or who want to have a hosted
implementation for an 8-bit processor still have the minimal ones to
work from.

If you feel like going over to The Dark Side, you can even put them
in a different namespace than the standard library versions, give them
the same names, and use `using' to let you use the unqualified names.
But that's off-topic here.
dave

--
Dave Vandervies  dj******@csclub.uwaterloo.ca
"I'm afraid that Comeau/Dinkum aren't going to get rich fast because they
sell the only C99 implementation available now." --Dan Pop
and "We'll settle for getting rich slowly." --P.J. Plauger, in comp.lang.c
Nov 14 '05 #21

On Thu, 25 Nov 2004 01:57:36 UTC, Joe Wright <jo********@comcast.net>
wrote:
It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing. If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.
When you need nappies you should avoid using C. Use Cobol instead.
The List might also remember the size of the allocation and so
permit the much wanted 'size_t size(void *m);' which would return
the amount of memory allocated at a particular address, or zero if
we can't find m in the List.
When you need a pair of braces besides a belt, or nappies, don't program
in C. There are enough programming languages around that give you double
and triple bundles of braces, belts and nappies, but C is designed to
give the best in performance and throughput - and it requires that you
have a brain and think for yourself.
Also, because the List remembers ALL *alloc() calls, we can support
yet another function, freeall(), which will free all allocated memory.
There is nothing that requires that C do any of this for you, replacing
your brain. Use your brain and you get what you need.
This is usenet and anybody can respond with anything. I know that
well. But these are not idle ramblings. I have implemented all of
this here at home and it works perfectly. My question in all this is
"Why wasn't it done this way the first time?".

Don't ask C to be your nurse, nanny, grandma or mom - it will not
work for you. C requires that you have a functional brain and are able
to use it. If you're unable to manage this, you should avoid
programming in C and use another language.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de - the home of the German eComStation
eComStation 1.2 Deutsch is here!
Nov 14 '05 #22

In <co*********@news4.newsguy.com> Chris Torek <no****@torek.net> writes:
For this sort of reason, I do not mind:

free(ptr), ptr = NULL;

but in many cases it does not really add much to debuggability.
It *does* help if (but only if) you can establish an invariant:
"ptr is either NULL, or a valid pointer" -- and that depends greatly
on the context in which "ptr" appears.


To expand a bit on your point: the above construct, which can be trivially
automated with a macro wrapper for free() is useless when you have other
pointers pointing in the block being deallocated. If you don't nullify
*all* of them right after (or before) calling free(), nullifying only one
of them is not going to buy you much.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #23

On Wed, 24 Nov 2004 20:57:36 -0500, Joe Wright
<jo********@comcast.net> wrote:
Someone many years ago (who?) chose to implement malloc()/free() in
a truly minimalist fashion. Minimalism has a long and credible
history in C but I think malloc()/free() is poorly designed.
It is? They seem to have worked fine for many years.
It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing. If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.
No, it is not 'trivial'. It takes up extra memory (which was a big
factor when the C library was originally designed) to keep the list,
and searching through the list each time can take a lot of time (many
programs have thousands of allocated blocks). Imposing that on all
programs via the standard would be disastrous. A program with a large
database could have a million blocks allocated, if free() checked each
one against the list every call it would take a sizeable fraction of a
second each time, and the overhead of the link pointers would be several
megabytes (quite possibly more if the alignment size is large).
The List might also remember the size of the allocation and so
permit the much wanted 'size_t size(void *m);' which would return
the amount of memory allocated at a particular address, or zero if
we can't find m in the List.
Again, it is neither necessary nor desirable for most real programs,
which know (if they need to) how big the allocated memory is and whether
it has been allocated or assigned (and they do reference counting or
whatever as needed).
Also, because the List remembers ALL *alloc() calls, we can support
yet another function, freeall(), which will free all allocated memory.
Exit from the program; that will do it on most systems. Depending on what
you mean by 'all' allocated memory -- that allocated in your program, in
the current thread, or in the whole system?

What you suggest might have some value in a teaching or debugging
environment, to catch errors, and indeed there are a number of wrapper
functions to do just that (dmalloc, for instance). They can also do
things like putting 'guard' areas round the allocated memory to catch
writes to adjacent unallocated memory (in particular "off by one"
errors). However, in production code the size and speed hit is
non-trivial and unwanted.

But teaching people that memory is always protected against multiple
uses of free() on the same address is dangerous. Teaching them
defensive programming (check things yourself) in the first place is much
more valuable.
This is usenet and anybody can respond with anything. I know that
well. But these are not idle ramblings. I have implemented all of
this here at home and it works perfectly. My question in all this is
"Why wasn't it done this way the first time?".


Because most real programmers neither need nor want it. They design
properly so that there aren't memory 'leaks' or attempts to free memory
more than once, and they build in tests for things which might go wrong
(and use things like dmalloc during testing to make sure that they
haven't missed something).

If you want something which holds your hand, you write it (as you say
you have done), or find someone else's wrapper functions to do it. That
way you are in control of what your program does, and can make
appropriate tradeoffs.

One of my recent applications, in fact, does exactly that. I have a
need to allocate a lot of small areas (often inefficient with the system
malloc and free), and to go through allocating a lot of storage which I
then want to get rid of en masse before the next cycle (while leaving
some other allocated memory intact, which your 'freeall' wouldn't do).
I therefore have wrappers round the malloc functions to allocate what I
need in the way that I need it and to track it. Your functions wouldn't
help, though, I'd still have to code round them and the efficiency would
drop accordingly.
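An arena (pool) of the kind described can be sketched in a few lines;
this is an illustration under stated assumptions, not Chris C's actual
code. Each allocation gets a small header linking it into a pool, so the
whole pool can be released at once while unrelated malloc()ed memory is
left intact:

```c
#include <stdlib.h>

/* Hypothetical arena: every allocation is chained into a pool so the
   whole pool can be released at once. A real arena would carve large
   chunks and worry about worst-case alignment; this sketch allocates
   one header per object for brevity. */
struct pool_hdr { struct pool_hdr *next; };
struct pool { struct pool_hdr *blocks; };

void *pool_alloc(struct pool *a, size_t n) {
    struct pool_hdr *b = malloc(sizeof *b + n);
    if (b == NULL) return NULL;
    b->next = a->blocks;
    a->blocks = b;
    return b + 1;               /* user memory follows the header */
}

void pool_release(struct pool *a) {
    while (a->blocks != NULL) {
        struct pool_hdr *b = a->blocks;
        a->blocks = b->next;
        free(b);
    }
}
```

A per-cycle pool is released with one pool_release() call; memory held
outside the pool is untouched, which a global freeall() could not do.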

Chris C
Nov 14 '05 #24

Herbert Rosenau wrote:
On Thu, 25 Nov 2004 01:57:36 UTC, Joe Wright <jo********@comcast.net>
wrote:

It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing. If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.

If you need nappies, you should avoid using C. Use Cobol instead.

The List might also remember the size of the allocation and so
permit the much wanted 'size_t size(void *m);' which would return
the amount of memory allocated at a particular address, or zero if
we can't find m in the List.

If you need braces as well as a belt, or nappies, don't program in C.
There are plenty of programming languages around that hand you double
and triple bundles of braces, belts and nappies, but C is designed to
give the best in performance and throughput - and it requires that you
have a brain and think for yourself.

Also, because the List remembers ALL *alloc() calls, we can support
yet another function, freeall(), which will free all allocated memory.

Nothing requires C to do a bit of this for you, replacing your brain.
Use your brain and you get what you need.

This is usenet and anybody can respond with anything. I know that
well. But these are not idle ramblings. I have implemented all of
this here at home and it works perfectly. My question in all this is
"Why wasn't it done this way the first time?".


Don't ask C to be your nurse, nanny, grandma or mom - that will not
work. C requires that you have a functional brain and are able to use
it. If you cannot manage that, you should avoid programming in C and
use another language.


Rosenau, where does this come from? My comments re *alloc/free were
technical and well presented (If I say so myself). Your response
does not treat any technical issue raised but only insults me
personally. Was this your intent?

--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #25

>Someone many years ago (who?) chose to implement malloc()/free() in
a truly minimalist fashion. Minimalism has a long and credible
history in C but I think malloc()/free() is poorly designed.

It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing.
It may not be trivial in terms of run-time efficiency and paging
behavior.
If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.
I believe the RIGHT THING here would be to call abort(). Stomp the
bugs, don't paper over them, and don't just occasionally fail.
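A checking allocator in that spirit might look like this - a
hypothetical sketch (the fixed-size table and all names are invented),
which aborts loudly on a wild or repeated free() instead of silently
corrupting the heap:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical checking allocator: a tiny table of live pointers.
   checked_free() aborts on anything it did not hand out. */
#define MAXLIVE 64
static void *live[MAXLIVE];

void *checked_malloc(size_t n) {
    void *p = malloc(n);
    if (p == NULL) return NULL;
    for (int i = 0; i < MAXLIVE; i++)
        if (live[i] == NULL) { live[i] = p; return p; }
    free(p);                    /* table full: refuse rather than lose track */
    return NULL;
}

void checked_free(void *p) {
    if (p == NULL) return;      /* free(NULL) is always legal */
    for (int i = 0; i < MAXLIVE; i++)
        if (live[i] == p) { live[i] = NULL; free(p); return; }
    fprintf(stderr, "checked_free: wild or double free\n");
    abort();                    /* stomp the bug, don't paper over it */
}
```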
The List might also remember the size of the allocation and so
permit the much wanted 'size_t size(void *m);' which would return
the amount of memory allocated at a particular address, or zero if
we can't find m in the List.
And what happens if the size of the allocation turns out to be not
what was originally requested? Does the (programmer-written) program
freak out? This can legitimately happen for such reasons as rounding
the request up to the next multiple of alignment restrictions.
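That rounding is typically the usual power-of-two round-up idiom; a
minimal sketch (real allocators may round differently, so a size()
function could legitimately report more than was requested):

```c
#include <stddef.h>

/* Round n up to the next multiple of align, where align is a power of
   two. The common idiom: add align-1, then clear the low bits. */
size_t round_up(size_t n, size_t align) {
    return (n + align - 1) & ~(align - 1);
}
```

A request for 5 bytes with 8-byte alignment would thus be recorded as 8,
surprising any program that expects size() to echo its request back.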
Also, because the List remembers ALL *alloc() calls, we can support
yet another function, freeall(), which will free all allocated memory.
Of what possible use is such a function? You want to yank file
buffers out from under files you fopen()ed? You want to clobber
all the memory allocated by a third-party library you use? Is there
anything in ANSI C that says that the strings pointed to by argv[]
AREN'T allocated by malloc() before main() is started?

I could see where this could be useful if you can allocate memory
from a "pool" (which only the creator and those functions the creator
passes the pool handle to can allocate from) and you can release all
memory allocated from that pool. It is not useful when you have
to keep track of memory allocated by ANY function.
This is usenet and anybody can respond with anything. I know that
well. But these are not idle ramblings. I have implemented all of
this here at home and it works perfectly. My question in all this is
"Why wasn't it done this way the first time?".


Gordon L. Burditt
Nov 14 '05 #26

On Thu, 25 Nov 2004 19:25:06 UTC, Joe Wright <jo********@comcast.net>
wrote:
Herbert Rosenau wrote:
On Thu, 25 Nov 2004 01:57:36 UTC, Joe Wright <jo********@comcast.net>
wrote:

It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing. If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.

If you need nappies, you should avoid using C. Use Cobol instead.

The List might also remember the size of the allocation and so
permit the much wanted 'size_t size(void *m);' which would return
the amount of memory allocated at a particular address, or zero if
we can't find m in the List.

If you need braces as well as a belt, or nappies, don't program in C.
There are plenty of programming languages around that hand you double
and triple bundles of braces, belts and nappies, but C is designed to
give the best in performance and throughput - and it requires that you
have a brain and think for yourself.

Also, because the List remembers ALL *alloc() calls, we can support
yet another function, freeall(), which will free all allocated memory.

Nothing requires C to do a bit of this for you, replacing your brain.
Use your brain and you get what you need.

This is usenet and anybody can respond with anything. I know that
well. But these are not idle ramblings. I have implemented all of
this here at home and it works perfectly. My question in all this is
"Why wasn't it done this way the first time?".


Don't ask C to be your nurse, nanny, grandma or mom - that will not
work. C requires that you have a functional brain and are able to use
it. If you cannot manage that, you should avoid programming in C and
use another language.


Rosenau, where does this come from? My comments re *alloc/free were
technical and well presented (If I say so myself). Your response
does not treat any technical issue raised but only insults me
personally. Was this your intent?


You were asking for nappies - repeated quote:
On Thu, 25 Nov 2004 01:57:36 UTC, Joe Wright <jo********@comcast.net>
wrote:

It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing. If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.


C is designed to leave out anything a halfway experienced programmer
can do himself. That is:
- knowing which C objects he has requested from the runtime - no need
for C to do the bookkeeping for nappy wearers, but every reason to help
the experienced programmer with time-critical services. Superfluous
checking costs too much time even in a 10 THz environment; that time is
better given to real work.

Write a program that loops 1,000,000,000 times over the same code with
constantly changing data, needing 10 malloc() and 5-10 free() calls
each round. In every case where it is absolutely clear that a pointer
must be handed back, first test that it actually contains a valid
pointer; then do the same without the check - and you will see the
difference.

That is why you need to be an adult in C, not a baby. You have to learn
what C is and how it is designed, to give yourself the chance to save
exactly the 10 ms you need to get your program to exactly the point it
must reach.

YOU would set your pointer to NULL when you give its memory back AND
the pointer does not immediately go out of scope. You would skip that
step when you know it goes out of scope right away. In other places you
would initialise the pointer to NULL, because you know that later you
would NOT otherwise be able to tell whether the pointer is valid.

It is on YOU to know when a pointer contains valid data and when not.
Assigning a null pointer constant to it simply helps you remember its
state - when needed.

1 ms of runtime saved can be critical to getting the job done or
failing at it. It is the job of C to support you in that as far as
possible; it is NOT the job of C to support you in wearing nappies. For
that you have plenty of debugging methods - and assigning null pointer
constants is one of the most primitive. YOU would skip it when you know
what you are doing - and you would do it right, and better than free()
will ever be able to.

C is designed as an assembler for an abstract machine, giving the
compiler every possibility to make the best of it on a real machine.
That is, the runtime will be optimised (by the compiler and/or by hand)
to get the best out of the real machine. Inserting check upon check
that is most often nothing but superfluous hinders experienced
programmers from getting that same level of performance out of their
own code, just because some nappy wearers are unable to think for
themselves.

Learn how to use pointers and how to handle them, and you will know
them better than the malloc family ever can.

It is on YOU to learn never to call free() on pointers already
free()d; it is on you never to call realloc() without knowing for
certain that you have a pointer to a memory area valid for
m/c/realloc/free. It IS easy to keep control of this; nothing is needed
beyond the brain of the programmer. The technique is not hard to learn:

1. Initialise each and every pointer you have no real value for yet
with a null pointer constant.
2. Reset each pointer that stays alive more than 3 statements after
free() to a null pointer constant instead of its original value.
3. Never free() a memory area while other pointers into it are still
in use.

Easy, eh?
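Applied to a concrete fragment, the three rules look like this (a
minimal sketch; free(NULL) is defined by the standard to do nothing,
which is what makes rule 2 pay off):

```c
#include <stdlib.h>
#include <string.h>

/* The three rules applied: initialise to NULL, free, then null the
   pointer again so a stray later free() is a harmless no-op. */
int demo(void) {
    char *buf = NULL;             /* rule 1: no real value yet -> NULL */

    buf = malloc(32);
    if (buf == NULL) return -1;
    strcpy(buf, "hello");

    free(buf);
    buf = NULL;                   /* rule 2: pointer outlives free() -> NULL */

    free(buf);                    /* free(NULL) is defined: does nothing */
    return 0;
}
```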
--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2 Deutsch ist da!
Nov 14 '05 #27

In article <wm***************************@URANUS1.DV-ROSENAU.DE>,
Herbert Rosenau <os****@pc-rosenau.de> wrote:
On Thu, 25 Nov 2004 19:25:06 UTC, Joe Wright <jo********@comcast.net>
wrote:
Rosenau, where does this come from? My comments re *alloc/free were
technical and well presented (If I say so myself). Your response
does not treat any technical issue raised but only insults me
personally. Was this your intent?


You were asking for nappies - repeated quote:


No, he was asking for a potentially useful feature.

There are enough good reasons to not include this particular potentially
useful feature; giving those reasons would've been rather more helpful
than abuse.
(Not that abuse is ever a useful reply.)
That is why you need to be an adult in C, not a baby. You have to learn
what C is and how it is designed, to give yourself the chance to save
exactly the 10 ms you need to get your program to exactly the point it
must reach.


So we can safely assume that you don't use anything in <string.h>, and
nothing from <stdio.h> except fopen/fclose/fputc/fgetc, or anything in
<math.h>, or...?

After all, doing all the work yourself will let you design something
that can give you the chance to save another 10ms.

For that matter, why bother with C at all? Real Men write in assembly...
dave
(Don't go away mad, just go away.)

--
Dave Vandervies dj******@csclub.uwaterloo.ca

Right now? Either. Just fly out on your pig and buy one.
--Nick Maclaren in comp.arch
Nov 14 '05 #28

"Herbert Rosenau" <os****@pc-rosenau.de> writes:
[...]
Learn how to use pointers and how to handle them, and you will know
them better than the malloc family ever can.

It is on YOU to learn never to call free() on pointers already
free()d; it is on you never to call realloc() without knowing for
certain that you have a pointer to a memory area valid for
m/c/realloc/free. It IS easy to keep control of this; nothing is needed
beyond the brain of the programmer. The technique is not hard to learn:

1. Initialise each and every pointer you have no real value for yet
with a null pointer constant.
2. Reset each pointer that stays alive more than 3 statements after
free() to a null pointer constant instead of its original value.
3. Never free() a memory area while other pointers into it are still
in use.

Easy, eh?


Sure, it's easy. So easy that everybody gets it right, and C programs
don't suffer from wild pointers or buffer overruns, and viruses don't
propagate.

There are arguments to be made in favor of the C approach. The valid
ones don't include the words "nappy" and "baby".

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #29

On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote:
So we can safely assume that you don't use anything in <string.h>, and
nothing from <stdio.h> except fopen/fclose/fputc/fgetc, or anything in
<math.h>, or...?
No, you can't. Yes, sometimes I avoid fgets() and fscanf() because
fgetc() gives me more control over the input stream. The days of
handling punch cards are gone. I seldom need to include math.h, because
business applications and floating point do not mix at all.
After all, doing all the work yourself will let you design somethign
that can give you the chance to save another 10ms.

For that matter, why bother with C at all? Real Men write in assembly...


Yeah, you're right - but C _is_ an assembler - for an abstract
machine. A C compiler less than 10 years old will build better machine
code than 80% of human programmers can, and will do no worse than the
remaining 20%.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2 Deutsch ist da!
Nov 14 '05 #30

In article <wm***************************@URANUS1.DV-ROSENAU.DE>,
Herbert Rosenau <os****@pc-rosenau.de> wrote:
On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote:
So we can safely assume that you don't use anything in <string.h>, and
nothing from <stdio.h> except fopen/fclose/fputc/fgetc, or anything in
<math.h>, or...?


No, you can't.


And how are any of these any different from wrappers around malloc
and friends?

They're only there to keep track of things so you don't have to.
dave

--
Dave Vandervies dj******@csclub.uwaterloo.ca
Personally I thought 500km each way just for lunch was excessive. He
really should have stayed for coffee afterwards.
--Ewen McNeill in the scary devil monastery
Nov 14 '05 #31

"Herbert Rosenau" <os****@pc-rosenau.de> writes:
On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote: [...]
For that matter, why bother with C at all? Real Men write in assembly...


Yeah, you're right - but C _is_ an assembler - for an abstract
machine.


No, C isn't an assembler. The language has features that no assembly
language worthy of the name would have (arbitrarily complex
expressions, control flow constructs, etc.). It doesn't allow direct
access to CPU registers (the "register" keyword notwithstanding).
There's no direct mapping between any C construct and machine
instructions.

C is a relatively low-level language; it's semantically closer to
assembly than, say, Pascal or Ada. But saying that C is an assembly
language ignores the meaning of the term.
A C compiler less than 10 years old will build better machine code
than 80% of human programmers can, and will do no worse than the
remaining 20%.


That's true of just about any language whose compiler generates
machine code.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #32

On Fri, 26 Nov 2004 18:47:34 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote:
In article <wm***************************@URANUS1.DV-ROSENAU.DE>,
Herbert Rosenau <os****@pc-rosenau.de> wrote:
On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote:
So we can safely assume that you don't use anything in <string.h>, and
nothing from <stdio.h> except fopen/fclose/fputc/fgetc, or anything in
<math.h>, or...?


No, you can't.


And how are any of these any different from wrappers around malloc
and friends?

They're only there to keep track of things so you don't have to.


Keep track of what things? fgets()/fscanf()? They are sometimes awkward
when you need real control over the input. OK, reading a punch card
would be easy - but is stdin always just a punch card? I don't think
so.

Print out a long as 134,54 EUR - please use fprintf() to do it - but
make sure you get 0,45 and -1.234,45 as well as -0,45. Yes, a long, not
a double - business math, as you know, is very different from
university math. German uses ',' where English uses '.', and '.' where
English uses ','. Sometimes most of stdio.h is absolutely useless -
even though it is sometimes powerful.
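Hand-rolling that format for a long holding cents is straightforward; a
sketch (the function name and buffer handling are invented for
illustration, and it assumes the output buffer is large enough):

```c
#include <stddef.h>

/* Hypothetical sketch: format a long holding cents in the German
   style, ',' as decimal mark and '.' grouping thousands,
   e.g. -123445 -> "-1.234,45". */
void fmt_cents(long cents, char *out, size_t outsize) {
    unsigned long u = (cents < 0) ? 0UL - (unsigned long)cents
                                  : (unsigned long)cents;
    unsigned long euros = u / 100, rem = u % 100;
    char rev[32];                 /* euro digits, least significant first */
    int n = 0, group = 0;
    size_t i = 0;

    do {
        rev[n++] = (char)('0' + (int)(euros % 10));
        euros /= 10;
        if (++group == 3 && euros != 0) { rev[n++] = '.'; group = 0; }
    } while (euros != 0);

    if (cents < 0 && i < outsize - 1)
        out[i++] = '-';
    while (n > 0 && i < outsize - 1)
        out[i++] = rev[--n];      /* emit euro digits most significant first */
    if (i + 3 < outsize) {
        out[i++] = ',';
        out[i++] = (char)('0' + (int)(rem / 10));
        out[i++] = (char)('0' + (int)(rem % 10));
    }
    out[i] = '\0';
}
```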

The malloc family is well designed in that it does the minimum needed
in every case - and nothing more. There is no need to give it more
power, because too often there is nothing left to do but free() the
memory a pointer points to - and immediately afterwards the pointer
itself dies or at least goes out of scope. Initialising variables (not
only pointers) is fail-safe programming. You learn that quickly when
your job is not just producing MS GUI code but writing code that human
lives hang on. And it is infuriating to see the compiler and runtime
you use wasting CPU cycles when you have no escape into native
assembly, because you need the portability C delivers.

stdio was designed to handle punch cards, not keyboards, big files,
screens and other devices - so it works well when you can reduce the
I/O to look as if it came from punch cards, but gets problematic when
you can't.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2 Deutsch ist da!
Nov 14 '05 #33

On Fri, 26 Nov 2004 08:10:13 +0000 (UTC)
"Herbert Rosenau" <os****@pc-rosenau.de> wrote:
On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote:
<snip>
For that matter, why bother with C at all? Real Men write in
assembly...


Yeah, you're right - but C _is_ an assembler - for an abstract
machine. A C compiler less than 10 years old will build better machine
code than 80% of human programmers can


Possibly true. My experience from back then is that the C compiler will
beat a lot of people who write assembler.
and will do no worse
than the remaining 20%.


Definitely NOT always true. On one system I worked on in about '95 I
had to code some specific areas in assembler because the C code proved
too slow. It was not even a marginal thing: the C code was far too
slow, and my hand-crafted assembler had (from memory) something like
half the time spare.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 14 '05 #34

On Fri, 26 Nov 2004 19:09:53 UTC, Keith Thompson <ks***@mib.org>
wrote:
"Herbert Rosenau" <os****@pc-rosenau.de> writes:
On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote: [...]
For that matter, why bother with C at all? Real Men write in assembly...


Yeah, you're right - but C _is_ an assembler - for an abstract
mashine.


No, C isn't an assembler. The language has features that no assembly
language worthy of the name would have (arbitrarily complex
expressions, control flow constructs, etc.). It doesn't allow direct
access to CPU registers (the "register" keyword notwithstanding).
There's no direct mapping between any C construct and machine
instructions.


Not true! For about 5 years I used a macro assembler that had limited
types like C - int/unsigned int, char, float, double... - and allowed
more complex expressions, such as
store name(r5++) + (r4--) * mwst / discount
or in C: name[ix] += *p5 * mwst / discount;

Only a small difference in syntax. Control-flow constructs like
cmp arr(r6+7), arr2(r4-3)
bnzc (r1+947) # if (arr[i+7] != arr2[j-3]) {....}
repnzc g4, l4 # do { ...} while (x--);
repnz (var), 9444 # while (*var--) { .... }

No direct mapping?

r2 := dest,0 # r2 = wordaddress of dest, r3 byte no. start
r4 := src,0
r3 := fz r4, 0 # strlen(src)
r5 := r3 # copy number of bytes to dest
(r2++) :y (r4++) # memcpy(dest, src, strlen(src));
# or while (*src) *dest++ = *src++;
# or for (s = src, d = dest, l = strlen(s); l; l--) *d++ = *s++;

r5 := $BIBEAS("/bib/sub/file.txt", "OPEN READ");
bz error
r4 := $BIBEAS(r5, "READ", 2048, buf);
r3 := $BIBEAS(r5, "CLOSE");
whereas the OS itself knew only flat files, a library was able to
handle subdirectories 8 levels deep, and the macro assembler simplified
access to them

Depending on the hardware an assembler can be powerful.

We used C on that machine only because we wanted portability. C turned
out less useful than planned - but only because the compiler was one of
the buggiest programs I've ever seen, and the customer we were working
for delayed acceptance until it was too late to hold the developer
liable, and was then unwilling to pay the extra cost to get it right.
C is a relatively low-level language; it's semantically closer to
assembly than, say, Pascal or Ada. But saying that C is an assembly
language ignores the meaning of the term.
No, that describes exactly the idea of C. C is an assembler for an
abstract machine. Look at the standard: it describes the behavior of
that abstract machine exactly, the way a hardware manual would describe
a real computer.

Pascal was designed as a learning language, even though it found its
way into the world of high-level languages. Ada was designed as an HLL,
but C is simply a portable assembler. Yes, with C89, and a bit more
with C99, it gained more HLL features - but its design remains that of
an assembler for a well-defined abstract machine.
A C compiler younger than 10 years will build better mashine code
than 80% of human programmers are able to and will not more bad than
the remaining 20%.


That's true of just about any language whose compiler generates
machine code.


No. It's relatively easy to beat any other language in code size and
performance at the assembly level if you're a good assembler
programmer. Each of them has its own strengths - but they are not
designed to beat assembly programmers; they are designed to shorten
development time in their special fields, and in that they are
powerful.

C is designed to beat assembly programmers: to shorten development
time relative to assembly, to cut runtime and/or resource usage
wherever possible, and to increase portability. Fortran is designed for
developing mathematical solutions, Cobol for business solutions, ...

I've used C compilers that were absolutely unable to generate machine
code but generated assembly. I've used C compilers that could generate
neither machine code nor assembly, only P-code - where the P-code was
the same that the machine's assembler generated as an intermediate
step. It was the linker's job to consume P-code and generate binary
code ready for the system loader.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2 Deutsch ist da!
Nov 14 '05 #35

On Fri, 26 Nov 2004 21:18:49 UTC, Flash Gordon
<sp**@flash-gordon.me.uk> wrote:
On Fri, 26 Nov 2004 08:10:13 +0000 (UTC)
"Herbert Rosenau" <os****@pc-rosenau.de> wrote:
On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca (Dave
Vandervies) wrote:


<snip>
For that matter, why bother with C at all? Real Men write in
assembly...


Yeah, you're right - but C _is_ an assembler - for an abstract
machine. A C compiler less than 10 years old will build better machine
code than 80% of human programmers can


Possibly true. My experience from back then is that the C compiler will
beat a lot of people who write assembler.
and will do no worse
than the remaining 20%.


Definitely NOT always true. On one system I worked on in about '95 I had
to code some specific areas in assembler because the C code was proved
to be too slow. It was not even a marginal thing, the C code was far too
slow and my hand crafted assembler had (from memory) something like half
the time spare.


Do you have timing problems? Then check the algorithm you use. I once
increased the throughput of hand-crafted assembly by 2000%, and reduced
its memory usage from 400KB to 2KB in the process. The algorithm the
old code used was not too bad, but bad enough to use more memory and
more CPU cycles than the buggy C compiler produced using a different
one.

Find a better algorithm instead of trying to write the same thing in
assembly. Switching algorithms gains far more than trying to
hand-optimise what a halfway good compiler can do better.

I would say hand-optimising already hand-optimised brain-dead code
increases throughput more than you can win by writing something in
assembly rather than C.

In 1980 it was easy for a really experienced assembler programmer to
beat a C compiler. In 1990 it was hard, in 2000 it was nearly
impossible, and in 2005 it will be impossible.

Use the right algorithm and the compiler will beat you. The only point
where you may need native assembly is:
- direct hardware access

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2 Deutsch ist da!
Nov 14 '05 #36

On Sat, 27 Nov 2004 08:28:32 +0000 (UTC)
"Herbert Rosenau" <os****@pc-rosenau.de> wrote:
On Fri, 26 Nov 2004 21:18:49 UTC, Flash Gordon
<sp**@flash-gordon.me.uk> wrote:
On Fri, 26 Nov 2004 08:10:13 +0000 (UTC)
"Herbert Rosenau" <os****@pc-rosenau.de> wrote:
On Thu, 25 Nov 2004 23:04:13 UTC, dj******@csclub.uwaterloo.ca
(Dave Vandervies) wrote:
<snip>
> For that matter, why bother with C at all? Real Men write in
> assembly...

Yeah, you're right - but C _is_ an assembler - for an abstract
machine. A C compiler less than 10 years old will build better
machine code than 80% of human programmers can


Possibly true. My experience from back then is that the C compiler
will beat a lot of people who write assembler.
and will do no worse
than the remaining 20%.


Definitely NOT always true. On one system I worked on in about '95 I
had to code some specific areas in assembler because the C code was
proved to be too slow. It was not even a marginal thing, the C code
was far too slow and my hand crafted assembler had (from memory)
something like half the time spare.


Do you have timing problems? Then check the algorithm you use.


You are assuming that this had not already been done. In fact, I always
start by trying to devise the most efficient algorithm.
I once
increased the throughput of hand-crafted assembly by 2000%, and reduced
its memory usage from 400KB to 2KB in the process. The algorithm the
old code used was not too bad, but bad enough to use more memory and
more CPU cycles than the buggy C compiler produced using a different
one.
So you were dealing with either badly crafted assembler or a stupid
algorithm. That does not mean that other people are.
Find a better algorithm instead of trying to write the same thing in
assembly.
That assumes you have not already optimised the algorithm.
Switching algorithms gains far more than trying to hand-optimise what a
halfway good compiler can do better.
Unless the compiler is allowed to use instructions the assembler
programmer is not, it is IMPOSSIBLE for a compiler to produce better
code than could be produced by a sufficiently skilled assembler
programmer. After all, it is possible (if not necessarily probable,
unless said programmer also writes compilers) for an assembler
programmer to produce exactly the same code as the compiler.
I would say hand-optimising already hand-optimised brain-dead code
increases throughput more than you can win by writing something in
assembly rather than C.

In 1980 it was easy for a really experienced assembler programmer to
beat a C compiler. In 1990 it was *hard*, in 2000 it was *nearly*
impossible, and in 2005 it will be impossible.
So above you accept that within the time frame being referred to, i.e.
within the last 10 years, it IS possible to beat the compiler. I
referred to a specific example in about '95 where my assembler skills
were sufficiently good to completely thrash the performance of the
compiler. And that was having started with algorithm development and
design aimed at producing the most efficient code.
Use the right algorithm and the compiler will beat you.
Definitely not guaranteed. It is possible to produce by hand any code
that the compiler can produce. Further, since no software is perfect,
it follows that no compiler is perfect; therefore there WILL be
instances where the compiler produces less than optimal code.
The only point
where you may need native assembly is:
- direct hardware access


The times when assembler is needed are shrinking, both because of
better compilers and faster hardware, but there will always be areas
where compilers can be beaten, sometimes by a long way. It's just not
often worth the effort required to beat them.

BTW, the only non-standard feature used in the original C implementation
was mapping a 2D array to a specific memory address, and that was done
with the linker.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 14 '05 #37

Flash Gordon wrote:

(snip)
Unless the compiler is allowed to use instructions the assembler
programmer is not it is IMPOSSIBLE for a compiler to produce better code
that could be produced by a sufficiently skilled assembler programmer.
After all, it is possible (not necessarily probable, unless said
programmer also writes compilers) for an assember programmer to produce
exactly the same code as the compiler.


(snip)

For a sufficiently small piece of code I could probably agree.
That size might be about 10 lines of C. Past that, the
combinatorics are so bad that the assembly programmer would
never finish. A compiler using a dynamic programming algorithm
can find an optimal path through a long series of operation
extremely fast.

The advantage the assembly programmer has is knowing the
actual cases that are important, and may be able to make
optimizations that a compiler can't. Just one example.

A program may need to multiply two N bit integers generating
a 2N bit product. In C this must be done by multiplying two
2N bit numbers, yet the hardware might have an instruction
that can multiply N bit numbers with a 2N bit product.
It is possible the compiler will figure this out, but it
is likely that it won't.
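For the common 32x32 -> 64-bit case, the C idiom is to widen one
operand before multiplying; many compilers can then emit the single
hardware multiply described above:

```c
#include <stdint.h>

/* 32x32 -> 64-bit product: widen one operand first. Without the cast
   the multiplication happens in 32 bits and the top half is lost. */
uint64_t mul32x32(uint32_t a, uint32_t b) {
    return (uint64_t)a * b;
}
```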

Some machines have some special instructions that are
not normally used by compilers. It might be that they
are not useful in the general case, but are in a
specific case.

-- glen

Nov 14 '05 #38

P: n/a
Method Man <a@b.c> scribbled the following:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
"Method Man" <a@b.c> writes:
[...]
> Which brings up the question: Why did they not originally implement
> 'free' to set the pointer to NULL after deallocating the memory?
> Other than some obscure algorithms, I don't see much use to the
> programmer of having a pointer that points to non-existent memory.
Because it can't. The pointer argument to free(), like all function
arguments, is passed by value; the function can't modify it. (A
free() function that takes a void** rather than a void* couldn't
handle arbitrary pointer types.)

Yeah, I suppose not. It could return NULL: p = free(p);
I just think free()'ing p and setting p = 0 should be handled in one step,
since the operations are closely related and it avoids programmer error.


AFAIK free()ing a properly allocated pointer always succeeds, and
free()ing a pointer that has not been properly allocated always causes
undefined behaviour. So unless free() can distinguish between properly
allocated and not properly allocated pointers, the above definition
would define it as always returning NULL. What's the point of defining
a return value that is always going to be the same? Why not simply set
p to NULL yourself, because you know the correct value to set it to is
always NULL?

--
/-- Joona Palaste (pa*****@cc.helsinki.fi) ------------- Finland --------\
\-------------------------------------------------------- rules! --------/
"It's not survival of the fattest, it's survival of the fittest."
- Ludvig von Drake
Nov 14 '05 #39

On Sun, 28 Nov 2004, Joona I Palaste wrote:
What's the point of defining
a return value that is always going to be the same? Why not simply set
p to NULL yourself, because you know the correct value to set it to is
always NULL?


Well, it's about as pointless as returning a value that's always going to
be the same as one of the parameters..
Nov 14 '05 #40

Jarno A Wuolijoki <jw******@cs.helsinki.fi> scribbled the following:
On Sun, 28 Nov 2004, Joona I Palaste wrote:
What's the point of defining
a return value that is always going to be the same? Why not simply set
p to NULL yourself, because you know the correct value to set it to is
always NULL?
Well, it's about as pointless as returning a value that's always going to
be the same as one of the parameters..


So therefore the current design of free() - not returning anything at
all - is the best.

--
/-- Joona Palaste (pa*****@cc.helsinki.fi) ------------- Finland --------\
\-------------------------------------------------------- rules! --------/
"We sorcerers don't like to eat our words, so to say."
- Sparrowhawk
Nov 14 '05 #41

Joona I Palaste <pa*****@cc.helsinki.fi> scribbled the following:
Jarno A Wuolijoki <jw******@cs.helsinki.fi> scribbled the following:
On Sun, 28 Nov 2004, Joona I Palaste wrote:
What's the point of defining
a return value that is always going to be the same? Why not simply set
p to NULL yourself, because you know the correct value to set it to is
always NULL?
Well, it's about as pointless as returning a value that's always going to
be the same as one of the parameters..
So therefore the current design of free() - not returning anything at
all - is the best.


One "clever" use for always returning the value of a specific parameter
would be in fprintf(), to allow this sort of trick:

fclose(fprintf(fopen("foo.txt", "w"), "Hello world!"));

where FILE *fprintf(FILE *stream, char *format, ...) is defined as
returning the stream it writes to.
Of course if the fopen() fails this will cause undefined behaviour.

--
/-- Joona Palaste (pa*****@cc.helsinki.fi) ------------- Finland --------\
\-------------------------------------------------------- rules! --------/
"C++. C++ run. Run, ++, run."
- JIPsoft
Nov 14 '05 #42

"Joona I Palaste" <pa*****@cc.helsinki.fi> wrote in message
news:co**********@oravannahka.helsinki.fi...
One "clever" use for always returning the value of a specific parameter
would be in fprintf(), to allow this sort of trick:

fclose(fprintf(fopen("foo.txt", "w"), "Hello world!"));
Not really clever, childish at best.
where FILE *fprintf(FILE *stream, char *format, ...) is defined as
returning the stream it writes to.
Of course if the fopen() fails this will cause undefined behaviour.


QED

Conversely, the string functions that return their destination parameter can be
used that way:

fp = fopen(strcat(strcat(strcpy(dest, dir), "/"), filename), "r");

But that is pretty lame too, and doesn't add much to either readability or
performance.
Not to mention the lack of precise semantics: what if dir already ends with /?
What if it is "/" itself? What does an empty dir mean in this context?...

--
Chqrlie.


Nov 14 '05 #43

"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote in message
news:Q6hqd.476782$D%.222708@attbi_s51...
The advantage the assembly programmer has is knowing the
actual cases that are important, and may be able to make
optimizations that a compiler can't. Just one example.

A program may need to multiply two N bit integers generating
a 2N bit product. In C this must be done by multiplying two
2N bit numbers, yet the hardware might have an instruction
that can multiply N bit numbers with a 2N bit product.
It is possible the compiler will figure this out, but it
is likely that it won't.

Some machines have some special instructions that are
not normally used by compilers. It might be that they
are not useful in the general case, but are in a
specific case.


That's a good example, but a good compiler will notice the pattern and produce
the correct code:

short op1, op2;
long result = (long)op1 * (long)op2;                  /* 16 x 16 -> 32 */

long op1, op2;
long long result = (long long)op1 * (long long)op2;   /* 32 x 32 -> 64 */

where short is 16 bits, long is 32 bits and long long is 64 bits.
gcc will use the appropriate x86 instruction for these instead of using the
general case.

Evolution has made compilers very clever, but programmers still beat them on
intelligence sometimes.

--
Chqrlie.
Nov 14 '05 #44


In article <co**********@oravannahka.helsinki.fi>, Joona I Palaste <pa*****@cc.helsinki.fi> writes:

AFAIK free()ing a properly allocated pointer always succeeds, and
free()ing a pointer that has not been properly allocated always causes
undefined behaviour.


free(0) has defined behavior. Not that it matters particularly to this
argument, which I agree with - there's little point in returning any
value from free.

--
Michael Wojcik mi************@microfocus.com
Nov 14 '05 #45

Jarno A Wuolijoki wrote:
On Sun, 28 Nov 2004, Joona I Palaste wrote:
What's the point of defining a return value that is always going
to be the same? Why not simply set p to NULL yourself, because you
know the correct value to set it to is always NULL?


Well, it's about as pointless as returning a value that's always
going to be the same as one of the parameters..


No, that last has the advantage of inducing silly programming
errors and making FAQs valuable.

--
"The most amazing achievement of the computer software industry
is its continuing cancellation of the steady and staggering
gains made by the computer hardware industry..." - Petroski
Nov 14 '05 #46

Joona I Palaste wrote:
.... snip ...
One "clever" use for always returning the value of a specific
parameter would be in fprintf(), to allow this sort of trick:

fclose(fprintf(fopen("foo.txt", "w"), "Hello world!"));

where FILE *fprintf(FILE *stream, char *format, ...) is defined
as returning the stream it writes to. Of course if the fopen()
fails this will cause undefined behaviour.


fprintf is not so defined, it actually returns a useful value.
Some idiot is going to try your statement and get all sorts of
nonsense.

--
"The most amazing achievement of the computer software industry
is its continuing cancellation of the steady and staggering
gains made by the computer hardware industry..." - Petroski
Nov 14 '05 #47

In article <wm***************************@URANUS1.DV-ROSENAU.DE>,
Herbert Rosenau <os****@pc-rosenau.de> wrote:
C is designed to beat assembly programmers: to shorten development
time compared with assembly development, to shorten runtime and/or resource
usage whenever possible, and to increase portability.


Just like any other general-purpose high-level language, then?
dave

--
Dave Vandervies dj******@csclub.uwaterloo.ca
This happens to be very confusing, and it is not surprising that
people get it wrong.
--Chris Torek in comp.lang.c
Nov 14 '05 #48

Joe Wright wrote:
[...]

It would be trivial for malloc/realloc/calloc to 'remember' what
they do in a List so that free() can do the right thing. If we call
free() with a 'wild' pointer the system would look it up in the List
and, not finding it, do nothing.
Nothing in the Standard forbids such an implementation.
Indeed, I have written implementations that maintain such
records as debugging aids -- but when trouble is detected
they complain as loudly as they can rather than doing nothing.
[...]
Also, because the List remembers ALL *alloc() calls, we can support
yet another function, freeall(), which will free all allocated memory.
[...]


Why would anybody ever want such a thing? Or to put
it another way: What actions could a program take after
a freeall() without risking undefined behavior? Which
library functions are forbidden to obtain memory from
malloc(), or are required to work even if their allocated
memory is torn out from underneath them without warning?
What good is a pointer obtained from malloc() if it can
be invalidated at any moment by a completely unrelated
piece of code that doesn't even have knowledge of the
pointer's existence?

In light of such issues, I think the Standard already
provides freeall(), but under a different name: abort().

--
Er*********@sun.com

Nov 14 '05 #49

In <Q6hqd.476782$D%.222708@attbi_s51> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Flash Gordon wrote:

(snip)
Unless the compiler is allowed to use instructions the assembler
programmer is not, it is IMPOSSIBLE for a compiler to produce better code
than could be produced by a sufficiently skilled assembler programmer.
After all, it is possible (not necessarily probable, unless said
programmer also writes compilers) for an assembler programmer to produce
exactly the same code as the compiler.


(snip)

For a sufficiently small piece of code I could probably agree.
That size might be about 10 lines of C. Past that, the
combinatorics are so bad that the assembly programmer would
never finish. A compiler using a dynamic programming algorithm
can find an optimal path through a long series of operations
extremely fast.


The nature of the target processor makes quite a lot of difference.
Compilers have never been competitive on 8-bit CPUs with limited
resources (even when used as cross-compilers and run on platforms with
plenty of resources). I haven't even heard about a C compiler for
the i8048 chip, yet I've written many non-trivial embedded control
applications for that platform in assembly. The smallest chip a C
compiler can target seems to be the i8051, but I doubt many applications
have been written in C for the unextended i8051 chip.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #50

50 Replies
