Bytes IT Community

What is the advantage of malloc over calloc

Please reply...

Feb 21 '06 #1
37 Replies


On 20 Feb 2006 21:25:26 -0800, "deepu" <de********@gmail.com> wrote in
comp.lang.c:
Please reply...


Please define "better".

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Feb 21 '06 #2

What is the advantage of malloc over calloc. i.e. since malloc exists
to perform the memory operations why calloc is needed. Asked in an
interview.

deepu wrote:
Please reply...


Feb 21 '06 #3

deepu wrote:

What is the advantage of malloc over calloc.
It's possible that malloc might be smaller and faster.
i.e. since malloc exists
to perform the memory operations why calloc is needed.
Because calloc does something that malloc doesn't do.
Asked in an interview.


The two questions are non sequiturs.

--
pete
Feb 21 '06 #4

deepu wrote:
What is the advantage of malloc over calloc. i.e. since malloc exists
to perform the memory operations why calloc is needed. Asked in an
interview.

Do you know the difference between the two? Hint, one zeros the
allocated memory, which may or may not be an advantage.

--
Ian Collins.
Feb 21 '06 #5

Ian Collins <ia******@hotmail.com> writes:
deepu wrote:
What is the advantage of malloc over calloc. i.e. since malloc exists
to perform the memory operations why calloc is needed. Asked in an
interview.

Do you know the difference between the two? Hint, one zeros the
allocated memory, which may or may not be an advantage.


Keep in mind that zeroing memory won't necessarily do anything useful
for pointers or floating-point objects. (Worse yet, it very commonly
will set them to NULL and 0.0, respectively, which can make certain
bugs very difficult to find.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Feb 21 '06 #6

Keith Thompson <ks***@mib.org> writes:
Keep in mind that zeroing memory won't necessarily do anything useful
for pointers or floating-point objects. (Worse yet, it very commonly
will set them to NULL and 0.0, respectively, which can make certain
bugs very difficult to find.)

Sorry, I'm confused as to why you've described this as potentially not
useful. (Without restarting the discussion of whether NULL is
always zero) surely by calling calloc() the allocated space is provided
in an initialized state? Yes, your program may not *want* NULLs and
0.0s, but knowing what they are, and why they were set to that value,
must be better than the uninitialized contents provided by malloc()?

--
Chris.

Feb 21 '06 #7

In article <dt**********@enyo.uwa.edu.au>,
Chris McDonald <ch***@csse.uwa.edu.au> wrote:
Yes, your program may not *want* NULLs and
0.0s, but knowing what they are, and why they were set to that value,
must be better than the uninitialized contents provided by malloc(). ?


Why? If your program has no bugs, then you will not use these values,
so it makes no difference what they are. If your program does have
bugs, then random values may well cause the error to be spotted sooner
than null pointers and 0.0 floats.

A real example: when shared libraries were added to SunOS, several
programs (standard unix commands) were found that had been unwittingly
relying on automatic variables being initialised to zero. This showed
up because the dynamic loading used the stack before main() was
called.

Some malloc implementations have an option to "scribble" over memory
before mallocing and after freeing, to increase the likelihood of
errors being detected.

-- Richard
Feb 21 '06 #8

On 2006-02-21, Keith Thompson <ks***@mib.org> wrote:
Ian Collins <ia******@hotmail.com> writes:
deepu wrote:
What is the advantage of malloc over calloc. i.e. since malloc exists
to perform the memory operations why calloc is needed. Asked in an
interview.

Do you know the difference between the two? Hint, one zeros the
allocated memory, which may or may not be an advantage.


Keep in mind that zeroing memory won't necessarily do anything useful
for pointers or floating-point objects. (Worse yet, it very commonly
will set them to NULL and 0.0, respectively, which can make certain
bugs very difficult to find.)


Could you expand on this a little? How on earth can it not be useful
to zero a block of memory if that is the intent? What do you mean by
"it will commonly set them to NULL"? Of course it would: it's zeroing
memory (e.g. zeroing a block of char **).

--
Remove evomer to reply
Feb 21 '06 #9

On 2006-02-21, Richard Tobin <ri*****@cogsci.ed.ac.uk> wrote:
In article <dt**********@enyo.uwa.edu.au>,
Chris McDonald <ch***@csse.uwa.edu.au> wrote:
Yes, your program may not *want* NULLs and
0.0s, but knowing what they are, and why they were set to that value,
must be better than the uninitialized contents provided by malloc(). ?
Why? If your program has no bugs, then you will not use these values,

Unless of course the zeroed memory was to hold a set of pointers and a
non-null value indicates a valid pointer.
so it makes no difference what they are. If your program does have
bugs, then random values may well cause the error to be spotted sooner
than null pointers and 0.0 floats.
A real example: when shared libraries were added to SunOS, several
programs (standard unix commands) were found that had been unwittingly
relying on automatic variables being initialised to zero. This showed
up because the dynamic loading used the stack before main() was
called.
But this does not in any way invalidate the need/requirement for auto
zeroing in a memory allocation call.

Some malloc implementations have an option to "scribble" over memory
before mallocing and after freeing, to increase the likelihood of
errors being detected.

-- Richard

--
Remove evomer to reply
Feb 21 '06 #10

ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <dt**********@enyo.uwa.edu.au>,
Chris McDonald <ch***@csse.uwa.edu.au> wrote:
Yes, your program may not *want* NULLs and
0.0s, but knowing what they are, and why they were set to that value,
must be better than the uninitialized contents provided by malloc(). ?
Why? If your program has no bugs, then you will not use these values,
so it makes no difference what they are. If your program does have
bugs, then random values may well cause the error to be spotted sooner
than null pointers and 0.0 floats.

A real example: when shared libraries were added to SunOS, several
programs (standard unix commands) were found that had been unwittingly
relying on automatic variables being initialised to zero. This showed
up because the dynamic loading used the stack before main() was
called.

Some malloc implementations have an option to "scribble" over memory
before mallocing and after freeing, to increase the likelihood of
errors being detected.

Thanks for the reply Richard (and an interesting SunOS example).

I'm not comfortable with the approach of assuming, or relying upon,
random values to trigger bugs in code. I understand why this may
be effective, as you've described, but surely calling calloc() could
be more effective in helping to isolate one of those bugs in the first
place, by providing a known state from which to start debugging.

One could also argue that setting pointers to NULL values is more likely
to trigger unknown pointer bugs than if they just had random values,
where 1 in 4, or 1 in 8, may just work.

--
Chris.
Feb 21 '06 #11

Richard G. Riley wrote:
On 2006-02-21, Keith Thompson <ks***@mib.org> wrote:
Ian Collins <ia******@hotmail.com> writes:
deepu wrote:
What is the advantage of malloc over calloc. i.e. since malloc exists
to perform the memory operations why calloc is needed. Asked in an
interview.

Do you know the difference between the two? Hint, one zeros the
allocated memory, which may or may not be an advantage.
Keep in mind that zeroing memory won't necessarily do anything useful
for pointers or floating-point objects. (Worse yet, it very commonly
will set them to NULL and 0.0, respectively, which can make certain
bugs very difficult to find.)


Could you expand on this a little. How on earth can it not be useful
to zero a block of memory if that is the intent?


Because it's not /guaranteed/ to do so for non-integer types.
What do you mean by
"it will commonly set them to NULL"? Of course it would: it's zeroing
memory (e.g. zeroing a block of char **).


There is no requirement that the bytes of a null pointer be zeroes.
A program that assumes that there is - because it's been tried on
machines where null pointers have zero bytes - will likely break
when it encounters an implementation where null pointers have interesting
bit-patterns.

[Note that the standard /does/ require that a null pointer compare equal
to a literal 0; this is not the same case.]

--
Chris "was stirred, now shaken" Dollin
RIP Andreas "G'Kar" Katsulas, May 1946 - February 2006
Feb 21 '06 #12

On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:

Because it's not /guaranteed/ to do so for non-integer types.


I have no idea how this came up: when I said "intent", clearly the
intent (with the example) was to zero banks of pointers. I agree that
for non-integer types, all-zero bytes don't necessarily compute to 0.

To sum up: of course calloc is useful.
Feb 21 '06 #13

On 2006-02-21, Chris McDonald <ch***@csse.uwa.edu.au> wrote:
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <dt**********@enyo.uwa.edu.au>,
Chris McDonald <ch***@csse.uwa.edu.au> wrote:
Yes, your program may not *want* NULLs and
0.0s, but knowing what they are, and why they were set to that value,
must be better than the uninitialized contents provided by malloc(). ?
Why? If your program has no bugs, then you will not use these values,
so it makes no difference what they are. If your program does have
bugs, then random values may well cause the error to be spotted sooner
than null pointers and 0.0 floats.

A real example: when shared libraries were added to SunOS, several
programs (standard unix commands) were found that had been unwittingly
relying on automatic variables being initialised to zero. This showed
up because the dynamic loading used the stack before main() was
called.

Some malloc implementations have an option to "scribble" over memory
before mallocing and after freeing, to increase the likelihood of
errors being detected.

Thanks for the reply Richard (and an interesting SunOS example).

I'm not comfortable with the approach of assuming, or relying upon,
random values to trigger bugs in code. I understand why this may
be effective, as you've described, but surely calling calloc() could
be more effective in helping to isolate one of those bugs in the first
place, by providing a known state from which to start debugging.


I would agree and was going to write the same.
One could also argue that setting pointers to NULL values is more likely
to trigger unknown pointer bugs, than if they just had random values
where 1 in 4, or 1 in 8, may just work.


Again : I agree.

The only issue being (and I'm not sure how it was raised, since it
seemed a bit of a strawman to knock calloc) that zeroed memory isn't
always what might be intended if real "0" values were intended:
although that's more theory than practice in the real world IMO, since one of the
only reasons I would use calloc would be to initialise pointer blocks.

--
Remove evomer to reply
Feb 21 '06 #14

Richard G. Riley wrote:
On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:

Because it's not /guaranteed/ to do so for non-integer types.

I have no idea how this came up: when I said "intent", clearly the
intent (with the example) was to zero banks of pointers. I agree that
for non-integer types, all-zero bytes don't necessarily compute to 0.


Pointers are not integer types. callocing memory which is then treated
as pointer values, without pointer values being assigned into it,
/doesn't/ guarantee that the memory reads as null pointers.
To sum up : of course calloc is useful


For some things. I don't believe I've ever had a use for it, but I
don't write /that/ much C code (nowadays): usually I wanted to put
non-zero things into my mallocated space.

--
Chris "was stirred, now shaken" Dollin
RIP Andreas "G'Kar" Katsulas, May 1946 - February 2006
Feb 21 '06 #15

On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:
There is no requirement that the bytes of a null pointer be zeroes.
A program that assumes that there is - because it's been tried on
machines where null pointers have zero bytes - will likely break
when it encounters an implementation where null pointers have interesting
bit-patterns.

[Note that the standard /does/ require that a null pointer compare equal
to a literal 0; this is not the same case.]


I must say that in many years of programming it is the first time I
have come across this. I stand corrected. Makes me wonder about the
usefulness of calloc after all.

Do you know of a platform where "Null" pointers did not cast to zero?
--
Remove evomer to reply
Feb 21 '06 #16

"Richard G. Riley" <rg***********@gmail.com> writes:
The only issue being (and I'm not sure how it was raised, since it
seemed a bit of a strawman to knock calloc) that zeroed memory isn't
always what might be intended if real "0" values were intended:
although more theory than practice in the real world IMO, since one of the
only reasons I would use calloc would be to initialise pointer blocks.

(good to have a sensible discussion).

I know it's all about standard C here, and that history can't be changed,
but I'd suggest that much C programming may be 'easier' if the default
function of choice was calloc(), which always zeroes memory, and for those
that require it, a function named randalloc() be available which fills
memory with random, or hazardous, bytes (if memset() is insufficient).

--
Chris.
Feb 21 '06 #17

On 2006-02-21, Chris McDonald <ch***@csse.uwa.edu.au> wrote:
"Richard G. Riley" <rg***********@gmail.com> writes:
The only issue being (and I'm not sure how it was raised, since it
seemed a bit of a strawman to knock calloc) that zeroed memory isn't
always what might be intended if real "0" values were intended:
although more theory than practice in the real world IMO, since one of the
only reasons I would use calloc would be to initialise pointer blocks.

(good to have a sensible discussion).

I know it's all about standard C here, and that history can't be changed,
but I'd suggest that much C programming may be 'easier' if the default
function of choice was calloc(), which always zeroes memory, and for those
that require it, a function named randalloc() be available which fills
memory with random, or hazardous, bytes (if memset() is insufficient).


Of course. And furthermore, "relying" on malloc to provide "random
values" is pie in the sky. The malloc'd memory may well already be all
zeros, or some memory-check pattern.

--
Remove evomer to reply
Feb 21 '06 #18

Richard G. Riley wrote:
Makes me wonder about the
usefulness of calloc after all.


The only time I've been tempted to use calloc,
is when working with strings.
But as the code evolved, it always turned out for me,
that using calloc wasn't necessary.

I have no examples of calloc use in my C code archives.

--
pete
Feb 21 '06 #19

Richard G. Riley wrote:
On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:
There is no requirement that the bytes of a null pointer be zeroes.
A program that assumes that there is - because it's been tried on
machines where null pointers have zero bytes - will likely break
when it encounters an implementation where null pointers have interesting
bit-patterns.

[Note that the standard /does/ require that a null pointer compare equal
to a literal 0; this is not the same case.]


I must say that in many years of programming it is the first time I
have come across this. I stand corrected. Makes me wonder about the
usefulness of calloc after all.

Do you know of a platform where "Null" pointers did not cast to zero?


Casting is something else again [1]: the issue above was about whether
a pointer with all bytes (or bits) zero was necessarily a null pointer.

If I remember Chris Torek's posts on this subject correctly, the Prime
series of computers had null pointers which had set bits in them (for
capabilities). Hence callocated memory treated as pointers would have
illegal values in, or values pointing /at/ something - but they would
not be /null/ ....

--
Chris "was stirred, now shaken" Dollin
RIP Andreas "G'Kar" Katsulas, May 1946 - February 2006
Feb 21 '06 #20

Richard Tobin wrote:
In article <dt**********@enyo.uwa.edu.au>,
Chris McDonald <ch***@csse.uwa.edu.au> wrote:
Yes, your program may not *want* NULLs and
0.0s, but knowing what they are, and why they were set to that value,
must be better than the uninitialized contents provided by malloc().
?


Why? If your program has no bugs, then you will not use these values,
so it makes no difference what they are. If your program does have
bugs, then random values may well cause the error to be spotted sooner
than null pointers and 0.0 floats.


<snip>

In the case where 0 is *not* a valid value, a check that loops and divides
by the contents of each vector element would show if there's an error,
whereas some other random [non-zero] value would not.

I think I'd go with zeroed memory vs. random [esp. during early
development].
--
==============
Not a pedant
==============
Feb 21 '06 #21

On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:
Richard G. Riley wrote:
On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:
There is no requirement that the bytes of a null pointer be zeroes.
A program that assumes that there is - because it's been tried on
machines where null pointers have zero bytes - will likely break
when it encounters an implementation where null pointers have interesting
bit-patterns.

[Note that the standard /does/ require that a null pointer compare equal
to a literal 0; this is not the same case.]
I must say that in many years of programming it is the first time I
have come across this. I stand corrected. Makes me wonder about the
usefulness of calloc after all.

Do you know of a platform where "Null" pointers did not cast to zero?


Casting is something else again [1]: the issue above was about whether
a pointer with all bytes (or bits) zero was necessarily a null pointer.

I know. Hence I asked when you have seen a NULL pointer not being byte
equivalent to 0.

If I remember Chris Torek's posts on this subject correctly, the Prime
series of computers had null pointers which had set bits in them (for
capabilities). Hence callocated memory treated as pointers would have
illegal values in, or values pointing /at/ something - but they would
not be /null/ ....


I'll keep that in mind ..
--
Remove evomer to reply
Feb 21 '06 #22

>On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:
If I remember Chris Torek's posts on this subject correctly, the Prime
series of computers had null pointers which had set bits in them (for
capabilities). ...

I never actually used one myself, but the really old PR1MEs had
48-bit "char *", 32-bit for everything else (this was before
"void *"). Pointers had "address" and "segment" components.
Internally, NULL pointers were represented using a special segment
number (0777 or something like that), as the machine had an
instruction to test for "null pointer" by checking for this
segment number.

Later versions of C on the PR1ME actually used *new hardware*: they
added an instruction, spelled TCNP in assembler, which stood for
"Test C Null Pointer". This new instruction tested the "address"
part for zero, instead of the segment part for 0777 (or whatever
it was).

In article <46************@individual.net>
Richard G. Riley <rg***********@gmail.com> wrote:I'll keep that in mind ..


In case you ever program a PR1ME? :-)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Feb 21 '06 #23

"Richard G. Riley" <rg***********@gmail.com> writes:
On 2006-02-21, Chris Dollin <ke**@hpl.hp.com> wrote:
Richard G. Riley wrote: [...]
Do you know of a platform where "Null" pointers did not cast to zero?


Casting is something else again [1]: the issue above was about whether
a pointer with all bytes (or bits) zero was necessarily a null pointer.

I know. Hence I asked when you have seen a NULL pointer not being byte
equivalent to 0.


I'm sure that's what you meant, but it's not what you actually wrote;
see above.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Feb 21 '06 #24


In article <46************@individual.net>, "Richard G. Riley" <rg***********@gmail.com> writes:
On 2006-02-21, Richard Tobin <ri*****@cogsci.ed.ac.uk> wrote:
In article <dt**********@enyo.uwa.edu.au>,
Chris McDonald <ch***@csse.uwa.edu.au> wrote:
Yes, your program may not *want* NULLs and
0.0s, but knowing what they are, and why they were set to that value,
must be better than the uninitialized contents provided by malloc(). ?

It's not better if the program immediately sets the contents to some
other value. Indeed it's worse, since you've just wasted cycles and
potentially lost locality of reference. If you're using a system
with lazy allocation, you've also just potentially committed a lot of
pages that you might not actually need.
Why? If your program has no bugs, then you will not use these values,

Unless of course the zeroed memory was to hold a set of pointers and a
non-null value indicates a valid pointer.


That's what we call a "bug". It's excluded by Richard Tobin's
explicit condition "has no bugs".

And zeroing memory, as calloc does, is not guaranteed to produce a
null pointer value, as I'm sure other people have already pointed
out.
so it makes no difference what they are. If your program does have
bugs, then random values may well cause the error to be spotted sooner
than null pointers and 0.0 floats.


But this does not in any way invalidate the need/requirement for auto
zeroing in a memory allocation call.


That's true in a strictly limited sense (if there is any such
requirement, which frankly I do not believe is often the case).
However, the existence of any such requirement equally does not
remove the justification Richard Tobin described for *not*
initializing memory.

Far better, in most cases, that the program allocate memory and then
immediately initialize it explicitly and properly in a type-safe
manner. (Note that may not entail touching every single byte as
calloc does; the data structure being allocated may use a sentinel or
some other length indicator to separate valid from invalid data.)

That way, the program can do just as much work as necessary; it can
commit just as much memory as necessary; it can initialize each field
correctly; it can let the compiler catch certain errors; and it can
document its intentions for maintainers.

The only case where calloc has any advantage, as far as I can tell,
is if a program actually needs to set an entire allocated area to
all-bits-zero, and this happens in a performance-critical area,
because it's conceivable that there might be some small performance
benefit in that case. But if memory allocation is being done in a
performance-critical area there are likely other more significant
optimizations to be made.

--
Michael Wojcik mi************@microfocus.com

An intense imaginative activity accompanied by a psychological and moral
passivity is bound eventually to result in a curbing of the growth to
maturity and in consequent artistic repetitiveness and stultification.
-- D. S. Savage
Feb 22 '06 #25


In article <dt**********@enyo.uwa.edu.au>, Chris McDonald <ch***@csse.uwa.edu.au> writes:

... I'd suggest that much C programming may be 'easier' if the default
function of choice was calloc(), which always zeroes memory, and for those
that require it, a function named randalloc() be available which fills
memory with random, or hazardous, bytes (if memset() is insufficient).


This would have dreadful consequences on a lazy-allocating system.
And lazy-allocating systems are common.

It would have adverse consequences on a system with paged virtual
memory and/or memory caching. And those are even more common.

It's very common for C programs to allocate more memory than they
actually use. In fact, this is recommended practice in many
quarters for certain situations - for example to reduce the number
of calls to realloc when enlarging a buffer as it accumulates data.
Such programs, when running on modern general-purpose systems with
caching and paged virtual memory, gain significant benefits if they
avoid touching pages they don't need.

There are far better ways to avoid operating on uninitialized data
than to initialize it all as soon as you get it. Just because C
doesn't provide syntactic sugar for constructors and destructors
doesn't mean you can't allocate and discard objects through factory
and disposal functions that are type-safe, simple, and easy to check
for correctness. For that matter, C supports ADTs very well, via
pointers to incomplete structures; it's really not hard to produce
well-layered code that avoids invalid dereferences. And you get much
more in the bargain, like type-safety and strong interfaces.

--
Michael Wojcik mi************@microfocus.com

I will shoue the world one of the grate Wonders of the world in 15
months if Now man mourders me in Dors or out Dors
-- "Lord" Timothy Dexter, _A Pickle for the Knowing Ones_
Feb 22 '06 #26

In article <dt*********@news2.newsguy.com>,
Michael Wojcik <mw*****@newsguy.com> wrote:
... I'd suggest that much C programming may be 'easier' if the default
function of choice was calloc(), which always zeroes memory, and for those
that require it, a function named randalloc() be available which fills
memory with random, or hazardous, bytes (if memset() is insufficient).
This would have dreadful consequences on a lazy-allocating system.


It would be easy enough to add operating system support for it. Most
OSes can provide memory pages which are zeroed when they are first
accessed, and providing pages that are randomly-filled when first
accessed would not be hard.

-- Richard
Feb 22 '06 #27

ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <dt*********@news2.newsguy.com>,
Michael Wojcik <mw*****@newsguy.com> wrote:
... I'd suggest that much C programming may be 'easier' if the default
function of choice was calloc(), which always zeroes memory, and for those
that require it, a function named randalloc() be available which fills
memory with random, or hazardous, bytes (if memset() is insufficient).
This would have dreadful consequences on a lazy-allocating system.

It would be easy enough to add operating system support for it. Most
OSes can provide memory pages which are zeroed when they are first
accessed, and providing pages that are randomly-filled when first
accessed would not be hard.

My understanding, after being berated many times by the learned
contributors to this newsgroup, is that the language's design should
not bow down to any specific type of architecture. Yes, a percentage
of architectures may have 'dreadful consequences' *for them*.

As Richard has indicated, if today's lazy-allocating systems don't already
zero-fill memory, then their calloc()s must be zero-filling the memory
after the operating system has provided it. Thus, this is no different
than random-filling it (which, I suspect, few operating systems provide).

--
Chris.
Feb 22 '06 #28


In article <dt***********@pc-news.cogsci.ed.ac.uk>, ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <dt*********@news2.newsguy.com>,
Michael Wojcik <mw*****@newsguy.com> wrote:
... I'd suggest that much C programming may be 'easier' if the default
function of choice was calloc(), which always zeroes memory, and for those
that require it, a function named randalloc() be available which fills
memory with random, or hazardous, bytes (if memset() is insufficient).

This would have dreadful consequences on a lazy-allocating system.


It would be easy enough to add operating system support for it. Most
OSes can provide memory pages which are zeroed when they are first
accessed, and providing pages that are randomly-filled when first
accessed would not be hard.


But the allocation functions would then have to be able to
distinguish between pages which were reused from previously allocated
and freed memory (which would not be "first accessed", from the OS's
point of view) and those that were freshly allocated from the OS
(which the allocation should not touch, lest they be committed).

It's a significant complication for little benefit, and more benefit
is easily achieved by adopting better programming habits in the
first place.

--
Michael Wojcik mi************@microfocus.com

But I still wouldn't count out the monkey - modern novelists being as
unpredictable as they are at times. -- Marilyn J. Miller
Feb 23 '06 #29


In article <dt**********@enyo.uwa.edu.au>, Chris McDonald <ch***@csse.uwa.edu.au> writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <dt*********@news2.newsguy.com>,
Michael Wojcik <mw*****@newsguy.com> wrote:
... I'd suggest that much C programming may be 'easier' if the default
function of choice was calloc(), which always zeroes memory, and for those
that require it, a function named randalloc() be available which fills
memory with random, or hazardous, bytes (if memset() is insufficient).

This would have dreadful consequences on a lazy-allocating system.


My understanding, after being berated many times by the learned
contributors to this newsgroup, is that the language's design should
not bow down to any specific type of architecture.


Nor should it go out of its way to impose inefficiencies on common
platforms.
Yes, a percentage of architectures may have 'dreadful consequences'
*for them*.
Including the platforms which represent a majority of the targets for
hosted C development.
As Richard has indicated, if today's lazy-allocating systems don't already
zero-fill memory, then their calloc()s must be zero-filling the memory
after the operating system has provided it.


See my response to Richard. This is not a fix.

Your proposal asks the language to do extra work and waste resources
in a weak effort to save lazy programmers from themselves. That's
contrary to the spirit of C, and of dubious value in any event.

--
Michael Wojcik mi************@microfocus.com

Any average educated person can turn out competent verse. -- W. H. Auden
Feb 23 '06 #30

mw*****@newsguy.com (Michael Wojcik) writes:
My understanding, after being berated many times by the learned
contributors to this newsgroup, is that the language's design should
not bow down to any specific type of architecture.

Nor should it go out of its way to impose inefficiencies on common
platforms.

Agreed - I never proposed deliberately imposing inefficiencies,
but that may have been a consequence. I was discussing a possible
approach that may have made some programming easier.

Yes, a percentage of architectures may have 'dreadful consequences'
*for them*.

Including the platforms which represent a majority of the targets for
hosted C development.
Arguments about language and library design that focus too early on
implementation efficiencies tend to constrain thinking and finish
prematurely. Fortunately, functional, Lisp, and scripting programmers
didn't give up so early. I fully appreciate that standard C won't be
changing in this direction, but parties that justify the status quo
because of majorities also seem to tolerate viruses and keyloggers,
and believe that big-endian IPv4 addresses must be evil.

Your proposal asks the language to do extra work and waste resources
in a weak effort to save lazy programmers from themselves. That's
contrary to the spirit of C, and of dubious value in any event.


Strong language.
That's a 20-year-old view of what a programming language is supposed to
provide - that its first priority is maximising execution efficiency,
not assisting programmers where possible.

A more contemporary view is to increase the expressiveness of languages
(we see this with the introduction of C's enumerated types and Booleans)
and, yes, to often save lazy programmers from themselves (C no longer has
assumed integer types and requires prototypes and, of course <OT>Java</OT>
does lots of this).

Yes, extra work may be required, resources can be considered as being
there to be used, and whether the effort is weak or not has been the
focus of this whole thread. There are valid opinions on both sides;
yours has been one.

--
Chris.
Feb 23 '06 #31

P: n/a
On 2006-02-23, Chris McDonald <ch***@csse.uwa.edu.au> wrote:
That's a 20 year view of what a programming language is supposed to provide -
to have a first priority of maximising execution efficiency, and not
assisting programmers if possible.


Personally I hope C doesn't evolve in the same way some others have. C is
one of the few languages that gives you the reins and allows you to make
your machine trot when it should trot and canter when it should
canter. I've been keeping an eye on the development of Java and it is
still doing a good job of pushing new HW to its limits :-;
Feb 23 '06 #32

P: n/a
Richard G. Riley wrote:
There is no requirement that the bytes of a null pointer be zeroes.
A program that assumes that there is - because it's been tried on
machines where null pointers have zero bytes - will likely break
when it encounters an implementation where null pointers have interesting
bit-patterns.

[Note that the standard /does/ require that a null pointer compare equal
to a literal 0; this is not the same case.]


I must say that in many years of programming it is the first time I
have come across this. I stand corrected. Makes me wonder about the
usefulness of calloc after all.

Do you know of a platform where "Null" pointers did not cast to zero?
...


It is probably worth adding that the only objects that can be correctly
initialized with 'calloc' ('memset' to 0, etc.) are objects of type
'[signed/unsigned] char' (and, of course, aggregates composed of these types).
There's a known proposal to modify the C standard and make this kind of low-level
initialization produce a defined result for all _integral_ types.
Unfortunately, I don't know the current status of the proposal.

--
Best regards,
Andrey Tarasevich
Feb 23 '06 #33

P: n/a
On 2006-02-23, Andrey Tarasevich <an**************@hotmail.com> wrote:
Richard G. Riley wrote:
There is no requirement that the bytes of a null pointer be zeroes.
A program that assumes that there is - because it's been tried on
machines where null pointers have zero bytes - will likely break
when it encounters an implementation where null pointers have interesting
bit-patterns.

[Note that the standard /does/ require that a null pointer compare equal
to a literal 0; this is not the same case.]


I must say that in many years of programming it is the first time I
have come across this. I stand corrected. Makes me wonder about the
usefulness of calloc after all.

Do you know of a platform where "Null" pointers did not cast to zero?
...


It is probably worth adding that the only objects that can be correctly
initialized with 'calloc' ('memset' to 0, etc.) are objects of type
'[signed/unsigned] char' (and, of course, aggregates composed of these types).
There's a known proposal to modify C standard and make this kind of low-level
initialization to produce defined result for all _integral_ types.
Unfortunately, I don't know the current status of the proposal.


The standard _does_ guarantee it for integral types, IIRC.
Feb 23 '06 #34

P: n/a
Jordan Abel wrote:
On 2006-02-23, Andrey Tarasevich <an**************@hotmail.com> wrote:
Richard G. Riley wrote:
There is no requirement that the bytes of a null pointer be zeroes.
A program that assumes that there is - because it's been tried on
machines where null pointers have zero bytes - will likely break
when it encounters an implementation where null pointers have interesting
bit-patterns.

[Note that the standard /does/ require that a null pointer compare equal
to a literal 0; this is not the same case.]
I must say that in many years of programming it is the first time I
have come across this. I stand corrected. Makes me wonder about the
usefulness of calloc after all.

Do you know of a platform where "Null" pointers did not cast to zero?
...


It is probably worth adding that the only objects that can be correctly
initialized with 'calloc' ('memset' to 0, etc.) are objects of type
'[signed/unsigned] char' (and, of course, aggregates composed of these types).
There's a known proposal to modify C standard and make this kind of low-level
initialization to produce defined result for all _integral_ types.
Unfortunately, I don't know the current status of the proposal.


The standard _does_ guarantee it for integral types, IIRC.


The C89/90 and original version of C99 did not. The issue here is that the above
method of "initialization" affects padding bits, which might be present in any
integral type other than '[signed/unsigned] char'. And it is not guaranteed that
all-zero bit pattern in both padding and value-representing bits of such
integral type constitutes a valid (non-trap) combination of bits for an object
of such type.

The key point of proposal was to force all implementations to treat all-zero bit
pattern in both padding and value-representing bits as legal.

--
Best regards,
Andrey Tarasevich
Feb 23 '06 #35

P: n/a
Andrey Tarasevich <an**************@hotmail.com> writes:
[...]
It is probably worth adding that the only objects that can be
correctly initialized with 'calloc' ('memset' to 0, etc.) are
objects of type '[signed/unsigned] char' (and, of course, aggregates
composed of these types). There's a known proposal to modify C
standard and make this kind of low-level initialization to produce
defined result for all _integral_ types. Unfortunately, I don't
know the current status of the proposal.


It's been approved and is now effectively part of the standard.

The defect report is at
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_263.htm>.
Its status is "Closed, published in TC 2". You can see the
update in the freely available n1124.pdf, which incorporates
the C99 standard plus TC1 and TC2:

6.2.6.2p5:
[...] For any integer type, the object representation where all
the bits are zero shall be a representation of the value zero in
that type.

In n1124.pdf, differences from the original C99 standard are marked
with change bars.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Feb 23 '06 #36

P: n/a
Andrey Tarasevich <an**************@hotmail.com> wrote:
Jordan Abel wrote:
On 2006-02-23, Andrey Tarasevich <an**************@hotmail.com> wrote:
....
It is probably worth adding that the only objects that can be correctly
initialized with 'calloc' ('memset' to 0, etc.) are objects of type
'[signed/unsigned] char' (and, of course, aggregates composed of these types).
There's a known proposal to modify C standard and make this kind of low-level
initialization to produce defined result for all _integral_ types.
Unfortunately, I don't know the current status of the proposal.


The standard _does_ guarantee it for integral types, IIRC.


The C89/90 and original version of C99 did not. The issue here is that the above
method of "initialization" affects padding bits, which might be present in any
integral type other than '[signed/unsigned] char'. And it is not guaranteed that
all-zero bit pattern in both padding and value-representing bits of such
integral type constitutes a valid (non-trap) combination of bits for an object
of such type.

....

TC2 (to C99) changed this:

# 9. Page 39, 6.2.6.2
# Append to paragraph 5:
#
# For any integer type, the object representation where all the bits
# are zero shall be a representation of the value zero in that type.

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Feb 23 '06 #37

P: n/a
"S.Tobias" wrote:

Andrey Tarasevich <an**************@hotmail.com> wrote:

.... snip ...

The C89/90 and original version of C99 did not. The issue here is
that the above method of "initialization" affects padding bits,
which might be present in any integral type other than '[signed/
unsigned] char'. And it is not guaranteed that all-zero bit
pattern in both padding and value-representing bits of such
integral type constitutes a valid (non-trap) combination of bits
for an object of such type.

...

TC2 (to C99) changed this:

# 9. Page 39, 6.2.6.2
# Append to paragraph 5:
#
# For any integer type, the object representation where all
# the bits are zero shall be a representation of the value
# zero in that type.


IMO this change was only feasible because an extensive search did
not yield any systems in which it did not already hold. The rule
about not breaking existing code is paramount.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
Feb 24 '06 #38
