Dynamically resizing a buffer

Hello clc,

I have a buffer in a program which I write to. The buffer has
write-only, unsigned-char-at-a-time access, and the amount of space
required isn't known a priori. Therefore I want the buffer to
dynamically grow using realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the buffer,
but if realloc() fails request smaller size increments until realloc()
succeeds or until realloc() has failed to increase the buffer by even
one byte.

The basic idea is below. The key function is MyBuffer_writebyte(), which
expects the incoming MyBuffer object to be in a consistent state.

Are there any improvements I could make to this code? To me it feels
clumsy, especially with the break in the 3-line while loop.

struct mybuffer_t {
    unsigned char *data;
    size_t size;  /* size of buffer allocated */
    size_t index; /* index of first unwritten member of data */
};

typedef struct mybuffer_t MyBuffer;

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
    if (buf->size == buf->index) {
        /* need to allocate more space */
        size_t inc = buf->size;
        unsigned char *tmp;
        while (inc > 0) {
            tmp = realloc(buf->data, buf->size + inc);
            if (tmp != NULL) break; /* break to preserve the size of inc */
            inc /= 2;
        }
        if (tmp == NULL) {
            /* couldn't allocate any more space, print error and exit */
            exit(EXIT_FAILURE);
        }
        buf->size += inc;
    }
    buf->data[buf->index++] = byte;
}
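
(For context, a minimal initialisation sketch. MyBuffer_init and the
starting size are illustrative choices, not part of the program above;
it assumes a non-zero initial allocation so that the doubling loop has
a size to work from.)

#include <stdlib.h>

int MyBuffer_init(MyBuffer *buf, size_t initial_size) {
    buf->data = malloc(initial_size);   /* initial_size chosen by caller, e.g. 64 */
    if (buf->data == NULL)
        return 0;                       /* caller decides how to handle failure */
    buf->size = initial_size;
    buf->index = 0;
    return 1;
}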

--
Philip Potter pgp <at> doc.ic.ac.uk
Aug 22 '07 #1
Philip Potter wrote:
>
struct mybuffer_t {
unsigned char *data;
size_t size; /* size of buffer allocated */
size_t index; /* index of first unwritten member of data */
};

typedef struct mybuffer_t MyBuffer;

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
if(buf->size == buf->index) {
/* need to allocate more space */
size_t inc = buf->size;
unsigned char *tmp;
while(inc>0) {
tmp = realloc(buf->data, buf->size + inc);
if (tmp!=NULL) break; /* break to preserve the size of inc*/
inc/=2;
}
if(tmp==NULL) {
/* couldn't allocate any more space, print error and exit */
exit(EXIT_FAILURE);
}
You never replace buf->data... It will continue to point to the original
space, which may have been freed ...
buf->size += inc;
}
buf->data[buf->index++] = byte;
}
I'd split out the resizing to a separate function, as it's a
generalizable technique.

You could combine the two forms of loop control (size of increment,
success/failure of allocation) in the while.

This (untested) is my rough hack :-

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
    if (buf->size == buf->index) {
        if (!resizeBuffer(buf)) {
            exit(EXIT_FAILURE); /* or some better error handling */
        }
    }
    buf->data[buf->index++] = byte;
}

/* returns new size */
int resizeBuffer(MyBuffer *buf) {
    size_t inc = buf->size;
    unsigned char *tmp = NULL;
    while (inc > 0 &&
           (tmp = realloc(buf->data, buf->size + inc)) == NULL) {
        inc /= 2;
    }
    if (tmp != NULL) {
        buf->data = tmp;
        buf->size += inc;
        return buf->size;
    } else {
        return 0;
    }
}
Aug 22 '07 #2
Philip Potter <pg*@see.sig.invalid> writes:
Hello clc,

I have a buffer in a program which I write to. The buffer has
write-only, unsigned-char-at-a-time access, and the amount of space
required isn't known a priori. Therefore I want the buffer to
dynamically grow using realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the
buffer, but if realloc() fails request smaller size increments until
realloc() succeeds or until realloc() has failed to increase the
buffer by even one byte.
Double the size of the buffer? Never IMO. Not if you "designed" the buffer
to be "about right" in the first place. Suppose it doesn't fail but does
exhaust memory? Well, bang goes your next malloc.

Keep it simple. Have a delta increase and use that. I would suggest
something like 10-20% of the initial size.

Clearly if you KNOW that the buffer is going to double/quadruple etc
then malloc it to start with.
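
Something like this (untested sketch; the names and figures are purely
illustrative, not a recommendation of specific numbers):

#define BUF_INITIAL 4096                /* whatever "about right" is */
#define BUF_DELTA   (BUF_INITIAL / 8)   /* a delta of ~12% of the initial size */

/* in the resize path: */
unsigned char *tmp = realloc(buf->data, buf->size + BUF_DELTA);
if (tmp != NULL) {
    buf->data = tmp;
    buf->size += BUF_DELTA;
}
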
>
The basic idea is below. The key function is MyBuffer_writebyte(),
which expects the incoming MyBuffer object to be in a consistent
state.

Are there any improvements I could make to this code? To me it feels
clumsy, especially with the break in the 3-line while loop.

struct mybuffer_t {
unsigned char *data;
size_t size; /* size of buffer allocated */
size_t index; /* index of first unwritten member of data */
};

typedef struct mybuffer_t MyBuffer;

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
if(buf->size == buf->index) {
/* need to allocate more space */
size_t inc = buf->size;
unsigned char *tmp;
while(inc>0) {
tmp = realloc(buf->data, buf->size + inc);
if (tmp!=NULL) break; /* break to preserve the size of inc*/
inc/=2;
}
if(tmp==NULL) {
/* couldn't allocate any more space, print error and exit */
exit(EXIT_FAILURE);
}
buf->size += inc;
}
buf->data[buf->index++] = byte;
}
--
Aug 22 '07 #3
Richard wrote:
Philip Potter <pg*@see.sig.invalid> writes:
>Hello clc,

I have a buffer in a program which I write to. The buffer has
write-only, unsigned-char-at-a-time access, and the amount of space
required isn't known a priori. Therefore I want the buffer to
dynamically grow using realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the
buffer, but if realloc() fails request smaller size increments until
realloc() succeeds or until realloc() has failed to increase the
buffer by even one byte.

Double the size of the buffer? Never IMO. Not if you "designed" the buffer
to be "about right" in the first place. Suppose it doesn't fail but does
exhaust memory? Well, bang goes your next malloc.

Keep it simple. Have a delta increase and use that. I would suggest
something like 10-20% of the initial size.

Clearly if you KNOW that the buffer is going to double/quadruple etc
then malloc it to start with.
I've already stated that the final amount of space required isn't known
a priori. It could be thousands of bytes, it could be millions. A
linearly-growing buffer is not appropriate for this situation, which is
why I chose an exponentially-growing buffer.

Phil

--
Philip Potter pgp <at> doc.ic.ac.uk
Aug 22 '07 #4
Mark Bluemel wrote:
Philip Potter wrote:

You never replace buf->data... It will continue to point to the original
space, which may have been freed ...
Whoops! That bug was also in my project code...
> buf->size += inc;
}
buf->data[buf->index++] = byte;
}

I'd split out the resizing to a separate function, as it's a
generalizable technique.

You could combine the two forms of loop control (size of increment,
success/failure of allocation) in the while.
I think this is clumsy too, but less so :)
This (untested) is my rough hack :-

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
if(buf->size == buf->index) {
if(!resizeBuffer(buf)){
exit(EXIT_FAILURE); /* or some better error handling */
}
}
buf->data[buf->index++] = byte;
}

/* returns new size */
int resizeBuffer(MyBuffer *buf) {
size_t inc = buf->size;
unsigned char *tmp = NULL;
while(inc>0 &&
(tmp = realloc(buf->data, buf->size + inc)) == NULL) {
inc/=2;
}
if(tmp!=NULL) {
buf->data = tmp;
buf->size += inc;
return buf->size;
} else {
return 0;
}

}
Surely resizeBuffer() should return size_t?

--
Philip Potter pgp <at> doc.ic.ac.uk
Aug 22 '07 #5
Philip Potter wrote:
Mark Bluemel wrote:
>I'd split out the resizing to a separate function, as it's a
generalizable technique.
>
You could combine the two forms of loop control (size of increment,
success/failure of allocation) in the while.

I think this is clumsy too, but less so :)
I'm inclined to agree. I'm still trying to think of a more elegant solution.
>
>This (untested) is my rough hack :-
<snip>
>/* returns new size */
int resizeBuffer(MyBuffer *buf) {
<snip>
Surely resizeBuffer() should return size_t?
Indeed it should.
Aug 22 '07 #6
Philip Potter wrote:
A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the buffer,
but if realloc() fails request smaller size increments until realloc()
succeeds or until realloc() has failed to increase the buffer by even
one byte.

The basic idea is below. The key function is MyBuffer_writebyte(), which
expects the incoming MyBuffer object to be in a consistent state.

Are there any improvements I could make to this code? To me it feels
clumsy, especially with the break in the 3-line while loop.

struct mybuffer_t {
unsigned char *data;
size_t size; /* size of buffer allocated */
size_t index; /* index of first unwritten member of data */
};

typedef struct mybuffer_t MyBuffer;

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
if(buf->size == buf->index) {
/* need to allocate more space */
size_t inc = buf->size;
unsigned char *tmp;
while(inc>0) {
tmp = realloc(buf->data, buf->size + inc);
if (tmp!=NULL) break; /* break to preserve the size of inc*/
inc/=2;
}
if(tmp==NULL) {
/* couldn't allocate any more space, print error and exit */
exit(EXIT_FAILURE);
}
buf->size += inc;
}
buf->data[buf->index++] = byte;
}
Here's a simple alternative :-

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
    if (buf->size == buf->index) {
        /* need to allocate more space */
        size_t inc = buf->size;
        unsigned char *tmp;
        while (inc > 0) {
            tmp = realloc(buf->data, buf->size + inc);
            if (tmp != NULL) {
                buf->data = tmp;
                buf->size += inc;
                inc = 0; /* force exit */
            } else {
                inc /= 2;
            }
        }
        if (tmp == NULL) {
            /* couldn't allocate any more space, print error and exit */
            exit(EXIT_FAILURE);
        }
    }
    buf->data[buf->index++] = byte;
}
Aug 22 '07 #7

"Philip Potter" <pg*@see.sig.invalidwrote in message
news:fa**********@aioe.org...
Hello clc,

I have a buffer in a program which I write to. The buffer has write-only,
unsigned-char-at-a-time access, and the amount of space required isn't
known a priori. Therefore I want the buffer to dynamically grow using
realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the buffer,
but if realloc() fails request smaller size increments until realloc()
succeeds or until realloc() has failed to increase the buffer by even one
byte.
doubling is probably too steep IMO.

I usually use 50% each time ('size2=size+(size>>1);').
25% may also be sane ('size2=size+(size>>2);').
33% may also be good.

33% is ugly though:
size2=size+(size>>2)+(size>>4)+(size>>6); //bulky
size2=size+size*33/100; //critical buffer size limit

but is a nice 4/3 ratio...

<snip>
Aug 22 '07 #8
Philip Potter wrote:
[...]

Are there any improvements I could make to this code? To me it feels
clumsy, especially with the break in the 3-line while loop.
[...]
while(inc>0) {
tmp = realloc(buf->data, buf->size + inc);
if (tmp!=NULL) break; /* break to preserve the size of inc*/
inc/=2;
}
if(tmp==NULL) {
/* couldn't allocate any more space, print error and exit */
exit(EXIT_FAILURE);
}
buf->size += inc;
The loop has two termination conditions -- realloc()
succeeds, or increment goes to zero -- so I don't see how
you can avoid two tests. But as written there's an after-
the-loop test to (re-)discover why the loop ended. That,
at least, is easy to get rid of:

while ((tmp = realloc(buf->data, buf->size + inc)) == NULL) {
    if ((inc /= 2) == 0)
        exit(EXIT_FAILURE);
}
buf->data = tmp;
buf->size += inc;

--
Eric Sosman
es*****@ieee-dot-org.invalid
Aug 22 '07 #9
In article <rc************@homelinux.net>, Richard <rg****@gmail.com> wrote:
>Double the size of the buffer? Never IMO. Not if you "designed" the buffer
to be "about right" in the first place. Suppose it doesn't fail but does
exhaust memory? Well, bang goes your next malloc.

Keep it simple. Have a delta increase and use that. I would suggest
something like 10-20% of the initial size.
If you don't know the likely size (e.g. when reading some textual
data) it makes more sense to increase by a proportion of the current
size: if you started at 10 bytes and have now reached 100 it makes no
sense to go on incrementing by one or two.

Of course that's what the original proposal did: increase by 100% of
the current size. If you start from one, this means you only allocate
power-of-two sized blocks.

A problem with any approach is that it might interact badly with the
malloc() implementation. Imagine an implementation that always
allocates powers of two but uses sizeof(void *) bytes of it to record
the size - if you always allocated powers of two you would end up
allocating nearly four times as much as you need.

I recommend separating out the increment algorithm so that it can
easily be changed if it proves to be bad on some platform.
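
Something like this (untested sketch; grow_size() is just an
illustrative name) keeps the increment rule in one place:

static size_t grow_size(size_t current) {
    return current + current / 2;   /* 50% growth; change the policy here only */
}

/* in the resize path: */
size_t newsize = grow_size(buf->size);
unsigned char *tmp = realloc(buf->data, newsize);
if (tmp != NULL) {
    buf->data = tmp;
    buf->size = newsize;
}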

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Aug 22 '07 #10
Philip Potter said:
Richard wrote:
>Philip Potter <pg*@see.sig.invalid> writes:
<snip>
>>A comment by Richard Heathfield in a thread here suggested that a
good algorithm for this is to use realloc() to double the size of
the buffer, but if realloc() fails request smaller size increments
until realloc() succeeds or until realloc() has failed to increase
the buffer by even one byte.

Double the size of the buffer? Never IMO. Not if you "designed" the
buffer to be "about right" in the first place. Suppose it doesn't
fail but does exhaust memory? Well, bang goes your next malloc.
>
Keep it simple. Have a delta increase and use that. I would suggest
something like 10-20% of the initial size.
>
Clearly if you KNOW that the buffer is going to double/quadruple etc
then malloc it to start with.

I've already stated that the final amount of space required isn't
known a priori. It could be thousands of bytes, it could be millions.
A linearly-growing buffer is not appropriate for this situation, which
is why I chose an exponentially-growing buffer.
Right. A linear increase is most unwise.

One of the problems with killfiling trolls is that one doesn't get to
see (and thus have the opportunity to correct) the ludicrous "advice"
they dish out, unless it happens to be quoted in a reply.

Incidentally, if memory is tight (relative to the length of the line
you're reading), memory exhaustion /is/ a risk, but one that is easily
dealt with. Once you know that the line has been read completely, you
can "shrink" the allocation to be an exact fit, thus taking up no more
memory than you actually need.
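
Something like this (untested), once the final length is known:

/* shrink to an exact fit; assumes buf->index > 0.  If realloc() won't
   shrink the block, the old (larger) block is still valid, so nothing
   is lost. */
unsigned char *tmp = realloc(buf->data, buf->index);
if (tmp != NULL) {
    buf->data = tmp;
    buf->size = buf->index;
}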

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Aug 22 '07 #11
Eric Sosman said:

<snip>
while ((tmp = realloc(buf->data, buf->size + inc)) == NULL) {
if ((inc /= 2) == 0)
exit(EXIT_FAILURE);
Do you really think that's a good idea? We've had this whole discussion
recently, I know, but it bears reiterating nonetheless that it is not a
library function's job to decide whether to terminate a program.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Aug 22 '07 #12
Richard Heathfield <rj*@see.sig.invalid> writes:
Philip Potter said:
>Richard wrote:
>>Philip Potter <pg*@see.sig.invalid> writes:

<snip>
>>>A comment by Richard Heathfield in a thread here suggested that a
good algorithm for this is to use realloc() to double the size of
the buffer, but if realloc() fails request smaller size increments
until realloc() succeeds or until realloc() has failed to increase
the buffer by even one byte.

Double the size of the buffer? Never IMO. Not if you "designed" the
buffer to be "about right" in the first place. Suppose it doesn't
fail but does exhaust memory? Well, bang goes your next malloc.

Keep it simple. Have a delta increase and use that. I would suggest
something like 10-20% of the initial size.

Clearly if you KNOW that the buffer is going to double/quadruple etc
then malloc it to start with.

I've already stated that the final amount of space required isn't
known a priori. It could be thousands of bytes, it could be millions.
A linearly-growing buffer is not appropriate for this situation, which
is why I chose an exponentially-growing buffer.
I missed that point. I also caveated my reply in the second
sentence. Apologies if you thought my reply was misleading.

"Not if you "designed" the buffer to be "about right" in the first place."
>
Right. A linear increase is most unwise.

One of the problems with killfiling trolls is that one doesn't get to
see (and thus have the opportunity to correct) the ludicrous "advice"
they dish out, unless it happens to be quoted in a reply.
Hilarious advice from someone who frequently posts incorrect advice.
>
Incidentally, if memory is tight (relative to the length of the line
you're reading), memory exhaustion /is/ a risk, but one that is easily
dealt with. Once you know that the line has been read completely, you
can "shrink" the allocation to be an exact fit, thus taking up no more
memory than you actually need.
And in the next episode, RH explains why "using less memory is better
for the system performance".
Aug 22 '07 #13
Richard Heathfield wrote:
Eric Sosman said:

<snip>
>while ((tmp = realloc(buf->data, buf->size + inc)) == NULL) {
if ((inc /= 2) == 0)
exit(EXIT_FAILURE);

Do you really think that's a good idea? We've had this whole discussion
recently, I know, but it bears reiterating nonetheless that it is not a
library function's job to decide whether to terminate a program.
In Eric's (and my:-) defence, I think he was just making the minimum
change to the OP's code to address the specific issue the OP had raised
(about the use of "break").

As I remarked in my earlier, less than elegant reply, I think the
resizing should be split out to a separate function which returns some
value which indicates success or failure.

Equally the "MyBuffer_writebyte" routine might also be better returning
a success/failure result rather than terminating the program on failure.
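
For example (untested sketch, reusing the resizeBuffer() from my
earlier post):

/* returns 1 on success, 0 if no more space could be obtained */
int MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
    if (buf->size == buf->index && !resizeBuffer(buf))
        return 0;   /* let the caller decide what to do */
    buf->data[buf->index++] = byte;
    return 1;
}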

Aug 22 '07 #14
Richard wrote:
Richard Heathfield <rj*@see.sig.invalid> writes:
>Philip Potter said:
>>I've already stated that the final amount of space required isn't
known a priori. It could be thousands of bytes, it could be millions.
A linearly-growing buffer is not appropriate for this situation, which
is why I chose an exponentially-growing buffer.

I missed that point. I also caveated my reply in the second
sentence. Apologies if you thought my reply was misleading.

"Not if you "designed" the buffer to be "about right" in the first place."
And if you knew the right size, why would you need a dynamic buffer in
the first place? The only situation in which a linear expansion is
useful is when you know an upper bound on the size requirements; in
which case why don't you just allocate that upper bound?
>Right. A linear increase is most unwise.

One of the problems with killfiling trolls is that one doesn't get to
see (and thus have the opportunity to correct) the ludicrous "advice"
they dish out, unless it happens to be quoted in a reply.

Hilarious advice from someone who frequently posts incorrect advice.
On the contrary, I find RH's advice to be some of the best here -- along
with a few others. He can be quite blunt with it sometimes, but we can't
have everything :)
>Incidentally, if memory is tight (relative to the length of the line
you're reading), memory exhaustion /is/ a risk, but one that is easily
dealt with. Once you know that the line has been read completely, you
can "shrink" the allocation to be an exact fit, thus taking up no more
memory than you actually need.

And in the next episode, RH explains why "using less memory is better
for the system performance".
...and his advice continues to be correct.

Phil

--
Philip Potter pgp <at> doc.ic.ac.uk
Aug 22 '07 #15
In article <wf******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>Once you know that the line has been read completely, you
can "shrink" the allocation to be an exact fit, thus taking up no more
memory than you actually need.
Possibly. realloc() may well be a no-op for size decreases.

-- Richard

--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Aug 22 '07 #16
Richard Tobin said:
In article <wf******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>>Once you know that the line has been read completely, you
can "shrink" the allocation to be an exact fit, thus taking up no more
memory than you actually need.

Possibly. realloc() may well be a no-op for size decreases.
True enough, but then malloc() may well allocate in chunks of 16MB. You
can't fight a lousy implementation, except by ditching it in favour of
a better one.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Aug 22 '07 #17
Richard Heathfield wrote On 08/22/07 10:05,:
Eric Sosman said:

<snip>

>>while ((tmp = realloc(buf->data, buf->size + inc)) == NULL) {
if ((inc /= 2) == 0)
exit(EXIT_FAILURE);


Do you really think that's a good idea? We've had this whole discussion
recently, I know, but it bears reiterating nonetheless that it is not a
library function's job to decide whether to terminate a program.
If you'll review the discussion you mention, you'll
find that I was firmly on the "report, don't crash" side.

The purpose of my rewrite of the O.P.'s code was to
implement his choices less clumsily, not to confuse the
issue by inflicting my own choices on him. That's why
(for example) I didn't mention the alternative design
possibility of a linked list of smaller buffers instead
of one ever-growing array: it would only have distracted
attention from the main issue.

At one point I even had a /* your choice */ comment on
the exit() call, but decided it was too distracting and
removed it for fear it would provoke responses on side-
issues ... Damned if I do, damned if I don't, I guess.

--
Er*********@sun.com
Aug 22 '07 #18
Eric Sosman said:
Richard Heathfield wrote On 08/22/07 10:05,:
>Eric Sosman said:

<snip>

>>>while ((tmp = realloc(buf->data, buf->size + inc)) == NULL) {
if ((inc /= 2) == 0)
exit(EXIT_FAILURE);


Do you really think that's a good idea? We've had this whole
discussion recently, I know, but it bears reiterating nonetheless
that it is not a library function's job to decide whether to
terminate a program.

If you'll review the discussion you mention, you'll
find that I was firmly on the "report, don't crash" side.

The purpose of my rewrite of the O.P.'s code was to
implement his choices less clumsily, not to confuse the
issue by inflicting my own choices on him.
Fair enough. I do the same myself sometimes (and get the same kind of
stick that I've just given you!).

<snip>
Damned if I do, damned if I don't, I guess.
I know. To make it up to you, we'll double your fee on this occasion.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Aug 22 '07 #19
cr88192 wrote, On 22/08/07 13:20:
"Philip Potter" <pg*@see.sig.invalidwrote in message
news:fa**********@aioe.org...
>Hello clc,

I have a buffer in a program which I write to. The buffer has write-only,
unsigned-char-at-a-time access, and the amount of space required isn't
known a priori. Therefore I want the buffer to dynamically grow using
realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the buffer,
but if realloc() fails request smaller size increments until realloc()
succeeds or until realloc() has failed to increase the buffer by even one
byte.

doubling is probably too steep IMO.

I usually use 50% each time ('size2=size+(size>>1);').
25% may also be sane ('size2=size+(size>>2);').
When you want to divide, divide. It is far easier for people to read and
shows your intention. It is over 15 years since I saw a compiler that
would not optimise division by a power of 2 to a shift.
33% may also be good.

33% is ugly though:
size2=size+(size>>2)+(size>>4)+(size>>6); //bulky
size2=size+size*33/100; //critical buffer size limit
Or more accurately and clearly
size2 = size + size/3;

Or, if you can put the result back in size
size += size/3;
but is a nice 4/3 ratio...
Nice is whatever gives a decent performance.
--
Flash Gordon
Aug 22 '07 #20
Philip Potter wrote:
Richard wrote:
.... snip ...
>
>Clearly if you KNOW that the buffer is going to double/quadruple
etc then malloc it to start with.

I've already stated that the final amount of space required isn't
known a priori. It could be thousands of bytes, it could be
millions. A linearly-growing buffer is not appropriate for this
situation, which is why I chose an exponentially-growing buffer.
And the choice of strategy depends on the usage. That is why my
ggets uses linear allocation (normal use is interactive input -
limited size) while my hashlib uses doubling (possibly millions of
entries).

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Aug 22 '07 #21

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:bo************@news.flash-gordon.me.uk...
cr88192 wrote, On 22/08/07 13:20:
>"Philip Potter" <pg*@see.sig.invalidwrote in message
news:fa**********@aioe.org...
>>Hello clc,

I have a buffer in a program which I write to. The buffer has
write-only, unsigned-char-at-a-time access, and the amount of space
required isn't known a priori. Therefore I want the buffer to
dynamically grow using realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the buffer,
but if realloc() fails request smaller size increments until realloc()
succeeds or until realloc() has failed to increase the buffer by even
one byte.

doubling is probably too steep IMO.

I usually use 50% each time ('size2=size+(size>>1);').
25% may also be sane ('size2=size+(size>>2);').

When you want to divide, divide. It is far easier for people to read and
shows your intention. It is over 15 years since I saw a compiler that
would not optimise division by a power of 2 to a shift.
shifts are obvious enough...

>33% may also be good.

33% is ugly though:
size2=size+(size>>2)+(size>>4)+(size>>6); //bulky
size2=size+size*33/100; //critical buffer size limit

Or more accurately and clearly
size2 = size + size/3;
odd that I missed such an obvious option...
makes me look stupid, oh well...

Or, if you can put the result back in size
size += size/3;
>but is a nice 4/3 ratio...

Nice is whatever gives a decent performance.
maybe, or whatever best follows a natural growth curve, or something...

--
Flash Gordon

Aug 22 '07 #22
On Wed, 22 Aug 2007 11:52:01 -0400, Eric Sosman wrote:
At one point I even had a /* your choice */ comment on
the exit() call, but decided it was too distracting and
removed it for fear it would provoke responses on side-
issues ... Damned if I do, damned if I don't, I guess.
and if it would be, why have you to fear?
Aug 23 '07 #23
On Wed, 22 Aug 2007 12:05:40 +0100, Philip Potter
<pg*@see.sig.invalid> wrote:
>Hello clc,

I have a buffer in a program which I write to. The buffer has
write-only, unsigned-char-at-a-time access, and the amount of space
required isn't known a priori. Therefore I want the buffer to
dynamically grow using realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the buffer,
but if realloc() fails request smaller size increments until realloc()
succeeds or until realloc() has failed to increase the buffer by even
one byte.

The basic idea is below. The key function is MyBuffer_writebyte(), which
expects the incoming MyBuffer object to be in a consistent state.

Are there any improvements I could make to this code? To me it feels
clumsy, especially with the break in the 3-line while loop.

struct mybuffer_t {
unsigned char *data;
size_t size; /* size of buffer allocated */
size_t index; /* index of first unwritten member of data */
};

typedef struct mybuffer_t MyBuffer;

void MyBuffer_writebyte(MyBuffer *buf, unsigned char byte) {
if(buf->size == buf->index) {
/* need to allocate more space */
size_t inc = buf->size;
unsigned char *tmp;
while(inc>0) {
tmp = realloc(buf->data, buf->size + inc);
if (tmp!=NULL) break; /* break to preserve the size of inc*/
inc/=2;
}
if(tmp==NULL) {
/* couldn't allocate any more space, print error and exit */
exit(EXIT_FAILURE);
Nobody mentioned this so far, but I think it's worth mentioning.

Your comment immediately above is wrong. The exit() function does not
necessarily print an error. In this case, the exit() function
terminates the program and, according to the C standard, it may do so
silently.

In fact, most implementations I've come across don't print anything
out as a result of calling the exit() function--the program simply
terminates silently, and the user is left waiving his or her hands in
the air.

And even if exit() did print out an error, what would you expect
it to print out in this case? Surely it can't print out the following
(unless you expect C to have a crystal ball that intelligently reads
comments):

couldn't allocate any more space, print error and exit

If you want to print an error and then "exit()", you'll have to print
the error out on your own. Something like this would work:

if ( !tmp )
{
    /* couldn't allocate any more space, print error and exit */
    fprintf(stderr, "couldn't allocate any more space\n");
    exit(EXIT_FAILURE);
}

If you do something like the above, make sure you include <stdio.h>
for the prototype for fprintf() and <stdlib.h> for the prototype for
exit() and the definition of the macro EXIT_FAILURE.

One of the problems with outputting an error message to stderr (or
stdout) and then calling exit() is that your user may never see the
error message. There is nothing in the C standard that prevents an
operating system, or more appropriately, a run time environment, from
terminating your program when exit() is called without you having a
glimmer of a chance of viewing the message output to stderr (or
stdout).

When you decide to call the function exit(), you are basically, as a
programmer, throwing your hands up in the air (and most likely to have
the user emulate you) and claiming that "this condition should never
happen". After all, the exit() function simply terminates the program
as far as the user is concerned.

Given this, perhaps you should consider some alternatives to using or
not using exit(). Far better as an alternative, IMHO, is to use the
assert macro instead of the exit() function. As a compromise, use the
assert macro in conjunction with the exit() function. For example:

assert(tmp != NULL);
if ( !tmp )
{
    exit(EXIT_FAILURE);
}

If you use the assert macro, make sure you include <assert.h>.

As a developer, you should know to compile and test your code without
the NDEBUG macro defined. That way, the assert macro will fire off
before your program even gets to the exit() statement. Furthermore,
the assert macro will hopefully and most likely provide you with
valuable information that helps you to trace the root cause of your
problem, which is most likely, IMHO, a programming error. (If you're
really lucky, you'll be able to break into a debugger when the assert
macro fires off. Visual Studio 98 and later provide this feature,
BTW.)

Note that even with the added assert statement, you can get back to
your original functionality of calling only exit() and not assert'ing
simply by defining the macro NDEBUG; this is well defined by the C
standard.

On a somewhat related note, you should NEVER call the exit() function
from main(). Doing so expresses your lack of knowledge of Standard C,
which guarantees that a return statement in main() has the same effect
as calling exit() with an argument that is the same as the return
value. In other words, use only return statements in main()--never
call exit() from main().

And finally, the only acceptable return values from main, and the only
values you can pass into exit(), are 0, EXIT_SUCCESS and EXIT_FAILURE.
The latter two macros are defined in <stdlib.h>, so make sure to
include that header file if you use either one. Returning a value of 0
or calling exit(0) is equivalent to returning a value of EXIT_SUCCESS
or calling exit(EXIT_SUCCESS), as far as the C standard is concerned.

One convention I've grown accustomed to is to return 0 from main() if
I never return a failure condition, e.g.:

int main(void)
{
    return 0;
}

But I return EXIT_SUCCESS as a successful return value if I also
return a failure condition (which can only be EXIT_FAILURE and nothing
else) from main(), e.g.:

#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
    if ( argc < 2 )
    {
        printf("Error.\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

The above could arguably be better written as (along with many other
variants):

#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
    int status = EXIT_SUCCESS;
    if ( argc < 2 )
    {
        printf("Error.\n");
        status = EXIT_FAILURE;
    }
    return status;
}

Best regards
--
jay
Aug 23 '07 #24
cr88192 wrote, On 23/08/07 00:03:
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:bo************@news.flash-gordon.me.uk...
>cr88192 wrote, On 22/08/07 13:20:
>>"Philip Potter" <pg*@see.sig.invalidwrote in message
news:fa**********@aioe.org...
Hello clc,

I have a buffer in a program which I write to. The buffer has
write-only, unsigned-char-at-a-time access, and the amount of space
required isn't known a priori. Therefore I want the buffer to
dynamically grow using realloc().

A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the buffer,
but if realloc() fails request smaller size increments until realloc()
succeeds or until realloc() has failed to increase the buffer by even
one byte.

doubling is probably too steep IMO.

I usually use 50% each time ('size2=size+(size>>1);').
25% may also be sane ('size2=size+(size>>2);').
When you want to divide, divide. It is far easier for people to read and
shows your intention. It is over 15 years since I saw a compiler that
would not optimise division by a power of 2 to a shift.

shifts are obvious enough...
It is not as obvious. I can read shifts, but for something like the
above I then have to think to know what the scaling factor is, but with
division the scaling factor is actually stated. I also don't have to
worry about precedence because for arithmetic C just follows the rules I
was taught before I saw my first computer. I also don't have to worry
about whether it is a signed number (shifting a negative number is not
required to act as division). It is also easier to change the factor
with division.

Finally, and most importantly, a division expresses the intent, shift
expresses a way of achieving that intent. It is always better to express
your intent where possible until you have *proved* you need to worry
about the details.
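
A small illustration of the signed-number point (the shift result is
implementation-defined; the values shown are just what a typical two's
complement implementation gives):

int n = -7;
int by_div   = n / 2;    /* -3 in C99, which truncates toward zero */
int by_shift = n >> 1;   /* implementation-defined; commonly -4 where
                            right shift of a negative is arithmetic */
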
>>33% may also be good.

33% is ugly though:
size2=size+(size>>2)+(size>>4)+(size>>6); //bulky
size2=size+size*33/100; //critical buffer size limit
Or more accurately and clearly
size2 = size + size/3;

odd that I missed such an obvious option...
makes me look stupid, oh well...
You missed it because you insist on looking for micro-optimisations
instead of trying to express your intent clearly. Oh wait, that is
another way of saying it makes you look stupid. This is because this
sort of micro-optimisation has been stupid for many years because the
compiler will do it for you so wasting your time trying to beat the
compiler is stupid.
>Or, if you can put the result back in size
size += size/3;
>>but is a nice 4/3 ratio...
Nice is whatever gives a decent performance.

maybe, or whatever best follows a natural growth curve, or something...
I can't think of a good reason for wanting less than decent performance.
Of course, I consider all resources when thinking about performance, so
tell me what I've missed.
--
Flash Gordon
Aug 23 '07 #25
jaysome wrote:
On Wed, 22 Aug 2007 12:05:40 +0100, Philip Potter
<pg*@see.sig.invalid> wrote:
> if(tmp==NULL) {
/* couldn't allocate any more space, print error and exit */
exit(EXIT_FAILURE);

Nobody mentioned this so far, but I think it's worth mentioning.

Your immediate above comment is wrong. The exit() function does not
necessarily print an error. In this case, the exit() function
terminates the program and, according to the C standard, allowably
silently.

In fact, most implementations I've come across don't print anything
out as a result of calling the exit() function--the program simply
terminates silently, and the user is left waving his or her hands in
the air.
<snip>

Yes, I know all this; the comment was to show the /intent/ of that
conditional in a code example which was studying something else -
namely, realloc()ing a buffer.
If you do something like the above, make sure you include <stdio.h>
for the prototype for fprintf() and <stdlib.h> for the prototype for
exit() and the definition of the macro EXIT_FAILURE.
Similarly, I didn't show headers because I was asking questions about
concepts rather than "Why doesn't this compile?".
One of the problems with outputting an error message to stderr (or
stdout) and then calling exit() is that your user may never see the
error message. There is nothing in the C standard that prevents an
operating system, or more appropriately, a run time environment, from
terminating your program when exit() is called without you having a
glimmer of a chance of viewing the message output to stderr (or
stdout).
True. But I'm not writing production code. If I were, I would likely be
within some sort of framework which provides better error reporting and
would wrap fprintf(stderr) or CreateErrorWindow() in some sort of
errprint() function, as I have done here. I didn't quote errprint()
because it wasn't relevant. (In this case, it calls the nonstandard
function xil_printf() on the MicroBlaze soft processor, or
fprintf(stderr) in a UNIX environment.)
When you decide to call the function exit(), you are basically, as a
programmer, throwing your hands up in the air (and most likely to have
the user emulate you) and claiming that "this condition should never
happen". After all, the exit() function simply terminates the program
as far as the user is concerned.
Either that, or you're stating "if this condition does happen, this
program cannot reasonably continue". Which in this case is true. And
exit() is fine for my requirements, because this is not production code.
(Even if it was, I *still* think exit() on *alloc()-failure is the best
option for this particular application. The data generated in the buffer
is JPEG data, and a partly-generated JPEG bitstream just results in an
ugly mess and a disappointed user.)
Given this, perhaps you should consider some alternatives to using or
not using exit(). Far better as an alternative, IMHO, is to use the
assert macro instead of the exit() function. As a compromise, use the
assert macro in conjunction with the exit() function. For example:

assert(tmp != NULL);
if ( !tmp )
{
exit(EXIT_FAILURE);
}

If you use the assert macro, make sure you include <assert.h>.
I already know about assert(), but I feel that assert() is for
conditions which the programmer believes can never happen, but which
during development all too often do. That way, it is theoretically
*safe* to turn off assert()s in production code with NDEBUG because
these conditions "can't happen".

Because we know that calls to *alloc() _can_ fail, we should not use
assert() to ensure they don't - far better to detect the error and
report it through the normal channels, even during development. This way
you have tested the error-reporting functionality, assuming the
out-of-memory error occurs during development.
As a developer, you should know to compile and test your code without
the NDEBUG macro defined. That way, the assert macro will fire off
before your program even gets to the exit() statement. Furthermore,
the assert macro will hopefully and most likely provide you with
valuable information that helps you to trace the root cause of your
problem, which is most likely, IMHO, a programming error. (If you're
really lucky, you'll be able to break into a debugger when the assert
macro fires off. Visual Studio 98 and later provide this feature,
BTW.)
That would be nice if I was programming for an environment which Visual
Studio and friends target.

<snip>
On a somewhat related note, you should NEVER call the exit() function
from main(). Doing so expresses your lack of knowledge of Standard C,
which guarantees that a return statement in main() has the same effect
as calling exit() with an argument that is the same as the return
value. In other words, use only return statements in main()--never
call exit() from main().
What? Why? This "NEVER" seems highly peculiar. Yes, exit() and return
are equivalent in main(). But you don't say why you should prefer
return. If it's because return is likely to be faster than exit(), this is
laughable because exit() can only be called once, so the time saved is
completely insignificant.

If you like your error reporting to be idiomatic and consistent in
style, surely it's better to call exit() from main() just like you would
anywhere else?

In any case, I'd reserve the word "NEVER" for things like "NEVER free()
the same pointer twice" or "NEVER declare main() as returning void", not
stylistic points like this.
And finally, the only acceptable return values from main, and the only
values you can pass into exit(), are 0, EXIT_SUCCESS and EXIT_FAILURE.
The latter two macros are defined in <stdlib.h>, so make sure to
include that header file if you use either one. Returning a value of 0
or calling exit(0) is equivalent to returning a value of EXIT_SUCCESS
or calling exit(EXIT_SUCCESS), as far as the C standard is concerned.
Yes, I know. That's why I wrote exit(EXIT_FAILURE) and not exit(1).

<snip more>

Phil

--
Philip Potter pgp <at> doc.ic.ac.uk
Aug 23 '07 #26

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:nu************@news.flash-gordon.me.uk...
cr88192 wrote, On 23/08/07 00:03:
>"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:bo************@news.flash-gordon.me.uk...
>>cr88192 wrote, On 22/08/07 13:20:
"Philip Potter" <pg*@see.sig.invalidwrote in message
news:fa**********@aioe.org...
Hello clc,
>
I have a buffer in a program which I write to. The buffer has
write-only, unsigned-char-at-a-time access, and the amount of space
required isn't known a priori. Therefore I want the buffer to
dynamically grow using realloc().
>
A comment by Richard Heathfield in a thread here suggested that a good
algorithm for this is to use realloc() to double the size of the
buffer, but if realloc() fails request smaller size increments until
realloc() succeeds or until realloc() has failed to increase the
buffer by even one byte.
>
doubling is probably too steep IMO.

I usually use 50% each time ('size2=size+(size>>1);').
25% may also be sane ('size2=size+(size>>2);').
When you want to divide, divide. It is far easier for people to read and
shows your intention. It is over 15 years since I saw a compiler that
would not optimise division by a power of 2 to a shift.

shifts are obvious enough...

It is not as obvious. I can read shifts, but for something like the above
I then have to think to know what the scaling factor is, but with division
the scaling factor is actually stated. I also don't have to worry about
precedence because for arithmetic C just follows the rules I was taught
before I saw my first computer. I also don't have to worry about whether
it is a signed number (shifting a negative number is not required to act
as division). It is also easier to change the factor with division.

Finally, and most importantly, a division expresses the intent, shift
expresses a way of achieving that intent. It is always better to express
your intent where possible until you have *proved* you need to worry about
the details.
after maybe a few years of experience, one has probably forgotten about the
issue anyways, the shift seems intuitive enough...

>>>33% may also be good.

33% is ugly though:
size2=size+(size>>2)+(size>>4)+(size>>6); //bulky
size2=size+size*33/100; //critical buffer size limit
Or more accurately and clearly
size2 = size + size/3;

odd that I missed such an obvious option...
makes me look stupid, oh well...

You missed it because you insist on looking for micro-optimisations
instead of trying to express your intent clearly. Oh wait, that is another
way of saying it makes you look stupid. This is because this sort of
micro-optimisation has been stupid for many years because the compiler
will do it for you so wasting your time trying to beet the compiler is
stupid.
not optimizations. shifts are how one typically does these things...

it is much the same as why we call int variables i, j, and k I think, or
many other common practices. after enough years, and enough code, one
largely forgets any such reasoning, all rote response really...

actually, had I been thinking of compiler behavior much at all, I would have
realized that 'i/3' actually becomes a fixed point multiply by a reciprocal.
no such reasoning was used in this case.

this was a trivial and obvious mistake is all.

>>Or, if you can put the result back in size
size += size/3;

but is a nice 4/3 ratio...
Nice is whatever gives a decent performance.

maybe, or whatever best follows a natural growth curve, or something...

I can't think of a good reason for wanting less than decent performance.
Of course, I consider all resources when thinking about performance, so
tell me what I've missed.
4/3 is a natural growth curve. something around this ratio should presumably
work good as a general mean case.

--
Flash Gordon

Aug 23 '07 #27
cr88192 wrote:
4/3 is a natural growth curve. something around this ratio should presumably
work good as a general mean case.
What do you mean by this? What is a "natural growth curve"? And why is
4/3 more "natural" than, say, Euler's number or the golden ratio?

--
Philip Potter pgp <at> doc.ic.ac.uk
Aug 23 '07 #28
cr88192 wrote, On 23/08/07 11:12:
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:nu************@news.flash-gordon.me.uk...
>cr88192 wrote, On 23/08/07 00:03:
>>"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:bo************@news.flash-gordon.me.uk...
cr88192 wrote, On 22/08/07 13:20:
"Philip Potter" <pg*@see.sig.invalidwrote in message
news:fa**********@aioe.org...
>Hello clc,
>>
>I have a buffer in a program which I write to. The buffer has
>write-only, unsigned-char-at-a-time access, and the amount of space
>required isn't known a priori. Therefore I want the buffer to
>dynamically grow using realloc().
>>
>A comment by Richard Heathfield in a thread here suggested that a good
>algorithm for this is to use realloc() to double the size of the
>buffer, but if realloc() fails request smaller size increments until
>realloc() succeeds or until realloc() has failed to increase the
>buffer by even one byte.
>>
doubling is probably too steep IMO.
>
I usually use 50% each time ('size2=size+(size>>1);').
25% may also be sane ('size2=size+(size>>2);').
When you want to divide, divide. It is far easier for people to read and
shows your intention. It is over 15 years since I saw a compiler that
would not optimise division by a power of 2 to a shift.
shifts are obvious enough...
It is not as obvious. I can read shifts, but for something like the above
I then have to think to know what the scaling factor is, but with division
the scaling factor is actually stated. I also don't have to worry about
precedence because for arithmetic C just follows the rules I was taught
before I saw my first computer. I also don't have to worry about whether
it is a signed number (shifting a negative number is not required to act
as division). It is also easier to change the factor with division.

Finally, and most importantly, a division expresses the intent, shift
expresses a way of achieving that intent. It is always better to express
your intent where possible until you have *proved* you need to worry about
the details.

after maybe a few years of experience, one has probably forgotten about the
issue anyways, the shift seems intuitive enough...

>>>>33% may also be good.
>
33% is ugly though:
size2=size+(size>>2)+(size>>4)+(size>>6); //bulky
size2=size+size*33/100; //critical buffer size limit
Or more accurately and clearly
size2 = size + size/3;
odd that I missed such an obvious option...
makes me look stupid, oh well...
You missed it because you insist on looking for micro-optimisations
instead of trying to express your intent clearly. Oh wait, that is another
way of saying it makes you look stupid. This is because this sort of
micro-optimisation has been stupid for many years because the compiler
will do it for you so wasting your time trying to beat the compiler is
stupid.

not optimizations. shifts are how one typically does these things...
Not if one is being sensible.
it is much the same as why we call int variables i, j, and k I think, or
many other common practices. after enough years, and enough code, one
largely forgets any such reasoning, all rote response really...
I have spent years programming in assembler where I would use shifts and
years spent programming in high level languages where I would not.
actually, had I been thinking of compiler behavior much at all, I would have
realized that 'i/3' actually becomes a fixed point multiply by a reciprocal.
no such reasoning was used in this case.

this was a trivial and obvious mistake is all.
A trivial mistake that does not get made if you code what you want to
express instead of trying to use tricks. That is part of the point.

Whatever the reason for you learning to use such tricks it is well past
time you learned not to use them except where it is proved that you need to.
>>>Or, if you can put the result back in size
size += size/3;

but is a nice 4/3 ratio...
Nice is whatever gives a decent performance.
maybe, or whatever best follows a natural growth curve, or something...
I can't think of a good reason for wanting less than decent performance.
Of course, I consider all resources when thinking about performance, so
tell me what I've missed.

4/3 is a natural growth curve. something around this ratio should presumably
work good as a general mean case.
Exponential is also a natural growth curve, if you don't believe me
check how populations grow in nature, for at least some it is
exponential until a crash.

The best growth curve depends on the situation. For some thing I know
that in the foreseeable future I need space for 10 foos and if it grows
beyond that it will be unlikely to be by much, so I start with 10 and
use a small linear growth (saves having to revisit the code unless
something very strange happens). For other things that would be
completely stupid.
--
Flash Gordon
Aug 23 '07 #29

"Philip Potter" <pg*@see.sig.invalidwrote in message
news:fa**********@aioe.org...
cr88192 wrote:
>4/3 is a natural growth curve. something around this ratio should
presumably work good as a general mean case.

What do you mean by this? What is a "natural growth curve"? And why is 4/3
more "natural" than, say, Euler's number or the golden ratio?
good mystery...
I don't know where this came from...
one would probably need to run simulations or something to determine this
(aka, 'what is the best growth curve').

well, in the past, I usually used 50%, or 3/2, probably good enough...

--
Philip Potter pgp <at> doc.ic.ac.uk

Aug 23 '07 #30

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:ck************@news.flash-gordon.me.uk...
cr88192 wrote, On 23/08/07 11:12:
<snip>
>>
not optimizations. shifts are how one typically does these things...

Not if one is being sensible.
>it is much the same as why we call int variables i, j, and k I think, or
many other common practices. after enough years, and enough code, one
largely forgets any such reasoning, all rote response really...

I have spent years programming in assembler where I would use shifts and
years spent programming in high level languages where I would not.
yeah, I also use assembler...

otherwise:

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think divides...
if I were thinking of a divide, than i/3 is an obvious difference from i/2.
if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks that
'i>>1' is actually 'i/2'...

>actually, had I been thinking of compiler behavior much at all, I would
have realized that 'i/3' actually becomes a fixed point multiply by a
reciprocal. no such reasoning was used in this case.

this was a trivial and obvious mistake is all.

A trivial mistake that does not get made if you code what you want to
express instead of trying to use tricks. That is part of the point.
'want to express'?...

this implies certain things, ie, that what I would want to express is
different from the code I would write to express it.

do you ask me, if my grammar is unusual, if it is because I am not writing
what I am meaning to express?...

it is a similar question IMO...

Whatever the reason for you learning to use such tricks it is well past
time you learned not to use them excpet where it is proved that you need
to.
'tricks'?...

shifts are a basic operator, I don't see why they would be viewed as any
kind of trick...

do we call pointers and casts tricks as well?... typical code is riddled
with the things, nothing sinister there...

now, does one think in terms of arithmetic ops or in terms of bitwise
ops?...
does one do their everyday arithmetic (internally) in decimal or in hex?...

if we read something do we see the text, hear words, or see imagery?...
this kind of issue has not come up in any time in recent memory...
so, one thinks in hex and writes in decimal, or vice versa, not usually that
important.
and, if one happens to be thinking in hex right, then a shift is more
intuitive than a divide.

does one think of the money in their wallet as 300 or 0x12C?...
does one think in british or metric units?...

or does one use whatever system happens to seem more natural at that
instant?...
up to them really. it only matters that when someone asks that they give the
right number, and when they read that they don't get confused...

one typically doesn't know the magic going on even in ones' own head...

>>
4/3 is a natural growth curve. something around this ratio should
presumably work good as a general mean case.

Exponential is also a natural growth curve, if you don't believe me check
how populations grow in nature, for at least some it is exponential until
a crash.
when applied recursively, this is exponential...

The best growth curve depends on the situation. For some thing I know that
in the foreseeable future I need space for 10 foos and if it grows beyond
that it will be unlikely to be by much, so I start with 10 and use a small
linear growth (saves having to revisit the code unless something very
strange happens). For other things that would be completely stupid.
yeah...

--
Flash Gordon

Aug 23 '07 #31
Philip Potter said:
Richard Heathfield wrote:
<snip>
>>
I have just conducted a search of my development archives, and found
a few calls to exit() in ancient code, so I can't claim I never use
it. But without looking it up, I can't recall the last time I used
it, at least not in "real" code.

So do you prefer to handle errors by returning an error code, which
main() can deal with as it pleases?
Yes.
I guess that makes sense, but it
seems like a more effortful design style than I really need for this
project.
<shrug> I do it that way not because it takes effort but because it
saves effort. At least, it does for me. By sticking to a rigid
structure, I find it much, much easier to locate and destroy bugs.

<example of freeing "the same pointer" twice>
I thought someone might come up with that example. But it could be
argued that q is a different pointer from p, because although it may
have the same value, it serves a different purpose.
Indeed it could be argued that the second pointer is a different value
that just happens to look remarkably similar to the first. I was, of
course, merely cranking up the pedantry. :-)

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Aug 23 '07 #32
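
(For readers wondering what the return-an-error-code style looks like for the buffer writer being discussed, here is a hedged sketch; it is not Richard's code, it merely mirrors the MyBuffer layout quoted later in the thread and replaces the exit() call with a status value that main() can act on.)

#include <stdlib.h>

struct mybuffer_t {
    unsigned char *data;
    size_t size;   /* bytes allocated */
    size_t index;  /* first unwritten position */
};
typedef struct mybuffer_t MyBuffer;

/* returns 0 on success, -1 on allocation failure; no exit() here,
   the caller decides how to report or recover */
int MyBuffer_writebyte(MyBuffer *buf, unsigned char byte)
{
    if (buf->size == buf->index) {
        size_t inc = buf->size ? buf->size : 1;  /* start from 1 when empty */
        unsigned char *tmp = NULL;

        while (inc > 0 &&
               (tmp = realloc(buf->data, buf->size + inc)) == NULL)
            inc /= 2;

        if (tmp == NULL)
            return -1;
        buf->data = tmp;      /* keep the pointer realloc returned */
        buf->size += inc;
    }
    buf->data[buf->index++] = byte;
    return 0;
}
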
Richard Heathfield wrote:
Philip Potter said:
>I thought someone might come up with that example. But it could be
argued that q is a different pointer from p, because although it may
have the same value, it serves a different purpose.

Indeed it could be argued that the second pointer is a different value
that just happens to look remarkably similar to the first. I was, of
course, merely cranking up the pedantry. :-)
I think you cranked it up a long time ago; the wind changed and it got
stuck...

--
Philip Potter pgp <at> doc.ic.ac.uk
Aug 23 '07 #33
cr88192 wrote, On 23/08/07 16:09:
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:ck************@news.flash-gordon.me.uk...
>cr88192 wrote, On 23/08/07 11:12:

<snip>
>>not optimizations. shifts are how one typically does these things...
Not if one is being sensible.
>>it is much the same as why we call int variables i, j, and k I think, or
many other common practices. after enough years, and enough code, one
largely forgets any such reasoning, all rote response really...
I have spent years programming in assembler where I would use shifts and
years spent programming in high level languages where I would not.

yeah, I also use assembler...

otherwise:

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think divides...

if I were thinking of a divide, then i/3 is an obvious difference from i/2.
if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks that
'i>>1' is actually 'i/2'...
You are trying to reduce a number by a factor not trying to move bits.
They are conceptually different things. Try using a shift to halve a
floating point number and see how far it gets you.
>>actually, had I been thinking of compiler behavior much at all, I would
have realized that 'i/3' actually becomes a fixed point multiply by a
reciprocal. no such reasoning was used in this case.

this was a trivial and obvious mistake is all.
A trivial mistake that does not get made if you code what you want to
express instead of trying to use tricks. That is part of the point.

'want to express'?...

this implies certain things, ie, that what I would want to express is
different from the code I would write to express it.
Shift is for moving bits, divide is for scaling. Otherwise shift would
work on floating point and would have behaviour defined by the C
standard for negative numbers (it leaves it for the implementation to
define the result of right shifting a negative number).
do you ask me, if my grammar is unusual, if it is because I am not writing
what I am meaning to express?...

it is a similar question IMO...
No, it is a question of using the wrong word. One that in some
situations happens to be similar, but in many situations means something
completely different.
>Whatever the reason for you learning to use such tricks it is well past
time you learned not to use them except where it is proved that you need
to.

'tricks'?...
Yes, it is a trick.
shifts are a basic operator, I don't see why they would be viewed as any
kind of trick...
The shift operator is a basic operator for moving bits, using it to
divide is a trick and one that does not work in all situations.
do we call pointers and casts tricks as well?... typical code is riddled
with the things, nothing sinister there...
Code riddled with casts probably is bad.

Code using a lot of shifts for shifting bits, NOT for division, could be
good.
now, does one think in terms of arithmetic ops or in terms of bitwise
ops?...
No. However a shift works in terms of bits, not in terms of arithmetic.
does one do their everyday arithmetic (internally) in decimal or in hex?...

if we read something do we see the text, hear words, or see imagery?...
Several of the things above are reasons for using divide rather than shift.
this kind of issue has not come up in any time in recent memory...
The issue of you making a mistake that you would not have done had you
stuck to doing simple integer division is a good reason for using
integer division.
so, one thinks in hex and writes in decimal, or vice versa, not usually that
important.
and, if one happens to be thinking in hex right, then a shift is more
intuitive than a divide.
What do you get if you right shift -1? On some machines it will be 32767
on others it will be -1. Does that sound like division to you?
does one think of the money in their wallet as 300 or 0x12C?...
does one think in british or metric units?...

or does one use whatever system happens to seem more natural at that
instant?...
Not relevant since a shift is not a different representation for
division, it is a different operation.
up to them really. it only matters that when someone asks that they give the
right number, and when they read that they don't get confused...

one typically doesn't know the magic going on even in ones' own head...
Code is written for other people, not just the original author or the
compiler.
>>4/3 is a natural growth curve. something around this ratio should
presumably work good as a general mean case.
Exponential is also a natural growth curve, if you don't believe me check
how populations grow in nature, for at least some it is exponential until
a crash.

when applied recursively, this is exponential...
Linear is another natural growth curve.
>The best growth curve depends on the situation. For some thing I know that
in the foreseeable future I need space for 10 foos and if it grows beyond
that it will be unlikely to be by much, so I start with 10 and use a small
linear growth (saves having to revisit the code unless something very
strange happens). For other things that would be completely stupid.

yeah...
Ah, so you realise now that suggesting a 4/3 curve makes less sense than
suggesting using a curve that will give good performance. My example
above was extreme, but there are plenty of less extreme examples.
--
Flash Gordon
Aug 23 '07 #34
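
(The -1 example above is easy to try; a minimal sketch, with output that varies by implementation, which is exactly the point being made.)

#include <stdio.h>

int main(void)
{
    int m = -1;

    /* integer division truncates toward zero (C99), so -1 / 2 is 0 */
    printf("-1 / 2  = %d\n", m / 2);

    /* the result of right-shifting a negative value is
       implementation-defined: commonly -1 where an arithmetic shift is
       used, but a logical shift gives a large positive value such as
       32767 for a 16-bit int */
    printf("-1 >> 1 = %d\n", m >> 1);

    return 0;
}
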
cr88192 wrote:
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:ck************@news.flash-gordon.me.uk...
>cr88192 wrote, On 23/08/07 11:12:

<snip>
>>not optimizations. shifts are how one typically does these things...

Not if one is being sensible.
[ ... ]
>I have spent years programming in assembler where I would use shifts and
years spent programming in high level languages where I would not.

yeah, I also use assembler...

otherwise:

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think divides...
<snip>
>Whatever the reason for you learning to use such tricks it is well past
time you learned not to use them except where it is proved that you need
to.
'tricks'?...

shifts are a basic operator, I don't see why they would be viewed as any
kind of trick...
The point, I think, is that using right shift for achieving the effect of
division is not completely portable. It used to be a viable alternative at
a time when optimisation was primitive, but it's useless and flaky
nowadays, since almost any compiler is going to optimise a division into
shifts, if it can.
do we call pointers and casts tricks as well?
Not the devices themselves, but their uses in specific cases, eg., type
punning, casts that discard data etc.
... typical code is riddled
with the things, nothing sinister there...
now, does one think in terms of arithmetic ops or in terms of bitwise
ops?...
No, in terms of their effect according to the rules of arithmetics. Numbers
may be represented as bits under computers, but that is no reason to think
of every arithmetic operation in terms of their effect at the bit
representation level, except when it _is_ required.
does one do their everyday arithmetic (internally) in decimal or in
hex?...

if we read something do we see the text, hear words, or see imagery?...
Personally, an amalgamation of all three, plus imagination of other types of
sensory inputs, as appropriate.

<snip>

Aug 23 '07 #35

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:1r************@news.flash-gordon.me.uk...
cr88192 wrote, On 23/08/07 16:09:
>"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:ck************@news.flash-gordon.me.uk...
>>cr88192 wrote, On 23/08/07 11:12:

<snip>
>>>not optimizations. shifts are how one typically does these things...
Not if one is being sensible.

it is much the same as why we call int variables i, j, and k I think,
or many other common practices. after enough years, and enough code,
one largely forgets any such reasoning, all rote response really...
I have spent years programming in assembler where I would use shifts and
years spent programming in high level languages where I would not.

yeah, I also use assembler...

otherwise:

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think
divides...

if I were thinking of a divide, then i/3 is an obvious difference from
i/2. if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks
that 'i>>1' is actually 'i/2'...

You are trying to reduce a number by a factor not trying to move bits.
They are conceptually different things. Try using a shift to halve a
floating point number and see how far it gets you.
but, a float and an integer are different concepts though...

not sure of why one would consider using a shift on a float...

>>>actually, had I been thinking of compiler behavior much at all, I would
have realized that 'i/3' actually becomes a fixed point multiply by a
reciprocal. no such reasoning was used in this case.

this was a trivial and obvious mistake is all.
A trivial mistake that does not get made if you code what you want to
express instead of trying to use tricks. That is part of the point.

'want to express'?...

this implies certain things, ie, that what I would want to express is
different from the code I would write to express it.

Shift is for moving bits, divide is for scaling. Otherwise shift would
work on floating point and would have behaviour defined by the C standard
for negative numbers (it leaves it for the implementation to define the
result of right shifting a negative number).
ok, makes sense.

>do you ask me, if my grammar is unusual, if it is because I am not
writing what I am meaning to express?...

it is a similar question IMO...

No, it is a question of using the wrong word. One that in some situations
happens to be similar, but in many situations means something completely
different.
the situations are different though.
as noted above, an integer and a float are conceptually different.

if I am speaking to someone do I count in floats? no, I count in integers...

>>Whatever the reason for you learning to use such tricks it is well past
time you learned not to use them except where it is proved that you need
to.

'tricks'?...

Yes, it is a trick.
>shifts are a basic operator, I don't see why they would be viewed as any
kind of trick...

The shift operator is a basic operator for moving bits, using it to divide
is a trick and one that does not work in all situations.
it works on integers, integers are the context and the situation.
sizes are generally not negative either.

there is no problem as I see it.

>do we call pointers and casts tricks as well?... typical code is riddled
with the things, nothing sinister there...

Code riddled with casts probably is bad.

Code using a lot of shifts for shifting bits, NOT for division, could be
good.
ok, but then one has to differentiate: when is the concept shifting bits and
when is it division?...
this is a subjective matter.

>now, does one think in terms of arithmetic ops or in terms of bitwise
ops?...

No. However a shift works in terms of bits, not in terms of arithmetic.
but on integers, bit ops are arithmetic...

makes nearly as much sense in conversation as it would in C.
we ask someone 'what is 12 and 7?' they say '8'...

>does one do their everyday arithmetic (internally) in decimal or in
hex?...

if we read something do we see the text, hear words, or see imagery?...

Several of the things above are reasons for using divide rather than
shift.
if one is confused as to whether or not they are dealing with an integer...

what if I meant to grab an apple but instead grabbed a potato and proceeded
to eat it as said apple. people would look oddly, and then maybe ones'
stomach gets irritated from the raw potato...

so, casually eating an object raw is valid for an apple but not for said
potato.
likewise, we put potatoes in soup but not apples.

same difference really...
or a bigger mystery (OT, but as an example):
when someone can claim adherence to a certain religion and do things which
are condemned within the doctrine of said religion, meanwhile knowing the
doctrine, and then believe that their actions are moral (presumably within
the bounds of said religion).

this has happened recently, and after several months I still haven't figured
it out.

do they deny their actions? no.
do they deny the doctrine or the existence (or interpretation) of the
indicated statements? no.
do they admit that their actions are immoral? no.

this makes little sense really...

like some kind of bizarre paradox as to their reasoning...
(throwing philosophy at the problem still does not resolve it...).

>this kind of issue has not come up in any time in recent memory...

The issue of you making a mistake that you would not have done had you
stuck to doing simple integer division is a good reason for using integer
division.
potentially, but this makes an assumption about ones' thought process.
if one always types shifts, one thinks shifts, one does not think division
unless one were first thinking of division, which I assert that I was not
(which is why I think I missed such an obvious answer).

>so, one thinks in hex and writes in decimal, or vice versa, not usually
that important.
and, if one happens to be thinking in hex right, then a shift is more
intuitive than a divide.

What do you get if you right shift -1? On some machines it will be 32767
on others it will be -1. Does that sound like division to you?
different context.

one machine is probably an x86 in real mode with an int being 16 bits and
using shr.
the other is probably using sar.

>does one think of the money in their wallet as 300 or 0x12C?...
does one think in british or metric units?...

or does one use whatever system happens to seem more natural at that
instant?...

Not relevant since a shift is not a different representation for division,
it is a different operation.
it is an operation relevant to a situation, that situation being positive
integers...
I was not claiming it was also sane for floats or for negatives.

>up to them really. it only matters that when someone asks that they give
the right number, and when they read that they don't get confused...

one typically doesn't know the magic going on even in ones' own head...

Code is written for other people, not just the original author or the
compiler.
maybe.
if other people ever actually read said code.

>>>4/3 is a natural growth curve. something around this ratio should
presumably work good as a general mean case.
Exponential is also a natural growth curve, if you don't believe me
check how populations grow in nature, for at least some it is
exponential until a crash.

when applied recursively, this is exponential...

Linear is another natural growth curve.
except that linear is not a curve...

>>The best growth curve depends on the situation. For some thing I know
that in the foreseeable future I need space for 10 foos and if it grows
beyond that it will be unlikely to be by much, so I start with 10 and
use a small linear growth (saves having to revisit the code unless
something very strange happens). For other things that would be
completely stupid.

yeah...

Ah, so you realise now that suggesting a 4/3 curve makes less sense than
suggesting using a curve that will give good performance. My example above
was extreme, but there are plenty of less extreme examples.
the question is then what curve gives the best performance, which may depend
on the situation.
it is a tradeoff really between number of reallocations and wasted space.

4/3 is more conservative with space than 3/2, which is still more
conservative than the golden ratio.

so the question is then of rate of growth and likelihood of continued
growth.
and, for the previous topic:
it is by some odd convention that when we see a division by undivisible
integers that we realize that it has a non-integer output.

I guess it is also by similar convention that we realize that negative even
roots are imaginary and that non-integral exponents of negative bases are
complex...

--
Flash Gordon

Aug 24 '07 #36
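
(To put rough numbers on the reallocation-count half of that tradeoff, here is a throwaway sketch; the function and the starting values are invented for illustration, and it says nothing about the wasted-space half.)

#include <stdio.h>

/* count the growth steps needed to go from start bytes to at least
   target bytes when each step multiplies the size by num/den using
   integer arithmetic; purely illustrative */
static unsigned steps_to_reach(size_t start, size_t target,
                               size_t num, size_t den)
{
    unsigned steps = 0;
    size_t size = start;

    while (size < target) {
        size_t grown = size * num / den;
        size = (grown > size) ? grown : size + 1;  /* guarantee progress */
        steps++;
    }
    return steps;
}

int main(void)
{
    size_t target = (size_t)1 << 20;   /* 1 MiB */

    printf("factor 4/3: %u steps\n", steps_to_reach(16, target, 4, 3));
    printf("factor 3/2: %u steps\n", steps_to_reach(16, target, 3, 2));
    printf("factor 2:   %u steps\n", steps_to_reach(16, target, 2, 1));
    return 0;
}
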
On 2007-08-22 11:29, Mark Bluemel <ma**********@pobox.com> wrote:
Philip Potter wrote:
>>
struct mybuffer_t {
unsigned char *data;
size_t size; /* size of buffer allocated */
size_t index; /* index of first unwritten member of data */
};
[...]
int resizeBuffer(MyBuffer *buf) {
size_t inc = buf->size;
unsigned char *tmp = NULL;
while(inc>0 &&
(tmp = realloc(buf->data, buf->size + inc)) == NULL) {
You can change this test slightly to catch size_t overflow at the same
time:

while((size_t)(buf->size + inc) > buf->size &&
(tmp = realloc(buf->data, buf->size + inc)) == NULL) {

hp
--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"
Aug 24 '07 #37
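
(Folding that overflow test into the resize helper quoted above might look like the sketch below; untested, like the originals. One design choice here that nobody above prescribed: an increment that would wrap size_t is simply halved, the same as a failed realloc, rather than aborting immediately.)

#include <stdlib.h>

struct mybuffer_t {
    unsigned char *data;
    size_t size;
    size_t index;
};
typedef struct mybuffer_t MyBuffer;

/* returns the new size on success, 0 on failure */
size_t resizeBuffer(MyBuffer *buf)
{
    size_t inc = buf->size ? buf->size : 1;
    unsigned char *tmp = NULL;

    while (inc > 0 &&
           (buf->size + inc <= buf->size ||   /* size_t would wrap */
            (tmp = realloc(buf->data, buf->size + inc)) == NULL))
        inc /= 2;

    if (tmp == NULL)
        return 0;
    buf->data = tmp;
    buf->size += inc;
    return buf->size;
}
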
On 2007-08-23 11:11, Flash Gordon <sp**@flash-gordon.me.uk> wrote:
cr88192 wrote, On 23/08/07 11:12:
>>
4/3 is a natural growth curve. something around this ratio should
presumably work good as a general mean case.

Exponential is also a natural growth curve,
n_(i+1) = (4/3) * n_i *is* exponential.

hp
--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"
Aug 24 '07 #38
On 2007-08-24 02:01, cr88192 <cr*****@hotmail.com> wrote:
>
"santosh" <sa*********@gmail.comwrote in message
news:fa**********@aioe.org...
>No, in terms of their effect according to the rules of arithmetics.
Numbers may be represented as bits under computers, but that is no
reason to think of every arithmetic operation in terms of their
effect at the bit representation level, except when it _is_ required.

ok. people then thinking in decimal rather than bits
Or just in terms of numbers regardless of any specific representation.

hp

--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"
Aug 24 '07 #39
On 2007-08-23 15:09, cr88192 <cr*****@hotmail.com> wrote:
I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think divides...
if I were thinking of a divide, then i/3 is an obvious difference from i/2.
if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks that
'i>>1' is actually 'i/2'...
But you were thinking of divides - you explicitly stated that you
wanted an increase of 50% (1/2), 33% (ca. 1/3) or 25% (1/4). Since 1/2,
1/3 and 1/4 is a nice progression, I find it much more likely that you
derived ((i>>2) + (i>>4) + (i>>6)) as an approximation of 1/3 than
the other way around. If you had been thinking of shifts, you would
probably have chosen ((i>>2) + (i>>3)) as the middle point between
(i>>1) and (i>>2).

hp
--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"
Aug 24 '07 #40
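
(For the curious, that approximation is easy to tabulate: 1/4 + 1/16 + 1/64 is 21/64, about 0.328, so the shift sum tracks i/3 from below and never exceeds it. The values below are picked arbitrarily.)

#include <stdio.h>

int main(void)
{
    unsigned long i;

    /* compare the shift-sum approximation with exact integer division */
    for (i = 64; i <= 1UL << 20; i <<= 4) {
        unsigned long approx = (i >> 2) + (i >> 4) + (i >> 6);
        printf("i = %8lu  i/3 = %8lu  shifts = %8lu\n", i, i / 3, approx);
    }
    return 0;
}
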

"Peter J. Holzer" <hj*********@hjp.atwrote in message
news:sl************************@zeno.hjp.at...
On 2007-08-24 02:01, cr88192 <cr*****@hotmail.com> wrote:
>>
"santosh" <sa*********@gmail.comwrote in message
news:fa**********@aioe.org...
>>No, in terms of their effect according to the rules of arithmetics.
Numbers may be represented as bits under computers, but that is no
reason to think of every arithmetic operation in terms of their
effect at the bit representation level, except when it _is_ required.

ok. people then thinking in decimal rather than bits

Or just in terms of numbers regardless of any specific representation.
however this would work I guess...
actually, I think if I think of a specific-sized integer, I see a fixed
square with hex-digits inside.
if I think of other numbers, I see them as a decimal version. floats are
decimal but associated with a square (the number is fit inside the square).

I think they may also be overlayed with a name and other information.

I think idle thinking results in a good deal of visual "shadowing". I think
about something, and stuff goes on in some kind of semi-diagramic head-UI.

things in said head-UI, look different than in traditional UIs, mostly light
on dark rather than dark on light (as is typical in windows...).
or something...

hp

--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"

Aug 24 '07 #41
On 2007-08-22 13:13, Richard Tobin <ri*****@cogsci.ed.ac.uk> wrote:
Of course that's what the original proposal did: increase by 100% of
the current size. If you start from one, this means you only allocate
power-of-two sized blocks.

A problem with any approach is that it might interact badly with the
malloc() implementation. Imagine an implementation that always
allocates powers of two but uses sizeof(void *) bytes of it to record
the size - if you always allocated powers of two you would end up
allocating nearly four times as much as you need.
I actually ran into this problem when I tested my dynamic buffer
implementation on several systems in the early 1990's. I found that
increasing by a factor of 1.5 worked well across all the implementations
I tested at that time, but of course that's no guarantee that it works
as well for all possible implementations (or even most implementations
today).
I recommend separating out the increment algorithm so that it can
easily be changed if it proves to be bad on some platform.
Yup. Did that, too, at the time:

/* Macro: DA_GROW
* Purpose: Return the new size of an array which had size |a| and must
* include index |b|.
* This macro may be changed by the application, but the default is
* expected to be useful for many applications.
*
* Algorithm: It is assumed that most arrays will grow linearly,
* that is, |b| will equal |a|. To avoid calling realloc for every
* element added, the size is multiplied by a constant factor.
* The factor is 1.5 because a factor of 2 has produced extreme
* fragmentation with some allocators.
* For the case where this expansion is not sufficient to reach
* index |b| we do not guess about the future growth of the array
* and make it just large enough.
* This algorithm is good enough to start at zero, but the
* steps will be very small at first (0, 1, 2, 3, 4, 6, 9, 13, ...)
* so it might be a good idea to start with some moderate index.
*/
#ifndef DA_GROW
#define DA_GROW(a,b) MAX((a) * 3 / 2, (b) + 1)
#endif

(the whole thing was entirely implemented as macros, not as functions)

hp
--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"
Aug 24 '07 #42
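
(In case it helps anyone reading along, here is a hedged sketch of how a DA_GROW-style policy is typically wired up to realloc. The MAX macro, the element type and the function names are inventions for the example, not part of the quoted library.)

#include <stdlib.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* same policy as DA_GROW above: multiply by 3/2, but always go far
   enough to cover the requested index */
#define GROW(cur, want) MAX((cur) * 3 / 2, (want) + 1)

/* make sure *arr (holding *cap doubles) can be indexed at i;
   returns 0 on success, -1 on allocation failure */
static int ensure_index(double **arr, size_t *cap, size_t i)
{
    size_t newcap;
    double *tmp;

    if (i < *cap)
        return 0;

    newcap = GROW(*cap, i);
    tmp = realloc(*arr, newcap * sizeof *tmp);
    if (tmp == NULL)
        return -1;

    *arr = tmp;
    *cap = newcap;
    return 0;
}

int main(void)
{
    double *a = NULL;
    size_t cap = 0;
    size_t n;

    for (n = 0; n < 1000; n++) {
        if (ensure_index(&a, &cap, n) != 0)
            return EXIT_FAILURE;
        a[n] = (double)n;
    }
    free(a);
    return 0;
}
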
cr88192 wrote, On 24/08/07 02:46:
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:1r************@news.flash-gordon.me.uk...
>cr88192 wrote, On 23/08/07 16:09:
>>"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:ck************@news.flash-gordon.me.uk...
cr88192 wrote, On 23/08/07 11:12:
<snip>

not optimizations. shifts are how one typically does these things...
Not if one is being sensible.

it is much the same as why we call int variables i, j, and k I think,
or many other common practices. after enough years, and enough code,
one largely forgets any such reasoning, all rote response really...
I have spent years programming in assembler where I would use shifts and
years spent programming in high level languages where I would not.
yeah, I also use assembler...

otherwise:

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think
divides...

if I were thinking of a divide, then i/3 is an obvious difference from
i/2. if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks
that 'i>>1' is actually 'i/2'...
You are trying to reduce a number by a factor not trying to move bits.
They are conceptually different things. Try using a shift to halve a
floating point number and see how far it gets you.

but, a float and an integer are different concepts though...
not sure of why one would consider using a shift on a float...
Because you consider shift an appropriate method of halving a number.
>>>>actually, had I been thinking of compiler behavior much at all, I would
have realized that 'i/3' actually becomes a fixed point multiply by a
reciprocal. no such reasoning was used in this case.
>
this was a trivial and obvious mistake is all.
A trivial mistake that does not get made if you code what you want to
express instead of trying to use tricks. That is part of the point.
'want to express'?...

this implies certain things, ie, that what I would want to express is
different from the code I would write to express it.
Shift is for moving bits, divide is for scaling. Otherwise shift would
work on floating point and would have behaviour defined by the C standard
for negative numbers (it leaves it for the implementation to define the
result of right shifting a negative number).

ok, makes sense.
So will you move to the less error prone system then?
>>do you ask me, if my grammar is unusual, if it is because I am not
writing what I am meaning to express?...

it is a similar question IMO...
No, it is a question of using the wrong word. One that in some situations
happens to be similar, but in many situations means something completely
different.

the situations are different though.
as noted above, an integer and a float are conceptually different.
if I am speaking to someone do I count in floats? no, I count in integers...
I measure sizes in all sorts of different systems not all of which are
integers.
>>>Whatever the reason for you learning to use such tricks it is well past
time you learned not to use them except where it is proved that you need
to.

'tricks'?...
Yes, it is a trick.
>>shifts are a basic operator, I don't see why they would be viewed as any
kind of trick...
The shift operator is a basic operator for moving bits, using it to divide
is a trick and one that does not work in all situations.

it works on integers, integers are the context and the situation.
sizes are generally not negative either.
They are sometimes.
If a spec says a number is in bits 3 to 7 then getting it to the correct
place is shifting.
there is no problem as I see it.
I thought you admitted to an error that would not be made if using
simple division. For one value you certainly suggested something more
complex than simple division.
>>do we call pointers and casts tricks as well?... typical code is riddled
with the things, nothing sinister there...
Code riddled with casts probably is bad.

Code using a lot of shifts for shifting bits, NOT for division, could be
good.

ok, but then one has to differentiate: when is the concept shifting bits and
when is it division?...
this is a subjective matter.
I honestly cannot conceive of why one would consider shifting to be the
natural way of scaling a value. Would you convert from inches to
centimetres by shifting? Would you talk about doubling your salary or
shifting it one bit to the left? Would you tell someone that you
have doubled the storage capacity of a disk array or shifted it one bit
to the left? Or that the free space has been shifted one bit to the
right instead of halved?

Changing the size of a buffer is NOT linked to the representation of the
size, shifting is.
>>now, does one think in terms of arithmetic ops or in terms of bitwise
ops?...
No. However a shift works in terms of bits, not in terms of arithmetic.

but on integers, bit ops are arithmetic...
Nope. Right shift is a logical shift on some processors *not*
arithmetic. If anyone ever builds a trinary machine then the shift
operation provided by the processor will *not* be multiply/divide by 2.
On a machine using BCD it isn't either, and BCD has been used for
representing integer values.
makes nearly as much sense in conversation as it would in C.
we ask someone 'what is 12 and 7?' they say '8'...
I can't see what point you are trying to make, unless it is why you
should *not* use a shift for scaling.
>>does one do their everyday arithmetic (internally) in decimal or in
hex?...

if we read something do we see the text, hear words, or see imagery?...
Several of the things above are reasons for using divide rather than
shift.

if one is confused as to whether or not they are dealing with an integer...
You still keep forgetting that it only works for half the integer types
C provides, a sure sign in my opinion that you should stay well away
from the shift operator in C.

You are confused about values and specific representations. Binary is
not the only representation for numbers, it just happens to be the
current vogue in computing.
what if I meant to grab an apple but instead grabbed a potato and proceeded
to eat it as said apple. people would look oddly, and then maybe ones'
stomach gets irritated from the raw potato...

so, casually eating an object raw is valid for an apple but not for said
potato.
likewise, we put potatoes in soup but not apples.

same difference really...
All valid analogies for why you should be using division for scaling not
shifting.

<snip philosophy ramblings that seem to have no bearing on the matter at
hand>
>>this kind of issue has not come up in any time in recent memory...
The issue of you making a mistake that you would not have done had you
stuck to doing simple integer division is a good reason for using integer
division.

potentially, but this makes an assumption about ones' thought process.
if one always types shifts, one thinks shifts, one does not think division
unless one were first thinking of division, which I assert that I was not
(which is why I think I missed such an obvious answer).
So if someone tells you to increase a buffer by 81% you would think in
terms of shifts?
>>so, one thinks in hex and writes in decimal, or vice versa, not usually
that important.
and, if one happens to be thinking in hex right, then a shift is more
intuitive than a divide.
What do you get if you right shift -1? On some machines it will be 32767
on others it will be -1. Does that sound like division to you?

different context.

one machine is probably an x86 in real mode with an int being 16 bits and
using shr.
the other is probably using sar.
Nope, one machine is not an x86 and does not have an arithmetic shift at
all.
>>does one think of the money in their wallet as 300 or 0x12C?...
does one think in british or metric units?...

or does one use whatever system happens to seem more natural at that
instant?...
Not relevant since a shift is not a different representation for division,
it is a different operation.

it is an operation relevant to a situation, that situation being positive
integers...
So when you have spent half your money do you think that you have
shifted your resources one bit to the right?
I was not claiming it was also sane for floats or for negatives.
Why? They are just numbers. In any case, you keep referring to integers
rather than unsigned integers, so either you are repeatedly making a
mistake and therefore showing why you should not be using shift you you
are being sloppy in a way that leads to mistakes thus showing why using
shift is inadvisable.
>>up to them really. it only matters that when someone asks that they give
the right number, and when they read that they don't get confused...

one typically doesn't know the magic going on even in ones' own head...
Code is written for other people, not just the original author or the
compiler.

maybe.
if other people ever actually read said code.
You were advising on how someone else should do things, therefore
someone other than you *will* see the code.
>>>>4/3 is a natural growth curve. something around this ratio should
presumably work good as a general mean case.
Exponential is also a natural growth curve, if you don't believe me
check how populations grow in nature, for at least some it is
exponential until a crash.
when applied recursively, this is exponential...
Linear is another natural growth curve.

except that linear is not a curve...
Check the definition of a curve in maths and you will find that it *is*
a curve. You will also find plenty of other growth curves in nature if
you look.
>>>The best growth curve depends on the situation. For some thing I know
that in the foreseeable future I need space for 10 foos and if it grows
beyond that it will be unlikely to be by much, so I start with 10 and
use a small linear growth (saves having to revisit the code unless
something very strange happens). For other things that would be
completely stupid.
yeah...
Ah, so you realise now that suggesting a 4/3 curve makes less sense than
suggesting using a curve that will give good performance. My example above
was extreme, but there are plenty of less extreme examples.

the question is then what curve gives the best performance, which may depend
on the situation.
it is a tradeoff really between number of reallocations and wasted space.

4/3 is more conservative with space than 3/2, which is still more
conservative than the golden ratio.

so the question is then of rate of growth and likelihood of continued
growth.
Not only the likelihood of continued growth, but how much it is likely
to grow if it does grow. All of which was one of my points. Why when
someone says you need to select the best growth curve would you
"correct" that to saying they should use some specific growth curve when
you don't know the details of what the input is?
and, for the previous topic:
it is by some odd convention that when we see a division by undivisible
integers that we realize that it has a non-integer output.
Not if you know C you don't. Or Pascal. Or Modula 2. Or assembler. If
you see integer division you expect an integer result because you know
that when working in integers you are working in integers.
I guess it is also by similar convention that we realize that negative even
roots are imaginary and that non-integral exponents of negative bases are
complex...
All completely irrelevant to the point. Why use something dependent on
*representation* when there is a more natural operator for scaling, i.e.
something that does not depend on representation.
>--
Flash Gordon
Please don't quote signatures, the bit typically after a "-- ", unless
you are commenting on them.
--
Flash Gordon
Aug 24 '07 #43

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:eg************@news.flash-gordon.me.uk...
cr88192 wrote, On 24/08/07 02:46:
>"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:1r************@news.flash-gordon.me.uk...
>>cr88192 wrote, On 23/08/07 16:09:
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:ck************@news.flash-gordon.me.uk...
cr88192 wrote, On 23/08/07 11:12:
<snip>

>not optimizations. shifts are how one typically does these things...
Not if one is being sensible.
>
>it is much the same as why we call int variables i, j, and k I think,
>or many other common practices. after enough years, and enough code,
>one largely forgets any such reasoning, all rote response really...
I have spent years programming in assembler where I would use shifts
and years spent programming in high level languages where I would not.
yeah, I also use assembler...

otherwise:

I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think
divides...

if I were thinking of a divide, then i/3 is an obvious difference from
i/2. if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks
that 'i>>1' is actually 'i/2'...
You are trying to reduce a number by a factor not trying to move bits.
They are conceptually different things. Try using a shift to halve a
floating point number and see how far it gets you.

but, a float and an integer are different concepts though...
not sure of why one would consider using a shift on a float...

Because you consider shift an appropriate method of halving a number.
as noted, if that number is an integer.

I personally regard integers and non-integers as different concepts, with
different behaviors, semantics, and rules.

after all, with integers, 3/4==0, but with non-integers or reals, the answer
is 0.75...

likewise I consider reals and complexes to be different concepts.

>>>>>actually, had I been thinking of compiler behavior much at all, I
>would have realized that 'i/3' actually becomes a fixed point
>multiply by a reciprocal. no such reasoning was used in this case.
>>
>this was a trivial and obvious mistake is all.
A trivial mistake that does not get made if you code what you want to
express instead of trying to use tricks. That is part of the point.
'want to express'?...

this implies certain things, ie, that what I would want to express is
different from the code I would write to express it.
Shift is for moving bits, divide is for scaling. Otherwise shift would
work on floating point and would have behaviour defined by the C
standard for negative numbers (it leaves it for the implementation to
define the result of right shifting a negative number).

ok, makes sense.

So will you move to the less error prone system then?
I write whatever I write really.
in the past, I have never really seen any major problem with it.

if there were some problem, as I perceive it, then I probably would have
changed it long ago (before writing many hundreds of kloc using these kind
of conventions).

>>>do you ask me, if my grammar is unusual, if it is because I am not
writing what I am meaning to express?...

it is a similar question IMO...
No, it is a question of using the wrong word. One that in some
situations happens to be similar, but in many situations means something
completely different.

the situations are different though.
as noted above, an integer and a float are conceptually different.
if I am speaking to someone do I count in floats? no, I count in
integers...

I measure sizes in all sorts of different systems not all of which are
integers.
lengths are reals, often.

sizes of arrays are not.
counting is not.

we don't say '3.85 apples' because one of them is small, or '4.25' because
one is large...
the count is 4 apples.

>>>>Whatever the reason for you learning to use such tricks it is well
past time you learned not to use them except where it is proved that
you need to.
>
'tricks'?...
Yes, it is a trick.

shifts are a basic operator, I don't see why they would be viewed as
any kind of trick...
The shift operator is a basic operator for moving bits, using it to
divide is a trick and one that does not work in all situations.

it works on integers, integers are the context and the situation.
sizes are generally not negative either.

They are sometimes.
If a spec says a number is in bits 3 to 7 then getting it to the correct
place is shifting.
I do this often as well.
for example, I have a good deal of packed-integer based types which I modify
via masks and shifts...

>there is no problem as I see it.

I thought you admitted to an error that would not be made if using simple
division. For one value you certainly suggested something more complex
than simple division.
yes, this was a faulty thought, something I would have likely noticed and
fixed later for looking stupid...

I think it was because I was thinking of percentages, and this is the
typical way I manipulate things via percentages.

'17% of i' ='(i*17)/100'.

"i's percentage of j" '((i*100)/j)'.

>>>do we call pointers and casts tricks as well?... typical code is
riddled with the things, nothing sinister there...
Code riddled with casts probably is bad.

Code using a lot of shifts for shifting bits, NOT for division, could be
good.

ok, but then one has to differentiate: when is the concept shifting bits
and when is it division?...
this is a subjective matter.

I honestly cannot conceive of why one would consider shifting to be the
natural way of scaling a value. Would you convert from inches to
centimetres by shifting? Would you talk about doubling your salary or
shifting it one bit to the left? Would you tell someone that you have
doubled the storage capacity of a disk array or shifted it one bit to the
left? Or that the free space has been shifted one bit to the right instead
of halved?
would probably not say it as such, but mentally I often use shifting in
performing calculations, as I find it easier than multiplication or
division.

Changing the size of a buffer is NOT linked to the representation of the
size, shifting is.
a buffer's size, however, is naturally constrained to being a positive
integer.

>>>now, does one think in terms of arithmetic ops or in terms of bitwise
ops?...
No. However a shift works in terms of bits, not in terms of arithmetic.

but on integers, bit ops are arithmetic...

Nope. Right shift is a logical shift on some processors *not* arithmetic.
If anyone ever builds a trinary machine then the shift operation provided
by the processor will *not* be multiply/divide by 2. On a machine using
BCD it isn't either, and BCD has been used for representing integer
values.
maybe...

however, I reason, almost none of my crap is ever likely to be run on
something non-x86-based, much less something so far reachingly different,
which I would unlikely even consider coding for, assuming I ever even
encountered such a beast...

>makes nearly as much sense in conversation as it would in C.
we ask someone 'what is 12 and 7?' they say '8'...

I can't see what point you are trying to make, unless it is why you should
*not* use a shift for scaling.
the operations make sense to humans as well, if they know them...

>>>does one do their everyday arithmetic (internally) in decimal or in
hex?...

if we read something do we see the text, hear words, or see imagery?...
Several of the things above are reasons for using divide rather than
shift.

if one is confused as to whether not they are dealing with an integer...

You still keep forgetting that it only works for half the integer types C
provides, a sure sign in my opinion that you should stay well away from
the shift operator in C.
was never saying it provably worked on negative integers.
however, it does work in what compilers I am familiar with.

You are confused about values and specific representations. Binary is not
the only representation for numbers, it just happens to be the current
vogue in computing.
theoretical argument, maybe, but IMO of little practical concern. I don't
think binary will go away anytime soon, as doing so would break much of the
software in existence at this point, and assuming such a change occurs, it
will not matter since the mass of software would have been being
rewritten/replaced anyways.

I say though, not only is it valid for computers, but also humans...

>what if I meant to grab an apple but instead grabbed a potato and
proceeded to eat it as said apple. people would look oddly, and then
maybe ones' stomach gets irritated from the raw potato...

so, casually eating an object raw is valid for an apple but not for said
potato.
likewise, we put potatoes in soup but not apples.

same difference really...

All valid analogies for why you should be using division for scaling not
shifting.

<snip philosophy ramblings that seem to have no bearing on the matter at
hand>
>>>this kind of issue has not come up in any time in recent memory...
The issue of you making a mistake that you would not have done had you
stuck to doing simple integer division is a good reason for using
integer division.

potentially, but this makes an assumption about ones' thought process.
if one always types shifts, one thinks shifts, one does not think
division unless one were first thinking of division, which I assert that
I was not (which is why I think I missed such an obvious answer).

So if someone tells you to increase a buffer by 81% you would think in
terms of shifts?
naturally, I would have thought in one of the options I originally provided
'((i*81)/100)'.
why: because this value, as it so happens, does not have an integer
reciprocal.

>>>so, one thinks in hex and writes in decimal, or vice versa, not usually
that important.
and, if one happens to be thinking in hex right, then a shift is more
intuitive than a divide.
What do you get if you right shift -1? On some machines it will be 32767
on others it will be -1. Does that sound like division to you?

different context.

one machine is probably an x86 in real mode with an int being 16 bits and
using shr.
the other is probably using sar.

Nope, one machine is not an x86 and does not have an arithmetic shift at
all.
ok, abstraction failing, it uses a 16 bit int.

>>>does one think of the money in their wallet as 300 or 0x12C?...
does one think in british or metric units?...

or does one use whatever system happens to seem more natural at that
instant?...
Not relevant since a shift is not a different representation for
division, it is a different operation.

it is an operation relevant to a situation, that situation being positive
integers...

So when you have spent half your money do you think that you have shifted
your resources one bit to the right?
very often, this is how I reason about some things.

>I was not claiming it was also sane for floats or for negatives.

Why? They are just numbers. In any case, you keep referring to integers
rather than unsigned integers, so either you are repeatedly making a
mistake and therefore showing why you should not be using shift or you
are being sloppy in a way that leads to mistakes thus showing why using
shift is inadvisable.
'unsigned integer' is longer to type, however in context I repeatedly
indicate that the numbers are positive, an indication that, whether the
storage is an integer or unsigned integer, the value is constrained to be
positive.

'23>>1' is '11' regardless of it being an int or uint.

>>>up to them really. it only matters that when someone asks that they
give the right number, and when they read that they don't get
confused...

one typically doesn't know the magic going on even in ones' own head...
Code is written for other people, not just the original author or the
compiler.

maybe.
if other people ever actually read said code.

You were advising on how someone else should do things, therefore someone
other than you *will* see the code.
well, this is usenet, not ones' codebase.

>>>>>4/3 is a natural growth curve. something around this ratio should
>presumably work good as a general mean case.
Exponential is also a natural growth curve, if you don't believe me
check how populations grow in nature, for at least some it is
exponential until a crash.
when applied recursively, this is exponential...
Linear is another natural growth curve.

except that linear is not a curve...

Check the definition of a curve in maths and you will find that it *is* a
curve. You will also find plenty of other growth curves in nature if you
look.
x^2 is a curve.
2^x is a curve.
x^0.5 is a curve.

x is not, it is linear, and thus not a curve.

>>>>The best growth curve depends on the situation. For some thing I know
that in the foreseeable future I need space for 10 foos and if it
grows beyond that it will be unlikely to be by much, so I start with
10 and use a small linear growth (saves having to revisit the code
unless something very strange happens). For other things that would be
completely stupid.
yeah...
Ah, so you realise now that suggesting a 4/3 curve makes less sense than
suggesting using a curve that will give good performance. My example
above was extreme, but there are plenty of less extreme examples.

the question is then what curve gives the best performance, which may
depend on the situation.
it is a tradeoff really between number of reallocations and wasted space.

4/3 is more conservative with space than 3/2, which is still more
conservative than the golden ratio.

so the question is then of rate of growth and likelihood of continued
growth.

Not only the likelihood of continued growth, but how much it is likely to
grow if it does grow. All of which was one of my points. Why when someone
says you need to select the best growth curve would you "correct" that to
saying they should use some specific growth curve when you don't know the
details of what the input is?
>and, for the previous topic:
it is by some odd convention that when we see a division by undivisible
integers that we realize that it has a non-integer output.

Not if you know C you don't. Or Pascal. Or Modula 2. Or assembler. If you
see integer division you expect an integer result because you know that
when working in integers you are working in integers.
I was talking about fractions or rationals here.

humans have a convention that division of 2 integers leads to a non-integer
rather than an integer, and that computers do not is a difference. it shows
that integers are not numbers in a strictly traditional sense, because the
behavior is different.

>I guess it is also by similar convention that we realize that negative
even roots are imaginary and that non-integral exponents of negative
bases are complex...

All completely irrelevant to the point. Why use something dependent on
*representation* when there is a more natural operator for scaling, i.e.
something that does not depend on representation.
representation is value, and implementation is definition...

>>--
Flash Gordon

Please don't quote signatures, the bit typically after a "-- ", unless you
are commenting on them.
--
Flash Gordon

Aug 25 '07 #44

"Peter J. Holzer" <hj*********@hjp.atwrote in message
news:sl************************@zeno.hjp.at...
On 2007-08-23 15:09, cr88192 <cr*****@hotmail.com> wrote:
>I always used shifts, never thought much of it.
I use shifts where I think shifts, I had never thought to think
divides...
if I were thinking of a divide, then i/3 is an obvious difference from
i/2.
if I was not, it is not.
it is a non-obvious jump from 'i>>1' to 'i/3', unless one first thinks
that
'i>>1' is actually 'i/2'...

But you were thinking of divides - you explicitly stated that you
wanted an increase of 50% (1/2), 33% (ca. 1/3) or 25% (1/4). Since 1/2,
1/3 and 1/4 is a nice progression, I find it much more likely that you
derived ((i>>2) + (i>>4) + (i>>6)) as an approximation of 1/3 than
the other way around. If you had been thinking of shifts, you would
probably have chosen ((i>>2) + (i>>3)) as the middle point between
(i>>1) and (i>>2).
odd assertion, however I am not sure how that chain of thought would work,
so was not likely employed in my case.

in any case, my memory has faded out now, I no longer remember.

hp
--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"

Aug 25 '07 #45
On 2007-08-24 14:18, cr88192 <cr*****@hotmail.com> wrote:
"Peter J. Holzer" <hj*********@hjp.at> wrote in message
news:sl************************@zeno.hjp.at...
>On 2007-08-24 02:01, cr88192 <cr*****@hotmail.com> wrote:
>>"santosh" <sa*********@gmail.com> wrote in message
news:fa**********@aioe.org...
No, in terms of their effect according to the rules of arithmetics.
Numbers may be represented as bits under computers, but that is no
reason to think of every arithmetic operation in terms of their
effect at the bit representation level, except when it _is_ required.
ok. people then thinking in decimal rather than bits

Or just in terms of numbers regardless of any specific representation.

however this would work I guess...
Works for me. The number nineteen is a specific number (the successor of
eighteen in the natural numbers), regardless of whether it is
represented as "19", "13", "10011", "IXX", "||||| ||||| ||||| ||||",
"nineteen", "neunzehn", "19.0", "1.9E1", or whatever.

actually, I think if I think of a specific-sized integer, I see a fixed
square with hex-digits inside.
I think of objects in a similar sense (I prefer rectangles to squares),
but not of numbers. Numbers are values which can be put into those
boxes, not the boxes themselves. And I usually don't imagine even
numeric objects to be in a specific base (unless the fact that they are
in fact stored as binary is important - then I think of them in binary).
if I think of other numbers, I see them as a decimal version. floats are
decimal but associated with a square (the number is fit inside the square).
Floats are the same as integers: Normally the base doesn't matter and I
don't imagine any specific base (Oh, I write FP numbers in decimal, but
I don't imagine them to be decimal - that's just a notation). If it
does matter, I imagine them to be in the base they are actually stored
in (binary, usually).

hp
--
_ | Peter J. Holzer | I know I'd be respectful of a pirate
|_|_) | Sysadmin WSR | with an emu on his shoulder.
| | | hj*@hjp.at |
__/ | http://www.hjp.at/ | -- Sam in "Freefall"
Aug 25 '07 #46
cr88192 wrote:
>
.... snip ...
>
but on integers, bit ops are arithmetic...

makes nearly as much sense in conversation as it would in C.
we ask someone 'what is 12 and 7?' they say '8'...
I can see ways of deriving 4, 19, and 3. The base must be at least
octal, and the items must be at least 4 bits wide. However I see
no way of arriving at 8.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Aug 25 '07 #47
Richard Heathfield wrote:
>
.... snip ...
>
Careful. You might get the same pointer value twice from malloc:

p = malloc(n * sizeof *p);
store_object_representation(pobjrep, &p);
free(p);
After which dereferencing p is undefined behaviour.
q = malloc(n * sizeof *q);
store_object_representation(qobjrep, &q);
d = compare_object_representation(pobjrep, qobjrep);

d might well be 0. Nevertheless, it is still correct to pass q's value
to free() when you're done with it.
So what? Who cares if q == oldvalueof(p). You seem to have
something mixed up.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Aug 25 '07 #48
Flash Gordon wrote:
cr88192 wrote, On 23/08/07 16:09:
>"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
.... snip ...
>
>>Whatever the reason for you learning to use such tricks it is
well past time you learned not to use them excpet where it is
proved that you need to.

'tricks'?...

Yes, it is a trick.
>shifts are a basic operator, I don't see why they would be
viewed as any kind of trick...

The shift operator is a basic operator for moving bits, using it
to divide is a trick and one that does not work in all situations.
Actually, the shift operator is defined in terms of division or
multiplication, IIRC. This is just one more reason it should not
be used on negative values. Use of unsigned is safe.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Aug 25 '07 #49
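
(For reference, the definition being recalled: for unsigned E1, and for signed E1 with a nonnegative value, C defines E1 >> E2 as the integral part of E1 divided by 2 raised to E2, which is why the unsigned case is safe. A trivial check:)

#include <stdio.h>

int main(void)
{
    unsigned int u = 40000u;
    unsigned int s;

    /* for unsigned operands, u >> s equals the integral part of
       u / 2^s, so the two columns always agree */
    for (s = 0; s < 8; s++) {
        unsigned int pow2 = 1u << s;
        printf("u >> %u = %5u    u / %3u = %5u\n",
               s, u >> s, pow2, u / pow2);
    }
    return 0;
}
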
CBFalconer said:
Richard Heathfield wrote:
>>
... snip ...
>>
Careful. You might get the same pointer value twice from malloc:

p = malloc(n * sizeof *p);
store_object_representation(pobjrep, &p);
free(p);

After which dereferencing p is undefined behaviour.
Indeed. So what? The object representation is safely stored in a
separate object, so p is irrelevant now. The *value* is safe.
>
> q = malloc(n * sizeof *q);
store_object_representation(qobjrep, &q);
d = compare_object_representation(pobjrep, qobjrep);

d might well be 0. Nevertheless, it is still correct to pass q's
value to free() when you're done with it.

So what? Who cares if q == oldvalueof(p).
I suggest you review the discussion again.
You seem to have something mixed up.
For a certain value of "You", perhaps. :-)

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Aug 25 '07 #50
