Bytes | Software Development & Data Engineering Community

Making Fatal Hidden Assumptions

We often find hidden, and totally unnecessary, assumptions being
made in code. The following leans heavily on one particular
example, which happens to be in C. However similar things can (and
do) occur in any language.

These assumptions are generally made because of familiarity with
the language. As a non-code example, consider the idea that the
faulty code is written by blackguards bent on fouling the
language. The term blackguards is not in favor these days, and for
good reason. However, the older you are, the more likely you are
to have used it since childhood, and to use it again, barring
specific thought on the subject. The same type of thing applies to
writing code.

I hope, with this little monograph, to encourage people to examine
some hidden assumptions they are making in their code. As ever, in
dealing with C, the reference standard is the ISO C standard.
Versions can be found in text and PDF format by searching for N869
and N1124. [1] The latter does not have a text version, but is
more up-to-date.

We will always have innocent-appearing code with these kinds of
assumptions built in. However, it would be wise to annotate such
code to make the assumptions explicit, which can avoid a great deal
of agony when the code is reused under other systems.

In the following example, the code is as downloaded from the
referenced URL, and the comments are entirely mine, including the
'every 5' line-number references.

/* Making fatal hidden assumptions */
/* Paul Hsieh's version of strlen.
http://www.azillionmonkeys.com/qed/asmexample.html

Some sneaky hidden assumptions here:
1. p = s - 1 is valid. Not guaranteed. Careless coding.
2. cast (int) p is meaningful. Not guaranteed.
3. Use of 2's complement arithmetic.
4. ints have no trap representations or hidden bits.
5. 4 == sizeof(int) && 8 == CHAR_BIT.
6. size_t is actually int.
7. sizeof(int) is a power of 2.
8. int alignment depends on a zeroed bit field.

Since strlen is normally supplied by the system, the system
designer can guarantee all but item 1. Otherwise this is
not portable. Item 1 can probably be beaten by suitable
code reorganization to avoid the initial p = s - 1. This
is a serious bug which, for example, can cause segfaults
on many systems. It is most likely to foul when (int)s
has the value 0, and is meaningful.

He fails to make the valid assumption: 1 == sizeof(char).
*/

#define hasNulByte(x) ((x - 0x01010101) & ~x & 0x80808080)
#define SW (sizeof (int) / sizeof (char))

int xstrlen (const char *s) {
   const char *p;                       /* 5 */
   int d;

   p = s - 1;
   do {
      p++;                              /* 10 */
      if ((((int) p) & (SW - 1)) == 0) {
         do {
            d = *((int *) p);
            p += SW;
         } while (!hasNulByte (d));     /* 15 */
         p -= SW;
      }
   } while (*p != 0);
   return p - s;
} /* 20 */

Let us start with line 1! The constants appear to require that
sizeof(int) be 4, and that CHAR_BIT be precisely 8. I haven't
really looked too closely, and it is possible that the ~x term
allows for larger sizeof(int), but nothing allows for larger
CHAR_BIT. A further hidden assumption is that there are no trap
values in the representation of an int. Its functioning is
doubtful when sizeof(int) is less than 4. At the least it will
force promotion to long, which will seriously affect the speed.

This is an ingenious and speedy way of detecting a zero byte within
an int, provided the preconditions are met. There is nothing wrong
with it, PROVIDED we know when it is valid.

In line 2 we have the confusing use of sizeof(char), which is 1 by
definition. This just serves to obscure the fact that SW is
actually sizeof(int) later. No hidden assumptions have been made
here, but the usage helps to conceal later assumptions.

Line 4. Since this is intended to replace the system's strlen()
function, it would seem advantageous to use the appropriate
signature for the function. In particular strlen returns a size_t,
not an int. size_t is always unsigned.

In line 8 we come to a biggie. The standard specifically does not
guarantee the action of a pointer below an object. The only real
purpose of this statement is to compensate for the initial
increment in line 10. This can be avoided by rearrangement of the
code, which will then let the routine function where the
assumptions are valid. This is the only real error in the code
that I see.

In line 11 we have several hidden assumptions. The first is that
the cast of a pointer to an int is valid. This is never
guaranteed. A pointer can be much larger than an int, and may have
all sorts of non-integer-like information embedded, such as a segment
id. If sizeof(int) is less than 4 the validity of this is even
less likely.

Then we come to the purpose of the statement, which is to discover
if the pointer is suitably aligned for an int. It does this by
bit-anding with SW-1, which is the concealed sizeof(int)-1. This
won't be very useful if sizeof(int) is, say, 3 or any other
non-power-of-two. In addition, it assumes that an aligned pointer
will have those bits zero. While this last is very likely in
today's systems, it is still an assumption. The system designer is
entitled to assume this, but user code is not.

Line 13 again uses the unwarranted cast of a pointer to an int.
This enables the use of the already suspicious macro hasNulByte in
line 15.

If all these assumptions are correct, line 19 finally calculates a
pointer difference (which is valid, and of the signed type ptrdiff_t,
but here will always fit into a size_t). It then does a concealed
conversion of this into an int, which causes implementation-defined
behaviour if the value exceeds what will fit into an int.
This one is also unnecessary, since it is trivial to define the
return type as size_t and guarantee success.

I haven't even mentioned the assumption of 2's complement
arithmetic, which I believe to be embedded in the hasNulByte
macro. I haven't bothered to think this out.

Would you believe that so many hidden assumptions can be embedded
in such innocent looking code? The sneaky thing is that the code
appears trivially correct at first glance. This is the stuff that
Heisenbugs are made of. Yet use of such code is fairly safe if we
are aware of those hidden assumptions.

I have cross-posted this without setting follow-ups, because I
believe that discussion will be valid in all the newsgroups posted.

[1] The draft C standards can be found at:
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/>

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>

Mar 6 '06
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 08 Mar 2006 00:56:03 +0000, Robin Haigh wrote:
It's not always equivalent. The trouble starts with

char a[8];
char *p;

for ( p = a+1 ; p < a+8 ; p += 2 ) {}

intending that the loop terminates on p == a+9 (since it skips a+8). But
how do we know that a+9 > a+8 ? If the array is right at the top of some
kind of segment, the arithmetic might have wrapped round.


a+9 > a+8 because a + 9 - (a + 8) == 1, which is > 0. Doesn't matter if
the signed or unsigned pointer value wrapped around in an intermediate
term. On many machines that's how the comparison is done anyway. You're
suggesting that having the compiler ensure that a+8 doesn't wrap around
wrt a is OK, but a+9 is too hard. I don't buy it.


How would you guarantee that a+(i+1) > a+i for all arbitrary values of
i? It's easy enough to do this when the addition doesn't go beyond
the end of the array (plus the case where it points just past the end
of the array), but if you want to support arbitrary arithmetic beyond
the array bounds, it's going to take some extra work, all for the sake
of guaranteeing properties that have *never* been guaranteed by the C
language. (Don't confuse what happens to have always worked for you
with what's actually guaranteed by the language itself.)

[...]
Ints have the nice property that 0 is in the middle and we know how much
headroom we've got either side. So it's easy for the compiler to make
the int version work (leaving it to the programmer to take
responsibility for avoiding overflow, which is no big deal).


Unsigned ints have the nice property that (a + 1) - 1 == a for all a, even
if a + 1 == 0. Overflow is generally no big deal in any case. (Other
than the object larger than half the address space issue.)


But unsigned ints *don't* have the property that a + 1 > a for all a.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 8 '06 #51
Ben Pfaff wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
So the standards body broke decades of practice and perfectly safe
and reasonable code to support a *hypothetical* implementation
that was so stupid that it checked pointer values, rather than
pointer *use*?


You can still write your code to make whatever assumptions you
like. You just can't assume that it will work portably. If,
for example, you are writing code for a particular embedded
architecture with a given compiler, then it may be reasonable
to make assumptions beyond those granted by the standard.


Which was the point of my little article in the first place. If,
for one reason or another, you need to make assumptions, document
what they are. To do this you first need to be able to recognize
those assumptions. You may easily find you could avoid making the
assumption in the first place without penalty, as in this
particular "p = s - 1;" case.

Why is everybody concentrating on the one real error in the example
code, and not on the hidden assumptions? Errors are correctable,
assumptions need to be recognized.

Mar 8 '06 #52
msg
OK, I've made enough of a fool of myself already. I'll go and have that
second cup of coffee for the morning, before I start going on about having
the standard support non-2's complement integers, or machines that have no
arithmetic right shifts...


I get queasy reading the rants against 1's complement architectures; I
wish Seymour Cray were still around to address this.

Michael Grigoni
Cybertheque Museum
Mar 8 '06 #53
Andrew Reilly schrieb:
On Tue, 07 Mar 2006 13:28:37 -0500, Arthur J. O'Dwyer wrote:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist. So
we have undefined behavior.


Only because the standard says so. Didn't have to be that way. There are
plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of iterating
through an array at a non-unit stride, using the normal p < s + N
termination condition. The loop finishes with p > s + N and the standard
says "pow, you're dead", when the semantically identical code written with
integer indexes has no legal problems.)


I have encountered situations where
free(p);
....
if (p == q)
leads to the platform's equivalent of the much beloved
"segmentation fault". Your theory means that this should
have worked. Assigning NULL or a valid address to p after
freeing avoids the error.

<OT>Incidentally, in gnu.gcc.help there is a discussion about
much the same situation in C++ where someone gets in trouble
for delete a; .... if (a == b) ...
Happens only for multiple inheritance and only for gcc.
Thread starts at <Eg0Pf.110735$sa3.34921@pd7tw1no>
</OT>

[snip!]

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Mar 8 '06 #54
msg

Since most coding for AS/400s was (is still?) done in COBOL and PL/1
Most coding was in flavors of RPG.

since AS/400s are actually Power processors with a JIT over the top now


There is a large installed base of CISC (non-PowerPC) AS/400s; they use
a flat memory model reminiscent of the Multics design.

Michael Grigoni
Cybertheque Museum

Mar 8 '06 #55
On Wed, 08 Mar 2006 03:33:10 +0000, Keith Thompson wrote:
But unsigned ints *don't* have the property that a + 1 > a for all a.


My last comment on the thread, hopefully:

No, they don't, but when you're doing operations on pointer derivations
that are all in some sense "within the same object", even if hanging
outside it, (i.e., by dint of being created by adding integers to a single
initial pointer), then the loop termination condition is, in a very real
sense, a ptrdiff_t, and *should* be computed that way. The difference can
be both positive and negative.

The unsigned comparison a + n > a fails for some values of a, but the
ptrdiff_t (signed) comparison a + n - a > 0 is indeed true for all a and
n > 0, so that's what should be used. And it *is* what is used on most
processors that do comparison by subtraction (even if that's wrapped in a
non-destructive cmp).

I actually agree completely with the piece of K&R that you posted a few
posts ago, where it was pointed out that pointer arithmetic only makes
sense for pointers within the same object (array). Since C doesn't tell
you that the pointer that your function has been given isn't somewhere in
the middle of a real array, my aesthetic sense is that conceptually,
arrays (as pointers, within functions) extend infinitely (or at least to
the range of int) in *both* directions, as far as pointer arithmetic
within a function is concerned. Actually accessing values outside of the
bounds of the real array that has been allocated somewhere obviously
contravenes the "same object" doctrine, and it's up to the logic of the
caller and the callee to avoid that.

Now it has been amply explained that my conception of how pointer
arithmetic ought to work is not the way the standard describes, even
though it *is* the way I have experienced it in all of the C
implementations that it has obviously been my good fortune to encounter.
I consider that to be a pity, and obviously some of my code wouldn't
survive a translation to a Burroughs or AS/400 machine (or perhaps even to
some operating environments on 80286es). Oh, well. I can live with that.
It's not really the sort of code that I'd expect to find there, and I
don't expect to encounter such constraints in the future, but I *will* be
more careful, and will keep my eyes more open.

Thanks to all,

--
Andrew

Mar 8 '06 #56
Randy Howard wrote:
Andrew Reilly wrote
On Tue, 07 Mar 2006 15:59:37 -0700, Al Balmer wrote:

An implementation might choose, for valid reasons, to prefetch
the data that pointer is pointing to. If it's in a segment not
allocated ...


Hypothetical hardware that traps on *speculative* loads isn't
broken by design? I'd love to see the initialization sequences,
or the task switching code that has to make sure that all pointer
values are valid before they're loaded. No, scratch that. I've
got better things to do.


This is a lot of whining about a specific problem that can
easily be remedied just by changing the loop construction. The
whole debate is pretty pointless in that context, unless you
have some religious reason to insist upon the method in the
original.


Er, didn't I point that fix out in the original article? That was
the only error in the original sample code, all other problems can
be tied to assumptions, which may be valid on any given piece of
machinery. The point is to avoid making such assumptions, which
requires recognizing their existence in the first place.

Mar 8 '06 #57
On Tue, 07 Mar 2006 19:10:10 -0500, CBFalconer wrote:
Why is everybody concentrating on the one real error in the example code,
and not on the hidden assumptions? Errors are correctable, assumptions
need to be recognized.


Well I, for one, commented on the hidden assumption that must be made for
what you call "the one real error" to actually be an error -- but it was
not recognised! ;-)

[The top of your original post did not, in fact, claim this was an
error, but you call it a "real error" later on.]

I feel that your points would have been better made using other examples.
The context of the code made me read the C as little more than pseudo-code
with the added advantage that a C compiler might, with a following wind,
produce something like the assembler version (which in turn has its own
assumptions but you were not talking about that).

I found Eric Sosman's "if (buffer + space_required > buffer_end) ..."
example more convincing, because I have seen that in programs that are
intended to be portable -- I am pretty sure I have written such things
myself in my younger days. Have you other more general examples of
dangerous assumptions that can sneak into code? A list of the "top 10
things you might be assuming" would be very interesting.

--
Ben.
Mar 8 '06 #58
CBFalconer wrote
(in article <44***************@yahoo.com>):
Randy Howard wrote:
This is a lot of whining about a specific problem that can
easily be remedied just by changing the loop construction. The
whole debate is pretty pointless in that context, unless you
have some religious reason to insist upon the method in the
original.


Er, didn't I point that fix out in the original article?


Yes, which is precisely why I'm surprised at the ensuing debate
over the original version, as my comments should reflect.

--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Mar 8 '06 #59
Rod Pemberton wrote
(in article <du***********@news3.infoave.net>):

"Gerry Quinn" <ge****@DELETETHISindigo.ie> wrote in message
news:MP************************@news1.eircom.net...
In article <44***************@yahoo.com>, cb********@yahoo.com says...
These assumptions are generally made because of familiarity with
the language. As a non-code example, consider the idea that the
faulty code is written by blackguards bent on fouling the
language. The term blackguards is not in favor these days, and for
good reason.


About as good a reason as the term niggardly, as far as I can tell.
Perhaps the words are appropriate in a post relating to fatal
assumptions.


I didn't know what he meant either. Not being racist (at least I hope not),
I went GIYF'ing. I think it might be a reference to some Dungeons and
Dragons persona or something. Unfortunately, he'd need to clarify...


Oh come on. Doesn't anyone own a dictionary anymore, or have a
vocabulary which isn't found solely on digg, slashdot or MTV?

blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.
Does everything have to become a racism experiment?

--
Randy Howard (2reply remove FOOBAR)

Mar 8 '06 #60
On 2006-03-08, Randy Howard <ra*********@FOOverizonBAR.net> wrote:
blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.
Sure, that's what it _means_, but...
Does everything have to become a racism experiment?


the question is one of etymology.
Mar 8 '06 #61
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 08 Mar 2006 03:33:10 +0000, Keith Thompson wrote:
But unsigned ints *don't* have the property that a + 1 > a for all a.


My last comment on the thread, hopefully:

No, they don't, but when you're doing operations on pointer derivations
that are all in some sense "within the same object", even if hanging
outside it, (i.e., by dint of being created by adding integers to a single
initial pointer), then the loop termination condition is, in a very real
sense, a ptrdiff_t, and *should* be computed that way. The difference can
be both positive and negative.


Um, I always thought that "within" and "outside" were two different
things.

--
Keith Thompson (The_Other_Keith) ks***@mib.org
Mar 8 '06 #62
Arthur J. O'Dwyer wrote:
If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist.
So we have undefined behavior.
The expression pa+1 is similar, but with one special case. If pa
points to the last element in the array, you might expect that pa+1
would be undefined; but actually the C standard specifically allows
you to evaluate pa+1 in that case. Dereferencing that pointer, or
incrementing it /again/, however, invokes undefined behavior.

This is pure theology. The simple fact is that you can't GUARANTEE that
p++, or p--, or for that matter p itself, points to anything in
particular, unless you know something about p. And if you know about p,
you are OK. What's your problem?

Paul Burke
Mar 8 '06 #63
On Wed, 08 Mar 2006 09:17:39 +0000, Keith Thompson wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 08 Mar 2006 03:33:10 +0000, Keith Thompson wrote:
But unsigned ints *don't* have the property that a + 1 > a for all a.


My last comment on the thread, hopefully:

No, they don't, but when you're doing operations on pointer derivations
that are all in some sense "within the same object", even if hanging
outside it, (i.e., by dint of being created by adding integers to a single
initial pointer), then the loop termination condition is, in a very real
sense, a ptrdiff_t, and *should* be computed that way. The difference can
be both positive and negative.


Um, I always thought that "within" and "outside" were two different
things.


Surely the camel's nose is already through the gate, on that one, with the
explicit allowance of "one element after"? How does that fit with all of
the conniptions expressed here about things that fall over dead if a
pointer even looks at an address that isn't part of the object? One out,
all out.

--
Andrew

Mar 8 '06 #64
Ben Bacarisse <be********@bsb.me.uk> wrote:
On Tue, 07 Mar 2006 10:31:09 +0000, pete wrote:
David Brown wrote:

CBFalconer wrote:

> http://www.azillionmonkeys.com/qed/asmexample.html
>
> Some sneaky hidden assumptions here:
> 1. p = s - 1 is valid. Not guaranteed. Careless coding.

Not guaranteed in what way? You are not guaranteed that p will be a
valid pointer, but you don't require it to be a valid pointer - all that
is required is that "p = s - 1" followed by "p++" leaves p equal to s.
I'm not good enough at the laws of C to tell you if this is valid,


Merely subtracting 1 from s renders the entire code undefined. You're
"off the map" as far as the laws of C are concerned. On comp.lang.c,
we're mostly interested in what the laws of C *do* say is guaranteed to
work.


It seems to me ironic that, in a discussion about hidden assumptions, the
truth of this remark requires a hidden assumption about how the function
is called. Unless I am missing something big, p = s - 1 is fine unless s
points to the first element of an array (or worse)[1].


It's an implementation of strlen(). One must expect it to be called with
any pointer to a valid string - and those are usually pointers to the
first byte of a memory block.

Richard
Mar 8 '06 #65
Andrew Reilly wrote:
On Wed, 08 Mar 2006 09:17:39 +0000, Keith Thompson wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 08 Mar 2006 03:33:10 +0000, Keith Thompson wrote:
But unsigned ints *don't* have the property that a + 1 > a for all a.
My last comment on the thread, hopefully:

No, they don't, but when you're doing operations on pointer derivations
that are all in some sense "within the same object", even if hanging
outside it, (i.e., by dint of being created by adding integers to a single
initial pointer), then the loop termination condition is, in a very real
sense, a ptrdiff_t, and *should* be computed that way. The difference can
be both positive and negative.

Um, I always thought that "within" and "outside" were two different
things.


Surely the camel's nose is already through the gate, on that one, with the
explicit allowance of "one element after"? How does that fit with all of
the conniptions expressed here about things that fall over dead if a
pointer even looks at an address that isn't part of the object? One out,
all out.


As previously stated, that only requires using one extra byte or, in the
worst case of a HW word pointer, one extra word.
--
Flash Gordon, living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro:
http://clc-wiki.net/wiki/Intro_to_clc
Mar 8 '06 #66
In article <00*****************************@news.verizon.net>,
ra*********@FOOverizonBAR.net says...
Rod Pemberton wrote
(in article <du***********@news3.infoave.net>):
"Gerry Quinn" <ge****@DELETETHISindigo.ie> wrote in message
news:MP************************@news1.eircom.net...
In article <44***************@yahoo.com>, cb********@yahoo.com says...

These assumptions are generally made because of familiarity with
the language. As a non-code example, consider the idea that the
faulty code is written by blackguards bent on fouling the
language. The term blackguards is not in favor these days, and for
good reason.

About as good a reason as the term niggardly, as far as I can tell.
Perhaps the words are appropriate in a post relating to fatal
assumptions.
I didn't know what he meant either. Not being racist (at least I hope not),
I went GIYF'ing. I think it might be a reference to some Dungeons and
Dragons persona or something. Unfortunately, he'd need to clarify...


Oh come on. Doesn't anyone own a dictionary anymore, or have a
vocabulary which isn't found solely on digg, slashdot or MTV?

blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.

Yes - and what 'good reason' is there for not using the term?

Does everything have to become a racism experiment?

That was my point - the expression like many has no clear etymology,
but there doesn't seem to have been any racial connection. Even if
there had been, I'm not sure this is a strong reason for not using it
(there's got to be a statute of limitations somewhere), but at least it
would be some sort of rationale.

Of course there are those who object to every figure in which the
adjective 'black' has negative connotations.

- Gerry Quinn
Mar 8 '06 #67
Paul Burke <pa**@scazon.com> writes:
Arthur J. O'Dwyer wrote:
If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't
exist.
So we have undefined behavior.
The expression pa+1 is similar, but with one special case. If pa
points to the last element in the array, you might expect that pa+1
would be undefined; but actually the C standard specifically allows
you to evaluate pa+1 in that case. Dereferencing that pointer, or
incrementing it /again/, however, invokes undefined behavior.


This is pure theology. The simple fact is that you can't GUARANTEE
that p++, or p--, or for that matter p itself, points to anything in
particular, unless you know something about p. And if you know about
p, you are OK. What's your problem?


Are you quite sure that you know what the word "theology" means?

What Arthur wrote above is entirely correct. (Remember that undefined
behavior includes the possibility, but not the guarantee, of the code
doing exactly what you expect it to do, whatever that might be.)

What's your problem?

--
Keith Thompson (The_Other_Keith) ks***@mib.org
Mar 8 '06 #68
Jordan Abel said:
On 2006-03-08, Randy Howard <ra*********@FOOverizonBAR.net> wrote:
blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.


Sure, that's what it _means_, but...
Does everything have to become a racism experiment?


the question is one of etymology.


OE blaec, and OFr garter (the latter from OHGer warten; OE weardian)

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Mar 8 '06 #69
Keith Thompson said:
Um, I always thought that "within" and "outside" were two different
things.


Ask Jack to lend you his bottle. You'll soon change your mind.

--
Richard Heathfield
Mar 8 '06 #70

"Gerry Quinn" <ge****@DELETETHISindigo.ie> wrote in message
news:MP***********************@news1.eircom.net...
In article <00*****************************@news.verizon.net>,
ra*********@FOOverizonBAR.net says...
Rod Pemberton wrote
(in article <du***********@news3.infoave.net>):
"Gerry Quinn" <ge****@DELETETHISindigo.ie> wrote in message
news:MP************************@news1.eircom.net...
> In article <44***************@yahoo.com>, cb********@yahoo.com says...>
>> These assumptions are generally made because of familiarity with
>> the language. As a non-code example, consider the idea that the
>> faulty code is written by blackguards bent on fouling the
>> language. The term blackguards is not in favor these days, and for
>> good reason.
>
> About as good a reason as the term niggardly, as far as I can tell.
> Perhaps the words are appropriate in a post relating to fatal
> assumptions.

I didn't know what he meant either. Not being racist (at least I hope not), I went GIYF'ing. I think it might be a reference to some Dungeons and
Dragons persona or something. Unfortunately, he'd need to clarify...


Oh come on. Doesn't anyone own a dictionary anymore, or have a
vocabulary which isn't found solely on digg, slashdot or MTV?

blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.


Yes - and what 'good reason' is there for not using the term?
Does everything have to become a racism experiment?


Of course there are those who object to every figure in which the
adjective 'black' has negative connotations.


True. Mostly black people. This is a _true_ story. A number of years ago,
I was hired by a company which had a large number of black employees. I was
in the smallest minority as a white person. On the second day, I realized
there were no pens or pencils in the desk. Each floor in the building had
its own "supply manager." So, I walked over to the supply manager, a black
female, and asked for a blue pen. She politely said she didn't have a blue
pen. So, I asked for a black pen. To which she stood up and yelled at the
top of her lungs: "WHY DO YOU WANT A FUCKING BLACK PEN? DON'T YOU WANT A
GODDAMN WHITE PEN?" Of course, I was in shock, and a bit stunned, since it
seemed that I had just been set up. The entire floor of predominantly
black people were standing up and staring at me and I didn't see them
laughing. So, I politely replied: "Yes, as long as you have some black
paper." She grunted, looked away, and angrily handed me a blue pen... Of
course, the word "black" was never used there for anything even if it was
explicitly black.
Rod Pemberton
Mar 8 '06 #71
On 8 Mar 2006 08:08:48 GMT, Jordan Abel <ra*******@gmail.com> wrote:
On 2006-03-08, Randy Howard <ra*********@FOOverizonBAR.net> wrote:
blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.


Sure, that's what it _means_, but...
Does everything have to become a racism experiment?


the question is one of etymology.


?
What do you imagine the etymology to be?

--
Al Balmer
Sun City, AZ
Mar 8 '06 #72

Al Balmer wrote:
On 8 Mar 2006 08:08:48 GMT, Jordan Abel <ra*******@gmail.com> wrote:
On 2006-03-08, Randy Howard <ra*********@FOOverizonBAR.net> wrote:
blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.


Sure, that's what it _means_, but...
Does everything have to become a racism experiment?


the question is one of etymology.


?
What do you imagine the etymology to be?


FWIW, from <http://www.wordorigins.org/wordorb.htm>:

====
Blackguard

The exact etymology of this term for a villain is a bit uncertain. What
is known is that it is literally from black guard; it is English in
origin; and it dates to at least 1532.

The two earliest senses (it is impossible to tell which one came first)
are:

* the lowest servants in a household (often those in charge of the
scullery), or the servants and camp followers of an army.
* attendants or guards, either dressed in black, of low character,
or attending a criminal.

The OED2 doesn't dismiss the possibility that there may literally have
been a company of soldiers at Westminster called the Black Guard, but
no direct evidence of this exists.

The earliest known citation (1532) uses the term blake garde to refer
to torch bearers at a funeral. A 1535 cite refers to the Black Guard of
the King's kitchen, a scullery reference. The second sense of a guard
of attendants appears in 1563 in reference to a retinue of Dominican
friars--who would be in black robes.

The sense of the vagabond or criminal class doesn't appear until the
1680s. And the modern sense of a scoundrel dates to the 1730s.
====

Nothing racist there...

--
BR, Vladimir

Mar 8 '06 #73
CBFalconer wrote:
We often find hidden, and totally unnecessary, assumptions being
made in code. The following leans heavily on one particular
example, which happens to be in C. However similar things can (and
do) occur in any language.

These assumptions are generally made because of familiarity with
the language. As a non-code example, consider the idea that the
faulty code is written by blackguards bent on fouling the
language. The term blackguards is not in favor these days, and for
good reason. However, the older you are, the more likely you are
to have used it since childhood, and to use it again, barring
specific thought on the subject. The same type of thing applies to
writing code.

I hope, with this little monograph, to encourage people to examine
some hidden assumptions they are making in their code. As ever, in
dealing with C, the reference standard is the ISO C standard.
Versions can be found in text and pdf format, by searching for N869
and N1124. [1] The latter does not have a text version, but is
more up-to-date.

We will always have innocent appearing code with these kinds of
assumptions built-in. However it would be wise to annotate such
code to make the assumptions explicit, which can avoid a great deal
of agony when the code is reused under other systems.

In the following example, the code is as downloaded from the
referenced URL, and the comments are entirely mine, including the
'every 5' linenumber references.

/* Making fatal hidden assumptions */
/* Paul Hsieh's version of strlen.
http://www.azillionmonkeys.com/qed/asmexample.html

Some sneaky hidden assumptions here:
1. p = s - 1 is valid. Not guaranteed. Careless coding.
2. cast (int) p is meaningful. Not guaranteed.
3. Use of 2's complement arithmetic.
4. ints have no trap representations or hidden bits.
5. 4 == sizeof(int) && 8 == CHAR_BIT.
6. size_t is actually int.
7. sizeof(int) is a power of 2.
8. int alignment depends on a zeroed bit field.

Since strlen is normally supplied by the system, the system
designer can guarantee all but item 1. Otherwise this is
not portable. Item 1 can probably be beaten by suitable
code reorganization to avoid the initial p = s - 1. This
is a serious bug which, for example, can cause segfaults
on many systems. It is most likely to foul when (int)s
has the value 0, and is meaningful.

He fails to make the valid assumption: 1 == sizeof(char).
*/

#define hasNulByte(x) ((x - 0x01010101) & ~x & 0x80808080)
#define SW (sizeof (int) / sizeof (char))

int xstrlen (const char *s) {
   const char *p;                      /* 5 */
   int d;

   p = s - 1;
   do {
      p++;                             /* 10 */
      if ((((int) p) & (SW - 1)) == 0) {
         do {
            d = *((int *) p);
            p += SW;
         } while (!hasNulByte (d));    /* 15 */
         p -= SW;
      }
   } while (*p != 0);
   return p - s;
}                                      /* 20 */

Let us start with line 1! The constants appear to require that
sizeof(int) be 4, and that CHAR_BIT be precisely 8. I haven't
really looked too closely, and it is possible that the ~x term
allows for larger sizeof(int), but nothing allows for larger
CHAR_BIT. A further hidden assumption is that there are no trap
values in the representation of an int. Its functioning is
doubtful when sizeof(int) is less than 4. At the least it will
force promotion to long, which will seriously affect the speed.

This is an ingenious and speedy way of detecting a zero byte within
an int, provided the preconditions are met. There is nothing wrong
with it, PROVIDED we know when it is valid.

In line 2 we have the confusing use of sizeof(char), which is 1 by
definition. This just serves to obscure the fact that SW is
actually sizeof(int) later. No hidden assumptions have been made
here, but the usage helps to conceal later assumptions.

Line 4. Since this is intended to replace the system's strlen()
function, it would seem advantageous to use the appropriate
signature for the function. In particular strlen returns a size_t,
not an int. size_t is always unsigned.

In line 8 we come to a biggie. The standard specifically does not
guarantee the action of a pointer below an object. The only real
purpose of this statement is to compensate for the initial
increment in line 10. This can be avoided by rearrangement of the
code, which will then let the routine function where the
assumptions are valid. This is the only real error in the code
that I see.

In line 11 we have several hidden assumptions. The first is that
the cast of a pointer to an int is valid. This is never
guaranteed. A pointer can be much larger than an int, and may have
all sorts of non-integer like information embedded, such as segment
id. If sizeof(int) is less than 4 the validity of this is even
less likely.

Then we come to the purpose of the statement, which is to discover
if the pointer is suitably aligned for an int. It does this by
bit-anding with SW-1, which is the concealed sizeof(int)-1. This
won't be very useful if sizeof(int) is, say, 3 or any other
non-power-of-two. In addition, it assumes that an aligned pointer
will have those bits zero. While this last is very likely in
today's systems, it is still an assumption. The system designer is
entitled to assume this, but user code is not.

Line 13 again uses the unwarranted cast of a pointer to an int.
This enables the use of the already suspicious macro hasNulByte in
line 15.

If all these assumptions are correct, line 19 finally calculates a
pointer difference (which is valid, and of type ptrdiff_t, but
here will always fit into a size_t). It then does a concealed cast
of this into an int, which could cause undefined or implementation
defined behaviour if the value exceeds what will fit into an int.
This one is also unnecessary, since it is trivial to define the
return type as size_t and guarantee success.

I haven't even mentioned the assumption of 2's complement
arithmetic, which I believe to be embedded in the hasNulByte
macro. I haven't bothered to think this out.

Would you believe that so many hidden assumptions can be embedded
in such innocent looking code? The sneaky thing is that the code
appears trivially correct at first glance. This is the stuff that
Heisenbugs are made of. Yet use of such code is fairly safe if we
are aware of those hidden assumptions.


I guess I will have to keep all this in mind the next time I copy C
code off of a web page devoted to x86 assembly hacks and try to
get it to run on a machine with 24-bit ones-complement integers.

;-)

- C
Mar 8 '06 #74
On 8 Mar 2006 08:52:13 -0800, "Vladimir S. Oka"
<no****@btopenworld.com> wrote:

Al Balmer wrote:
On 8 Mar 2006 08:08:48 GMT, Jordan Abel <ra*******@gmail.com> wrote:
>On 2006-03-08, Randy Howard <ra*********@FOOverizonBAR.net> wrote:
>> blackguard:
>> A thoroughly unprincipled person; a scoundrel.
>> A foul-mouthed person.
>
>Sure, that's what it _means_, but...
>
>> Does everything have to become a racism experiment?
>
>the question is one of etymology.


?
What do you imagine the etymology to be?


FWIW, from <http://www.wordorigins.org/wordorb.htm>:


Actually, I wasn't asking that. I wondered what Jordan was imagining
it to be.

--
Al Balmer
Sun City, AZ
Mar 8 '06 #75
Al Balmer wrote:
On 8 Mar 2006 08:52:13 -0800, "Vladimir S. Oka"
<no****@btopenworld.com> wrote:

Al Balmer wrote:
On 8 Mar 2006 08:08:48 GMT, Jordan Abel <ra*******@gmail.com> wrote:

>On 2006-03-08, Randy Howard <ra*********@FOOverizonBAR.net> wrote:
>> blackguard:
>> A thoroughly unprincipled person; a scoundrel.
>> A foul-mouthed person.
>
>Sure, that's what it _means_, but...
>
>> Does everything have to become a racism experiment?
>
>the question is one of etymology.

?
What do you imagine the etymology to be?


FWIW, from <http://www.wordorigins.org/wordorb.htm>:


Actually, I wasn't asking that. I wondered what Jordan was imagining
it to be.


Ah, sorry. I didn't read the lot carefully enough.

--
BR, Vladimir

There was a young lady named Mandel
Who caused quite a neighborhood scandal
By coming out bare
On the main village square
And frigging herself with a candle.

Mar 8 '06 #76
James Dow Allen wrote:
Mr. Hsieh immediately does p++ and his code will be correct if then
p == s. I don't question Chuck's argument, or whether the C standard
allows the C compiler to trash the hard disk when it sees p=s-1,
but I'm sincerely curious whether anyone knows of an *actual*
environment
where p == s will ever be false after (p = s-1; p++).
...


There are actual environments where 's - 1' alone is enough to cause a
crash. In fact, any non-flat memory model environment (i.e. environment
with 'segment:offset' pointers) would be a good candidate. The modern
x86 will normally crash, unless the implementation takes specific steps
to avoid it.

--
Best regards,
Andrey Tarasevich
Mar 8 '06 #77
CBFalconer wrote:
...
int xstrlen (const char *s) {
   const char *p;                      /* 5 */
   int d;

   p = s - 1;
   do {
      p++;                             /* 10 */
      if ((((int) p) & (SW - 1)) == 0) {
         do {
            d = *((int *) p);
            p += SW;
         } while (!hasNulByte (d));    /* 15 */
         p -= SW;
      }
   } while (*p != 0);
   return p - s;
}                                      /* 20 */
...
Line 13 again uses the unwarranted cast of a pointer to an int.
This enables the use of the already suspicious macro hasNulByte in
line 15.
...


This is not exactly correct. Line 13 uses a cast of a 'char*' pointer to
an 'int*' pointer, not to an 'int'. This is relatively OK, especially
compared to the "less predictable" pointer->int casts.

After that the char array memory pointed to by the resultant 'int*' pointer
is reinterpreted as an 'int' object. The validity of this is covered by
the previous assumptions.

--
Best regards,
Andrey Tarasevich
Mar 8 '06 #78
Andrew Reilly wrote:
...
Bah, humbug. Think I'll go back to assembly language, where pointers do
what you tell them to, and don't complain about it to their lawyers.
...


Incorrect. It is not about "lawyers", it is about actual _crashes_. The
reason why 's - 1' itself can (and will) crash on certain platforms is
the same as the one that will make it crash in exactly the same way in
"assembly language" on such platforms.

Trying to implement the same code in assembly language on such a
platform would specifically force you to work around the potential
crash, sacrificing efficiency for safety. In other words, you'd be
forced to use different techniques for doing 's - 1' in contexts where
it might underflow and in contexts where it definitely will not underflow.

C language, on the other hand, doesn't offer two different '-' operators
for these two specific situations. Instead C language outlaws (in
essence) pointer underflows. This is a perfectly reasonable approach for
a higher level language.

--
Best regards,
Andrey Tarasevich
Mar 8 '06 #79
On 2006-03-07 13:11:24 -0500, "Peter Harrison" <pe***@cannock.ac.uk> said:

"Ben Pfaff" <bl*@cs.stanford.edu> wrote in message
news:87************@benpfaff.org...
Paul Burke <pa**@scazon.com> writes:
My simple mind must be missing something big here. If for pointer p,
(p-1) is deprecated because it's not guaranteed that it points to
anything sensible, why is p++ OK? There's no boundary checking in C
(unless you put it in).


It seems I too have a simple mind. I read the recent replies to this
and found myself not sure I am better off.

This is what I think I understand:

int x;
int *p;
int *q;

p = &x; /* is OK */


Correct, p now points to x
p = &x + 1; /* is OK even though we have no idea what p points to */
Basically. p now points "one past the end" of x. You're allowed to
compare p to (&x, &x + 1 or NULL), as well as subtract one from p, but
you're not allowed to dereference p.
p = &x + 6; /* is undefined - does this mean that p may not be the */
/* address six locations beyond x? */
/* or just that we don't know what is there? */
No, undefined means that the program could have just crashed here, and
never got to the point of assigning *anything* (indeterminate or not)
to p.
p = &x - 1; / as previous */
After the (&x + 6), all bets are off.
But the poster was comparing with p++, so...

p = &x; /* so far so good */
p++; /* still ok (?) but we don't know what is there */
Correct, p is now a "one past the end", just as with (p = &x + 1).
p++; /* is this now undefined? */
Yes, it is undefined. The program may have just crashed at this point.
I guess _my_ question is - in this context does 'undefined' mean just
that we cannot say anything about what the pointer points to or that we
cannot say anything about the value of the pointer.
No, 'undefined' means that the program could do anything at all.
Undefined means that there is no defined behavior whatsoever as far as
the standard is concerned.
So for example:

p = &x;
q = &x;
p = p+8;
q = q+8;

should p and q have the same value or is that undefined.


p and q may not even have values.

--
Clark S. Cox, III
cl*******@gmail.com

Mar 8 '06 #80
>Keith Thompson said:
Um, I always thought that "within" and "outside" were two different
things.

In article <du**********@nwrdmz03.dmz.ncs.ea.ibs-infra.bt.com>
Richard Heathfield <in*****@invalid.invalid> wrote:Ask Jack to lend you his bottle. You'll soon change your mind.


To clarify a bit ...

A mathematician named Klein
Thought the Moebius band was divine
Said he, "If you glue
The edges of two
You'll get a weird bottle like mine!"

:-)

(A Moebius band has only one side. It is a two-dimensional object
that exists only in a 3-dimensional [or higher] space. A Klein
bottle can only be made in a 4-dimensional [or higher] space, and
is a 3-D object with only one side. The concept can be carried on
indefinitely, but a Klein bottle is hard enough to contemplate
already.)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Mar 8 '06 #81
On 2006-03-08, Vladimir S. Oka <no****@btopenworld.com> wrote:
Al Balmer wrote:
On 8 Mar 2006 08:52:13 -0800, "Vladimir S. Oka"
<no****@btopenworld.com> wrote:

Al Balmer wrote:
On 8 Mar 2006 08:08:48 GMT, Jordan Abel <ra*******@gmail.com> wrote:

>On 2006-03-08, Randy Howard <ra*********@FOOverizonBAR.net> wrote:
>> blackguard:
>> A thoroughly unprincipled person; a scoundrel.
>> A foul-mouthed person.
>
>Sure, that's what it _means_, but...
>
>> Does everything have to become a racism experiment?
>
>the question is one of etymology.

?
What do you imagine the etymology to be?

FWIW, from <http://www.wordorigins.org/wordorb.htm>:


Actually, I wasn't asking that. I wondered what Jordan was imagining
it to be.


Ah, sorry. I didn't read the lot carefully enough.


I don't imagine it to be anything. I suspect others do, and that's why
there is a potential for accusations of racism.
Mar 8 '06 #82
On Wed, 08 Mar 2006 11:52:52 -0800, Andrey Tarasevich
<an**************@hotmail.com> wrote:
James Dow Allen wrote:
Mr. Hsieh immediately does p++ and his code will be correct if then
p == s. I don't question Chuck's argument, or whether the C standard
allows the C compiler to trash the hard disk when it sees p=s-1,
but I'm sincerely curious whether anyone knows of an *actual*
environment
where p == s will ever be false after (p = s-1; p++).
...


There are actual environments where 's - 1' alone is enough to cause a
crash. In fact, any non-flat memory model environment (i.e. environment
with 'segment:offset' pointers) would be a good candidate. The modern
x86 will normally crash, unless the implementation takes specific steps
to avoid it.


Exactly which x86 mode are you referring to?

16-bit real mode, virtual-8086 mode, or some 32-bit mode (which is,
after all, a segmented mode with all segment registers holding the
same value)?

If s is stored in 16 bit mode in ES:DX with DX=0, then p=s-1 would
need to decrement ES by one and store 000F in DX. Why would reloading
ES cause any traps, since no actual memory reference is attempted ?
Doing p++ would most likely just increment DX by one to 0010, thus
ES:DX would point to s again, which is a legal address, but with a
different internal representation.

IIRC some 32 bit addressing mode would trap if one tried to load the
segment register, but again, how could the caller generate such
constructs as s = ES:0 at least from user mode. In practice s = ES:0
could only be set by a kernel mode routine calling a user mode
routine, so this is really an issue only with main() parameters.

Paul

Mar 8 '06 #83
Andrey Tarasevich wrote:
James Dow Allen wrote:
Mr. Hsieh immediately does p++ and his code will be correct if then
p == s. I don't question Chuck's argument, or whether the C standard
allows the C compiler to trash the hard disk when it sees p=s-1,
but I'm sincerely curious whether anyone knows of an *actual*
environment where p == s will ever be false after (p = s-1; p++).
...


There are actual environments where 's - 1' alone is enough to cause a
crash. In fact, any non-flat memory model environment (i.e. environment
with 'segment:offset' pointers) would be a good candidate. The modern
x86 will normally crash, unless the implementation takes specific steps
to avoid it.


This illustrates the fact that usenet threads are uncontrollable.
I wrote the original to draw attention to hidden assumptions, and
it has immediately degenerated into thrashing about the one real
error in the sample code. I could have corrected and eliminated
that error by a slight code rework, but then I would have modified
Mr Hsieh's code. There were at least seven further assumptions,
most of which were necessary for the purposes of the code, but
strictly limited its applicability.

My aim was to get people to recognize and document such hidden
assumptions, rather than leaving them lying there to create sneaky
bugs in apparently portable code.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
Mar 8 '06 #84
On Wed, 08 Mar 2006 13:14:39 -0800, Andrey Tarasevich wrote:
Andrew Reilly wrote:
...
Bah, humbug. Think I'll go back to assembly language, where pointers do
what you tell them to, and don't complain about it to their lawyers.
...
Incorrect. It is not about "lawyers", it is about actual _crashes_. The
reason why 's - 1' itself can (and will) crash on certain platforms is
the same as the one that will make it crash in exactly the same way in
"assembly language" on such platforms.


No, my point was that the language lawyers have taken a perfectly
appealing and generally applicable abstraction, and outlawed certain
obvious constructions on the flimsy grounds that it was easier to
pervert the abstraction than to support it on some uncommon (or indeed
hypothetical) hardware.
Trying to implement the same code in assembly language on such a
platform would specifically force you to work around the potential
crash, sacrificing efficiency for safety. In other words, you'd be
forced to use different techniques for doing 's - 1' in contexts where
it might underflow and in contexts where it definitely will not
underflow.
The assembly language version of the algorithm would *not* crash, because
the assembly language of the perverted platform on which that was a
possibility would require a construction (probably using an explicit
integer array index, rather than pointer manipulation) that would cause
exactly zero inefficiency or impairment of safety. (Because the index is
only *used* in-bounds.)
C language, on the other hand, doesn't offer two different '-' operators
for these two specific situations. Instead C language outlaws (in
essence) pointer underflows.
No it doesn't. The C language allows "inner" pointers to be passed to
functions, with no other way for the function to tell whether s - 1 is
legal or illegal in any particular call context. It is therefore clear
what the abstraction of pointer arithmetic implies. That some platforms
(may) have a problem with this is not the language's fault. It's just a
bit harder to support C on them. That's OK. There are plenty of other
languages that don't allow that construct at all (or even have pointers as
such), and they were clearly the targets in mind for the people who
developed such hardware. The standard authors erred. It should have been
incumbent on implementers on odd platforms to support the full power of
the language or not at all, rather than for all other (C-like) platforms
to carry the oddness around with them, in their code. However, it's clear
that's a very old mistake, and no-one's going to back away from it now.
This is a perfectly reasonable approach for a higher level language.


C is not a higher-level language. It's a universal assembler. Pick
another one.

--
Andrew

Mar 8 '06 #85
In article <pa****************************@areilly.bpc-users.org>,
Andrew Reilly <an*************@areilly.bpc-users.org> wrote:
On Tue, 07 Mar 2006 13:28:37 -0500, Arthur J. O'Dwyer wrote:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist. So
we have undefined behavior.


Only because the standard says so. Didn't have to be that way. There are
plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of iterating
through an array at a non-unit stride, using the normal p < s + N
termination condition. The loop finishes with p > s + N and the standard
says "pow, you're dead", when the semantically identical code written with
integer indexes has no legal problems.)


Consider a typical implementation with 32 bit pointers and objects that
can be close to 2 GByte in size.

typedef struct { char a [2000000000]; } giantobject;

giantobject anobject;

giantobject* p = &anobject;
giantobject* q = &anobject - 1;
giantobject* r = &anobject + 1;
giantobject* s = &anobject + 2;

It would be very hard to implement this in a way that both q and s would
be valid; for example, it would be very hard to achieve that q < p, p <
r and r < s are all true. If q and s cannot be both valid, and there
isn't much reason why one should be valid and the other shouldn't, then
neither can be used in a program with any useful guarantees by the
standard.
Mar 8 '06 #86
"Clark S. Cox III" wrote:
"Peter Harrison" <pe***@cannock.ac.uk> said:
Paul Burke <pa**@scazon.com> writes:

My simple mind must be missing something big here. If for
pointer p, (p-1) is deprecated because it's not guaranteed that
it points to anything sensible, why is p++ OK? There's no
boundary checking in C (unless you put it in).

It's not deprecated, it's illegal. Once you have involved UB all
bets are off. Without the p-1 the p++ statements are fine, as long
as they don't advance the pointer more than one past the end of the
object.

It seems I too have a simple mind. I read the recent replies to this
and found myself not sure I am better off.

This is what I think I understand:

int x;
int *p;
int *q;

p = &x; /* is OK */


Correct, p now points to x


and a statement --p or p-- would be illegal. However p++ would be
legal. But *(++p) would be illegal, because it dereferences past
the confines of the object x.

Mar 8 '06 #87
In article <pa****************************@areilly.bpc-users.org>,
Andrew Reilly <an*************@areilly.bpc-users.org> wrote:
On Tue, 07 Mar 2006 22:26:57 +0000, Keith Thompson wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Tue, 07 Mar 2006 13:28:37 -0500, Arthur J. O'Dwyer wrote:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist.
So we have undefined behavior.

Only because the standard says so. Didn't have to be that way. There
are plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of
iterating through an array at a non-unit stride, using the normal p < s
+ N termination condition. The loop finishes with p > s + N and the
standard says "pow, you're dead", when the semantically identical code
written with integer indexes has no legal problems.)


The standard is specifically designed to allow for architectures where
constructing an invalid pointer value can cause a trap even if the pointer
is not dereferenced.


And are there any? Any in common use? Any where the equivalent (well
defined) pointer+offset code would be slower?


Question: If the C Standard guarantees that for any array a, &a [-1]
should be valid, should it also guarantee that &a [-1] != NULL and
that &a [-1] < &a [0]?

In that case, what happens when I create an array with a single element
that is an enormously large struct?
Mar 8 '06 #88
In article <pa****************************@areilly.bpc-users.org>,
Andrew Reilly <an*************@areilly.bpc-users.org> wrote:
On Wed, 08 Mar 2006 00:56:03 +0000, Robin Haigh wrote:
It's not always equivalent. The trouble starts with

char a[8];
char *p;

for ( p = a+1 ; p < a+8 ; p += 2 ) {}

intending that the loop terminates on p == a+9 (since it skips a+8). But
how do we know that a+9 > a+8 ? If the array is right at the top of some
kind of segment, the arithmetic might have wrapped round.


a+9 > a+8 because a + 9 - (a + 8) == 1, which is > 0. Doesn't matter if
the signed or unsigned pointer value wrapped around in an intermediate
term. On many machines that's how the comparison is done anyway. You're
suggesting that having the compiler ensure that a+8 doesn't wrap around
wrt a is OK, but a+9 is too hard. I don't buy it.


I just tried the following program (CodeWarrior 10 on MacOS X):

#include <stdio.h>

#define SIZE (50*1000000L)
typedef struct {
   char a [SIZE];
} bigstruct;

static bigstruct bigarray [8];

int main(void)
{
   printf("%lx\n", (unsigned long) &bigarray [0]);
   printf("%lx\n", (unsigned long) &bigarray [9]);
   printf("%lx\n", (unsigned long) &bigarray [-1]);

   if (&bigarray [-1] < &bigarray [0])
      printf ("Everything is fine\n");
   else
      printf ("The C Standard is right: &bigarray [-1] is broken\n");

   return 0;
}

The output is:

2008ce0
1cd30160
ff059c60
The C Standard is right: &bigarray [-1] is broken
Mar 8 '06 #89
On 8 Mar 2006 20:35:24 GMT, Chris Torek <no****@torek.net> wrote:
Keith Thompson said:
Um, I always thought that "within" and "outside" were two different
things.


In article <du**********@nwrdmz03.dmz.ncs.ea.ibs-infra.bt.com>
Richard Heathfield <in*****@invalid.invalid> wrote:
Ask Jack to lend you his bottle. You'll soon change your mind.


To clarify a bit ...

A mathematician named Klein
Thought the Moebius band was divine
Said he, "If you glue
The edges of two
You'll get a weird bottle like mine!"

:-)

(A Moebius band has only one side. It is a two-dimensional object
that exists only in a 3-dimensional [or higher] space. A Klein
bottle can only be made in a 4-dimensional [or higher] space, and
is a 3-D object with only one side. The concept can be carried on
indefinitely, but a Klein bottle is hard enough to contemplate
already.)


But that was Felix. Who's Jack?

--
Al Balmer
Sun City, AZ
Mar 9 '06 #90
On Thu, 09 Mar 2006 09:13:24 +1100, Andrew Reilly
<an*************@areilly.bpc-users.org> wrote:
C is not a higher-level language. It's a universal assembler. Pick
another one.


Nice parrot. I think the original author of that phrase meant it as a
joke.

I spent 25 years writing assembler. C is a higher-level language.

--
Al Balmer
Sun City, AZ
Mar 9 '06 #91
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
[...]
C is not a higher-level language.
It's higher-level than some, lower than others. I'd call it a
medium-level language.
It's a universal assembler.


Not in any meaningful sense of the word "assembler".

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 9 '06 #92
In article <12*************@news.supernews.com> Andrey Tarasevich <an**************@hotmail.com> writes:
....
This is the first time I have seen this code, but:
 > const char *p; /* 5 */
....
 > if ((((int) p) & (SW - 1)) == 0) {
....
This will not result in the desired answer on the Cray 1.
On the Cray 1 a byte pointer has the word address (64 bit words)
in the lower 48 bits and a byte offset in the upper 16 bits.
So this code actually tests whether the *word* address is even.
And so the code will fail to give the correct answer in the
following case:
char f[] = "0123456789";
int i;
f[1] = 0;
i = strlen(f + 2);
when f starts at an even word address it will give the answer 1
instead of the correct 8.
d = *((int *) p);


Note that here the byte offset in the pointer is ignored, so
d receives the integer that contains the character array:
"0\000234567".

Again a hidden assumption, I think. (It is exactly this hidden
assumption that made porting of a particular program extremely
difficult to the Cray 1. The assumption was that in a word
pointer the lowest bit was 0, and that bit was used for
administrative purposes.)
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Mar 9 '06 #93
In article <12*************@corp.supernews.com> msg <msg@_cybertheque.org_> writes:
OK, I've made enough of a fool of myself already. I'll go and have that
second cup of coffee for the morning, before I start going on about having
the standard support non-2's complement integers, or machines that have no
arithmetic right shifts...


I get queasy reading the rants against 1's complement architectures; I
wish Seymour Cray were still around to address this.


There are quite a few niceties indeed. Negation of a number is really
simple, just a logical operation, and there are others. This means
simpler hardware for the basic operations on signed objects, except for
the carry. It was only when I encountered the PDP that I saw my
first 2's complement machine.

On the other hand, when Seymour Cray started his own company, those
machines were 2's complement. And he shifted from 60- to 64-bit
words, but still retained octal notation (he did not like hexadecimal
at all).
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Mar 9 '06 #94
On Wed, 08 Mar 2006 18:26:35 -0500, CBFalconer wrote:
"Clark S. Cox III" wrote:
"Peter Harrison" <pe***@cannock.ac.uk> said:
Paul Burke <pa**@scazon.com> writes:

> My simple mind must be missing something big here. If for
> pointer p, (p-1) is deprecated because it's not guaranteed that
> it points to anything sensible, why is p++ OK? There's no
> boundary checking in C (unless you put it in).


It's not deprecated, it's illegal. Once you have involved UB all
bets are off. Without the p-1 the p++ statements are fine, as long
as they don't advance the pointer more than one past the end of the
object.


It's no more "illegal" than any of the other undefined behaviour that you
pointed out in that code snippet. There aren't different classes of
undefined behaviour, are there?

I reckon I'll just go with the undefined flow, in the interests of
efficient, clean code on the architectures that I target. I'll make sure
that I supply a document specifying how the compilers must behave for all
of the undefined behaviours that I'm relying on, OK? I have no interest
in trying to make my code work on architectures for which they don't hold.

Of course, that list will pretty much just describe the usual flat-memory,
2's complement machine that is actually used in almost all circumstances
in the present day, anyway. Anyone using anything else already knows that
they're in a world of trouble and that all bets are off.

--
Andrew

Mar 9 '06 #95
Now responding to the basic article:

In article <44***************@yahoo.com> cb********@maineline.net writes:
....
#define hasNulByte(x) ((x - 0x01010101) & ~x & 0x80808080)

Let us start with line 1! The constants appear to require that
sizeof(int) be 4, and that CHAR_BIT be precisely 8. I haven't
really looked too closely, and it is possible that the ~x term
allows for larger sizeof(int), but nothing allows for larger
CHAR_BIT.
It does not allow for larger sizeof(int) (as it does not allow for
other values of CHAR_BIT). When sizeof(int) > 4 it will only show
whether there is a zero byte in the low order four bytes. When
sizeof(int) < 4 it will give false positives. Both constants have
to be changed when sizeof(int) != 4. Moreover, it will not work on
1's complement or sign-magnitude machines. Using unsigned here is
most appropriate.
if ((((int) p) & (SW - 1)) == 0) {

Then we come to the purpose of the statement, which is to discover
if the pointer is suitably aligned for an int. It does this by
bit-anding with SW-1, which is the concealed sizeof(int)-1. This
won't be very useful if sizeof(int) is, say, 3 or any other
non-power-of-two. In addition, it assumes that an aligned pointer
will have those bits zero. While this last is very likely in
today's systems, it is still an assumption. The system designer is
entitled to assume this, but user code is not.


It is false on the Cray 1 and its derivatives. See another article
by me where I show that it may give wrong answers.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Mar 9 '06 #96
On Wed, 08 Mar 2006 23:53:23 +0000, Christian Bau wrote:
Only because the standard says so. Didn't have to be that way. There are
plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of iterating
through an array at a non-unit stride, using the normal p < s + N
termination condition. The loop finishes with p > s + N and the standard
says "pow, you're dead", when the semantically identical code written with
integer indexes has no legal problems.)
Consider a typical implementation with 32 bit pointers and objects that
can be close to 2 GByte in size.


Yeah, my world-view doesn't allow individual objects to occupy half the
address space or more. I'm comfortable with that restriction, but I can
accept that there may be others that aren't. They're wrong, of course :-)
typedef struct { char a [2000000000]; } giantobject;

giantobject anobject;

giantobject* p = &anobject;
giantobject* q = &anobject - 1;
giantobject* r = &anobject + 1;
giantobject* s = &anobject + 2;

It would be very hard to implement this in a way that both q and s would
be valid; for example, it would be very hard to achieve that q < p, p <
r and r < s are all true. If q and s cannot be both valid, and there
isn't much reason why one should be valid and the other shouldn't, then
neither can be used in a program with any useful guarantees by the
standard.


Yes, very hard indeed. Partition your object or use a machine with bigger
addresses. Doesn't seem like a good enough reason to me to break a very
useful abstraction.

Posit: you've got N bits to play with, both for addresses and integers.
You need to be able to form a ptrdiff_t, which is a signed quantity, to
compute d = &anobject.a[i] - &anobject.a[j], for any indices i,j within the
range of the array. The range of signed quantities is just less than half
that of unsigned. That range must therefore define how large any
individual object can be. I.e., half of your address space. Neat, huh?

Yeah, yeah, for any complicated problem there's an answer that is simple,
neat and wrong.

--
Andrew

Mar 9 '06 #97
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 08 Mar 2006 18:26:35 -0500, CBFalconer wrote:
"Clark S. Cox III" wrote:
"Peter Harrison" <pe***@cannock.ac.uk> said:
> Paul Burke <pa**@scazon.com> writes:
>
>> My simple mind must be missing something big here. If for
>> pointer p, (p-1) is deprecated because it's not guaranteed that
>> it points to anything sensible, why is p++ OK? There's no
>> boundary checking in C (unless you put it in).
It's not deprecated, it's illegal. Once you have involved UB all
bets are off. Without the p-1 the p++ statements are fine, as long
as they don't advance the pointer more than one past the end of the
object.


It's no more "illegal" than any of the other undefined behaviour that you
pointed out in that code snippet. There aren't different classes of
undefined behaviour, are there?


Right, "illegal" probably isn't the best word to describe undefined
behavior. An implementation is required to diagnose syntax errors and
constraint violations; it's specifically *not* required to diagnose
undefined behavior (though it's allowed to do so).
I reckon I'll just go with the undefined flow, in the interests of
efficient, clean code on the architectures that I target. I'll make sure
that I supply a document specifying how the compilers must behave for all
of the undefined behaviours that I'm relying on, OK? I have no interest
in trying to make my code work on architectures for which they don't hold.
Ok, you can do that if you like. If you can manage to avoid undefined
behavior altogether, your code is likely to work on *any* system with
a conforming C implementation; if not, it may break when ported to
some exotic system.

For example, code that makes certain seemingly reasonable assumptions
about pointer representations will fail on Cray vector systems. I've
run into such code myself; the corrected code was actually simpler and
cleaner.

If you write code that depends on undefined behavior, *and* there's a
real advantage in doing so on some particular set of platforms, *and*
you don't mind that your code could fail on other platforms, then
that's a perfectly legitimate choice. (If you post such code here in
comp.lang.c, you can expect us to point out the undefined behavior;
some of us might possibly be overly enthusiastic in pointing it out.)
Of course, that list will pretty much just describe the usual flat-memory,
2's complement machine that is actually used in almost all circumstances
in the present day, anyway. Anyone using anything else already knows that
they're in a world of trouble and that all bets are off.


All bets don't *need* to be off if you're able to stick to what the C
standard actually guarantees.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 9 '06 #98
On Thu, 09 Mar 2006 00:00:34 +0000, Christian Bau wrote:
Question: If the C Standard guarantees that for any array a, &a [-1]
should be valid, should it also guarantee that &a [-1] != NULL
Probably, since NULL has been given the guarantee that it's unique in some
sense. In an embedded environment, or assembly language, the construct
could of course produce NULL (for whatever value you pick for NULL), and
NULL would not be special. I don't know that insisting on the existence of
a unique and special NULL pointer value is one of the standard's crowning
achievements, either. It's convenient for lots of things, but it's just
not the way simple hardware works, particularly at the limits.
and that
&a [-1] < &a [0]
Sure, in the ptrdiff sense that I mentioned before.
I.e., (a - 1) - (a + 0) < 0 (indeed, identically -1)
In that case, what happens when I create an array with a single element
that is an enormously large struct?


Go nuts. If your address space is larger than your integer range (as is
the case for I32LP64 machines), your compiler might have to make sure that
it performs the difference calculation to sufficient precision.

I still feel comfortable about this failing to work for objects larger
than half the address space, or even for objects larger than the range of
an int. That's IMO, a much less uncomfortable restriction than the one
that the standard seems to have stipulated, which is that the simple and
obvious pointer arithmetic that you've used in your examples works in some
situations and doesn't work in others. (Remember: it's all good if those
array references are in a function that was itself passed (&foo[n], for
n>=1) as the argument.)

Cheers,

--
Andrew

Mar 9 '06 #99
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 08 Mar 2006 23:53:23 +0000, Christian Bau wrote: [...]
Consider a typical implementation with 32 bit pointers and objects that
can be close to 2 GByte in size.


Yeah, my world-view doesn't allow individual objects to occupy half the
address space or more. I'm comfortable with that restriction, but I can
accept that there may be others that aren't. They're wrong, of course :-)


I can easily imagine a program that needs to manipulate a very large
data set (for a scientific simulation, perhaps). For a data set that
won't fit into memory all at once, loading as much of it as possible
can significantly improve performance.
typedef struct { char a [2000000000]; } giantobject;

giantobject anobject;

giantobject* p = &anobject;
giantobject* q = &anobject - 1;
giantobject* r = &anobject + 1;
giantobject* s = &anobject + 2;

It would be very hard to implement this in a way that both q and s would
be valid; for example, it would be very hard to achieve that q < p, p <
r and r < s are all true. If q and s cannot be both valid, and there
isn't much reason why one should be valid and the other shouldn't, then
neither can be used in a program with any useful guarantees by the
standard.


Yes, very hard indeed. Partition your object or use a machine with bigger
addresses. Doesn't seem like a good enough reason to me to break a very
useful abstraction.


Your "very useful abstraction" is not something that has *ever* been
guaranteed by any C standard or reference manual.
Posit: you've got N bits to play with, both for addresses and integers.
You need to be able to form a ptrdiff_t, which is a signed quantity, to
compute d = &anobject.a[i] - &anobject.a[j], for any indices i,j within the
range of the array. The range of signed quantities is just less than half
that of unsigned. That range must therefore define how large any
individual object can be. I.e., half of your address space. Neat, huh?


The standard explicitly allows for the possibility that pointer
subtraction within a single object might overflow (if so, it invokes
undefined behavior). Or, given that C99 requires 64-bit integer
types, making ptrdiff_t larger should avoid the problem for any
current systems (I don't expect to see full 64-bit address spaces for
a long time).

The standard is full of compromises. Not everyone likes all of them.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 9 '06 #100
