Making Fatal Hidden Assumptions

We often find hidden, and totally unnecessary, assumptions being
made in code. The following leans heavily on one particular
example, which happens to be in C. However similar things can (and
do) occur in any language.

These assumptions are generally made because of familiarity with
the language. As a non-code example, consider the idea that the
faulty code is written by blackguards bent on fouling the
language. The term blackguards is not in favor these days, and for
good reason. However, the older you are, the more likely you are
to have used it since childhood, and to use it again, barring
specific thought on the subject. The same type of thing applies to
writing code.

I hope, with this little monograph, to encourage people to examine
some hidden assumptions they are making in their code. As ever, in
dealing with C, the reference standard is the ISO C standard.
Versions can be found in text and PDF format by searching for N869
and N1124. [1] The latter does not have a text version, but is
more up-to-date.

We will always have innocent-appearing code with these kinds of
assumptions built-in. However it would be wise to annotate such
code to make the assumptions explicit, which can avoid a great deal
of agony when the code is reused under other systems.

In the following example, the code is as downloaded from the
referenced URL, and the comments are entirely mine, including the
'every 5' line-number references.

/* Making fatal hidden assumptions */
/* Paul Hsieh's version of strlen.
http://www.azillionmonkeys.com/qed/asmexample.html

Some sneaky hidden assumptions here:
1. p = s - 1 is valid. Not guaranteed. Careless coding.
2. cast (int) p is meaningful. Not guaranteed.
3. Use of 2's complement arithmetic.
4. ints have no trap representations or hidden bits.
5. 4 == sizeof(int) && 8 == CHAR_BIT.
6. size_t is actually int.
7. sizeof(int) is a power of 2.
8. int alignment depends on a zeroed bit field.

Since strlen is normally supplied by the system, the system
designer can guarantee all but item 1. Otherwise this is
not portable. Item 1 can probably be beaten by suitable
code reorganization to avoid the initial p = s - 1. This
is a serious bug which, for example, can cause segfaults
on many systems. It is most likely to foul when (int)s
has the value 0, and is meaningful.

He fails to make the valid assumption: 1 == sizeof(char).
*/

#define hasNulByte(x) ((x - 0x01010101) & ~x & 0x80808080)
#define SW (sizeof (int) / sizeof (char))

int xstrlen (const char *s) {
   const char *p;                   /* 5 */
   int d;

   p = s - 1;
   do {
      p++;                          /* 10 */
      if ((((int) p) & (SW - 1)) == 0) {
         do {
            d = *((int *) p);
            p += SW;
         } while (!hasNulByte (d)); /* 15 */
         p -= SW;
      }
   } while (*p != 0);
   return p - s;
}                                   /* 20 */

Let us start with line 1! The constants appear to require that
sizeof(int) be 4, and that CHAR_BIT be precisely 8. I haven't
really looked too closely, and it is possible that the ~x term
allows for larger sizeof(int), but nothing allows for larger
CHAR_BIT. A further hidden assumption is that there are no trap
values in the representation of an int. Its functioning is
doubtful when sizeof(int) is less than 4. At the least it will
force promotion to long, which will seriously affect the speed.

This is an ingenious and speedy way of detecting a zero byte within
an int, provided the preconditions are met. There is nothing wrong
with it, PROVIDED we know when it is valid.

In line 2 we have the confusing use of sizeof(char), which is 1 by
definition. This just serves to obscure the fact that SW is
actually sizeof(int) later. No hidden assumptions have been made
here, but the usage helps to conceal later assumptions.

Line 4. Since this is intended to replace the system's strlen()
function, it would seem advantageous to use the appropriate
signature for the function. In particular strlen returns a size_t,
not an int. size_t is always unsigned.

In line 8 we come to a biggie. The standard specifically does not
guarantee the action of a pointer below an object. The only real
purpose of this statement is to compensate for the initial
increment in line 10. This can be avoided by rearrangement of the
code, which will then let the routine function where the
assumptions are valid. This is the only real error in the code
that I see.

In line 11 we have several hidden assumptions. The first is that
the cast of a pointer to an int is valid. This is never
guaranteed. A pointer can be much larger than an int, and may have
all sorts of non-integer like information embedded, such as segment
id. If sizeof(int) is less than 4 the validity of this is even
less likely.

Then we come to the purpose of the statement, which is to discover
if the pointer is suitably aligned for an int. It does this by
bit-anding with SW-1, which is the concealed sizeof(int)-1. This
won't be very useful if sizeof(int) is, say, 3 or any other
non-power-of-two. In addition, it assumes that an aligned pointer
will have those bits zero. While this last is very likely in
today's systems, it is still an assumption. The system designer is
entitled to assume this, but user code is not.

Line 13 again uses the unwarranted cast of a pointer to an int.
This enables the use of the already suspicious macro hasNulByte in
line 15.

If all these assumptions are correct, line 19 finally calculates a
pointer difference (which is valid, and of type ptrdiff_t). It
then does a concealed conversion of this into an int, which could
cause undefined or implementation-defined behaviour if the value
exceeds what will fit into an int.
This one is also unnecessary, since it is trivial to define the
return type as size_t and guarantee success.

I haven't even mentioned the assumption of 2's complement
arithmetic, which I believe to be embedded in the hasNulByte
macro. I haven't bothered to think this out.

Would you believe that so many hidden assumptions can be embedded
in such innocent looking code? The sneaky thing is that the code
appears trivially correct at first glance. This is the stuff that
Heisenbugs are made of. Yet use of such code is fairly safe if we
are aware of those hidden assumptions.

I have cross-posted this without setting follow-ups, because I
believe that discussion will be valid in all the newsgroups posted.

[1] The draft C standards can be found at:
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/>

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>

Mar 6 '06
On 9 Mar 2006 12:34:20 -0800, "Ed Prochak" <ed*******@gmail.com>
wrote:
Andrew Reilly wrote:

Yeah, me too. Still do, regularly, on processors that will never have a C
compiler. C is as close to a universal assembler as we've got at the
moment. It doesn't stick its neck out too far, although a more
deliberately designed universal assembler would be a really good thing.
(It's on my list of things to do...)


Would a better universal assembler be more like assembler or more like
high level languages? I really think C hit very close to the optimal
balance.


I don't see much room for a "universal" assembler between C and a
traditional assembler, since the instruction sets can vary quite a
lot.

However, there exists a group of intermediate languages that try to
solve some problems in traditional assembler. One problem is the
management of local labels for branch targets; the other is that with
usually one opcode per source line, the program can be quite long. Both
these things make it hard to get a general view of what is going on,
since only a small fraction of the operations will fit into the editor
screen.

To handle the label management, some kind of conditional and repeat
blocks can be easily created. Allowing simple assignment statements
with assembly language operands makes it easier to put multiple
instructions on a single line.

Such languages have been implemented either as a preprocessor to an
ordinary assembler or directly as a macro set at least on PDP-11 and
VAX. The source code for PDP-11 might look like this:

IF R5 EQ #7
R3 = Base(R2) + (R0)+ - R1 + #4
ELSE
R3 = #5
SETF ; "Special" machine instruction
END_IF

would generate

CMP R5,#7
BNE 9$ ; Branch on opposite condition to else part
MOV Base(R2),R3
ADD (R0)+,R3
SUB R1,R3
ADD #4,R3
BR 19$ ; Jump over else part
9$: ; Else part
MOV #5,R3
SETF ; Copied directly from source
19$:

The assignment statement was always evaluated left to right, and no
parentheses were available to change the evaluation order. The
conditional expression consisted of one or two addressing mode
expressions and a relational operator that translated to a conditional
branch instruction.

There is no point in trying to invent a new construction for each
machine instruction, so there is no problem in inserting any
"Special" machine instruction in the source file (in this case SETF),
which is copied directly to the pure assembler file.

For a different processor, the operands would of course be different,
but the control structures would be nearly identical.

Paul

Mar 10 '06 #121

Keith Thompson wrote:
"Ed Prochak" <ed*******@gmail.com> writes:
Andrew Reilly wrote: [...]
Yeah, me too. Still do, regularly, on processors that will never have a C
compiler. C is as close to a universal assembler as we've got at the
moment. It doesn't stick its neck out too far, although a more
deliberately designed universal assembler would be a really good thing.
(It's on my list of things to do...)


Would a better universal assembler be more like assembler or more like
high level languages? I really think C hit very close to the optimal
balance.


Whether C is a "universal assembler" is an entirely separate question
from whether C is "good", or "better" than something else, or close to
some optimal balance.

As I understand the term, an assembly language is a symbolic language
in which the elements of the language map one-to-one (or nearly so)
onto machine-level instructions.


The one-to-one mapping is broken for a macro assembler. Often there are
macros defined for subroutine entry and exit, loops and other things.
The only difference is that everyone defines their own macros, while in
C everybody uses the same "macros".
Most assembly languages are, of
course, machine-specific, since they directly specify the actual
instructions.
x++; in most machines can map to a single instruction
which is different from the instruction for ++x;
One could imagine a more generic assembler that uses
some kind of pseudo-instructions that can be translated more or less
one-to-one to actual machine instructions. C, though it's closer to
the machine than some languages, is not an assembler in this sense; in
a C program, you specify what you want the machine to do, not what
instructions it should use to do it.
I would agree that if an assembler must be a one-to-one mapping from
source line to opcode, then C doesn't fit. I just don't agree with that
definition of assembler.

<OT>Forth might be an interesting data point in this discussion, but
if you're going to go into that, please drop comp.lang.c from the
newsgroups.</OT>


Forth is definitely a contender.

nice discussion.
ed

Mar 10 '06 #122
On 10 Mar 2006 06:44:22 -0800, "Ed Prochak" <ed*******@gmail.com>
wrote:

Keith Thompson wrote:
"Ed Prochak" <ed*******@gmail.com> writes:
> Andrew Reilly wrote: [...]
>> Yeah, me too. Still do, regularly, on processors that will never have a C

<snip>
The one-to-one mapping is broken for a macro assembler. Often there are
macros defined for subroutine entry and exit, loops and other things.
the only difference is that everyone defines their own macros, while in
C everybody uses the same "macros".
Not really. Many assemblers have predefined macros for various things,
and C programmers write macros using preprocessor directives.
Most assembly languages are, of
course, machine-specific, since they directly specify the actual
instructions.
x++; in most machines can map to a single instruction
which is different from the instruction for ++x;


Huh? As a standalone statement, if x is an integer type, I'd expect
both to be mapped to the machine's equivalent of INC x. If it's
embedded in a larger statement, or it's a pointer, it's likely that
several instructions will be generated, and a compiler (including a C
compiler) will do things that a macro assembler won't do.
One could imagine a more generic assembler that uses
some kind of pseudo-instructions that can be translated more or less
one-to-one to actual machine instructions.

Proposed decades ago, and there has been some implementation.
C, though it's closer to
the machine than some languages, is not an assembler in this sense; in
a C program, you specify what you want the machine to do, not what
instructions it should use to do it.


I would agree that if an assembler must be a one-to-one mapping from
source line to opcode, then C doesn't fit. I just don't agree with that
definition of assembler.


Nor does anyone else, since the invention of macros. However, C
doesn't fit any widely accepted definition of assembler. You can have
your own definition of assembler, as long as you don't expect folks to
know what you're talking about.

--
Al Balmer
Sun City, AZ
Mar 10 '06 #123

Richard G. Riley wrote:
"Ed"posted the following on 2006-03-10:
[]
The one-to-one mapping is broken for a macro assembler. Often there are
macros defined for subroutine entry and exit, loops and other things.
the only difference is that everyone defines their own macros, while in
C everybody uses the same "macros".


But a mnemonic representing an instruction is still just that. The
macro part is nothing more than rolling up of things for brevity. A
subsequent disassembly will reveal all the hidden gore.


But it will NOT display the original macro. There is no 1 to 1 mapping
from source to code.
Most assembly languages are, of
course, machine-specific, since they directly specify the actual
instructions.
x++; in most machines can map to a single instruction
which is different from the instruction for ++x;


How do you see them being different at the assembler level? They are
not, are they? It's just when you do the (pseudo ASM) INC_REGx or ADDL 02,REGx or
whatever that matters, isn't it?


Actually I still think in PDP assembler at times (my first
assembler programming).
so y=x++; really does map to a single instruction which both moves the
value to y and increments x (which had to be held in a register IIRC)
e.g if we have

y=++x;

then the pseudo assembler is
INC x
move y,x
MOV R1,x
A: MOV y,R1++
## I may have the syntax wrong, it's been a LONG time
whereas

y=x++

is
move y,x
INC x
MOV R1,x
B: MOV y,++R1

The opcodes for those two instructions (lines A and B) are different in
PDP assembler.

Not taking into account expression return values that are CPU
equivalent. Admittedly I haven't dabbled in later instruction sets for
the new post-80386 CPUs, so please slap me down or better explain the
above if not right: it's interesting.
I haven't played much in the Intel realm since about the 286, and I
haven't done much assembly at all for about 10 years. Even the last
embedded project I worked on with a tiny 8-bit micro had a C compiler,
so I did nearly nothing in assembler. C makes it so much easier. I've
had the opinion of C as assembler since I first learned it (about
1983).

Some other languages do so much more for you that you might be scared
to look at the disassembly. e.g. languages that do array bounds
checking for you will generate much more code for a[y]=x; than does C.
You can picture the assembly code for C in your head without much
difficulty. The same doesn't hold true for some other languages.
One could imagine a more generic assembler that uses
some kind of pseudo-instructions that can be translated more or less
one-to-one to actual machine instructions. C, though it's closer to
the machine than some languages, is not an assembler in this sense; in
a C program, you specify what you want the machine to do, not what
instructions it should use to do it.
I would agree that if an assembler must be a one-to-one mapping from
source line to opcode, then C doesn't fit. I just don't agree with that
definition of assembler.


Sorry for coming late, but how do you see an assembler? In common
parlance it has always been (in my world) a program for converting
instruction set mnemonics into equivalent opcodes which run natively
on the target CPU.


I told you, a macro assembler does not work that way. One macro might
expand not just to multiple mnemonics, but to different mnemonics
depending on parameters. It is not 1 to 1 from source to assembly
mnemonics (let alone opcodes). A macro assembler can abstract just a
little or quite a lot away from the target machine. Depends on how you
use it. So while "an assembler is [...] a program for converting
instruction set mnemonics into equivalent opcodes which run natively
on the target CPU", there's nothing about that conversion being
one-to-one (mnemonic to opcode).

even without macros, the one-to-one doesn't work if in the instruction
set the opcode for moving registers differs from moving memory, so
MOV R2,R1
differs from
MOV B,A
where R1 and R2 are register identifiers and A and B are memory
locations. Yet we talk about the MOVe mnemonic as if both were the same
operation.

C's assignment operator maps about as closely to those opcodes as that
MOV mnemonic does. That's why I say it's a glorified assembler. You
have about as good an idea of what code is generated as you do with a
good assembler (as long as we can ignore the compiler's optimizer).

--
"A desk is a dangerous place from which to view the world" - LeCarre.


Nice quote.
ed

Mar 10 '06 #124
Al Balmer wrote:
It doesn't have to make sure. It's free to segfault. You write funny
code, you pay the penalty (or your customers do.) Modern hardware does
a lot of speculation. It can preload or even precompute both branches
of a conditional, for example.


Hmm, so if I'm decrementing a divisor, and branching off somewhere
else before the actual divide instruction if the would-be divisor is
zero, and your precomputation of both branches traps a division by zero
that a literal execution of my program would never perform... whose
fault is that?

I suspect that exception handling in speculative execution is a problem
that has been looked into.

Mar 10 '06 #125
On 10 Mar 2006 11:42:24 -0800, cs********@hotmail.com wrote:
Al Balmer wrote:
It doesn't have to make sure. It's free to segfault. You write funny
code, you pay the penalty (or your customers do.) Modern hardware does
a lot of speculation. It can preload or even precompute both branches
of a conditional, for example.
Hmm, so if I'm decrementing a divisor, and branching off somewhere
else before the actual divide instruction if the would-be divisor is
zero, and your precomputation of both branches traps a division by zero
that a literal execution of my program would never perform... whose
fault is that?

Not something to worry about, though you'd have to ask an expert why
:-) I suspect that this stuff is below the level of exception
triggers.
I suspect that exception handling in speculative execution is a problem
that has been looked into.


--
Al Balmer
Sun City, AZ
Mar 10 '06 #126

In article <11**********************@z34g2000cwc.googlegroups.com>, "Ed Prochak" <ed*******@gmail.com> writes:
Keith Thompson wrote:
Most assembly languages are, of
course, machine-specific, since they directly specify the actual
instructions.


x++; in most machines can map to a single instruction


Sure, if "most machines" excludes load/store architectures, and
machines which cannot operate directly on an object of the size of
whatever x happens to be, and all the cases where "x" is a pointer to
an object of a size other than the machine's addressing granularity...

I suppose you could argue that "can" in your claim is intended to be
weak - that, for "most machines" (with a conforming C implementation,
presumably), there exists at least one C program containing the
statement "x++;", and a conforming C implementation which will
translate that statement to a single machine instruction.

But that's a very small claim. All machines "can" map that statement
to multiple instructions as well; many "can" map it to zero
instructions in that sense (taking advantage of auto-increment modes
or the like). What can happen says very little about what will.

The presence in C of syntactic sugar for certain simple operations
like "x++" doesn't support the claim that C is somehow akin to
assembler in any case. One distinguishing feature of assembler is
a *lack* of syntactic sugar. (Macros aren't a counterexample
because they're purely lexical constructs; in principle they're
completely separate from code generation.)

C isn't assembler because:

- It doesn't impose a strict mapping between (preprocessed) source
and generated code. The "as if" clause allows the implementation
to have the generated code differ significantly from a strict
interpretation of the source acting on the virtual machine.

- It has generalized constructs (expressions) which can result in
the implementation generating arbitrarily complex code.

--
Michael Wojcik mi************@microfocus.com

Any average educated person can turn out competent verse. -- W. H. Auden
Mar 10 '06 #127
On 10 Mar 2006 20:15:45 GMT, mw*****@newsguy.com (Michael Wojcik)
wrote:
The presence in C of syntactic sugar for certain simple operations
like "x++" doesn't support the claim that C is somehow akin to
assembler in any case. One distinguishing feature of assembler is
a *lack* of syntactic sugar. (Macros aren't a counterexample
because they're purely lexical constructs; in principle they're
completely separate from code generation.)


Mnemonics and symbolic addresses in assemblers are just syntactic
sugar built on the binary machine code :-).

Entering machine codes in hex or octal is also syntactic sugar.

Paul

Mar 10 '06 #128
In article <11**********************@i39g2000cwa.googlegroups.com>,
<cs********@hotmail.com> wrote:
Hmm, so if I'm decrementing a divisor, and branching off somewhere
else before the actual divide instruction if the would-be divisor is
zero, and your precomputation of both branches traps a division by zero
that a literal execution of my program would never perform... whose
fault is that?

I suspect that exception handling in speculative execution is a problem
that has been looked into.


[Getting off-topic for comp.lang.c...]

Yes. For example on the MIPS architecture, an exception state is
inserted into the flow, but the exception itself is not taken
unless the exception "graduates"; the exception is suppressed if
the conditional results turn out to be such that it was not needed.

In the MIPS IV instruction set, divide can be done as
"multiply by the reciprocal", and it is not uncommon to schedule
the reciprocal operation ahead of time, before the code has had
time to check whether the denominator is 0. The non-zeroness
is speculated so as to get a "head start" on the time-consuming
division operation.

If I recall correctly, a fair bit of the multi-instruction pipelining
on MIPS is taken up with controls to handle speculation properly.
--
Prototypes are supertypes of their clones. -- maplesoft
Mar 10 '06 #129
"Ed Prochak" <ed*******@gmail.com> writes:
Keith Thompson wrote:

[...]
Whether C is a "universal assembler" is an entirely separate question
from whether C is "good", or "better" than something else, or close to
some optimal balance.

As I understand the term, an assembly language is a symbolic language
in which the elements of the language map one-to-one (or nearly so)
onto machine-level instructions.


The one-to-one mapping is broken for a macro assembler. Often there are
macros defined for subroutine entry and exit, loops and other things.
the only difference is that everyone defines their own macros, while in
C everybody uses the same "macros".


There's a continuum from raw machine language to very high-level
languages. Macro assembler is only a very small step up from
non-macro assembler. C is a *much* bigger step up from that. Some C
constructs may happen to map to single instructions for *some*
compiler/CPU combinations; they might map to multiple instructions, or
even none, for others. An assignment statement might copy a single
scalar value (integer, floating-point, or pointer) -- or it might copy
an entire structure; the C code looks the same, but the machine code
is radically different.

Using entirely arbitrary units of high-level-ness, I'd call machine
language close to 0, assembly language 10, macro assembler 15, and C
about 50. It might be useful to have something around 35 or so.
(This is, of course, mostly meaningless.)

Assembly language is usually untyped; types are specified by which
instruction you use, not by the types of the operands. C, by
contrast, associates types with variables. It often figures out how
to implement an operation based on the types of its operands, and many
operations are disallowed (assigning a floating-point value to a
pointer, for example).

I know the old joke that C combines the power of assembly language
with the flexibility of assembly language. I even think it's funny.
But it's not realistic, at least for C programmers who care about
writing good portable code.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 10 '06 #130
