
Why do integer-to-double conversions not happen in printf

I was running code like:

#include <stdio.h>

int main()
{
    printf("%f\n", 9/5);

    return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.
Next I tried this:

#include <stdio.h>

int main()
{
    int x = 27837;
    /**1**/ double f = x;
    printf("%f\n", x);

    return 0;
}

and the value now printed was -1.98376 (essentially garbage, gnu) and
0.000000 (msvc6). If I comment out the line I have marked with /** 1
**/, then it goes back to printing -0.000000 (gnu) or 0.000000 (msvc6).
Does the declaration of a double cause linking with some floating point
libraries which causes the difference on GNU?

Why does not an automatic cast happen?
I then tried the following two:

#include <stdio.h>

int main()
{
    int x = 27837;
    double f = x;
    printf("%f\n", f);

    return 0;
}

This works fine - we get 27837.000000 as expected.

And this:

#include <stdio.h>

int main()
{
    int x = 27837;
    void * v = &x;
    double *f = v;
    printf("%f\n", *f);

    return 0;
}

and this once again gives output, which on gnu has an uncanny
similarity to the garbage it printed before just when I uncommented the
"double f = x" line.
What's going on?
-- Arindam

Oct 27 '06 #1
17 Replies


ar**************@gmail.com wrote:
I was running code like:

#include <stdio.h>

int main()
{
printf("%f\n", 9/5);

return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.
Why? You're trying to print an int value (the result of 9/5, ie, 1)
with a float format. Only the gods know what you'll get printed,
since the standard says it's undefined. The result is going to
depend on the details of your implementation's numeric representations.

[It seems you can persuade gcc to spot this particular case using
the -Wall command-line option.]
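A minimal corrected version, assuming a floating-point result was the intent (otherwise keep the integer division and print it with %d):

#include <stdio.h>

int main(void)
{
    /* 9.0/5.0 is done in double, so %f gets the double it expects */
    printf("%f\n", 9.0 / 5.0);

    /* or keep the integer division and match it with %d */
    printf("%d\n", 9 / 5);

    return 0;
}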
Next I tried this:

#include <stdio.h>

int main()
{
int x = 27837;
/**1**/ double f = x;
printf("%f\n", x);

return 0;
}

and the value now printed was -1.98376 (essentially garbage, gnu) and
0.000000 (msvc6).
You're printing a different integer as a floating value. More undefined
behaviour. As the doctor says, "Don't do that.".
If I comment out the line I have marked with /** 1
**/, then it goes back to printing -0.000000 (gnu) or 0.000000 (msvc6).
Not here it doesn't.
Does the declaration of a double cause linking with some floating point
libraries which causes the difference on GNU?

Why does not an automatic cast happen?
Because there's no such thing as an "automatic cast", and no need here
for an implicit conversion.
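If the intent was to print x as a floating-point value, a sketch of the two defined ways to say so:

#include <stdio.h>

int main(void)
{
    int x = 27837;
    double f = x;              /* implicit conversion into a double object */

    printf("%f\n", (double)x); /* explicit conversion at the call site */
    printf("%f\n", f);         /* pass the double itself */

    return 0;
}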
I then tried the following two:

#include <stdio.h>

int main()
{
int x = 27837;
double f = x;
printf("%f\n", f);

return 0;
}

This works fine - we get 27837.000000 as expected.
/Now/ you're printing a float as a float (strictly, a double
as a double), and it's well-defined, and the implementation
implements it.
And this:

#include <stdio.h>

int main()
{
int x = 27837;
void * v = &x;
double *f = v;
printf("%f\n", *f);
And /now/ you're trying to fetch a double value from a location
that holds integer values. Why do you expect this to be remotely
meaningful?
return 0;
}

and this once again gives output, which on gnu has an uncanny
similarity to the garbage it printed before just when I uncommented the
"double f = x" line.
Probably because it's getting the same integer bits and then trying to
treat them as floating bits.
What's going on?
You're doing something that is conspicuously undefined, and have
had the good fortune to get obvious gibberish results, alerting
you to the problem.

--
Chris "back home once again ...." Dollin
"Never ask that question!" Ambassador Kosh, /Babylon 5/

Oct 27 '06 #2


ar**************@gmail.com wrote:
I was running code like:

#include <stdio.h>

int main()
{
printf("%f\n", 9/5);

return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.
Why were you expecting that?

You told printf to expect a floating-point number and passed it an
integer (dividing two integers yields an integer result). What you got
as output then was undefined.
Next I tried this:
[snip]
int x = 27837;
/**1**/ double f = x;
printf("%f\n", x);
[snip]

You again told printf() to expect a floating-point number and passed an
integer.
and the value now printed was -1.98376 (essentially garbage, gnu) and
0.000000 (msvc6). If I comment out the line I have marked with /** 1
**/, then it goes back to printing -0.000000 (gnu) or 0.000000 (msvc6).
Does the declaration of a double cause linking with some floating point
libraries which causes the difference on GNU?
Who knows, and who cares? You aren't doing anything sensible, so the
results need not make sense.
Why does not an automatic cast happen?
Because the compiler has no way of knowing that a cast is needed.

[snip]
int x = 27837;
double f = x;
printf("%f\n", f);
[snip]
This works fine - we get 27837.000000 as expected.
Of course... You passed printf the type of data you said you'd pass
(well, sort of - you said you'd pass a float but passed a double, I
can't remember off-hand whether float would be automatically cast to
double here - if not, float and double must be the same on your
platform).

[snip]
int x = 27837;
void * v = &x;
v contains the address of the integer representation of 27837.
double *f = v;
f contains the address of the integer representation of 27837.
printf("%f\n", *f);
You treat the integer representation of 27837 as if it were a floating
point number - why?
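If the point was just to look at the bytes of x, inspecting the representation through unsigned char is the defined way to do that, for example:

#include <stdio.h>

int main(void)
{
    int x = 27837;
    const unsigned char *p = (const unsigned char *)&x;
    size_t i;

    /* print the object representation of x, byte by byte */
    for (i = 0; i < sizeof x; i++)
        printf("%02x ", (unsigned)p[i]);
    putchar('\n');

    return 0;
}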

Oct 27 '06 #3

ar**************@gmail.com wrote:
I was running code like:

#include <stdio.h>

int main()
{
printf("%f\n", 9/5);

return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.
Next I tried this:

#include <stdio.h>

int main()
{
int x = 27837;
/**1**/ double f = x;
printf("%f\n", x);

return 0;
}

and the value now printed was -1.98376 (essentially garbage, gnu) and
0.000000 (msvc6). If I comment out the line I have marked with /** 1
**/, then it goes back to printing -0.000000 (gnu) or 0.000000 (msvc6).
Does the declaration of a double cause linking with some floating point
libraries which causes the difference on GNU?

Why does not an automatic cast happen?
I then tried the following two:

#include <stdio.h>

int main()
{
int x = 27837;
double f = x;
printf("%f\n", f);

return 0;
}

This works fine - we get 27837.000000 as expected.

And this:

#include <stdio.h>

int main()
{

int x = 27837;
void * v = &x;
double *f = v;
printf("%f\n", *f);

return 0;
}

and this once again gives output, which on gnu has an uncanny
similarity to the garbage it printed before just when I uncommented the
"double f = x" line.
What's going on?
-- Arindam
You have to understand that because printf uses varargs (and the same issue
applies to any function using varargs) the compiler cannot do the conversion
for you. The only thing printf knows is that the first argument is a const
char *; the type of all other arguments has to be inferred from the format
itself. By specifying %f you have specified that the corresponding
argument will be a double, but you have passed a variety of different
types.
(If you specify the -Wall option to the GNU C compiler, it will point that
out.)
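A sketch of why that is, using a made-up variadic function (my_print is purely illustrative): the prototype ends in "...", so the compiler building the call has no declared type to convert to; only the code reading the arguments decides, at run time, what type to pull out.

#include <stdarg.h>
#include <stdio.h>

/* illustrative only: nothing after fmt has a declared type */
static void my_print(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    if (fmt[0] == 'f')
        printf("%f\n", va_arg(ap, double)); /* reads a double from the arguments */
    else
        printf("%d\n", va_arg(ap, int));    /* reads an int from the arguments */
    va_end(ap);
}

int main(void)
{
    my_print("f", 3.5); /* fine: a double was passed, a double is read */
    my_print("f", 3);   /* undefined: an int was passed, a double is read */
    return 0;
}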

--
Bill Medland
Oct 27 '06 #4

ar**************@gmail.com wrote:
I was running code like:

#include <stdio.h>

int main()
{
printf("%f\n", 9/5);

return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.
There was no reason to "expect" anything, since the
behavior is undefined. The expression `9/5' calls for
dividing two ints, and the result of an int division is also
an int (in this case, 1). The "%f" specifier converts a
double value, but you have instead provided an int -- the
behavior is undefined, and anything can happen.
>
Next I tried this:

#include <stdio.h>

int main()
{
int x = 27837;
/**1**/ double f = x;
printf("%f\n", x);
Undefined behavior again, for the same reason as above: "%f"
needs a double value, but you have provided `x' which is an int.
return 0;
}

and the value now printed was -1.98376 (essentially garbage, gnu) and
0.000000 (msvc6). If I comment out the line I have marked with /** 1
**/, then it goes back to printing -0.000000 (gnu) or 0.000000 (msvc6).
Does the declaration of a double cause linking with some floating point
libraries which causes the difference on GNU?
Maybe. Maybe not. It is a matter of little interest:
undefined behavior is, well, undefined.
Why does not an automatic cast happen?
This is one of the weak points of C. An automatic conversion
(not an "automatic cast;" there is no such thing) can only occur
if the compiler knows what type is required. In the case of a
"variadic" function like printf(), the compiler doesn't know what
type the second argument should be -- after all, different printf()
calls could require different types for the second argument. In
the absence of information about the required type, the compiler
simply trusts you to get things right. When you don't, ...

By the way, you keep mentioning "GNU" which I guess means that
you're using the gcc compiler. That compiler has sub rosa knowledge
about how printf() works, and can check the arguments for agreement
with the format string. It can't actually correct your errors, but
it can issue warnings and let you correct them yourself. Check the
compiler documentation and learn how to enable the warnings.
>
I then tried the following two:

#include <stdio.h>

int main()
{
int x = 27837;
double f = x;
printf("%f\n", f);

return 0;
}

This works fine - we get 27837.000000 as expected.
You provided a double value as required by the "%f"
specifier, and everything worked. When you follow the rules,
good things happen.
And this:

#include <stdio.h>

int main()
{

int x = 27837;
void * v = &x;
double *f = v;
printf("%f\n", *f);

return 0;
}

and this once again gives output, which on gnu has an uncanny
similarity to the garbage it printed before just when I uncommented the
"double f = x" line.
This is called "type punning," and it's usually a mistake.
You have provided the expression `*f' whose type is double, as
required, but the pointer `f' does not actually point to a double
variable. The result? Undefined behavior again.
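For comparison, the reinterpretation can at least be expressed without the undefined pointer access by copying the bytes into an object that is big enough - though on a typical implementation the resulting double is still just whatever those bits happen to mean:

#include <stdio.h>
#include <string.h>

int main(void)
{
    int x = 27837;
    double d = 0.0;            /* the bytes not overwritten stay zero */

    memcpy(&d, &x, sizeof x);  /* copy only the int's bytes, no pointer aliasing */
    printf("%f\n", d);         /* still gibberish, but (on implementations without
                                  trap representations) well-defined gibberish */

    return 0;
}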
>
What's going on?
When you make mistakes, strange things happen.

--
Eric Sosman
es*****@acm-dot-org.invalid
Oct 27 '06 #5

ma**********@pobox.com wrote:
ar**************@gmail.com wrote:
<snipped>
[snip]
> int x = 27837;
double f = x;
printf("%f\n", f);
[snip]
>This works fine - we get 27837.000000 as expected.

Of course... You passed printf the type of data you said you'd pass
(well, sort of - you said you'd pass a float but passed a double, I
can't remember off-hand whether float would be automatically cast to
double here - if not, float and double must be the same on your
platform).
For printf and friends %f is a double, as varargs arguments
get promoted (to double in this case).
For scanf and friends %f is a float, as you are passing a
pointer.
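A small illustration of that asymmetry, assuming something like "1.5 2.5" on standard input:

#include <stdio.h>

int main(void)
{
    float g;
    double d;

    /* scanf: %f wants a float *, %lf wants a double * */
    if (scanf("%f %lf", &g, &d) == 2) {
        /* printf: g is promoted to double, so plain %f works for both */
        printf("%f %f\n", g, d);
    }

    return 0;
}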

--
imalone
Oct 27 '06 #6

Eric Sosman wrote:
ar**************@gmail.com wrote:
I was running code like:

#include <stdio.h>

int main()
{
printf("%f\n", 9/5);

return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.

There was no reason to "expect" anything, since the
behavior is undefined. The expression `9/5' calls for
dividing two ints, and the result of an int division is also
an int (in this case, 1). The "%f" specifier converts a
double value, but you have instead provided an int -- the
behavior is undefined, and anything can happen.

Next I tried this:

#include <stdio.h>

int main()
{
int x = 27837;
/**1**/ double f = x;
printf("%f\n", x);

Undefined behavior again, for the same reason as above: "%f"
needs a double value, but you have provided `x' which is an int.
return 0;
}

and the value now printed was -1.98376 (essentially garbage, gnu) and
0.000000 (msvc6). If I comment out the line I have marked with /** 1
**/, then it goes back to printing -0.000000 (gnu) or 0.000000 (msvc6).
Does the declaration of a double cause linking with some floating point
libraries which causes the difference on GNU?

Maybe. Maybe not. It is a matter of little interest:
undefined behavior is, well, undefined.
Why does not an automatic cast happen?

This is one of the weak points of C. An automatic conversion
(not an "automatic cast;" there is no such thing) can only occur
if the compiler knows what type is required. In the case of a
"variadic" function like printf(), the compiler doesn't know what
type the second argument should be -- after all, different printf()
calls could require different types for the second argument. In
the absence of information about the required type, the compiler
simply trusts you to get things right. When you don't, ...

By the way, you keep mentioning "GNU" which I guess means that
you're using the gcc compiler. That compiler has sub rosa knowledge
about how printf() works, and can check the arguments for agreement
with the format string. It can't actually correct your errors, but
it can issue warnings and let you correct them yourself. Check the
compiler documentation and learn how to enable the warnings.

I then tried the following two:

#include <stdio.h>

int main()
{
int x = 27837;
double f = x;
printf("%f\n", f);

return 0;
}

This works fine - we get 27837.000000 as expected.

You provided a double value as required by the "%f"
specifier, and everything worked. When you follow the rules,
good things happen.
And this:

#include <stdio.h>

int main()
{

int x = 27837;
void * v = &x;
double *f = v;
printf("%f\n", *f);

return 0;
}

and this once again gives output, which on gnu has an uncanny
similarity to the garbage it printed before just when I uncommented the
"double f = x" line.

This is called "type punning," and it's usually a mistake.
You have provided the expression `*f' whose type is double, as
required, but the pointer `f' does not actually point to a double
variable. The result? Undefined behavior again.

What's going on?

When you make mistakes, strange things happen.
Haha! I agree with all of you. See, I have been rather hard-pressed
trying to convince my Indian (as in, people living in India) mates that
undefined behaviour is undefined behaviour and we should stop at that.

I guess printf more or less translates everything you pass to it at the
unprototyped positions as a (signed or unsigned) long (that includes
pointers) or a double. I am not sure what it does for a long double.

The two later pieces of code I wrote were just to understand what printf
would probably have been doing - certainly not doing a 'value' cast
like that which happens in an implicit cast:

int x = 5;
double f = x;

Rather, trying to interpret 4 bytes of int as, say, 8 bytes of double.
Something that happens this way:

int x = 5;
void * v = &x;
double * d = v;
printf ( "%lf\n", *d );

And at least I expected that this would give me the same garbage as
simply writing:

printf("%f", x); /* x is an int */
Through this I could at least make out that there are no implicit casts
of this nature happening.
>
--
Eric Sosman
es*****@acm-dot-org.invalid

It's a different story that so much vociferous demonstration still
could not convince some of them what it really is, and that undefined
is undefined.

I am reminded of this:

"Against stupidity, the Gods themselves contend in vain" --- Friedrich
Schiller

I wish I could leave the URL for this gory debate ... but that's on
Orkut so not everyone can peep into it :)

Cheers
-- Arindam

Oct 27 '06 #7



ar**************@gmail.com wrote On 10/27/06 10:50,:
[...]

I guess printf more or less translates everything you pass to it at the
unprototyped positions as a (signed or unsigned) long (that includes
pointers) or a double. I am not sure what it does for a long double.
Your guess is incorrect. Spend some time with your C
textbook or other reference and learn about the "default
argument promotions." They come into play every time you
call a variadic function like printf, so you need to know
what they will and won't do for you. (They also operate
when you call a function that has no prototype in scope, but
if you're wise you'll never do that.)
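For the record, a small sketch of what the default argument promotions do at a call like this:

#include <stdio.h>

int main(void)
{
    char c = 'A';
    short s = 7;
    float fl = 1.5f;

    /* char and short are promoted to int, float is promoted to double,
       so these specifiers match what actually arrives */
    printf("%d %d %f\n", c, s, fl);

    return 0;
}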
The two later pieces of code I wrote was just to understand what printf
would probably have been doing - certainly not doing a 'value' cast
like that which happens in an implicit cast:
There is no such thing as an "implicit cast," just as
there is no such thing as the "automatic cast" you mentioned
in your earlier post. A cast is an operator, something you
write into your code just like `+' or `&' or `sizeof', and
is always explicit -- never implicit, never automatic.

A cast is an operator that causes a conversion. There
are situations where conversions occur without casts, just
as there are situations where additions occur without a +
operator. Do you speak of a[i] as involving an "implicit
plus?" Then don't speak of `double f = 1;' as involving
an "implicit cast."
[... UB snipped ...]

And at least I expected that this would give me the same garbage as
simply writing: [...]
Undefined behavior is undefined. There is no reason to
expect undefined behavior to be consistent, or repeatable, or
even to make sense.

--
Er*********@sun.com

Oct 27 '06 #8

ar**************@gmail.com wrote:
>
I was running code like:

#include <stdio.h>

int main()
{
printf("%f\n", 9/5);
return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.
.... snip ...
>
What's going on?
You are lying to the compiler, and it is biting back. Where I come
from the value of 9/5 is 1, of type int.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Oct 27 '06 #9

ar**************@gmail.com wrote:
printf("%f\n", 9/5);
printf("%f\n", 9.0/5.0);
Oct 27 '06 #10

Eric Sosman <Er*********@sun.com> writes:
[...]
Undefined behavior is undefined. There is no reason to
expect undefined behavior to be consistent, or repeatable, or
even to make sense.
That's true in the most general case. However, if you have some
understanding of the internal workings of your system, observing the
specific effects of an instance of undefined behavior can often help
in tracking down the cause.

For example, if you see a strange integer value being printed, and you
happen to recognize that its representation corresponds to a plausible
pointer value, or a plausible floating-point value, it can help point
to the cause of the problem.
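For instance, a rough way to get at the bits behind a suspicious value so they can be compared against known patterns (assuming a 64-bit double that fits in an unsigned long long):

#include <stdio.h>
#include <string.h>

int main(void)
{
    double suspect = 27837.0;                /* stand-in for the odd value being examined */
    unsigned long long bits = 0;

    memcpy(&bits, &suspect, sizeof suspect); /* pull out the representation */
    printf("%016llx\n", bits);               /* hex dump to eyeball the pattern */

    return 0;
}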

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Oct 27 '06 #11

CBFalconer <cb********@yahoo.com> writes:
ar**************@gmail.com wrote:
>I was running code like:

#include <stdio.h>

int main()
{
printf("%f\n", 9/5);
return 0;
}

and saw that the value being printed was -0.000000 (gnu) or 0.000000
(msvc6). I was expecting 2.000000.
... snip ...
>>
What's going on?

You are lying to the compiler, and it is biting back. Where I come
from the value of 9/5 is 1, of type int.
Strictly speaking, he's lying to the runtime library, not to the
compiler.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Oct 27 '06 #12

ar**************@gmail.com writes:
[...]
Does the declaration of a double cause linking with some floating point
libraries which causes the difference on GNU?
What do you mean by GNU?

<OT>
GNU is a large project which has produced a number of software
components, including a compiler collection (gcc), a C runtime library
(glibc), a text editor (emacs), and many other things. The compiler
and the runtime library are separate components; on many systems, gcc
uses the native runtime library, *not* necessarily glibc.
</OT>

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Oct 27 '06 #13



Keith Thompson wrote On 10/27/06 15:33,:
Eric Sosman <Er*********@sun.com> writes:
[...]
> Undefined behavior is undefined. There is no reason to
expect undefined behavior to be consistent, or repeatable, or
even to make sense.


That's true in the most general case. However, if you have some
understanding of the internal workings of your system, observing the
specific effects of an instance of undefined behavior can often help
in tracking down the cause.

For example, if you see a strange integer value being printed, and you
happen to recognize that its representation corresponds to a plausible
pointer value, or a plausible floating-point value, it can help point
to the cause of the problem.
That's the a posteriori or forensic view: Reasoning
backwards from the lack of skid marks and the presence of
empty beer cans to a hypothesis about the (late) driver's
likely condition. This is an important viewpoint to be
able to take, and goes with a whole set of important skills
(which are, IMHO, inadequately taught -- but I digress).

I wrote about the a priori view, as implied by the
O.P.'s statement that he "expected" a particular outcome
from undefined behavior. If Billy Boozer gets behind the
wheel there is no surety that he will wrap himself around
a tree, or that he will get safely home, or that the same
thing will happen this time as happened last time.

--
Er*********@sun.com

Oct 27 '06 #14


"Eric Sosman" <Er*********@sun.comwrote in message
news:1161962195.169666@news1nwk...
There is no such thing as an "implicit cast," just as
there is no such thing as the "automatic cast" you mentioned
in your earlier post. A cast is an operator, something you
write into your code just like `+' or `&' or `sizeof', and
is always explicit -- never implicit, never automatic.
Do you know where the term "implicit cast" originated? Was it just early
terminology to describe K&R C conversions?

I'm completely familiar with the term although it doesn't seem to appear in
any of the C references I currently use. Internet searches seem to pull up
Java or C++, but I'm unfamiliar with them. I understand "implicit cast" to
mean a widening or narrowing conversion between default types. The term
usually comes to mind in the rare case when a conversion fails: e.g., when a
default type which is 16-bit in size for a specific compiler is assigned to
a default type which is 32-bits in size, and the upper 16-bits of the 32-bit
type fail to clear, thereby requiring an explicit cast or bitwise operation.

Some of the earliest posts include use of "implicit int" by Chris Torek and
Steve Summit:
CT http://groups.google.com/group/comp....cd6dcae2d21428
SS http://groups.google.com/group/comp....5f41433d4173bd

Basically, Steve Summit said an "implicit cast" refers to operations
required by the C language in "X3.159, section 3.3.16.1, p. 54, lines
26-28". Since there isn't a one-to-one correspondence between the ANSI and
ISO specifications, could someone post those three lines?
Rod Pemberton

Oct 27 '06 #15

Rod Pemberton wrote:
"Eric Sosman" <Er*********@sun.comwrote in message
news:1161962195.169666@news1nwk...
> There is no such thing as an "implicit cast," just as
there is no such thing as the "automatic cast" you mentioned
in your earlier post. A cast is an operator, something you
write into your code just like `+' or `&' or `sizeof', and
is always explicit -- never implicit, never automatic.

Do you know where the term "implicit cast" originated? Was it just early
terminology to describe K&R C conversions?

I'm completely familiar with the term although it doesn't seem to appear in
any of the C references I currently use. Internet searches seem to pull up
Java or C++, but I'm unfamiliar with them. I understand "implicit cast" to
mean a widening or narrowing conversion between default types. The term
usually comes to mind in the rare case when a conversion fails: e.g., when a
default type which is 16-bit in size for a specific compiler is assigned to
a default type which is 32-bits in size, and the upper 16-bits of the 32-bit
type fail to clear, thereby requiring an explicit cast or bitwise operation.

Some of the earliest posts include use of "implicit int" by Chris Torek and
Steve Summit::
CT http://groups.google.com/group/comp....cd6dcae2d21428
SS http://groups.google.com/group/comp....5f41433d4173bd

Basically, Steve Summit said an "implicit cast" refers to operations
required by the C language in "X3.159, section 3.3.16.1, p. 54, lines
26-28". Since there isn't a one-to-one correspondence between the ANSI and
ISO specifications, could someone post those three lines?

I would guess that there is no "implicit cast" at all. There are
"implicit conversions" provided by the language and "explicit
conversions" provided by casting.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Oct 28 '06 #16

On Fri, 27 Oct 2006 19:42:55 -0400, in comp.lang.c , "Rod Pemberton"
<do*********@bitfoad.cmm> wrote:
>
"Eric Sosman" <Er*********@sun.comwrote in message
news:1161962195.169666@news1nwk...
> There is no such thing as an "implicit cast," just as
there is no such thing as the "automatic cast" you mentioned
in your earlier post. A cast is an operator, something you
write into your code just like `+' or `&' or `sizeof', and
is always explicit -- never implicit, never automatic.

Do you know where the term "implicit cast" originated?
As far as I'm aware it's nothing more than a mistake, common amongst
newbies and all too often not unlearned later in life. It's like saying
"should of" or using "enormity" to mean very big.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Oct 28 '06 #17

Mark McIntyre wrote:
>
On Fri, 27 Oct 2006 19:42:55 -0400, in comp.lang.c , "Rod Pemberton"
<do*********@bitfoad.cmm> wrote:

"Eric Sosman" <Er*********@sun.comwrote in message
news:1161962195.169666@news1nwk...
There is no such thing as an "implicit cast," just as
there is no such thing as the "automatic cast" you mentioned
in your earlier post. A cast is an operator, something you
write into your code just like `+' or `&' or `sizeof', and
is always explicit -- never implicit, never automatic.
Do you know where the term "implicit cast" originated?

As far as I'm aware it's nothing more than a mistake, common amongst
newbies and all too often not unlearned later in life. It's like saying
"should of" or using "enormity" to mean very big.
.... or "typecast", instead of "cast".

--
pete
Oct 29 '06 #18
