
Strange bit corruption in a double

I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????

FWIW, I know the function to print the bit pattern of the double
is correct:

void print_binary_double(double value)
{
unsigned char *a;
a = (unsigned char *)&value;

int bytes = sizeof(double);
for (int i = 0; i < bytes; i++) {
print_binary_uc(*a);
printf(" ");
a++;
}
printf("\n");
}
void print_binary_uc(unsigned char value)
{
unsigned char value2;
int i;
int len = sizeof(unsigned char) * 8;
for (i = len-1; i >= 0; i--)
{
value2 = value & ((unsigned char)1 << i);
printf("%d", value2 ? 1 : 0);
}
}

Jan 14 '07 #1
25 Replies


"Digital Puer" <di**********@hotmail.comwrote in message
news:11**********************@v45g2000cwv.googlegr oups.com...
I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I haven't analyzed the bit pattern you provided, but the information you've
presented isn't consistent with "corruption".

Assume that the number is positive, but very small (let's say 10^(-30)).
Then no version of printf with a practical number of decimal places will
show anything but zero. Additionally, it would test as positive as you
indicated above.

In order to print this number, you'd need to use the "%e" rather than the "%f"
format specifier.

Most binary scientific notation (i.e. float, double) formats contain a
binary exponent, which I suspect in this case is a very negative number.
The fact that there are a lot of "1"s set in the number is not inconsistent
with a very small positive number.

Try "%e", and post your results ...
Jan 14 '07 #2


"Digital Puer" <di**********@hotmail.comwrote in message
news:11**********************@v45g2000cwv.googlegr oups.com...
I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????

FWIW, I know the function to print the bit pattern of the double
is correct:

void print_binary_double(double value)
{
unsigned char *a;
a = (unsigned char *)&value;

int bytes = sizeof(double);
for (int i = 0; i < bytes; i++) {
print_binary_uc(*a);
printf(" ");
a++;
}
printf("\n");
}
void print_binary_uc(unsigned char value)
{
unsigned char value2;
int i;
int len = sizeof(unsigned char) * 8;
for (i = len-1; i >= 0; i--)
{
value2 = value & ((unsigned char)1 << i);
printf("%d", value2 ? 1 : 0);
}
}
I'll bet your format specifier needs tweaking. The source is sloppy-looking
too. LS
Jan 14 '07 #3

Lane Straatman said:
>
"Digital Puer" <di**********@hotmail.comwrote in message
news:11**********************@v45g2000cwv.googlegr oups.com...
<snip>
>void print_binary_double(double value)
{
unsigned char *a;
a = (unsigned char *)&value;

int bytes = sizeof(double);
for (int i = 0; i < bytes; i++) {
print_binary_uc(*a);
printf(" ");
a++;
}
printf("\n");
}
void print_binary_uc(unsigned char value)
{
unsigned char value2;
int i;
int len = sizeof(unsigned char) * 8;
for (i = len-1; i >= 0; i--)
{
value2 = value & ((unsigned char)1 << i);
printf("%d", value2 ? 1 : 0);
}
}
I'll bet your format specifier needs tweaking. The source is
sloppy-looking too. LS
How would you improve the source?
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jan 14 '07 #4


"Richard Heathfield" <rj*@see.sig.invalidwrote in message
news:E6******************************@bt.com...
Lane Straatman said:
>>
"Digital Puer" <di**********@hotmail.comwrote in message
news:11**********************@v45g2000cwv.googleg roups.com...

<snip>
>>void print_binary_double(double value)
{
unsigned char *a;
a = (unsigned char *)&value;

int bytes = sizeof(double);
for (int i = 0; i < bytes; i++) {
print_binary_uc(*a);
printf(" ");
a++;
}
printf("\n");
}
void print_binary_uc(unsigned char value)
{
unsigned char value2;
int i;
int len = sizeof(unsigned char) * 8;
for (i = len-1; i >= 0; i--)
{
value2 = value & ((unsigned char)1 << i);
printf("%d", value2 ? 1 : 0);
}
}
I'll bet your format specifier needs tweaking. The source is
sloppy-looking too. LS

How would you improve the source?
Whitespace. LS
Jan 14 '07 #5

Lane Straatman said:
>
"Richard Heathfield" <rj*@see.sig.invalidwrote in message
news:E6******************************@bt.com...
>Lane Straatman said:
>>I'll bet your format specifier needs tweaking. The source is
sloppy-looking too.

How would you improve the source?
Whitespace.
man indent if you care enough. Yes, whitespace matters, but it can be added
automatically and trivially to your exact requirements. I have my own
whitespace preferences, which not everybody shares, but "layout not in
accord with my preferences" and "sloppy-looking" are different concepts.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jan 14 '07 #6

"Digital Puer" <di**********@hotmail.comwrites:
I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????
There are lots of numbers that are consistent with this data. Any
number too small to have a non-zero decimal digit in the default
precision used by %f format may still be very much != 0.0 and > 0.0.

I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!

--
Ben.
Jan 14 '07 #7

Digital Puer wrote:
I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????

FWIW, I know the function to print the bit pattern of the double
is correct:

void print_binary_double(double value)
{
unsigned char *a;
a = (unsigned char *)&value;

int bytes = sizeof(double);
for (int i = 0; i < bytes; i++) {
print_binary_uc(*a);
printf(" ");
a++;
}
printf("\n");
}
void print_binary_uc(unsigned char value)
{
unsigned char value2;
int i;
int len = sizeof(unsigned char) * 8;
for (i = len-1; i >= 0; i--)
{
value2 = value & ((unsigned char)1 << i);
printf("%d", value2 ? 1 : 0);
}
}
You've got something cocked up. I get..

11001000 00010100 00010100 00001001 10001100 00000010 10111110 00000000
Exp = 1153 (131)
000 10000011
Man = .10100 00010100 00001001 10001100 00000010 10111110 00000000
-1.7080703671901993e+39

...from your 'bits' above.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Jan 14 '07 #8

Ben Bacarisse wrote:
"Digital Puer" <di**********@hotmail.comwrites:
>I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????

There are lots of numbers that are consistent with this data. Any
number too small to have a non-zero decimal digit in the default
precision used by %f format may still be very much != 0.0 and > 0.0.

I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!
No Ben. There is only one value consistent with the 'bits' data as
presented. It is a 64-bit double on x86 architecture and is unique.

This particular value, expressed by 'printf("%.16e", v)' is..

-1.7080703671901993e+39

...precisely.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Jan 14 '07 #9

On 13 Jan 2007 22:00:05 -0800, "Digital Puer"
<di**********@hotmail.com> wrote:
>I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????
Others have explained why very small non-zero values will print as
zero.
>
FWIW, I know the function to print the bit pattern of the double
is correct:
Only if "correct" means specific to your system and either C99 or
extensions allowed.
>
void print_binary_double(double value)
{
unsigned char *a;
a = (unsigned char *)&value;

int bytes = sizeof(double);
C89 does not permit declarations after statements.
for (int i = 0; i < bytes; i++) {
print_binary_uc(*a);
printf(" ");
a++;
}
printf("\n");
}
void print_binary_uc(unsigned char value)
{
unsigned char value2;
int i;
int len = sizeof(unsigned char) * 8;
Assumes 8-bit characters. Look up CHAR_BIT in your reference.
for (i = len-1; i >= 0; i--)
{
value2 = value & ((unsigned char)1 << i);
printf("%d", value2 ? 1 : 0);
}
}
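
For reference, a minimal sketch of the same printer with those two
portability nits addressed (CHAR_BIT from <limits.h> instead of a
hard-coded 8, and declarations before statements for C89); the function
names are kept the same as the OP's:

#include <stdio.h>
#include <limits.h>

void print_binary_uc(unsigned char value)
{
    int i;
    for (i = CHAR_BIT - 1; i >= 0; i--)
        putchar(((value >> i) & 1) ? '1' : '0');
}

void print_binary_double(double value)
{
    unsigned char *a = (unsigned char *)&value;
    size_t i;
    for (i = 0; i < sizeof value; i++) {
        print_binary_uc(a[i]);
        putchar(' ');
    }
    putchar('\n');
}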

Remove del for email
Jan 14 '07 #10

On Sun, 14 Jan 2007 01:12:07 -0500, "David T. Ashley" <dt*@e3ft.com>
wrote:
>"Digital Puer" <di**********@hotmail.comwrote in message
news:11**********************@v45g2000cwv.googleg roups.com...
>I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I haven't analyzed the bit pattern you provided, but the information you've
presented isn't consistent with "corruption".

Assume that the number is positive, but very small (let's say 10^(-30)).
Then no version of printf with a practical number of decimal places will
show anything but zero. Additionally, it would test as positive as you
indicated above.

In order to print this number, you'd need to use the "%e" rather than the "%f"
format specifier.

Most binary scientific notation (i.e. float, double) formats contain a
binary exponent, which I suspect in this case is a very negative number.
The fact that there are a lot of "1"s set in the number is not inconsistent
with a very small positive number.
The number of "1"s set in the number has almost nothing to do with the
magnitude of the number. It only indicates how many *different*
powers of 2 are used to represent the number (or exponent).

Take a 32-bit integer that requires 15 "1"s. Convert it to a 64-bit
double. The non-exponent portion will still have 15 "1"s (one may be
implied). Divide this by a large power of 2, avoiding underflow.
The quotient is now a very small non-zero value but the only change in
the result should be in the exponent. The 15 "1"s should still be
there and in the same positions.
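
A small sketch of that experiment (the starting value is arbitrary;
assumes a 64-bit IEEE double and C99's hex-float notation):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static void show(double d)
{
    uint64_t u;
    memcpy(&u, &d, sizeof u);   /* view the double's bits as an integer */
    printf("%.6e : %016llx\n", d, (unsigned long long)u);
}

int main(void)
{
    double v = 23405.0;         /* arbitrary value with several "1" bits */
    show(v);
    show(v / 0x1p300);          /* divide by 2^300: exact, no underflow */
    return 0;
}

Only the top bits (the exponent field) differ between the two hex
patterns; the significand bits come through unchanged.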
>
Try "%e", and post your results ...

Remove del for email
Jan 14 '07 #11

Joe Wright <jo********@comcast.net> writes:
Ben Bacarisse wrote:
>"Digital Puer" <di**********@hotmail.comwrites:
>>I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????
There are lots of numbers that are consistent with this data. Any
number too small to have a non-zero decimal digit in the default
precision used by %f format may still be very much != 0.0 and > 0.0.
I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!
No Ben. There is only one value consistent with the 'bits' data as
presented.
The OP included code that produces it, so the meaning of the byte
sequence is, indeed, unambiguous, but IEEE floats are usually shown
the "other way round".
It is a 64-bit double on x86 architecture and is unique.

This particular value, expressed by 'printf("%.16e", v)' is..

-1.7080703671901993e+39

..precisely.
This won't print as 0.000000 as reported. Of course the report may
have been wrong. Printing a double set to 4.273545594095197e-305
using the OP's code produces the output the OP gave. Your value is
the same byte sequence but in reverse. My value matches all of the OP's
reported data.
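
To illustrate, a minimal sketch (assuming a little-endian x86 and
64-bit IEEE doubles) of how the same eight bytes decode to a tiny
positive number in one order and to the -1.7e+39 value in the other:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* the OP's bytes, in the order his code printed them
       (lowest address first) */
    unsigned char fwd[8] = { 0xC8, 0x14, 0x14, 0x09, 0x8C, 0x02, 0xBE, 0x00 };
    unsigned char rev[8];
    double d;
    int i;

    for (i = 0; i < 8; i++)
        rev[i] = fwd[7 - i];

    memcpy(&d, fwd, sizeof d);
    printf("%.16e\n", d);   /* ~4.27e-305 */
    memcpy(&d, rev, sizeof d);
    printf("%.16e\n", d);   /* ~ -1.708e+39 */
    return 0;
}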

--
Ben.
Jan 14 '07 #12

Ben Bacarisse wrote:
Joe Wright <jo********@comcast.net> writes:
>Ben Bacarisse wrote:
>>"Digital Puer" <di**********@hotmail.comwrites:

I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????
There are lots of numbers that are consistent with this data. Any
number too small to have a non-zero decimal digit in the default
precision used by %f format may still be very much != 0.0 and > 0.0.
I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!
No Ben. There is only one value consistent with the 'bits' data as
presented.

The OP included code that produces it, so the meaning of the byte
sequence is, indeed, unambiguous, but IEEE floats are usually shown
the "other way round".
>It is a 64-bit double on x86 architecture and is unique.

This particular value, expressed by 'printf("%.16e", v)' is..

-1.7080703671901993e+39

..precisely.

This won't print as 0.000000 as reported. Of course the report may
have been wrong. Printing a double set to 4.273545594095197e-305
using the OP's code produces the output the OP gave. Your value is
the same byte sequence but in reverse. My value matches all of the OP's
reported data.
Indeed.

00000000 10111110 00000010 10000110 00001001 00010100 00010100 11001000
Exp = 11 (-1011)
100 00001101
Man = .11110 00000010 10000110 00001001 00010100 00010100 11001000
4.2735455940951970e-305

Sorry.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Jan 14 '07 #13

Joe Wright <jo********@comcast.net> writes:
Ben Bacarisse wrote:
>"Digital Puer" <di**********@hotmail.comwrites:
>>I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????
There are lots of numbers that are consistent with this data. Any
number too small to have a non-zero decimal digit in the default
precision used by %f format may still be very much != 0.0 and > 0.0.
I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!
No Ben. There is only one value consistent with the 'bits' data as
presented. It is a 64-bit double on x86 architecture and is unique.

This particular value, expressed by 'printf("%.16e", v)' is..

-1.7080703671901993e+39

..precisely.
Which would not appear as zero when printed with "%f".

The OP can easily figure this out using "%g" or "%e". The rest of us
can only guess what the OP actually means by the bit sequence he's
showing us (unless he also shows us the code for his "small function
to print the bits").

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jan 14 '07 #14

Keith Thompson <ks***@mib.org> writes:
Joe Wright <jo********@comcast.net> writes:
>Ben Bacarisse wrote:
>>"Digital Puer" <di**********@hotmail.comwrites:

I'm getting a very weird bit corruption in a double. I am on an Intel
Red Hat Linux box. uname -a returns:
Linux foo.com 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:33:58 EDT 2006
i686 i686 i386 GNU/Linux

I have a "double" variable that is set to 0.00. Some number
crunching then occurs, and later on, when I printf this variable
with printf("%f"), I am getting 0.00000.

However, when I compare
if (variable == 0.0), I get false.
and if (variable > 0.0), I get true.

I then ran a small function to print the bits of this variable and
found that its bit pattern is quite odd:

printf = 0.000000000000000
bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110
00000000

Any ideas??????
There are lots of numbers that are consistent with this data. Any
number too small to have a non-zero decimal digit in the default
precision used by %f format may still be very much != 0.0 and > 0.0.
I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!
No Ben. There is only one value consistent with the 'bits' data as
presented. It is a 64-bit double on x86 architecture and is unique.

This particular value, expressed by 'printf("%.16e", v)' is..

-1.7080703671901993e+39

..precisely.

Which would not appear as zero when printed with "%f".

The OP can easily figure this out using "%g" or "%e". The rest of us
can only guess what the OP actually means by the bit sequence he's
showing us (unless he also shows us the code for his "small function
to print the bits").
He did. That is how I knew what the value was.

--
Ben.
Jan 15 '07 #15

Ben Bacarisse <be********@bsb.me.uk> writes:
Keith Thompson <ks***@mib.org> writes:
[...]
>The OP can easily figure this out using "%g" or "%e". The rest of us
can only guess what the OP actually means by the bit sequence he's
showing us (unless he also shows us the code for his "small function
to print the bits").

He did. That is how I knew what the value was.
So he did. D'oh!

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jan 15 '07 #16

Ben Bacarisse wrote:
>>"Digital Puer" <di**********@hotmail.comwrites:

bits = 11001000 00010100 00010100 00001001 10001100 00000010 10111110 00000000

[...] 4.273545594095197e-305 [...] matches all of the OP's reported
data.
Not that it matters very much, but it only matches 7 of the 8 bytes.
The OP's 0x8C becomes 0x86 in your value. The value matching the OP's
bits is (approximately) 4.273554285789957e-305.

- Ernie http://home.comcast.net/~erniew
Jan 15 '07 #17

Joe Wright wrote:
00000000 10111110 00000010 10000110 00001001 00010100 00010100 11001000
Exp = 11 (-1011)
The exponent is -1012.

- Ernie http://home.comcast.net/~erniew
Jan 15 '07 #18

Ernie Wright wrote:
Joe Wright wrote:
>00000000 10111110 00000010 10000110 00001001 00010100 00010100 11001000
Exp = 11 (-1011)

The exponent is -1012.

- Ernie http://home.comcast.net/~erniew
It depends on where you think the binary point is. Consider..

01000000 00010100 00000000 00000000 00000000 00000000 00000000 00000000
Exp = 1025 (3)
000 00000011
Man = .10100 00000000 00000000 00000000 00000000 00000000 00000000
5.0000000000000000e+00

In my view of things, the range of the exponent is 0..2047 and the bias
of the exponent is 1022.
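
For comparison, a minimal sketch (assuming a 64-bit IEEE double stored
with the same byte order as a 64-bit integer) that pulls out the raw
sign, exponent, and fraction fields, so the two conventions can be laid
side by side:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double d = 5.0;
    uint64_t u;
    unsigned sign, expo;
    uint64_t frac;

    memcpy(&u, &d, sizeof u);
    sign = (unsigned)(u >> 63);
    expo = (unsigned)((u >> 52) & 0x7FF);   /* raw (biased) exponent field */
    frac = u & 0xFFFFFFFFFFFFFULL;          /* 52 stored fraction bits */

    /* IEEE 754 normal: value = (-1)^sign * 1.frac * 2^(expo - 1023) */
    printf("sign=%u raw=%u unbiased=%d frac=%013llx\n",
           sign, expo, (int)expo - 1023, (unsigned long long)frac);
    return 0;
}

For 5.0 this prints raw=1025 and unbiased=2 (5.0 = 1.01 binary x 2^2)
with the IEEE bias of 1023; treating the significand as the fraction
.101 instead gives exponent 3 and a bias of 1022, which is the view
above.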

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Jan 15 '07 #19

Ben Bacarisse wrote:
I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!

Thanks, everyone, for your help. When I printf with %.15e,
I get 4.273558636127927e-305. Looks like there is a problem
with that variable somewhere else. I didn't know about the %e
and %g conversion specifiers, so thanks for showing that.

Jan 15 '07 #20

In article <11**********************@s34g2000cwa.googlegroups.com>
"Digital Puer" <di**********@hotmail.com> writes:
Ben Bacarisse wrote:
I think your bit pattern represents a number on the order of
4.3e-305. The %g format will print it, as will (on my gcc) %.310f!

Thanks, everyone, for your help. When I printf with %.15e,
I get 4.273558636127927e-305. Looks like there is problem
with that variable somewhere else. I didn't know about the %e
and %g flags, so thanks for showing that.
Much more likely is that the answer is correct, but that the small
error crept in due to rounding.
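
For illustration, a classic minimal example of such a rounding residue,
with the same symptoms the OP reported:

#include <stdio.h>

int main(void)
{
    double d = 0.1 + 0.2 - 0.3;   /* algebraically zero, but not in
                                     binary floating point */
    printf("%f\n", d);            /* prints 0.000000 */
    printf("%e\n", d);            /* prints ~5.551115e-17 */
    printf("%d\n", d > 0.0);      /* prints 1 */
    return 0;
}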
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Jan 16 '07 #21


"Richard Heathfield" <rj*@see.sig.invalidwrote in message
news:55******************************@bt.com...
Lane Straatman said:
>>
"Richard Heathfield" <rj*@see.sig.invalidwrote in message
news:E6******************************@bt.com...
>>Lane Straatman said:

I'll bet your format specifier needs tweaking. The source is
sloppy-looking too.

How would you improve the source?
Whitespace.

man indent if you care enough. Yes, whitespace matters, but it can be
added
automatically and trivially to your exact requirements. I have my own
whitespace preferences, which not everybody shares, but "layout not in
accord with my preferences" and "sloppy-looking" are different concepts.
Keith pointed out that the format specifier was the fix, which is the first
thing I suspected, not knowing any of the specifiers involved. The only
reason I replied was to say I thought this thread is becoming hilarious and
interesting. I want to ask what the range and bias mean, but I don't want
to butt in while they're figuring it out. LS
Jan 16 '07 #22

Joe Wright wrote:
Ernie Wright wrote:
>Joe Wright wrote:
>>00000000 10111110 00000010 10000110 00001001 00010100 00010100 11001000
Exp = 11 (-1011)

The exponent is -1012.

It depends on where you think the binary point is.
If we want to talk about floating-point in a way that's consistent with
the existing standards, we aren't free to move it around.

There's a reason it's put in a specific place. In decimal,

1.0 x 10^0
0.1 x 10^1
0.01 x 10^2
10.0 x 10^-1

are obviously all the same number. In fact, there are an infinite
number of ways to write this number by moving the decimal point and
adjusting the exponent. In order to agree on a unique representation,
we need an additional constraint: we require that the significand (the
first part) satisfy

1 <= significand < base

This particular representation is said to be normalized.

If you ignore this convention, you'll have a much harder time following
the discussion of IEEE 754, particularly about things like denormals.
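
As an aside, C's own frexp() uses the fractional convention (it returns
a significand in [0.5, 1)), so the two views are easy to compare; a
minimal sketch:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double d = 5.0;
    int e;
    double m = frexp(d, &e);   /* m in [0.5, 1): 5.0 = 0.625 * 2^3 */
    printf("fractional: %g * 2^%d\n", m, e);
    printf("normalized: %g * 2^%d\n", m * 2.0, e - 1);   /* 1.25 * 2^2 */
    return 0;
}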

- Ernie http://home.comcast.net/~erniew
Jan 16 '07 #23

Ernie Wright wrote:
Joe Wright wrote:
>Ernie Wright wrote:
>>Joe Wright wrote:
00000000 10111110 00000010 10000110 00001001 00010100 00010100 11001000
Exp = 11 (-1011)

The exponent is -1012.

It depends on where you think the binary point is.

If we want to talk about floating-point in a way that's consistent with
the existing standards, we aren't free to move it around.

There's a reason it's put in a specific place. In decimal,

1.0 x 10^0
0.1 x 10^1
0.01 x 10^2
10.0 x 10^-1

are obviously all the same number. In fact, there are an infinite
number of ways to write this number by moving the decimal point and
adjusting the exponent. In order to agree on a unique representation,
we need an additional constraint: we require that the significand (the
first part) satisfy

1 <= significand < base

This particular representation is said to be normalized.

If you ignore this convention, you'll have a much harder time following
the discussion of IEEE 754, particularly about things like denormals.

- Ernie http://home.comcast.net/~erniew
I learned what I know about floating point at Philco Computers tech
school in 1963. We started with the proposition that the mantissa was a
fraction, always less than 1. Consider..

01000000 00010100 00000000 00000000 00000000 00000000 00000000 00000000
Exp = 1025 (3)
000 00000011
Man = .10100 00000000 00000000 00000000 00000000 00000000 00000000
5.0000000000000000e+00

...The mantissa must be .101 in my case and the exponent must be
3. I have seen some IEEE description of this value as 1.01 with an
exponent of 2. I can see their point. I can go either way.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Jan 16 '07 #24

Joe Wright wrote:
Ernie Wright wrote:
> 1 <= significand < base

This particular representation is said to be normalized.

I learned what I know about floating point at Philco Computers tech
school in 1963. We started with the proposition that the mantissa was
a fraction, always less than 1.
I'm pretty sure this is why the neologism "significand" was invented for
this component of the number, to avoid overloading "mantissa."

Pre-computer, as I'm sure you know, the mantissa was the fractional part
of a common (base-10) logarithm. My recollection from school is that
"mantissa" was also the name given to the coefficient (the left side) of
a number written in scientific notation.

If the scientific notation coefficient is normalized (it's between 1 and
the base), then the log of the coefficient is the mantissa of the log of
the number. For example, 1500 in scientific notation is

1.5 x 10^3

The coefficient is 1.5. log10( 1.5 ) = 0.1761. log10( 1500 ) = 3.1761.
The mantissa of log10( 1500 ) is equal to log10( 1.5 ).

So we have two definitions of "mantissa":

1. the fractional part of a logarithm
2. the coefficient of a number in scientific notation

These are clearly different things, but we can "align" the two meanings
by assuming that coefficients are normalized. The coefficient is then
the antilog of the definition-1 mantissa.

The significand in the binary representation of IEEE floating-point
numbers is *not* the definition-1 mantissa. It's the coefficient, and
for the sake of simplicity, it's conventionally the *normalized* one.
Conventionally, because it obviously doesn't make any difference
numerically if we move the radix point and adjust the exponent
accordingly.

For what it's worth, I learned about floating-point formats in 1985 or a
little before. That was the year I took a VAX assembly language class.

- Ernie http://home.comcast.net/~erniew
Jan 17 '07 #25

Ernie Wright <er****@comcast.net> writes:
Joe Wright wrote:
>I learned what I know about floating point at Philco Computers tech
school in 1963. We started with the proposition that the mantissa was
a fraction, always less than 1.

I'm pretty sure this is why the neologism "significand" was invented for
this component of the number, to avoid overloading "mantissa."

Pre-computer, as I'm sure you know, the mantissa was the fractional part
of a common (base-10) logarithm. My recollection from school is that
"mantissa" was also the name given to the coefficient (the left side) of
a number written in scientific notation.
According to Wikipedia, from the article on "significand":

Use of "mantissa"

main article: mantissa

The original word used in American English to describe the
coefficient of floating-point numbers in computer hardware, later
called the significand, seems to have been mantissa (see Burks et
al., below), and as of 2005 this usage remains common in
computing and among computer scientists. However, this use of
mantissa is discouraged by the IEEE floating-point standard
committee and by some professionals such as William Kahan and
Donald Knuth, because it conflicts with the pre-existing usage of
mantissa for the fractional part of a logarithm (see also common
logarithm).

The older meaning of mantissa is related to the IEEE's
significand in that the fractional part of a logarithm is the
logarithm of the significand for the same base, plus a constant
depending on the normalization. (The integer part of the
logarithm requires no such manipulation to relate to the
floating-point exponent.)

The logarithmic meaning of mantissa dates to the 18th century
(according to the OED), from its general English meaning (now
archaic) of "minor addition", which stemmed from the Latin word
for "makeweight" (which in turn may have come from
Etruscan). Significand is a 20th century neologism.
--
"If I've told you once, I've told you LLONG_MAX times not to
exaggerate."
--Jack Klein
Jan 17 '07 #26
