Bytes | Software Development & Data Engineering Community

problem with float

Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.

int main()
{
float f1;
printf("Enter a float value : \n");
scanf("%f",&f1);
printf ("float value entered is : %f\n",f1);
}

if i give 22.2 as input, i am getting 22.200001 as output.
if i give 22.22 as input, i am getting 22.219999 as output.

but i want 22.200000 and 22.220000 as output.

is there anyway to do it.


Thanks and Regards,
P.Balaji

P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.

May 23 '07 #1
ba********@gmail.com said:
Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.

int main()
{
float f1;
printf("Enter a float value : \n");
At this point, the behaviour of the program is undefined, because you
called a variadic function without a valid prototype in scope. To fix
this, add

#include <stdio.h>

at the top of your program.
scanf("%f",&f1);
How do you know this worked? You forgot to check the return value of
scanf.
printf ("float value entered is : %f\n",f1);
}

if i give 22.2 as input, i am getting 22.200001 as output.
if i give 22.22 as input, i am getting 22.219999 as output.
Neither of these values is capable of being stored precisely in a finite
number of bits, using pure binary representation. Therefore, some
imprecision is inevitable. The 22 part is easy, but 0.2 is impossible.

To demonstrate this, let's try to do the impossible.

Working in binary now...

0.1 (binary) is 1/2 which is too high.
0.01 (binary) is 1/4 which is too high.
0.001 (binary) is 1/8 which is too low.
0.0011 (binary) is 3/16 which is too low.
0.00111 (binary) is 7/32 which is too high.
0.001101 (binary) is 13/64 which is too high.
0.0011001 (binary) is 25/128 which is too low.
0.00110011 (binary) is 51/256 which is too low.
0.001100111 (binary) is 103/512 which is too high.
0.0011001101 (binary) is 205/1024 which is too high.
0.00110011001 (binary) is 409/2048 which is too low.
0.001100110011 (binary) is 819/4096 which is too low.
0.0011001100111 (binary) is 1639/8192 which is too high.
0.00110011001101 (binary) is 3277/16384 which is too high.
0.001100110011001 (binary) is 6553/32768 which is too low.
0.0011001100110011 (binary) is 13107/65536 which is too low.
__^^__^^__^^__^^

Are you seeing a pattern yet? Well, that pattern goes on forever.
but i want 22.200000 and 22.220000 as output.

is there anyway to do it.
Use double instead of float to increase the precision (%f is still fine
in your printf, but for double you'll need %lf for the scanf format
specifier), and use %.6f to restrict the output to six decimal places.
If you find that the numbers are coming out very very slightly low, add
0.0000001 to them before printing.
P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.
Why? float is mostly an anachronism. There is very little reason to use
it at all, and no reason whatsoever for learners to use it.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
May 23 '07 #2
ba********@gmail.com wrote:
Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.
[snip]
P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.
Hi, Bala

You /really/ need to read the article "What Every Computer Scientist Should
Know About Floating-Point Arithmetic" originally published in the March, 1991
issue of "Computing Surveys" (the journal of the ACM). It explains why you
never get absolutely accurate results from a floating-point number.

http://docs.sun.com/source/806-3568/ncg_goldberg.html

--
Lew Pitcher

Master Codewright & JOAT-in-training | Registered Linux User #112576
http://pitcher.digitalfreehold.ca/ | GPG public key available by request
---------- Slackware - Because I know what I'm doing. ------

May 23 '07 #3
Richard Heathfield wrote:
ba********@gmail.com said:
>And if i use double instead of float, i can get the desired result
but i want to use only float.

Why? float is mostly an anachronism. There is very little reason to
use it at all,
Actually, float is more popular than ever in some domains, where its
smaller precision and range compared to double are either irrelevant or
less important than space and speed. It's also much more widely used in
special-purpose hardware, e.g. graphics processors and the SIMD CPU
extensions.
and no reason whatsoever for learners to use it.
They should start out using double, I agree, but at some point they
should learn the tradeoffs. More importantly, they should be told that
using double is *not* a cure for finite precision.

- Ernie http://home.comcast.net/~erniew
May 23 '07 #4
Ernie Wright said:
Richard Heathfield wrote:
>ba********@gmail.com said:
>>but i want to use only float.

Why? float is mostly an anachronism. There is very little reason to
use it at all,

Actually, float is more popular than ever in some domains, where its
smaller precision and range compared to double are either irrelevant or
less important than space and speed. It's also much more widely used
in special-purpose hardware, e.g. graphics processors and the SIMD CPU
extensions.
Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.
>and no reason whatsoever for learners to use it.

They should start out using double, I agree, but at some point they
should learn the tradeoffs. More importantly, they should be told
that using double is *not* a cure for finite precision.
Quite so. I hope I made that clear in my original reply.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
May 23 '07 #5
Ernie Wright <er****@comcast.net> writes:
Richard Heathfield wrote:
>ba********@gmail.com said:
>>And if i use double instead of float, i can get the desired result
but i want to use only float.
Why? float is mostly an anachronism. There is very little reason to
use it at all,

Actually, float is more popular than ever in some domains, where its
smaller precision and range compared to double are either irrelevant or
less important than space and speed. It's also much more widely used in
special-purpose hardware, e.g. graphics processors and the SIMD CPU
extensions.
Using float rather than double will *probably* save space (though I've
used systems where both are 64 bits), but it won't necessarily give
you greater speed. It will on some systems, but on others float
calculations may be no faster than double calculations.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
May 23 '07 #6
In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>Using float rather than double will *probably* save space (though I've
used systems where both are 64 bits), but it won't necessarily give
you greater speed. It will on some systems, but on others float
calculations may be no faster than double calculations.
It may even be slower, especially if the processor does the
calculations themselves in double precision and then has to take
an extra step to step them down to single precision. (If you
are supposedly working with a lower precision number, then
it can be an error to carry around too much precision internally.)
--
"No one has the right to destroy another person's belief by
demanding empirical evidence." -- Ann Landers
May 23 '07 #7
Walter Roberson wrote On 05/23/07 16:05,:
In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:

>>Using float rather than double will *probably* save space (though I've
used systems where both are 64 bits), but it won't necessarily give
you greater speed. It will on some systems, but on others float
calculations may be no faster than double calculations.


It may even be slower, especially if the processor does the
calculations themselves in double precision and then has to take
an extra step to step them down to single precision. (If you
are supposedly working with a lower precision number, then
it can be an error to carry around too much precision internally.)
<off-topic reason="C don' need no steenkin' speeds">

Even on CPUs where float-to-internal-and-back takes
extra processing, float is often faster than double by
virtue of being smaller. Contemporary CPUs spend a large
amount of time sitting idle, waiting for memory to disgorge
some data to be crunched. If you can deliver twice as many
items per cache line fill or per page fault or whatever,
you'll reduce the amount of time the CPU wastes twiddling
its silicon thumbs. That will usually far outweigh a few
cycles spent converting between formats.

That's assuming good data locality, of course: Nice,
orderly matrix operations or FFTs or convolutions or that
kind of thing. If you're bouncing all over memory at the
mercy of a linked data structure and the whim of a random
number generator, the cache will thrache anyhow.

(Can I claim a coinage for "thrache?" Google finds 523
hits, but few seem computer-related.)

</off-topic>

--
Er*********@sun.com
May 23 '07 #8
Keith Thompson wrote:
Using float rather than double will *probably* save space (though I've
used systems where both are 64 bits), but it won't necessarily give
you greater speed. It will on some systems, but on others float
calculations may be no faster than double calculations.
Most of the speed gains are because more floats will fit in a given
amount of cache, or more can be read from disk with a given bandwidth,
and so on, and this is very broadly applicable. The actual calculations
tend to take exactly the same amount of time with modern hardware.

- Ernie http://home.comcast.net/~erniew
May 23 '07 #9
Richard Heathfield wrote:
Ernie Wright said:
>Richard Heathfield wrote:
>>float is mostly an anachronism.
>Actually, float is more popular than ever in some domains,

Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.
Well, that's a step away from anachronism, but I still wouldn't agree.
Doubles should be used by beginners and in "all else being equal" cases.
Beyond that, the choice depends on what you're doing. The use of floats
isn't and shouldn't be confined to esoteric domains.

"Always use double" has a cargo cult tinge to it. I'm not implying
that's what you said, just that the distance from it to what you said
seems uncomfortably short. It's often the advice for every numerical
ill, and it's usually wrong. Like casts, double precision can hide
problems temporarily, but it isn't a cure for them. Much better to
actually understand the numerical behavior of your code.

- Ernie http://home.comcast.net/~erniew
May 23 '07 #10
Ernie Wright said:

<snip>
"Always use double" has a cargo cult tinge to it. I'm not implying
that's what you said, just that the distance from it to what you said
seems uncomfortably short.
I prefer to think of it as a "lie-to-students" - that is, it's a
strategy that works perfectly well until they know enough about
programming in general and floating-point in particular to realise that
there are circumstances in which it *doesn't* work, by which time
they'll also know enough to deal with those circumstances appropriately
and effectively.

It's a bit like the claim that main always returns int. This is
*sufficiently* true to be a good "lie-to-students". By the time they
realise there are circumstances in which it isn't true, they will also
realise *why* they were led down that particular garden path,
appreciate that the reasoning of the "liar-to-students" was valid, and
understand when and why main sometimes doesn't return int (and indeed
may not even be the entry point to the program).

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
May 23 '07 #11
Lew Pitcher wrote:
ba********@gmail.com wrote:
>Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.
[snip]
>P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.

Hi, Bala

You /really/ need to read the article "What Every Computer Scientist Should
Know About Floating-Point Arithmetic" originally published in the March, 1991
issue of "Computing Surveys" (the journal of the ACM). It explains why you
never get absolutely accurate results from a floating-point number.

http://docs.sun.com/source/806-3568/ncg_goldberg.html
The "never" is overstated. The value 6.75 will be represented absolutely
accurately. So will 5.5, 1.25 etc., because those values have 'binary'
fraction parts: 1/2, 1/4, 1/8 etc.

01000000 11011000 00000000 00000000
Exp = 129 (3)
00000011
Man = .11011000 00000000 00000000
6.75000000e+00

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
May 23 '07 #12
On May 23, 10:56 pm, Ernie Wright <ern...@comcast.net> wrote:
Richard Heathfield wrote:
Ernie Wright said:
Richard Heathfield wrote:
>float is mostly an anachronism.
Actually, float is more popular than ever in some domains,
Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.

Well, that's a step away from anachronism, but I still wouldn't agree.
Doubles should be used by beginners and in "all else being equal" cases.
Beyond that, the choice depends on what you're doing. The use of floats
isn't and shouldn't be confined to esoteric domains.

"Always use double" has a cargo cult tinge to it. I'm not implying
that's what you said, just that the distance from it to what you said
seems uncomfortably short. It's often the advice for every numerical
ill, and it's usually wrong. Like casts, double precision can hide
problems temporarily, but it isn't a cure for them. Much better to
actually understand the numerical behavior of your code.
It's true that the difference in space taken up by a double and a
float may be tiny, but a huge number multiplied by a tiny number can
still end up as a large number - if you have very large arrays then
float is probably the sensible choice.
- Ernie http://home.comcast.net/~erniew

May 24 '07 #13
On Wed, 23 May 2007 17:27:34 +0000, Richard Heathfield
<rj*@see.sig.invalid> wrote:
>Ernie Wright said:
>Richard Heathfield wrote:
>>ba********@gmail.com said:

but i want to use only float.

Why? float is mostly an anachronism. There is very little reason to
use it at all,

Actually, float is more popular than ever in some domains, where its
smaller precision and range compared to double are either irrelevant or
less important than space and speed. It's also much more widely used
in special-purpose hardware, e.g. graphics processors and the SIMD CPU
extensions.

Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.
I understand why you're saying this, but surely the golden rule is to
use the sharpest tool for the job?

Another little reason I have to use float instead of double is I have
a variety of CPUs that only have a 32bit FPU and the double type is
all emulated.

Jim
May 24 '07 #14
Fr************@googlemail.com wrote:
On May 23, 10:56 pm, Ernie Wright <ern...@comcast.net> wrote:
>Richard Heathfield wrote:
>>Ernie Wright said:
Richard Heathfield wrote:
float is mostly an anachronism.
Actually, float is more popular than ever in some domains,
Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.
Well, that's a step away from anachronism, but I still wouldn't agree.
Doubles should be used by beginners and in "all else being equal" cases.
Beyond that, the choice depends on what you're doing. The use of floats
isn't and shouldn't be confined to esoteric domains.

"Always use double" has a cargo cult tinge to it. I'm not implying
that's what you said, just that the distance from it to what you said
seems uncomfortably short. It's often the advice for every numerical
ill, and it's usually wrong. Like casts, double precision can hide
problems temporarily, but it isn't a cure for them. Much better to
actually understand the numerical behavior of your code.

It's true that the difference in space taken up by a double and a
float may be tiny, but a huge number multiplied by a tiny number can
still end up as a large number - if you have very large arrays then
float is probably the sensible choice.
The difference in size of float and double objects is not 'tiny' and is
usually a factor of two.

What do you mean by huge and tiny numbers in terms of floating point?

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
May 24 '07 #15
Lew Pitcher wrote:
>
.... snip ...
>
You /really/ need to read the article "What Every Computer
Scientist Should Know About Floating-Point Arithmetic" originally
published in the March, 1991 issue of "Computing Surveys" (the
journal of the ACM). It explains why you never get absolutely
accurate results from a floating-point number.

http://docs.sun.com/source/806-3568/ncg_goldberg.html
That statement is wrong (the 'never'). Most systems will handle
some range of integers with perfect accuracy.

--
If you want to post a followup via groups.google.com, ensure
you quote enough for the article to make sense. Google is only
an interface to Usenet; it's not Usenet itself. Don't assume
your readers can, or ever will, see any previous articles.
More details at: <http://cfaj.freeshell.org/google/>

--
Posted via a free Usenet account from http://www.teranews.com

May 24 '07 #16
ba********@gmail.com wrote:
>
When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.
floats (and doubles) are not exact. They are closest
approximations. They can usually represent exactly only some
limited range of integers, and a few other specific values.

You can round the printed result to some specified level by
specification in printf.

--
If you want to post a followup via groups.google.com, ensure
you quote enough for the article to make sense. Google is only
an interface to Usenet; it's not Usenet itself. Don't assume
your readers can, or ever will, see any previous articles.
More details at: <http://cfaj.freeshell.org/google/>

--
Posted via a free Usenet account from http://www.teranews.com

May 24 '07 #17
Eric Sosman wrote:
>
.... snip ...
>
(Can I claim a coinage for "thrache?" Google finds 523
hits, but few seem computer-related.)
You can also claim 'steenkin' if you wish. At least as far as I am
concerned. Enjoy.

--
If you want to post a followup via groups.google.com, ensure
you quote enough for the article to make sense. Google is only
an interface to Usenet; it's not Usenet itself. Don't assume
your readers can, or ever will, see any previous articles.
More details at: <http://cfaj.freeshell.org/google/>

--
Posted via a free Usenet account from http://www.teranews.com

May 24 '07 #18
>>>>> "RH" == Richard Heathfield <rj*@see.sig.invalid> writes:

RH> I prefer to think of it as a "lie-to-students" - that is, it's
RH> a strategy that works perfectly well until they know enough
RH> about programming in general and floating-point in particular
RH> to realise that there are circumstances in which it *doesn't*
RH> work, by which time they'll also know enough to deal with
RH> those circumstances appropriately and effectively.

When I engage in a lie-to-students this way -- and for several years,
I taught music theory, which is founded on several major
lies-to-students[1], I made a point of mentioning that it was, in
fact, a lie-to-students. They seemed to appreciate the honesty, and
when they came up with a counterexample that didn't fit the lie, it
was much easier to say "remember that I told you I was going to lie to
you about some things? That's one of them, and you'll probably be
talking about how it *really* works this time next year" than "well,
yes, I know that Bach wrote that, and no, I'm not saying he's wrong,
but I'd have to fail him if he turned in an exercise like that!"

(Or, famously, "When *you* are Beethoven, then you, too, can write
like that.")

Charlton
[1] OT, but at least not about C++ or Windows: The biggest lie is that
there are pitch combinations that are objectively consonant and
dissonant. If you limit your study only to European and American
music from AD 1000 to the present, you can see that this is not the
case: consonance and dissonance are elements of style. But you have
to pick a starting place, and if you hedge too much, you don't get
anywhere and the students don't have the grounding to understand it
anyway, and so freshman music theory students are told that some pitch
combinations are objectively consonant and dissonant.

--
Charlton Wilbur
cw*****@chromatico.net
May 24 '07 #19
ba********@gmail.com writes:
When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.
The floating point numbers are a subset of the real numbers.

This subset contains all integers whose absolute value is small.

As it is a subset, it should be evident that the result of an operation on
one or two floating point numbers is usually not a floating point number,
so the exact result has to be rounded to stay in the subset. So when you
compute with floating point numbers, most operations introduce a rounding
error, and the combination of these errors can become significant.

Usually the subset is based on a binary representation (*) and so most
non-integer numbers represented in decimal notation are not in the set, even
such a simple number as 0.1 (this is similar to the fact that simple numbers
like 1/3 or 1/7 have no finite representation in a decimal notation). So
when you use a decimal notation to give a floating point number, you can't
expect that a rounding error won't be introduced during the conversion to
floating point.

(*) Some are pushing for the use of decimal floating point, which would
solve this second problem but increase the previous one and also increase
the difficulty of the analysis needed to get precise estimations of the
rounding error. IBM just announced a processor with hardware support for
decimal floating point. The C committee has a Technical Report in progress
describing extensions to handle them.

--
Jean-Marc
May 24 '07 #20

"Charlton Wilbur" <cw*****@chromatico.netha scritto nel messaggio
news:87************@mithril.chromatico.net...
[1] OT, but at least not about C++ or Windows: The biggest lie is that
there are pitch combinations that are objectively consonant and
dissonant. If you limit your study only to European and American
music from AD 1000 to the present, you can see that this is not the
case: consonance and dissonance are elements of style. But you have
to pick a starting place, and if you hedge too much, you don't get
anywhere and the students don't have the grounding to understand it
anyway, and so freshman music theory students are told that some pitch
combinations are objectively consonant and dissonant.
<ot>
It all depends on what you mean by 'objectively': a fifth
(approximately under 12-TET) a 3:2 ratio, meaning that the fourth
harmonic of C overlaps with the third harmonic of G (counting the
fundamental frequency as the zeroth harmonic and so on).
Instead, the semitone is a 1 : 1.0595 ratio, meaning that if you
play C and C#[1] together you will hear beatings. Obviously,
concluding that a fifth is always 'better' than a semitone is a
non-sequitur.
</ot>

[1] Whoops... I caused it to be no longer a "not-about-Microsoft
OT"... :-)
May 24 '07 #21
>>>>> "A" == Army1987 <pl********@for.it> writes:

A> <ot>It all depends on what you mean by 'objectively': a fifth
A> is (approximately under 12-TET) a 3:2 ratio, meaning that the
A> fourth harmonic of C overlaps with the third harmonic of G
A> (counting the fundamental frequency as the zeroth harmonic and
A> so on). Instead, the semitone is a 1 : 1.0595 ratio, meaning
A> that if you play C and C#[1] together you will hear
A> beatings. Obviously, concluding that a fifth is always 'better'
A> than a semitone is a non-sequitur. </ot>

Ah, but you're defining "consonance" to mean "has fewer beats when the
interval is played." This would make consonance and dissonance
dependent on tuning. But this is not so, because when you play a
12-tet fifth, there are unpleasant beats between the upper note and
the third harmonic of the lower note. (This is more apparent in the
12-tet major third, where the difference between the just interval and
the 12-tet interval is wider.) And yet the 12-tet perfect fifth and
12-tet major third are still both considered consonances.

Alternately, look at the Notre Dame school, or the ars subtilior,
where the perfect fourth is considered a consonance and thirds
dissonances, and where consonance and dissonance is reckoned solely in
relation to the bass; after the time it takes to accustom yourself to
the style, a chord of fourths and fifths where each is in a consonant
relationship to the bass can sound perfectly natural as a
resting-point, even though the major second or seventh found in the
middle would be considered dissonant and unstable in later styles.

Rationalizing the rudiments of Western music has been attempted and
failed by better minds than you or I -- Schenker, Reti, Helmholtz,
Riemann (not the mathematician). Trying to derive the major scale
from the overtone series is like trying to derive English grammar from
anatomy, or C syntax (back on topic! hah!) from the properties of silicon.

Charlton
--
Charlton Wilbur
cw*****@chromatico.net
May 24 '07 #22
Jean-Marc Bourguet wrote, On 24/05/07 20:51:

<snip>
(*) Some are pushing for the use of decimal floating point, which would
solve this second problem but increase the previous one and also increase
the difficulty of the analysis needed to get precise estimations of the
rounding error. IBM just announced a processor with hardware support for
decimal floating point. The C committee has a Technical Report in progress
describing extensions to handle them.
The proposal for decimal floating point is not intended to solve the
general problems of floating point. To a large extent it is designed to
solve the problems of financial software, where the algorithm is often
defined in terms of how many decimal places you have to work to, what
rounding you do etc. Having decimal floating point would allow the
company that employs me to significantly simplify some code in the
application that makes most of the company's money whilst also reducing
the possibility of error.
--
Flash Gordon
May 24 '07 #23
On 24 May 2007 21:51:08 +0200, Jean-Marc Bourguet <jm@bourguet.org>
wrote:
>ba********@gmail.com writes:
>When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.

The floating point numbers are a subset of the real numbers.
In fact, there are infinitely many real numbers between each consecutive pair
of binary floating point numbers, so it's a very tiny subset ;-)

Jim
May 25 '07 #24
On May 24, 10:40 am, Richard Heathfield <r...@see.sig.invalid> wrote:
Ernie Wright said:
"Always use double" has a cargo cult tinge to it.

I prefer to think of it as a "lie-to-students" - that is, it's a
strategy that works perfectly well until they know enough about
programming in general and floating-point in particular to realise that
there are circumstances in which it *doesn't* work
I'm not a fan of lie-to-students, especially in a
case like this where it would be very simple to
say something like:
"Always use double because it usually turns out
to be just as fast but you get extra accuracy"
and then they can file away that justification,
and more importantly, it makes them aware that
there may be cases where you use float (even if
they don't know what those cases are at the moment),
and the inquisitive students can follow it up later.

In school, I was taught electricity using the
"dump-truck" analogy: each electron is like
a little truck full of energy, and when it gets
to the bulb it dumps its load and we see the light,
then it goes back into the battery and gets a new
load.

This might explain an utterly simple circuit but
it breaks down very quickly. For example, a circuit
with two bulbs of differing resistances. I was
able to compute the right answer to the current
flow in each branch by using the formulae provided,
but I could never figure out how each dump-truck
decided which way to turn at the intersection and
why they decided that a certain ratio of them would
go one way when they didn't even know what was ahead.
In fact it was not until long after I had finished
my physics minor at university that I read some
websites with proper explanations and I was able
to finally disabuse myself of the ingrained misconception!

A similar thing applies for other people with air
flight; it is woefully common to find people who
think planes fly because the air going over the
wing has to go faster "to catch up with" the air
under the wing.
(I've even seen this printed in a kids education
section of a newspaper).

May 25 '07 #25
Old Wolf said:

<snip>
In school, I was taught electricity using the
"dump-truck" analogy: each electron is like
a little truck full of energy, and when it gets
to the bulb it dumps its load and we see the light,
then it goes back into the battery and gets a new
load.

This might explain an utterly simple circuit but
it breaks down very quickly. For example, a circuit
with two bulbs of differing resistances. I was
able to compute the right answer to the current
flow in each branch by using the formulae provided,
but I could never figure out how each dump-truck
decided which way to turn at the intersection and
why they decided that a certain ratio of them would
go one way when they didn't even know what was ahead.
Easy - they just pick the shortest queue!
A similar thing applies for other people with air
flight; it is woefully common to find people who
think planes fly because the air going over the
wing has to go faster "to catch up with" the air
under the wing.
Didn't they tell you at school how planes really fly? It's all done with
string.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
May 25 '07 #26
Richard Heathfield wrote:
[...]

Didn't they tell you at school how planes really fly? It's all done with
string.
I hear that the new crop of no-frills airlines charge
extra for terminating characters.

(On a few flights I've encountered characters I wouldn't
mind seeing terminated.)

--
Eric Sosman
es*****@acm-dot-org.invalid
May 25 '07 #27
JimS wrote:
On 24 May 2007 21:51:08 +0200, Jean-Marc Bourguet <jm@bourguet.org>
wrote:
>ba********@gmail.com writes:
>>When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.
The floating point numbers are a subset of the real numbers.

In fact, there are infinitely many real numbers between each consecutive pair
of binary floating point numbers, so it's a very tiny subset ;-)
In fact, it's worse than that:

There are as many real numbers between any two floating point numbers as
there are real numbers (this is the level of infinity known as "aleph
1", if I've remembered my number theory correctly).

There are even more integers than there are floating point numbers, as
the set of floating point numbers is a finite set, but the number of
integers is not as 'large' as the number of real numbers (the set of
real numbers is a 'bigger infinity' than the set of integers).

May 25 '07 #28
Matt van de Werken wrote:
There are even more integers than there are floating point numbers, as
the set of floating point numbers is a finite set,
That's an interesting restriction you've added there.

--
"How am I to understand if you won't teach me?" - Trippa, /Falling/

Hewlett-Packard Limited registered office: Cain Road, Bracknell,
registered no: 690597 England Berks RG12 1HN

May 25 '07 #29
>>>>> "CD" == Chris Dollin <ch**********@hp.com> writes:

CD> Matt van de Werken wrote:
>> There are even more integers than there are floating point
>> numbers, as the set of floating point numbers is a finite set,
CD> That's an interesting restriction you've added there.

I think "floating point numbers" there is implicitly "floating point
numbers representable in a particular floating point format," because
that makes the statement make sense.

Charlton

--
Charlton Wilbur
cw*****@chromatico.net
May 25 '07 #30
Matt van de Werken wrote:
JimS wrote:
... snip ...
>>
In fact, there are infinitely many real numbers between each consecutive
pair of binary floating point numbers, so it's a very tiny subset ;-)

In fact, it's worse than that:

There are as many real numbers between any two floating point
numbers as there are real numbers (this is the level of infinity
known as "aleph 1", if I've remembered my number theory correctly).

There are even more integers than there are floating point numbers,
as the set of floating point numbers is a finite set, but the number
of integers is not as 'large' as the number of real numbers (the set
of real numbers is a 'bigger infinity' than the set of integers).
No, I think you've misremembered it. IIRC Aleph0 is a countable
infinity, i.e. each value can be put in 1 to 1 correspondence with
an integer. Aleph1 is not countable.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

May 25 '07 #31
CBFalconer wrote On 05/25/07 11:25:
Matt van de Werken wrote:
>>JimS wrote:
... snip ...
>>>In fact, there are infinitely many real numbers between each consecutive
pair of binary floating point numbers, so it's a very tiny subset ;-)

In fact, it's worse than that:

There are as many real numbers between any two floating point
numbers as there are real numbers (this is the level of infinity
known as "aleph 1", if I've remembered my number theory correctly).

There are even more integers than there are floating point numbers,
as the set of floating point numbers is a finite set, but the number
of integers is not as 'large' as the number of real numbers (the set
of real numbers is a 'bigger infinity' than the set of integers).

No, I think you've misremembered it. IIRC Aleph0 is a countable
infinity, i.e. each value can be put in 1 to 1 correspondence with
an integer. Aleph1 is not countable.
He didn't claim that Aleph1 is countable; in fact, he
characterized it as a "bigger infinity" than the number of
integers and hence UNcountable.

He also claimed (with a bit of a disclaimer) that Aleph1
is the number of real numbers, which is in a sense neither
true nor false. Gödel and Cohen, separately, showed that the
Continuum Hypothesis can neither be proved nor disproved in
Zermelo-Fraenkel systems; you can construct a consistent
mathematics by taking either the truth or the falsity of the
C.H. as an axiom (assuming Z-F itself is consistent). It's
analogous to Euclid's Parallel Postulate: you can't prove or
disprove the P.P. from Euclid's other axioms, and you can
generate self-consistent geometries by either affirming or
denying it.

Thus, whether Aleph1 is the smallest infinity greater
than Aleph0 or whether there exist other infinities between
them is a matter of personal taste.

--
Er*********@sun.com
May 25 '07 #32
Matt van de Werken <ma**@exemail.com.au> wrote:
In fact, it's worse than that:
There are as many real numbers between any two floating point numbers as
there are real numbers
ok
(this is the level of infinity known as "aleph
1", if I've remembered my number theory correctly).
Not quite. The number of reals is written as "c" and this is not
the same thing as aleph_1. Google for "continuum hypothesis" for
details. In particular, see the Wikipedia article near the top,
where (to stay tenuously on-topic :-) "c" is written as "pow(2,aleph_0)".
--
pa at panix dot com
May 25 '07 #33
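In the notation pa refers to, the standard definitions (textbook set theory, stated here only to pin down the symbols) read:

```latex
% \aleph_1 is by definition the least cardinal strictly greater than \aleph_0:
\aleph_1 = \min \{\, \kappa : \kappa > \aleph_0 \,\}

% The cardinality of the reals equals that of the power set of the naturals:
\mathfrak{c} = |\mathbb{R}| = 2^{\aleph_0}

% The Continuum Hypothesis asserts the two coincide, and is independent of ZFC:
\text{CH}: \quad 2^{\aleph_0} = \aleph_1
```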
On 2007-05-25, in comp.lang.c, Eric Sosman wrote:
He also claimed (with a bit of a disclaimer) that Aleph1
is the number of real numbers, which is in a sense neither
true nor false. Gödel and Cohen, separately, showed that the
Continuum Hypothesis can neither be proved nor disproved in
Zermelo-Fraenkel systems; you can construct a consistent
mathematics by taking either the truth or the falsity of the
C.H. as an axiom (assuming Z-F itself is consistent).
The statement "Zermelo-Fraenkel set theory is consistent" is neither
provable nor disprovable in Zermelo-Fraenkel set theory. Does this mean that
"Zermelo-Fraenkel set theory is consistent", which, when spelled out, states
that there is no sequence of strings satisfying certain syntactic criteria,
is neither true nor false? Can we "construct a consistent mathematics",
whatever that means, by taking either the truth or the falsity of the claim
that Zermelo-Fraenkel set theory is consistent as an axiom?
Thus, whether Aleph1 is the smallest infinity greater
than Aleph0 or whether there exist other infinities between
them is a matter of personal taste.
It is provable without the continuum hypothesis that aleph-1 is the smallest
cardinal greater than aleph-0. The cardinality of the continuum might be
bigger than aleph-1, or, in the absence of choice, incomparable with aleph-1.

As to matter of taste, is it similarly a matter of taste whether there
exists an infinite set or not? After all, it is neither provable nor
refutable from the other axioms that there exists an infinite set.

--
Aatu Koskensilta (aa**************@xortec.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus
May 26 '07 #34
CBFalconer wrote:
Matt van de Werken wrote:
>JimS wrote:
... snip ...
>>In fact, there are infinitely many real numbers between each consecutive
pair of binary floating point numbers, so it's a very tiny subset ;-)
In fact, it's worse than that:

There are as many real numbers between any two floating point
numbers as there are real numbers (this is the level of infinity
known as "aleph 1", if I've remembered my number theory correctly).

There are even more integers than there are floating point numbers,
as the set of floating point numbers is a finite set, but the number
of integers is not as 'large' as the number of real numbers (the set
of real numbers is a 'bigger infinity' than the set of integers).

No, I think you've misremembered it. IIRC Aleph0 is a countable
infinity, i.e. each value can be put in 1 to 1 correspondence with
an integer. Aleph1 is not countable.
Hey, it's thinking about things like this that got
Cantor locked in the rubber room. ;-)

--
+----------------------------------------------------------------+
| Charles and Francis Richmond richmond at plano dot net |
+----------------------------------------------------------------+
May 27 '07 #35
"Eric Sosman" <Er*********@sun.com> wrote in message
news:1180116022.210019@news1nwk...
>
Thus, whether Aleph1 is the smallest infinity greater
than Aleph0 or whether there exist other infinities between
them is a matter of personal taste.
With the definition usually used in English, aleph1 is *defined*
as the smallest infinity greater than aleph0, 2^(aleph0) is called
beth1 or c, and the continuum hypothesis is phrased as
beth1 = aleph1, or as c = aleph1.

(But I've seen Italian and Spanish texts using your definition.)
Jun 1 '07 #36
