float? double?

hi all...

I've read a bit about the difference between the float and double
data types... but in the real world, which is best? When should we
use float, and when double?

thanks
Erick

Aug 18 '06
"Clark S. Cox III" <cl*******@gmail.comwrites:
CBFalconer wrote:
>Keith Thompson wrote:
.... snip ...
>><OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>

Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.

It would be an even better argument for using 1600-03-01.
I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)

Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.
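For what it's worth, the whole Gregorian rule is a one-liner in C (a
minimal sketch; nothing about epochs assumed):

    #include <stdbool.h>

    /* Gregorian rule: every 4th year is a leap year, except centuries,
       except centuries divisible by 400. */
    static bool is_leap(int year)
    {
        return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
    }

is_leap(1900) is false, is_leap(2000) is true.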

This is all somewhat off-topic, of course, since the C standard
doesn't specify either the epoch or the resolution, or even the
representation of time_t. But the most widespread implementation is
as a signed integer representing seconds since 1970-01-01 00:00:00
GMT. If a future C standard were to tighten the specification, that
would not be an unreasonable basis for doing so.
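Which is why portable code is better off treating time_t as opaque; a
minimal sketch (assuming only C99 for the long long cast, nothing about
the representation):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t start = time(NULL);
        time_t later = time(NULL);

        /* difftime() is defined whatever time_t's representation is;
           subtracting time_t values directly only works if they are
           arithmetic seconds, which the standard does not promise. */
        printf("elapsed: %f seconds\n", difftime(later, start));

        /* To print a time_t at all, convert explicitly. */
        printf("raw value: %lld (implementation-defined units)\n",
               (long long)later);
        return 0;
    }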

David R. Tribble has a proposal for time support in C 200X at
<http://david.tribble.com/text/c0xlongtime.html>. It's been discussed
in comp.std.c (and any further discussion of it should probably take
place there).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 21 '06 #51
Eric Sosman wrote:
>
CBFalconer wrote On 08/21/06 04:36,:
>Keith Thompson wrote:

... snip ...
>><OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>

Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.

If concerns about leap year are to govern the choice,
the zero point should be xxxx-03-01 00:00:00, where xxxx
is a multiple of 400.

However, leap year calculations are not so important
that they should govern such a choice. Everyone who's
anyone already knows that the Right Thing To Do is to
define the zero point as 1858-11-17 00:00:00.
Hear, hear. The Beginning of Time for DEC VMS. When was the End of Time? :-)

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Aug 21 '06 #52
Eric Sosman wrote:
>
av wrote On 08/21/06 13:38,:
>On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:
>>>
You can often get away with much worse. 25 years ago I had a
system with 24 bit floats, which yielded 4.8 digits precision, but
was fast (for its day) and rounded properly. This was quite

I think it is not useful to round numbers; the only place where
rounding numbers seems to be an issue is at input/output.

double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?
av is a known troll, and firmly in my PLONK list.

--
Chuck F (cb********@yahoo.com) (cb********@maineline.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE maineline address!
Aug 21 '06 #53
>>There's hardly any application where an accuracy of 1 in 16 million is not
>>acceptable.

Two common exceptions to this are currency and time.

Accountants expect down-to-the-penny (or whatever the smallest unit
of currency is) accuracy no matter what. And governments spend
trillions of dollars a year.
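A quick sketch of why (amounts invented; IEEE 754 single precision
assumed): add ten cents a million times in a float and in integer
cents and compare.

    #include <stdio.h>

    int main(void)
    {
        float dollars = 0.0f;   /* 0.10 has no exact binary representation */
        long  cents   = 0;      /* exact integer arithmetic                */

        for (long i = 0; i < 1000000L; i++) {
            dollars += 0.10f;
            cents   += 10;
        }

        printf("float total:   %.2f\n", dollars);   /* drifts visibly
                                                       away from 100000.00 */
        printf("integer cents: %ld.%02ld\n", cents / 100, cents % 100);
        return 0;
    }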

If your time base is in the year 1AD, and you subtract two current-day
times (stored in floats) to get an interval, you can get rounding
error in excess of an hour. Even for POSIX time (epoch 1 Jan 1970),
you still have rounding errors in excess of 1 minute.
>>For instance if you are machining space shuttle parts it is
unlikely they go to a tolerance of more than about 1 in 10000.
The real problem is that errors can propagate. If you multiply by a million,
suddenly you only have an accuracy of 1 in 16.

If you had a precision of 1 in 16 million, and you multiply by a
million (an exact number), you still have 1 in 16 million. You
lose precision when you SUBTRACT nearly-equal numbers. If you
subtract two POSIX times about 1.1 billion seconds past the epoch,
but store these in floats before the subtraction, your result for
the difference is only accurate to within a minute. This stinks if
the real difference is supposed to be 5 seconds.
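To put numbers on that (a sketch; the two timestamps are invented,
about 1.1 billion seconds after the epoch and 5 seconds apart, IEEE
754 float assumed):

    #include <stdio.h>

    int main(void)
    {
        long t1 = 1156103121L;   /* two invented POSIX timestamps, */
        long t2 = 1156103126L;   /* really 5 seconds apart         */

        float f1 = (float)t1;    /* a 24-bit significand spaces floats  */
        float f2 = (float)t2;    /* near 1.1e9 a full 128 seconds apart */

        printf("exact difference: %ld seconds\n", t2 - t1);
        printf("float difference: %g seconds\n", (double)(f2 - f1));
        return 0;
    }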

You're making all this up, aren't you? Posix time today is somewhere
around 1,156,103,121 seconds since the Epoch. We are therefore a little
over half way to the end of Posix time in early 2038. Total Posix
seconds are 2^31 or 2,147,483,648 seconds. I would not expect to treat
such a number with a lowly float with only a 24-bit mantissa.
You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.
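A short demonstration of that trap (IEEE 754 float assumed; 0.25 is
exact, and 1.0e+38f is the nearest float to 1.0e+38):

    #include <stdio.h>

    int main(void)
    {
        float big   = 1.0e+38f;
        float small = 0.25f;
        float sum   = big + small;   /* 0.25 is far below big's ulp, so
                                        the addition changes nothing    */

        printf("sum == big? %s\n", (sum == big) ? "yes" : "no");
        return 0;
    }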

Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.
>I do point
out that double has a 53-bit mantissa and is very much up to the task.
Until, that is, someone decides to represent time as the number of
picoseconds since the beginning of the universe (and it's his problem,
not mine, to prove that he accurately knows when that is) in 512 bits,
at which point 53 bits will not be up to the task.
>Aside: Why was time_t defined as a 32-bit signed integer? What was
supposed to happen when time_t reaches LONG_MAX + 1? Why was there no
time to be considered before the Epoch? Arrogance of young men, I assume.
One purpose of storing the time is file time stamps. It isn't that
surprising to not have files older than the creation of the system
on the system. time_t was really not designed to store historical
or genealogical dates (which are often dates with no times or time
zones attached: Christmas is on Dec. 25 regardless of time zone).

Why is the tm_year member of a struct tm not defined as long long
or intmax_t? And asctime() has a very bad Y10K problem (and Y0K
problem), even if you don't try to use it with time_t.
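For the curious, the Y10K problem is built right into the sample
asctime() shown in the C standard, which formats into a fixed
26-character buffer; a paraphrase of it (not a drop-in replacement):

    #include <stdio.h>
    #include <time.h>

    /* Paraphrase of the C standard's sample asctime(), to show where
       the Y10K (and Y0K) problem lives. */
    char *asctime_sketch(const struct tm *timeptr)
    {
        static const char wday_name[7][4] =
            { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };
        static const char mon_name[12][4] =
            { "Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
        static char result[26];   /* "Www Mmm dd hh:mm:ss yyyy\n" + NUL */

        /* The trailing %d leaves room for only four year digits: a year
           of 10000 or more (tm_year >= 8100) overruns the buffer, and a
           negative year can as well. */
        sprintf(result, "%.3s %.3s%3d %.2d:%.2d:%.2d %d\n",
                wday_name[timeptr->tm_wday], mon_name[timeptr->tm_mon],
                timeptr->tm_mday, timeptr->tm_hour,
                timeptr->tm_min,  timeptr->tm_sec,
                1900 + timeptr->tm_year);
        return result;
    }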
>The double type would have been a much better choice for time_t.
But it probably gives the most precision for a time you know least.
>We must try to remain clear ourselves about the difference between
accuracy and precision. It is the representation that offers the
precision, it is our (the programmer's) calculations that may provide
accuracy.
Aug 22 '06 #54
go***********@burditt.org (Gordon Burditt) writes:
>>>There's hardly any application where an accuracy of 1 in 16
million is not acceptable.
The above was posted by "Malcolm" <re*******@btinternet.com>.
>>Two common exceptions to this are currency and time.

Accountants expect down-to-the-penny (or whatever the smallest unit
of currency is) accuracy no matter what. And governments spend
trillions of dollars a year.

If your time base is in the year 1AD, and you subtract two current-day
times (stored in floats) to get an interval, you can get rounding
error in excess of an hour. Even for POSIX time (epoch 1 Jan 1970),
you still have rounding errors in excess of 1 minute.
The above was posted by go***********@burditt.org (Gordon Burditt).
>>>For instance if you are machining space shuttle parts it is
unlikely they go to a tolerance of more than about 1 in 10000.
The real problem is that errors can propagate. If you multiply by
a million, suddenly you only have an accuracy of 1 in 16.

If you had a precision of 1 in 16 million, and you multiply by a
million (an exact number), you still have 1 in 16 million. You
lose precision when you SUBTRACT nearly-equal numbers. If you
subtract two POSIX times about 1.1 billion seconds past the epoch,
but store these in floats before the subtraction, your result for
the difference is only accurate to within a minute. This stinks if
the real difference is supposed to be 5 seconds.

You're making all this up, aren't you? Posix time today is somewhere
around 1,156,103,121 seconds since the Epoch. We are therefore a little
over half way to the end of Posix time in early 2038. Total Posix
seconds are 2^31 or 2,147,483,648 seconds. I would not expect to treat
such a number with a lowly float with only a 24-bit mantissa.
The above was posted by Joe Wright <jo********@comcast.net>.
You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.

Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.
The above was posted by go***********@burditt.org (Gordon Burditt).
You can tell this by the attribution line at the top of this post.

Gordon, please either stop snipping attribution lines, or stop posting
here. You have been told many many times how rude this is. Your
excuses for doing so are, at best, lame.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 22 '06 #55
Keith Thompson wrote:
"Clark S. Cox III" <cl*******@gmail.comwrites:
>CBFalconer wrote:
>>Keith Thompson wrote:
.... snip ...
<OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>
Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.
It would be an even better argument for using 1600-03-01.

I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)

Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.
My point was just that. Choosing an epoch to simplify leap year
calculations leads to absurd results. Maybe I should have inserted an
emoticon.

--
Clark S. Cox III
cl*******@gmail.com
Aug 22 '06 #56
av
On 21 Aug 2006 11:09:36 -0700, William Hughes wrote:
The same; see the answer to Sosman.
Aug 22 '06 #57
av
On Mon, 21 Aug 2006 14:14:07 -0400, Eric Sosman wrote:
>av wrote On 08/21/06 13:38,:
>On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:
>>>
You can often get away with much worse. 25 years ago I had a
system with 24 bit floats, which yielded 4.8 digits precision, but
was fast (for its day) and rounded properly. This was quite

I think it is not useful to round numbers; the only place where
rounding numbers seems to be an issue is at input/output.

double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?
It is an input example: the user says "d must be sqrt(2.0)",
and so "rounding numbers seems to be an issue at input/output"
holds here too.
Aug 22 '06 #58

av wrote:
On Mon, 21 Aug 2006 14:14:07 -0400, Eric Sosman wrote:
av wrote On 08/21/06 13:38,:
On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:

You can often get away with much worse. 25 years ago I had a
system with 24 bit floats, which yielded 4.8 digits precision, but
was fast (for its day) and rounded properly. This was quite

I think it is not useful to round numbers; the only place where
rounding numbers seems to be an issue is at input/output.
double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?

It is an input example: the user says "d must be sqrt(2.0)",
and so "rounding numbers seems to be an issue at input/output"
holds here too.
That's right. Anything that might require rounding must be relegated
to input. Any function that might lead to rounding must be banned.
So no sqrt, sin, log, division ...
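For the record, a three-line illustration of the rounding in question
(IEEE 754 double assumed; the exact digits printed may vary):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double d = sqrt(2.0);   /* the nearest double to an irrational value */

        printf("d     = %.17g\n", d);
        printf("d * d = %.17g\n", d * d);   /* typically 2.0000000000000004 */
        return 0;
    }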

-William Hughes

Aug 22 '06 #59
2006-08-21 <ec*********@news2.newsguy.com>,
Chris Torek wrote:
In article <H_******************************@comcast.com>
Joe Wright <jo********@comcast.net> wrote:
>>The double type would have been a much better choice for time_t.

The original Unix time was a 16-bit type.
Incorrect. It was 32-bit but with substantially higher resolution (1/60
second instead of 1 second), which was the reason the epoch kept getting
moved. Before "long" was added, it was an array of two ints - ever
wonder why all the time_t functions unnecessarily use pointers?
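That history is still visible in the interface; a minimal sketch of
the two call styles the standard allows:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t by_value;
        time_t by_pointer;

        by_value = time(NULL);   /* modern style: use the return value  */
        time(&by_pointer);       /* old style: the pointer parameter is
                                    a relic of when the result was wider
                                    than the machine's int              */

        printf("seconds between the two calls: %f\n",
               difftime(by_pointer, by_value));
        return 0;
    }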
Aug 22 '06 #60
On Sat, 19 Aug 2006 01:06:58 GMT, Keith Thompson <ks***@mib.org>
wrote:
jmcgill <jm*****@email.arizona.edu> writes:
jacob navia wrote:
double means DOUBLE PRECISION. Note the word precision here.
When you want more precision, use double precision, and if that
doesn't cut it use long double.
Precision means the number of significant digits you get
for the calculations. float gives you 6 digits of precision
(assuming IEEE 754), double gives you 15, and long double
more than that; the Intel/AMD implementation gives you 18.
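Those counts correspond to the <float.h> macros, so you can ask the
implementation directly; a small sketch:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("float:       %2d decimal digits (FLT_DIG)\n",  FLT_DIG);
        printf("double:      %2d decimal digits (DBL_DIG)\n",  DBL_DIG);
        printf("long double: %2d decimal digits (LDBL_DIG)\n", LDBL_DIG);
        return 0;
    }

On an x87-based implementation this typically prints 6, 15, and 18.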

I think the name "double" probably comes from Fortran, where "DOUBLE
PRECISION" is exactly twice the size of "FLOAT". (I'm not much of a
Fortran person, so I could easily be mistaken.)
<OT>In Fortran -- which at the time C was created was still uppercase
FORTRAN -- the type names are REAL and DOUBLE PRECISION. (Combination
mnemonic and flamebait: God is real, unless declared integer. <G>)

Double is indeed required to occupy twice as much space in storage as
single, and single float to occupy the same space as INTEGER (of which
standard FORTRAN had only one width, although INTEGER*2, INTEGER*4, etc.
were a fairly common extension). However both/all of these can have
unused bits, so double doesn't necessarily have twice the precision of
single, although it must be at least one bit more. And similarly it
needn't have a larger exponent range, although often it does.

- David.Thompson1 at worldnet.att.net
Aug 28 '06 #61
