hi all...
I've read a bit about the difference between the float and double
data types... but, in the real world, which is best? When should we
use float and when double?
thanks
Erick
Aug 18 '06
"Clark S. Cox III" <cl*******@gmail.comwrites:
CBFalconer wrote:
>Keith Thompson wrote:
>[... snip ...]
>><OT>Assuming a Unix-style time_t (a signed integer type with 1-second resolution, with 0 representing 1970-01-01 00:00:00 GMT), there's plenty of time before 2038 to expand it to 64 bits; it's already happened on many systems.</OT>
Having the epoch start in 1970, or even 1978 (Digital Research) is foolish, when 1968 or 1976 would simplify leap year calculations. This is also an argument for using 1900.
It would be an even better argument for using 1600-03-01.
I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)
Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.
This is all somewhat off-topic, of course, since the C standard
doesn't specify either the epoch or the resolution, or even the
representation of time_t. But the most widespread implementation is
as a signed integer representing seconds since 1970-01-01 00:00:00
GMT. If a future C standard were to tighten the specification, that
would not be an unreasonable basis for doing so.
David R. Tribble has a proposal for time support in C 200X at
<http://david.tribble.com/text/c0xlongtime.html>. It's been discussed
in comp.std.c (and any further discussion of it should probably take
place there).

Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Eric Sosman wrote:
>
CBFalconer wrote On 08/21/06 04:36,:
>Keith Thompson wrote:
... snip ...
>><OT>Assuming a Unix-style time_t (a signed integer type with 1-second resolution, with 0 representing 1970-01-01 00:00:00 GMT), there's plenty of time before 2038 to expand it to 64 bits; it's already happened on many systems.</OT>
Having the epoch start in 1970, or even 1978 (Digital Research) is foolish, when 1968 or 1976 would simplify leap year calculations. This is also an argument for using 1900.
If concerns about leap year are to govern the choice,
the zero point should be xxxx-03-01 00:00:00, where xxxx
is a multiple of 400.
However, leap year calculations are not so important
that they should govern such a choice. Everyone who's
anyone already knows that the Right Thing To Do is to
define the zero point as 1858-11-17 00:00:00.
Hear, hear. The Beginning of Time for DEC VMS. When was the End of Time? :)

Joe Wright
"Everything should be made as simple as possible, but not simpler."
 Albert Einstein 
Eric Sosman wrote:
>
av wrote On 08/21/06 13:38,:
>On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:
>>> You can often get away with much worse. 25 years ago I had a system with 24-bit floats, which yielded 4.8 digits precision, but was fast (for its day) and rounded properly. This was quite
i think it is not useful to round numbers; the only place where rounding numbers seems to be an issue is on input/output
double d = sqrt(2.0);
Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?
av is a known troll, and firmly in my PLONK list.

Chuck F (cb********@yahoo.com) (cb********@maineline.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE maineline address!
>>There's hardly any application where an accuracy of 1 in 16 million is not
>>acceptable.
Two common exceptions to this are currency and time.
Accountants expect down-to-the-penny (or whatever the smallest unit of currency is) accuracy no matter what. And governments spend trillions of dollars a year.
If your time base is in the year 1 AD, and you subtract two current-day times (stored in floats) to get an interval, you can get rounding error in excess of an hour. Even for POSIX time (epoch 1 Jan 1970), you still have rounding errors in excess of 1 minute.
>>For instance if you are machining space shuttle parts it is unlikely they go to a tolerance of more than about 1 in 10000. The real problem is that errors can propagate. If you multiply by a million, suddenly you only have an accuracy of 1 in 16.
If you had a precision of 1 in 16 million, and you multiply by a million (an exact number), you still have 1 in 16 million. You lose precision when you SUBTRACT nearly-equal numbers. If you subtract two POSIX times about 1.1 billion seconds past the epoch, but store these in floats before the subtraction, your result for the difference is only accurate to within a minute. This stinks if the real difference is supposed to be 5 seconds.
You're making all this up, aren't you? Posix time today is somewhere around 1,156,103,121 seconds since the Epoch. We are therefore a little over halfway to the end of Posix time in early 2038. Total Posix seconds are 2^31 or 2,147,483,648 seconds. I would not expect to treat such a number with a lowly float with only a 24-bit mantissa.
You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.
Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.
>I do point out that double has a 53-bit mantissa and is very much up to the task.
Someone will eventually decide to represent time as the number of
picoseconds since the beginning of the universe (and it's his problem,
not mine, to prove that he accurately knows when that is) in 512 bits,
and then 53 bits will not be up to the task.
>Aside: Why was time_t defined as a 32-bit signed integer? What was supposed to happen when time_t assumes LONG_MAX + 1? Why was there no time to be considered before the Epoch? Arrogance of young men, I assume.
One purpose of storing the time is file time stamps. It isn't that
surprising to not have files older than the creation of the system
on the system. time_t was really not designed to store historical
or genealogical dates (which are often dates with no times or time
zones attached: Christmas is on Dec. 25 regardless of time zone).
Why is the tm_year member of a struct tm not defined as long long
or intmax_t? And asctime() has a very bad Y10K problem (and Y0K
problem), even if you don't try to use it with time_t.
>The double type would have been a much better choice for time_t.
But it probably gives the most precision for a time you know least.
>We must try to remain clear about the difference between accuracy and precision. It is the representation that offers the precision; it is our (the programmer's) calculations that may provide accuracy.
go***********@burditt.org (Gordon Burditt) writes:
>>>There's hardly any application where an accuracy of 1 in 16 million is not acceptable.
The above was posted by "Malcolm" <re*******@btinternet.com>.
>>Two common exceptions to this are currency and time.
Accountants expect down-to-the-penny (or whatever the smallest unit of currency is) accuracy no matter what. And governments spend trillions of dollars a year.
If your time base is in the year 1 AD, and you subtract two current-day times (stored in floats) to get an interval, you can get rounding error in excess of an hour. Even for POSIX time (epoch 1 Jan 1970), you still have rounding errors in excess of 1 minute.
The above was posted by go***********@burditt.org (Gordon Burditt).
>>>For instance if you are machining space shuttle parts it is unlikely they go to a tolerance of more than about 1 in 10000. The real problem is that errors can propagate. If you multiply by a million, suddenly you only have an accuracy of 1 in 16.
If you had a precision of 1 in 16 million, and you multiply by a million (an exact number), you still have 1 in 16 million. You lose precision when you SUBTRACT nearly-equal numbers. If you subtract two POSIX times about 1.1 billion seconds past the epoch, but store these in floats before the subtraction, your result for the difference is only accurate to within a minute. This stinks if the real difference is supposed to be 5 seconds.
You're making all this up, aren't you? Posix time today is somewhere around 1,156,103,121 seconds since the Epoch. We are therefore a little over halfway to the end of Posix time in early 2038. Total Posix seconds are 2^31 or 2,147,483,648 seconds. I would not expect to treat such a number with a lowly float with only a 24-bit mantissa.
The above was posted by Joe Wright <jo********@comcast.net>.
You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.
Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.
The above was posted by go***********@burditt.org (Gordon Burditt).
You can tell this by the attribution line at the top of this post.
Gordon, please either stop snipping attribution lines, or stop posting
here. You have been told many many times how rude this is. Your
excuses for doing so are, at best, lame.

Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Keith Thompson wrote:
"Clark S. Cox III" <cl*******@gmail.comwrites:
>CBFalconer wrote:
>>Keith Thompson wrote:
>>[... snip ...]
>><OT>Assuming a Unix-style time_t (a signed integer type with 1-second resolution, with 0 representing 1970-01-01 00:00:00 GMT), there's plenty of time before 2038 to expand it to 64 bits; it's already happened on many systems.</OT>
>>Having the epoch start in 1970, or even 1978 (Digital Research) is foolish, when 1968 or 1976 would simplify leap year calculations. This is also an argument for using 1900.
It would be an even better argument for using 1600-03-01.
I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)
Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.
My point was just that. Choosing an epoch to simplify leap year
calculations leads to absurd results. Maybe I should have inserted an
emoticon.

Clark S. Cox III cl*******@gmail.com
On 21 Aug 2006 11:09:36 -0700, William Hughes wrote:
the same. see the answer to Sosman
On Mon, 21 Aug 2006 14:14:07 -0400, Eric Sosman wrote:
>av wrote On 08/21/06 13:38,:
>On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:
>>> You can often get away with much worse. 25 years ago I had a system with 24-bit floats, which yielded 4.8 digits precision, but was fast (for its day) and rounded properly. This was quite
i think it is not useful to round numbers; the only place where rounding numbers seems to be an issue is on input/output
double d = sqrt(2.0);
Assuming the machine has a finite amount of memory, how do you propose to carry out this calculation without some kind of rounding?
it is an input example; the user says "d must be sqrt(2.0)",
and so
"rounding numbers is an issue when doing input/output"
too
av wrote:
On Mon, 21 Aug 2006 14:14:07 -0400, Eric Sosman wrote:
av wrote On 08/21/06 13:38,:
On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:
You can often get away with much worse. 25 years ago I had a system with 24-bit floats, which yielded 4.8 digits precision, but was fast (for its day) and rounded properly. This was quite
i think it is not useful to round numbers; the only place where
rounding numbers seems to be an issue is on input/output
double d = sqrt(2.0);
Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?
it is an input example; the user says "d must be sqrt(2.0)",
and so
"rounding numbers is an issue when doing input/output"
too
That's right. Anything that might require rounding must be relegated
to input. Any function that might lead to rounding must be banned.
So no sqrt, sin, log, division ...
William Hughes
2006-08-21 <ec*********@news2.newsguy.com>,
Chris Torek wrote:
In article <H_******************************@comcast.com>
Joe Wright <jo********@comcast.net> wrote:
>>The double type would have been a much better choice for time_t.
The original Unix time was a 16bit type.
Incorrect. It was 32-bit but with substantially higher resolution (1/60
second instead of 1 second), which was the reason the epoch kept getting
moved. Before "long" was added, it was an array of two ints (ever
wonder why all the time_t functions unnecessarily use pointers?)
On Sat, 19 Aug 2006 01:06:58 GMT, Keith Thompson <ks***@mib.org>
wrote:
jmcgill <jm*****@email.arizona.edu> writes:
jacob navia wrote:
double means DOUBLE PRECISION. Note the word precision here.
When you want more precision, use double precision, and if that
doesn't cut it use long double.
Precision means the number of significant digits you get
for the calculations. Assuming IEEE 754, float gives you 6 digits
of precision and double gives you 15; long double gives more than
that (the Intel/AMD implementation gives you 18).
I think the name "double" probably comes from Fortran, where "DOUBLE
PRECISION" is exactly twice the size of "FLOAT". (I'm not much of a
Fortran person, so I could easily be mistaken.)
<OT>In Fortran, which at the time C was created was still uppercase
FORTRAN, the type names are REAL and DOUBLE PRECISION. (Combination
mnemonic and flamebait: God is real, unless declared integer. <G>)
Double is indeed required to occupy twice as much space in storage as
single, and single float to occupy the same space as INTEGER (of which
standard FORTRAN had only one width, although INTEGER*2, INTEGER*4, etc.
were a fairly common extension). However both/all of these can have
unused bits, so double doesn't necessarily have twice the precision of
single, although it must be at least one bit more. And similarly it
needn't have a larger exponent range, although often it does.</OT>
David.Thompson1 at worldnet.att.net