Bytes | Software Development & Data Engineering Community

float? double?

hi all...

I've read a bit about the difference between the float and double
data types... but, in the real world, which is best? When should we
use float, and when double?

thanks
Erick

Aug 18 '06
"Clark S. Cox III" <cl*******@gmail.com> writes:
CBFalconer wrote:
>Keith Thompson wrote:
.... snip ...
>><OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>

Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.

It would be an even better argument for using 1600-03-01.
I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)

Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.

This is all somewhat off-topic, of course, since the C standard
doesn't specify either the epoch or the resolution, or even the
representation of time_t. But the most widespread implementation is
as a signed integer representing seconds since 1970-01-01 00:00:00
GMT. If a future C standard were to tighten the specification, that
would not be an unreasonable basis for doing so.

David R. Tribble has a proposal for time support in C 200X at
<http://david.tribble.com/text/c0xlongtime.html>. It's been discussed
in comp.std.c (and any further discussion of it should probaby take
place there).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 21 '06 #51
Eric Sosman wrote:
>
CBFalconer wrote On 08/21/06 04:36,:
>Keith Thompson wrote:

... snip ...
>><OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>

Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.

If concerns about leap year are to govern the choice,
the zero point should be xxxx-03-01 00:00:00, where xxxx
is a multiple of 400.

However, leap year calculations are not so important
that they should govern such a choice. Everyone who's
anyone already knows that the Right Thing To Do is to
define the zero point as 1858-11-17 00:00:00.
Hear. The Beginning of Time for DEC VMS. When was the End of Time? :-)

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Aug 21 '06 #52
Eric Sosman wrote:
>
av wrote On 08/21/06 13:38,:
>On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:
>>>
You can often get away with much worse. 25 years ago I had a
system with 24 bit floats, which yielded 4.8 digits precision, but
was fast (for its day) and rounded properly. This was quite

i think it is not useful to round numbers
the only place where rounding numbers seems an issue is
input-output

double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?
av is a known troll, and firmly in my PLONK list.

--
Chuck F (cb********@yahoo.com) (cb********@maineline.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE maineline address!
Aug 21 '06 #53
>>There's hardly any application where an accuracy of 1 in 16 million is not
>>acceptable.

Two common exceptions to this are currency and time.

Accountants expect down-to-the-penny (or whatever the smallest unit
of currency is) accuracy no matter what. And governments spend
trillions of dollars a year.

If your time base is in the year 1AD, and you subtract two current-day
times (stored in floats) to get an interval, you can get rounding
error in excess of an hour. Even for POSIX time (epoch 1 Jan 1970),
you still have rounding errors in excess of 1 minute.
>>For instance if you are machining space shuttle parts it is
unlikely they go to a tolerance of more than about 1 in 10000.
The real problem is that errors can propagate. If you multiply by a million,
suddenly you only have an accuracy of 1 in 16.

If you had a precision of 1 in 16 million, and you multiply by a
million (an exact number), you still have 1 in 16 million. You
lose precision when you SUBTRACT nearly-equal numbers. If you
subtract two POSIX times about 1.1 billion seconds past the epoch,
but store these in floats before the subtraction, your result for
the difference is only accurate to within a minute. This stinks if
the real difference is supposed to be 5 seconds.

You're making all this up, aren't you? Posix time today is somewhere
around 1,156,103,121 seconds since the Epoch. We are therefore a little
over half way to the end of Posix time in early 2038. Total Posix
seconds are 2^31 or 2,147,483,648 seconds. I would not expect to treat
such a number with a lowly float with only a 24-bit mantissa.
You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.

Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.
>I do point
out that double has a 53-bit mantissa and is very much up to the task.
Someone will eventually decide to represent time as the number of
picoseconds since the beginning of the universe (and it's his problem,
not mine, to prove that he accurately knows when that is) in 512 bits,
at which point 53 bits will not be up to the task.
>Aside: Why was time_t defined as a 32-bit signed integer? What was
supposed to happen when time_t assumes LONG_MAX + 1? Why was there no
time to be considered before the Epoch? Arrogance of young men, I assume.
One purpose of storing the time is file time stamps. It isn't that
surprising to not have files older than the creation of the system
on the system. time_t was really not designed to store historical
or genealogical dates (which are often dates with no times or time
zones attached: Christmas is on Dec. 25 regardless of time zone).

Why is the tm_year member of a struct tm not defined as long long
or intmax_t? And asctime() has a very bad Y10K problem (and Y0K
problem), even if you don't try to use it with time_t.
>The double type would have been a much better choice for time_t.
But it probably gives the most precision for a time you know least.
>We must try to remain clear ourselves about the difference between
accuracy and precision. It is the representation that offers the
precision, it is our (the programmer's) calculations that may provide
accuracy.
Aug 22 '06 #54
go***********@burditt.org (Gordon Burditt) writes:
>>>There's hardly any application where an accuracy of 1 in 16
million is not acceptable.
The above was posted by "Malcolm" <re*******@btinternet.com>.
>>Two common exceptions to this are currency and time.

Accountants expect down-to-the-penny (or whatever the smallest unit
of currency is) accuracy no matter what. And governments spend
trillions of dollars a year.

If your time base is in the year 1AD, and you subtract two current-day
times (stored in floats) to get an interval, you can get rounding
error in excess of an hour. Even for POSIX time (epoch 1 Jan 1970),
you still have rounding errors in excess of 1 minute.
The above was posted by go***********@burditt.org (Gordon Burditt).
>>>For instance if you are machining space shuttle parts it is
unlikely they go to a tolerance of more than about 1 in 10000.
The real problem is that errors can propagate. If you multiply by
a million, suddenly you only have an accuracy of 1 in 16.

If you had a precision of 1 in 16 million, and you multiply by a
million (an exact number), you still have 1 in 16 million. You
lose precision when you SUBTRACT nearly-equal numbers. If you
subtract two POSIX times about 1.1 billion seconds past the epoch,
but store these in floats before the subtraction, your result for
the difference is only accurate to within a minute. This stinks if
the real difference is supposed to be 5 seconds.

You're making all this up, aren't you? Posix time today is somewhere
around 1,156,103,121 seconds since the Epoch. We are therefore a little
over half way to the end of Posix time in early 2038. Total Posix
seconds are 2^31 or 2,147,483,648 seconds. I would not expect to treat
such a number with a lowly float with only a 24-bit mantissa.
The above was posted by Joe Wright <jo********@comcast.net>.
You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.

Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.
The above was posted by go***********@burditt.org (Gordon Burditt).
You can tell this by the attribution line at the top of this post.

Gordon, please either stop snipping attribution lines, or stop posting
here. You have been told many many times how rude this is. Your
excuses for doing so are, at best, lame.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 22 '06 #55
Keith Thompson wrote:
"Clark S. Cox III" <cl*******@gmail.com> writes:
>CBFalconer wrote:
>>Keith Thompson wrote:
.... snip ...
<OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>
Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.
It would be an even better argument for using 1600-03-01.

I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)

Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.
My point was just that. Choosing an epoch to simplify leap year
calculations leads to absurd results. Maybe I should have inserted an
emoticon.

--
Clark S. Cox III
cl*******@gmail.com
Aug 22 '06 #56
av
On 21 Aug 2006 11:09:36 -0700, William Hughes wrote:
the same. see the answer to Sosman
Aug 22 '06 #57
av
On Mon, 21 Aug 2006 14:14:07 -0400, Eric Sosman wrote:
>av wrote On 08/21/06 13:38,:
>On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:
>>>
You can often get away with much worse. 25 years ago I had a
system with 24 bit floats, which yielded 4.8 digits precision, but
was fast (for its day) and rounded properly. This was quite

i think it is not useful to round numbers
the only place where rounding numbers seems an issue is
input-output

double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?
it is an input example; the user says "d must be sqrt(2.0)",
and so
"rounding numbers seems an issue at input-output"
too
Aug 22 '06 #58

av wrote:
On Mon, 21 Aug 2006 14:14:07 -0400, Eric Sosman wrote:
av wrote On 08/21/06 13:38,:
On Sat, 19 Aug 2006 10:13:41 -0400, CBFalconer wrote:

You can often get away with much worse. 25 years ago I had a
system with 24 bit floats, which yielded 4.8 digits precision, but
was fast (for its day) and rounded properly. This was quite

i think it is not useful to round numbers
the only place where rounding numbers seems an issue is
input-output
double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?

it is an input example; the user says "d must be sqrt(2.0)",
and so
"rounding numbers seems an issue at input-output"
too
That's right. Anything that might require rounding must be relegated to
input. Any function that might lead to rounding must be banned. So no
sqrt, sin, log, division ...

-William Hughes

Aug 22 '06 #59
2006-08-21 <ec*********@news2.newsguy.com>,
Chris Torek wrote:
In article <H_******************************@comcast.com>
Joe Wright <jo********@comcast.net> wrote:
>>The double type would have been a much better choice for time_t.

The original Unix time was a 16-bit type.
Incorrect. It was 32-bit but with substantially higher resolution (1/60
second instead of 1 second), which was the reason the epoch kept getting
moved. Before "long" was added, it was an array of two ints - ever
wonder why all the time_t functions unnecessarily use pointers?
Aug 22 '06 #60
On Sat, 19 Aug 2006 01:06:58 GMT, Keith Thompson <ks***@mib.org>
wrote:
jmcgill <jm*****@email.arizona.edu> writes:
jacob navia wrote:
double means DOUBLE PRECISION. Note the word precision here.
When you want more precision, use double precision, and if that
doesn't cut it use long double.
Precision means the number of significant digits you get
for the calculations. float gives you 6 digits precision,
(assuming IEEE 754) double gives you 15, and long double
more than that, using Intel/Amd implementation gives you 18.

I think the name "double" probably comes from Fortran, where "DOUBLE
PRECISION" is exactly twice the size of "FLOAT". (I'm not much of a
Fortran person, so I could easily be mistaken.)
<OT>In Fortran -- which at the time C was created was still uppercase
FORTRAN -- the type names are REAL and DOUBLE PRECISION. (Combination
mnemonic and flamebait: God is real, unless declared integer. <G>)

Double is indeed required to occupy twice as much space in storage as
single, and single float to occupy the same space as INTEGER (of which
standard FORTRAN had only one width, although INTEGER*2 INTEGER*4 etc.
were a fairly common extension). However both/all of these can have
unused bits, so double doesn't necessarily have twice the precision of
single, although it must be at least one bit more. And similarly it
needn't have a larger exponent range, although often it does.

- David.Thompson1 at worldnet.att.net
Aug 28 '06 #61

